AI Resume Verification and the End of Blind Trust in CVs
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
AI resume verification is now essential for hiring
Verified records make algorithmic screening fair and transparent
Without them, AI quietly decides who gets work
Recent surveys show that about two-thirds of U.S. workers admit they have lied on a resume at least once, or would consider doing so. At the same time, AI is becoming the primary filter for job applications. One recent study suggests that nearly half of hiring managers already use AI to screen resumes, a figure expected to rise to 83% by the end of 2025. Another survey found that 96% of hiring experts use AI at some point in the recruiting process, mainly for resume screening and review. This creates a trust problem: self-reported claims, often overstated or false, are evaluated by algorithms without transparency. AI resume verification is becoming important because the old assumption that an experienced human would catch the lies no longer holds for most applicants. If AI decides who gets seen by a human, the information it assesses must be testable and verifiable, and it should be treated as essential labor-market infrastructure, not just an add-on in applicant tracking software. Emphasizing AI's potential to improve fairness, rather than only its power to filter, can also help build trust among stakeholders.
Why AI Resume Verification Has Become Unavoidable
The push to automate screening is not just a trend. Employers hiring for white-collar roles now receive hundreds of applications for each position; some applicant tracking systems report more than 200 applications per job, about three times the pre-2017 level. At this volume, human screeners cannot read every CV, much less investigate every claim. AI tools are used to filter, score, and often reject candidates before a human ever sees their profile. One research review projects that by the end of 2025, 83% of companies will use AI to review resumes, up from 48% today, and a growing number of businesses are becoming comfortable letting AI reject applicants at various stages. AI resume verification is now essential because the hiring process has shifted to rely on algorithmic gatekeepers, even when organizations recognize that these systems can be biased or inaccurate.
Figure 1: AI resume screening is shifting from exception to default, with adoption rising from under half of employers today to more than four in five by 2025.
The issue lies in the data entering these systems, which is often unreliable. One study estimates that around 64% of U.S. workers have lied about personal details, skills, experience, or references on a resume at least once. Another survey indicates that about 70% of applicants have lied or would consider lying to enhance their chances. References, once seen as a safeguard, are also questionable. One poll revealed that a quarter of workers admit to lying about their references, with some using friends, family, or even paid services to impersonate former supervisors. Additionally, recruiters note a rise in AI-generated resumes and deepfake candidates in remote interviews. Vendors are now promoting AI fraud-detection tools that claim to reduce hiring-related fraud by up to 85%. However, these tools are layered on top of systems still reliant on unverified self-reports. Therefore, AI resume verification is not just another feature; it is the essential backbone of a hiring system that is already automated but lacks reliability. Clarifying how verification enhances transparency can reassure policymakers and educators about its integrity.
Designing AI Resume Verification Around Verified Data
A typical response to this issue is to focus on more intelligent algorithms. The idea is that current AI screening tools will be retrained to spot inconsistencies and statistical red flags in resumes. However, recent research indicates that more complex models are not always more trustworthy. A study highlighted by a leading employment law firm found that popular AI tools favored resumes with names commonly associated with white individuals 85% of the time, with a slight bias for male-associated names. Analyses of AI hiring systems also reveal explicit age and gender biases, even when protected characteristics are not explicitly included. If AI learns from historical hiring data and text patterns that already include discrimination and exaggeration, simply adding more parameters won’t address the underlying issues. AI resume verification cannot rely solely on clever pattern recognition or self-reported narratives; it must be based on verified data from the source.
Figure 2: Across major surveys, between a quarter and around two-thirds of candidates admit to lying on their resumes, underlining how fragile self-reported data is for AI screening.
This leads to a second approach: obtaining better data, not just making the model more complex. AI resume verification means connecting algorithms to verified employment and education records that can be checked automatically. The rapid growth of the background-screening industry shows both the demand and the current shortcomings. One market analysis estimates that the global background-screening market was about USD 3.2 billion in 2023 and is expected to more than double, to USD 7.4 billion, by 2030. Today, these checks are slow, manual, and conducted late in the hiring process. In an AI resume verification system, verified information would come first. Employment histories, degrees, major certifications, and key licenses would exist as portable, cryptographically signed records that candidates control but cannot alter, and that platforms could query via standard APIs. AI would then rank candidates primarily on these verifiable signals (skills, tenure, and, where appropriate, performance ratings) while treating unverified claims and free-text narratives as secondary. A full return to human-only screening is impractical at current application volumes; the more realistic future lies in establishing standards for verified data that industry leaders and AI developers can build on responsibly.
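To make the flow concrete, the sketch below shows in simplified Python how a platform might check an issuer-signed employment record and then rank candidates primarily on verified fields. It is a minimal illustration under stated assumptions, not a proposed standard: the record schema, the scoring weights, and the in-memory key handling are invented for this example, and a real deployment would need shared credential formats, key distribution, and revocation.

```python
# Minimal sketch: issuer-signed employment records verified before ranking.
# Assumptions: the record schema, weights, and key handling are illustrative only.
# Requires the third-party "cryptography" package (pip install cryptography).
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def sign_record(issuer_key: Ed25519PrivateKey, record: dict) -> bytes:
    """Issuer (e.g., a former employer) signs a canonical JSON encoding of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return issuer_key.sign(payload)


def verify_record(issuer_public_key, record: dict, signature: bytes) -> bool:
    """Platform checks the record was signed by the claimed issuer and not altered since."""
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        issuer_public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False


def score_candidate(verified_records: list, unverified_claims: list) -> float:
    """Rank primarily on verified tenure; unverified claims get a much smaller weight."""
    verified_months = sum(r.get("tenure_months", 0) for r in verified_records)
    claimed_months = sum(c.get("tenure_months", 0) for c in unverified_claims)
    return 1.0 * verified_months + 0.1 * claimed_months  # illustrative weights


if __name__ == "__main__":
    issuer_key = Ed25519PrivateKey.generate()  # stands in for an employer's signing key
    record = {"employer": "Example Corp", "role": "Data Analyst", "tenure_months": 30}
    signature = sign_record(issuer_key, record)

    ok = verify_record(issuer_key.public_key(), record, signature)
    verified = [record] if ok else []
    unverified = [{"employer": "Self-reported LLC", "tenure_months": 24}]
    print("verified:", ok, "score:", score_candidate(verified, unverified))
```

The design point the sketch is meant to show is that the ranking function never consumes a claim that has not passed through the verification step.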
AI Resume Verification and the Future of Education Records
Once hiring moves in this direction, education systems become part of the equation. Degrees, short courses, micro-credentials, and workplace learning all contribute to resumes. If AI resume verification becomes standard in hiring, the formats and standards of educational records will influence who stands out to employers. However, current educational data is often fragmented and unclear. The OECD’s Skills Outlook 2023 notes that the pace of digital and green transitions is outpacing the capacity of education and skills policies, and that too few adults engage in the formal or informal learning needed to keep up. When learners take short courses or stackable credentials, the results are typically recorded as PDFs or informal badges that many systems cannot reliably interpret. This creates another trust gap: AI may see a degree title and a course name, but it does not capture the skills or performance behind them.
AI resume verification can help fix this, but only if educators take action. Universities, colleges, and training providers can issue digital credentials that are both easy to read and machine-verifiable. These would be structured statements of skills linked to assessments, signed by the institution, and revocable if found to be fraudulent later. Large learning platforms and professional organizations can do the same for bootcamps, MOOCs, and continuing education. Labor-market institutions are already concerned that workers in non-standard or platform jobs lack clear records of their experience. Recent work by the OECD and ILO on measuring platform work shows how some of this labor remains invisible. If these workers could maintain verified records of hours worked, roles held, and ratings received, AI resume verification could highlight their histories rather than overlook them. For education leaders, the challenge is clear: transcripts and certificates must evolve from static documents into living, portable data assets that support both human decisions and AI-driven screening.
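What such an institution-issued credential might contain can also be sketched. The structure below is hypothetical and only loosely inspired by verifiable-credential formats: it bundles a skill statement with a link to the underlying assessment, protects it with a content digest (a real issuer would attach an asymmetric signature, as in the earlier sketch), and lets a verifier check a revocation list before trusting it.

```python
# Sketch of an institution-issued, machine-verifiable, revocable credential.
# The schema and the revocation-list lookup are illustrative assumptions,
# not a real standard or an existing registry.
import hashlib
import json

REVOKED_CREDENTIAL_IDS = {"cred-0042"}  # stand-in for an issuer-published revocation list


def issue_credential(issuer: str, learner: str, skills: list, assessment_url: str) -> dict:
    """Institution issues a structured skill statement linked to an assessment."""
    body = {
        "id": "cred-2031",
        "issuer": issuer,
        "learner": learner,
        "skills": skills,
        "assessment": assessment_url,
    }
    # Content digest as a tamper check; a production issuer would sign this payload.
    body["digest"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body


def credential_is_trustworthy(credential: dict) -> bool:
    """Verifier checks integrity and revocation status before screening uses the claim."""
    body = {k: v for k, v in credential.items() if k != "digest"}
    recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    if credential.get("digest") != recomputed:
        return False  # record was altered after issuance
    return credential["id"] not in REVOKED_CREDENTIAL_IDS


cred = issue_credential(
    issuer="Example University",
    learner="A. Candidate",
    skills=["data analysis", "SQL"],
    assessment_url="https://example.edu/assessments/123",
)
print(credential_is_trustworthy(cred))  # True until the issuer revokes cred-2031
```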
Governing AI Resume Verification Before It Governs Us
Having trustworthy data alone will not make AI resume verification fair. People have good reasons to be cautious about AI in hiring. A 2023 Pew Research Center survey found that most Americans expect AI to significantly affect workers, and more believe it will hurt workers than help them. Recent research in Australia illustrates this concern: an independent study showed that AI interview tools mis-transcribe and misinterpret candidates with strong accents or speech-affecting disabilities, with error rates reaching 22% for some non-native speakers. OECD surveys also indicate that outcomes are better when AI adoption is paired with worker consultation and training, not just technology purchases. Therefore, AI resume verification needs governance guidelines alongside technical standards. Candidates should know when AI is used, what verified data is being evaluated, and how to challenge mistakes or outdated information. Regulators can mandate regular bias audits of AI systems, especially when they draw on shared employment and education records that function as public infrastructure.
For educators and labor-market policymakers, the goal is to shape AI resume verification before it quietly determines opportunity for a whole generation. Some immediate priorities are clear. Employers can ensure that automated decisions to reject candidates are never based solely on unverifiable claims. Any negative flags from AI resume verification, such as suspected fraud or inconsistencies, should lead to human review rather than automatic exclusion. Vendors can be required to keep the verification layer separate from the scoring layer, preventing bias in ranking models from affecting the integrity of shared records. Education authorities can support the development of open, interoperable standards for verifiable credentials and employment records, making their use a condition for public funding. Over time, AI resume verification can support fairer hiring, more precise financial aid, better matching of workers to retraining, and better measurement of the returns to different types of learning.
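The separation of verification from scoring, and the routing of negative flags to people rather than to automatic rejection, can be written down as a simple pipeline contract. The sketch below is a hypothetical structure rather than any vendor's API: the scoring model only ever receives verified attributes, and any fraud or inconsistency flag diverts the application to a human-review queue.

```python
# Sketch: keep verification and scoring as separate layers, and send negative
# verification flags to human review instead of automatic rejection.
# All class, field, and threshold names here are hypothetical, not a vendor API.
from dataclasses import dataclass, field


@dataclass
class VerificationResult:
    verified_attributes: dict                  # e.g., {"tenure_months": 30}
    flags: list = field(default_factory=list)  # e.g., ["dates_inconsistent"]


def verification_layer(application: dict) -> VerificationResult:
    """Checks claims against signed records; never scores or ranks anyone."""
    verified = dict(application.get("verified", {}))
    flags = [] if verified else ["unverified_employment"]
    return VerificationResult(verified_attributes=verified, flags=flags)


def scoring_layer(verified_attributes: dict) -> float:
    """Sees only verified attributes; has no access to flags or raw free text."""
    return float(verified_attributes.get("tenure_months", 0))


def route(application: dict, human_review_queue: list) -> str:
    result = verification_layer(application)
    if result.flags:
        human_review_queue.append((application, result.flags))
        return "human_review"  # a negative flag never triggers automatic exclusion
    score = scoring_layer(result.verified_attributes)
    return "shortlist" if score >= 24 else "standard_queue"


queue = []
print(route({"verified": {"tenure_months": 30}}, queue))  # shortlist
print(route({"verified": {}}, queue))                     # human_review
```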
If two-thirds of workers admit to lying or would lie on a resume, and over four-fifths of employers are moving towards AI screening, maintaining the current system is not feasible. A choice is being made by default: unverified text, filtered by unregulated algorithms, quietly decides who gets an interview and who disappears. The solution is to view AI resume verification as a fundamental public good. This means developing verifiable records of learning and work that can move with people across sectors and borders, designing AI systems that rely on these records instead of guesswork, and enforcing rules to keep human judgment involved where it matters most. For educators, administrators, and policymakers, the urgency to take action is apparent. They must help create an ecosystem in which AI resume verification makes skills and experiences more visible, especially for those outside elite networks. Otherwise, they must accept a future where automated hiring reinforces existing biases. The technology is already available; what is lacking is the commitment to use it to build trust.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Business Insider. (2025, July). Companies are relying on aptitude and personality tests more to combat AI-powered job hunters.
Codica. (2025, April 1). How to use AI for fraud detection and candidate verification in hiring platforms.
Fisher Phillips. (2024, November 11). New study shows AI resume screeners prefer white male candidates. Fisher & Phillips LLP.
HRO Today. (2023). Over half of employees report lying on resumes.
iSmartRecruit. (2025, November 13). Stop recruiting scams: Use AI to identify fake candidates.
OECD. (2023a). The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers. OECD Publishing.
OECD. (2023b). OECD skills outlook 2023. OECD Publishing.
OECD & ILO. (2023). Handbook on measuring digital platform employment and work. OECD Publishing / International Labour Organization.
Pew Research Center. (2023, April 20). AI in hiring and evaluating workers: What Americans think.
StandOut-CV. (2025, March 25). How many people lie on their resume to get a job?
The Interview Guys. (2025, October 15). 83% of companies will use AI resume screening by 2025 (despite 67% acknowledging bias concerns).
Time. (2025, August 4). When your job interviewer isn’t human. Time Magazine.
UNC at Chapel Hill. (2024, June 28). The truth about lying on resumes. University of North Carolina Policy Office.
Verified Market Research. (2024). Background screening market size and forecast.
AI Hiring Discrimination Is a Design Choice, Not an Accident
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI hiring discrimination comes from human design choices, not neutral machines
“Autonomous” systems let organizations hide responsibility while deepening bias
Education institutions must demand audited, accountable AI hiring tools that protect fair opportunity
Two numbers highlight the troubling moment we are in. By 2025, more than 65% of recruiters say they have used AI to hire people, yet 66% of adults in the United States say they would not apply for a job if AI were used to decide who gets hired. The hiring process is quietly becoming a black box that many people, especially those already facing bias, are avoiding. At the same time, recent studies reveal how strongly biased tools can influence human judgment. In one extensive study, participants who saw racially biased AI recommendations chose the AI's preferred majority-white or majority-non-white candidates over 90% of the time; without AI, their choices were nearly fifty-fifty. The lesson is clear: AI hiring discrimination is not a side effect of neutral automation. It is a direct result of how systems are designed, managed, and governed.
AI hiring discrimination as a design problem
Much of the public debate still views AI hiring discrimination as an unfortunate glitch in otherwise efficient systems. That viewpoint is comforting, but it is incorrect. Bias in automated screening is not a ghost in the machine. It stems from the data designers choose, the goals they set, and the guidelines they ignore. Recent research on résumé-screening models emphasizes this point. One university study found that AI tools ranked names associated with white candidates ahead of others 85% of the time, even when the underlying credentials were the same. Another large experiment with language-model-based hiring tools showed that leading systems favored female candidates overall while disadvantaging Black male candidates with identical profiles. These patterns are not random noise. They demonstrate that design choices embed old hierarchies into new code, shifting discrimination from the interview room into the hiring infrastructure.
Seeing this issue from another angle, autonomy is not a feature of the AI system. It is a narrative people tell. Research on autonomy and AI argues that threats to human choice arise less from mythical “self-aware” systems and more from unclear, poorly governed design. When we label a hiring tool as “autonomous,” we let everyone in the chain disown responsibility for its outcomes. Yet the same experimental evidence that raises concerns about autonomy also shows that people follow biased recommendations with startling compliance. When a racially biased assistant favored white candidates for high-status roles, human screeners chose majority white shortlists more than 90% of the time. When the assistant favored non-white candidates, they switched to majority non-white lists at almost the same rate. The algorithm did not erase human agency; it subtly redirected it.
Figure 1: Recruiters are embracing AI hiring tools at almost the same rate that the public says they would avoid AI-screened jobs, showing that AI hiring discrimination is a design and trust crisis, not just a technical issue.
When autonomy becomes a shield for AI hiring discrimination
This redirection has real legal and social consequences. Anti-discrimination laws in the United States already treat hiring tests and scoring tools as “selection procedures,” regardless of how they are implemented. In 2023, the Equal Employment Opportunity Commission issued guidance confirming that AI-driven assessments fall under Title VII's disparate impact rules. Employers cannot hide behind a vendor or a model. If a screening system filters out protected groups at higher rates and the employer cannot justify the practice, the employer is liable. The EEOC’s first settlement in an AI hiring discrimination case illustrates this point. A tutoring company that used an algorithmic filter to exclude older applicants agreed to pay compensation and change its practices, even though the tool itself appeared simple. The message to the market is clear: AI hiring discrimination is not an accident in the eyes of the law. It is a form of design-mediated bias.
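The disparate-impact logic behind these cases is often summarized with the EEOC's four-fifths rule of thumb: if the selection rate for any group falls below 80% of the rate for the most-selected group, the employer should be prepared to justify the gap. The short check below uses invented numbers purely for illustration; real audits also apply statistical significance tests and examine each stage of the funnel.

```python
# Sketch: a four-fifths-rule check on a screening tool's pass-through rates.
# Group counts are invented for illustration; a real audit would add
# significance testing and look at every stage of the hiring funnel.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, applicants)."""
    return {g: selected / applicants for g, (selected, applicants) in outcomes.items()}


def adverse_impact_ratios(outcomes: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}


outcomes = {
    "group_a": (120, 400),  # 30% pass the AI screen
    "group_b": (60, 400),   # 15% pass the AI screen
}
for group, ratio in adverse_impact_ratios(outcomes).items():
    status = "below the four-fifths threshold" if ratio < 0.8 else "within threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```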
Figure 2: In recent resume-screening experiments, people chose candidates from all racial groups at roughly 50–50 rates without AI, but when they worked with a severely biased AI recommender, their decisions matched the AI’s preferred group around 90 percent of the time, showing how design choices in AI can override human autonomy.
Yet the rhetoric of autonomy continues to dull the moral implications. When companies say “the algorithm made the decision,” they imply that no human intent was involved and thus bear less blame. This matters for people who already carry the burden of structural bias. If Black or disabled candidates learn that automated video interviews mis-transcribe non-standard accents, with error rates of up to 22% for some groups, many will simply choose not to apply. If students from low-income backgrounds hear that employers use opaque resume filters trained on past “top performers,” they may expect their non-traditional paths to count against them. Surveys indicate that two-thirds of U.S. adults would avoid applying for roles that rely on AI in hiring. Opting out is a rational act of self-protection, but it also pushes people away from mainstream opportunities, undermining decades of effort to widen access.
AI hiring discrimination in education and early careers
These issues are not limited to corporate hiring. Education systems are now at the center of AI hiring discrimination, both as employers and as entry points for students. Universities and school networks increasingly use AI-enabled applicant tracking systems to screen staff and faculty. Recruitment firms specializing in higher education promote automated resume ranking, predictive scoring, and chatbot pre-screening as standard features. In international school recruitment, AI tools are marketed as a way to sift through global teacher pools and reduce time-to-hire. The promise is speed and efficiency, but the risk is quietly excluding the diverse voices that education claims to value. When an algorithm learns from past hiring data, in which certain nationalities, genders, or universities dominate, it can replicate those patterns at scale unless designers intervene.
AI hiring discrimination also affects students long before they enter the job market. Large educational employers and graduate programs are experimenting with AI-based games, video interviews, and written assessments. One Australian study found that hiring AI struggled with diverse accents, producing speech recognition error rates of 12% to 22% for some non-native English speakers; these errors can drag down scores even when the content of the answers is strong. Meanwhile, teacher-training programs like Teach First have had to rethink their recruitment because many applicants now use AI to generate application essays. The hiring pipeline is filled with AI on both sides: candidates rely on it to write, and institutions rely on it to evaluate. Without clear safeguards, that cycle can eliminate nuance, context, and individuality, particularly for first-generation students and international graduates who do not fit the patterns in the training data.
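The accent findings are easier to interpret with the metric behind them. Word error rate divides the number of substituted, deleted, and inserted words by the length of the reference transcript, so a single wrong word in a six-word answer already yields roughly 17%. The sketch below computes it per speaker group; the sample transcripts are invented, and a real audit would use far larger samples.

```python
# Sketch: word error rate (WER) by speaker group, the metric behind figures
# such as 12% to 22% transcription error. Sample transcripts are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with a standard word-level edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


samples = {  # group -> list of (reference transcript, ASR output); invented examples
    "native_speaker": [("tell me about your last project",
                        "tell me about your last project")],
    "non_native_speaker": [("tell me about your last project",
                            "tell me about your last progress")],
}
for group, pairs in samples.items():
    wers = [word_error_rate(ref, hyp) for ref, hyp in pairs]
    print(group, f"mean WER = {sum(wers) / len(wers):.0%}")
```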
Educators cannot consider this someone else's problem. Career services are now routinely coaching students on how to navigate AI-driven hiring. Business schools are briefing students on what AI-based screening means for job seekers and how to prepare for it. At the same time, higher education recruiters are also adopting AI to shortlist staff and adjunct faculty. This dual exposure means that universities influence both sides of the AI hiring discrimination issue. They help create the systems and send students into them. This position carries a special responsibility. Institutions that claim to support equity cannot outsource hiring to unclear tools, then express surprise when appointment lists and fellowships reflect familiar lines of race, gender, and class.
Designing accountable AI hiring systems
If AI hiring discrimination is a design problem, then design is where policy must take action. The first step is to change the default narrative. AI systems should be treated as supportive tools within a human-led hiring framework, not as autonomous decision-makers. Legal trends are moving in this direction. New state-level laws in the United States, such as recent statutes in Colorado and elsewhere, treat both intentional and unintentional algorithmic harms as subjects of regulation rather than merely technical adjustments. In Europe, anti-discrimination and data-protection rules, along with the emerging AI Act, place clear duties on users to test high-risk systems and document their impacts. The core idea is straightforward: if a tool screens people out of jobs or education, its operators must explain how it works, measure who it harms, and address those harms in practice, not just in code.
Education systems can move faster than general regulators. Universities and school networks can require any AI hiring or admissions tool they use to undergo regular fairness audits, with results reported to governance bodies that include both student and staff representatives. They can prohibit fully automated rejections for teaching roles, scholarships, and student jobs, insisting that a human decision-maker review any adverse outcome from a high-risk model. They can enforce procurement rules that reject systems whose vendors will not disclose essential information about training data, evaluation, and error rates across demographic groups. Most importantly, they can integrate design literacy into their curricula. Computer-science and data-science programs should treat questions of autonomy, bias, and accountability as fundamental, rather than optional.
None of this will be easy. Employers will argue that strict rules on AI hiring discrimination slow down recruitment and hurt competitiveness. Vendors will caution that revealing details of their systems exposes trade secrets. Some policymakers will worry that strong liability rules could drive innovation elsewhere. These concerns deserve consideration, but they should not dominate the agenda. Studies of AI in recruitment already show that well-designed systems can promote fairness when built on diverse, high-quality data and linked to clear accountability. Public trust, especially among marginalized groups, will increase when people see that there are ways to contest decisions, independent audits of outcomes, and genuine consequences when tools fail. In the long run, hiring systems that protect autonomy and dignity are more likely to attract talent than those that treat applicants as mere data points.
The initial numbers show why this debate cannot wait. Most recruiters use AI to help make decisions, while most adults say they would prefer to walk away rather than apply through an AI-driven process. Something fundamental has broken in the social contract around work. The evidence showing how biased recommendations can skew human choices to extremes emphasizes that leaving design unchecked will not only maintain existing inequalities; it will exacerbate them. For education systems, the stakes are even higher. They are preparing the next generation of designers and decision-makers while depending on systems that could shut those students out. The answer is not to ban AI from hiring. Instead, we must recognize that every “autonomous” system is a series of human decisions in disguise. Policy should make that chain visible, traceable, and accountable. Only then can we move from a landscape of AI hiring discrimination to one that genuinely broadens human autonomy.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Aptahire. (2025). Top 10 reasons why AI hiring is improving talent acquisition in education.
Brookings Institution. (2025). AI’s threat to individual autonomy in hiring decisions.
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment. Humanities and Social Sciences Communications.
DemandSage. (2025). AI recruitment statistics for 2025.
Equal Employment Opportunity Commission. (2023). Assessing adverse impact in software, algorithms, and artificial intelligence used in employment selection procedures under Title VII of the Civil Rights Act of 1964.
Equal Employment Opportunity Commission. (2024). What is the EEOC’s role in AI?
European Commission. (2024). AI Act: Regulatory framework on artificial intelligence.
European Union Agency for Fundamental Rights. (2022). Bias in algorithms – Artificial intelligence and discrimination.
Harvard Business Publishing Education. (2025). What AI-based hiring means for your students.
Hertie School. (2024). The threat to human autonomy in AI systems is a design problem.
Milne, S. (2024). AI tools show biases in ranking job applicants’ names according to perceived race and gender. University of Washington News.
Taylor, J. (2025). People interviewed by AI for jobs face discrimination risks, Australian study warns. The Guardian.
The Guardian. (2025). Teach First job applicants will get in-person interviews after more apply using AI.
Thomson Reuters. (2021). New study finds AI-enabled anti-Black bias in recruiting.
TPP Recruitment. (2024). How is AI impacting higher education recruitment?
TeachAway. (2025). AI tools that are changing how international schools hire teachers.
U.Va. Sloane Lab. (2025). Talent acquisition and recruiting AI index.
Wilson, K., & Caliskan, A. (2025). Gender, race, and intersectional bias in AI resume screening via language model retrieval.