
AI Resume Verification and the End of Blind Trust in CVs


Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.


AI resume verification is now essential for hiring
Verified records make algorithmic screening fair and transparent
Without them, AI quietly decides who gets work

Recent surveys show that about two-thirds of U.S. workers admit they have lied on a resume at least once or would consider doing so. At the same time, AI is becoming the primary filter for job applications. A recent study suggests that nearly half of hiring managers already use AI to screen resumes, a share expected to rise to 83% by the end of 2025, and another survey found that 96% of hiring experts use AI at some point in the recruiting process, mainly for resume screening and review. This creates a trust problem: self-reported claims, often overstated or false, are evaluated by algorithms with little transparency. AI resume verification matters because the old assumption that an experienced human would catch the lies no longer holds for most applicants. If AI decides who gets seen by a human, the information it assesses must be testable and verifiable, and it should be treated as essential labor-market infrastructure, not just an add-on in applicant tracking software. Framed that way, verification is also where AI can genuinely improve fairness and earn the trust of candidates, employers, and regulators.

Why AI Resume Verification Has Become Unavoidable

The push to automate screening is not just a trend. Many employers in white-collar roles now receive hundreds of applications for each position. Some applicant tracking systems report more than 200 applications per job, about three times the pre-2017 level. With this flood of applications, human screeners cannot read every CV, much less investigate every claim. AI tools are used to filter, score, and often reject candidates before a human ever sees their profile. One research review projects that by the end of 2025, 83% of companies will use AI to review resumes, up from 48% today. A growing number of businesses are becoming comfortable allowing AI to reject applicants at various stages. AI resume verification is now essential because the primary hiring process has shifted to rely on algorithmic gatekeepers, even when organizations recognize that these systems can be biased or inaccurate.

Figure 1: AI resume screening is shifting from exception to default, with adoption rising from under half of employers today to more than four in five by 2025.

The issue lies in the data entering these systems, which is often unreliable. One study estimates that around 64% of U.S. workers have lied about personal details, skills, experience, or references on a resume at least once. Another survey indicates that about 70% of applicants have lied or would consider lying to enhance their chances. References, once seen as a safeguard, are also questionable. One poll revealed that a quarter of workers admit to lying about their references, with some using friends, family, or even paid services to impersonate former supervisors. Additionally, recruiters note a rise in AI-generated resumes and deepfake candidates in remote interviews. Vendors are now promoting AI fraud-detection tools that claim to reduce hiring-related fraud by up to 85%. However, these tools are layered on top of systems still reliant on unverified self-reports. Therefore, AI resume verification is not just another feature; it is the essential backbone of a hiring system that is already automated but lacks reliability. Being explicit about how verification works, and who can audit it, is also what will reassure policymakers and educators that the system deserves their trust.

Designing AI Resume Verification Around Verified Data

A typical response to this issue is to focus on more intelligent algorithms. The idea is that current AI screening tools will be retrained to spot inconsistencies and statistical red flags in resumes. However, recent research indicates that more complex models are not always more trustworthy. A study highlighted by a leading employment law firm found that popular AI tools favored resumes with names commonly associated with white individuals 85% of the time, with a slight bias for male-associated names. Analyses of AI hiring systems also reveal explicit age and gender biases, even when protected characteristics are not explicitly included. If AI learns from historical hiring data and text patterns that already include discrimination and exaggeration, simply adding more parameters won’t address the underlying issues. AI resume verification cannot rely solely on clever pattern recognition or self-reported narratives; it must be based on verified data from the source.

Figure 2: Across major surveys, between a quarter and around two-thirds of candidates admit to lying on their resumes, underlining how fragile self-reported data is for AI screening.

This leads to a second approach: obtaining better data, not just making the model more complex. AI resume verification means connecting algorithms to verified employment and education records that can be checked automatically. The rapid growth of the background-screening industry shows both the demand and existing challenges. One market analysis estimates that the global background-screening market was about USD 3.2 billion in 2023 and is expected to more than double to USD 7.4 billion by 2030. Currently, these checks are slow, manual, and conducted late in the hiring process. In an AI resume verification system, verified information would be prioritized. Employment histories, degrees, major certifications, and key licenses would exist as portable, cryptographically signed records that candidates control but cannot change, allowing platforms to query them via standard APIs. AI would then rank candidates primarily based on these verifiable signals (skills, tenure, and, where appropriate, performance ratings), while treating unverified claims and free-text narratives as secondary. A full return to human-only screening is impractical at current application volumes. The real future lies in establishing standards for verified data that platforms, employers, and AI developers can build on responsibly.
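
To make this concrete, the sketch below shows what a portable, cryptographically signed record could look like at the smallest scale: an issuer signs a structured employment record once, and a hiring platform verifies the signature before letting the record feed a ranking model. This is a minimal illustration, assuming the Python `cryptography` package for Ed25519 signatures; the field names and record layout are hypothetical, not an existing standard or product.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The issuer (a former employer, university, or licensing body) signs a
# structured record once; the candidate carries it but cannot alter it.
issuer_key = ed25519.Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

record = {
    "type": "EmploymentRecord",   # illustrative field names, not a standard
    "holder": "candidate-123",
    "employer": "Example Corp",
    "role": "Data Analyst",
    "start": "2021-03",
    "end": "2024-06",
}
payload = json.dumps(record, sort_keys=True).encode()
signature = issuer_key.sign(payload)

def is_verified(rec: dict, sig: bytes, public_key) -> bool:
    """Accept the record only if the issuer's signature matches its contents."""
    try:
        public_key.verify(sig, json.dumps(rec, sort_keys=True).encode())
        return True
    except InvalidSignature:
        return False

# A hiring platform checks the signature before the record feeds any ranking model.
print(is_verified(record, signature, issuer_public_key))   # True
record["role"] = "Chief Data Officer"                       # tampered claim
print(is_verified(record, signature, issuer_public_key))   # False
```

The design point is that tampering with any field invalidates the signature, which is what allows screening systems to weight such records above free-text claims.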

AI Resume Verification and the Future of Education Records

Once hiring moves in this direction, education systems become part of the equation. Degrees, short courses, micro-credentials, and workplace learning all contribute to resumes. If AI resume verification becomes standard in hiring, the formats and standards of educational records will influence who stands out to employers. However, current educational data is often fragmented and unclear. The OECD’s Skills Outlook 2023 notes that the pace of digital and green transitions is outpacing the capacity of education and skills policies, and that too few adults engage in the formal or informal learning needed to keep up. When learners take short courses or stackable credentials, the results are typically recorded as PDFs or informal badges that many systems cannot reliably interpret. This creates another trust gap: AI may see a degree title and a course name, but it does not capture the skills or performance behind them.

AI resume verification can help fix this, but only if educators take action. Universities, colleges, and training providers can issue digital credentials that are both easy to read and machine-verifiable. These would be structured statements of skills linked to assessments, signed by the institution, and revocable if found to be fraudulent later. Large learning platforms and professional organizations can do the same for bootcamps, MOOCs, and continuing education. Labor-market institutions are already concerned that workers in non-standard or platform jobs lack clear records of their experience. Recent work by the OECD and ILO on measuring platform work shows how some of this labor remains invisible. If these workers could maintain verified records of hours worked, roles held, and ratings received, AI resume verification could highlight their histories rather than overlook them. For education leaders, the challenge is clear: transcripts and certificates must evolve from static documents into living, portable data assets that support both human decisions and AI-driven screening.
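
As a rough illustration of what a structured, revocable skill credential might contain, the sketch below uses hypothetical field names and a hypothetical issuer revocation list; real deployments would follow an open credential standard, and would carry an institutional signature as in the earlier sketch, rather than this ad-hoc layout.

```python
from datetime import date

# Hypothetical machine-readable skill credential issued by a university.
credential = {
    "id": "cred-2025-0042",
    "issuer": "Example University",
    "holder": "candidate-123",
    "skill": "applied data analysis",
    "assessment": {"type": "capstone project", "score": 0.86},
    "issued": "2025-06-30",
}

# Identifiers the issuer has withdrawn, published via its own registry or API.
revoked_ids = {"cred-2024-0007"}

def is_usable(cred: dict, revoked: set, today: date) -> bool:
    """Accept the credential only if it has not been revoked and is not post-dated."""
    return cred["id"] not in revoked and date.fromisoformat(cred["issued"]) <= today

print(is_usable(credential, revoked_ids, date(2025, 12, 1)))  # True
```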

Governing AI Resume Verification Before It Governs Us

Having trustworthy data alone will not make AI resume verification fair. People have good reasons to be cautious about AI in hiring. A 2023 Pew Research Center survey found that most Americans expect AI to significantly impact workers, but more believe it will harm workers overall. Recent research in Australia illustrates this concern: an independent study showed that AI interview tools mis-transcribe and misinterpret candidates with strong accents or speech-affecting disabilities, with error rates reaching 22% for some non-native speakers. OECD surveys also indicate that outcomes are better when AI use is combined with consulting and training for workers, not just technology purchases. Therefore, AI resume verification needs governance guidelines alongside technical standards. Candidates should know when AI is used, what verified data is being evaluated, and how to challenge mistakes or outdated information. Regulators can mandate regular bias audits of AI systems, especially when using shared employment and education records that act as public infrastructure.

For educators and labor-market policymakers, the goal is to shape AI resume verification before it quietly determines opportunity for a whole generation. Some immediate priorities are clear. Employers can ensure that automated decisions to reject candidates are never based solely on unverifiable claims. Any negative flags from AI resume verification, such as suspected fraud or inconsistencies, should lead to human review rather than automatic exclusion. Vendors can be required to keep the verification layer separate from the scoring layer, preventing bias in ranking models from affecting the integrity of shared records. Education authorities can support the development of open, interoperable standards for verifiable credentials and employment records, making their use a condition for public funding. Over time, AI resume verification can support fairer hiring, more precise targeting of financial aid, better matching of workers to retraining, and improved measurement of the benefits from different types of learning.

If two-thirds of workers admit to lying or would lie on a resume, and over four-fifths of employers are moving towards AI screening, maintaining the current system is not feasible. A choice is being made by default: unverified text, filtered by unregulated algorithms, quietly decides who gets an interview and who disappears. The solution is to view AI resume verification as a fundamental public good. This means developing verifiable records of learning and work that can move with people across sectors and borders, designing AI systems that rely on these records instead of guesswork, and enforcing rules to keep human judgment involved where it matters most. For educators, administrators, and policymakers, the urgency to take action is apparent. They must help create an ecosystem in which AI resume verification makes skills and experiences more visible, especially for those outside elite networks. Otherwise, they must accept a future where automated hiring reinforces existing biases. The technology is already available; what is lacking is the commitment to use it to build trust.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Business Insider. (2025, July). Companies are relying on aptitude and personality tests more to combat AI-powered job hunters. Business Insider.
Codica. (2025, April 1). How to use AI for fraud detection and candidate verification in hiring platforms. Codica.
Fisher Phillips. (2024, November 11). New study shows AI resume screeners prefer white male candidates. Fisher & Phillips LLP.
HRO Today. (2023). Over half of employees report lying on resumes. HRO Today.
iSmartRecruit. (2025, November 13). Stop recruiting scams: Use AI to identify fake candidates. iSmartRecruit.
OECD. (2023a). The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers. OECD Publishing.
OECD. (2023b). OECD skills outlook 2023. OECD Publishing.
OECD & ILO. (2023). Handbook on measuring digital platform employment and work. OECD Publishing / International Labour Organization.
Pew Research Center. (2023, April 20). AI in hiring and evaluating workers: What Americans think. Pew Research Center.
StandOut-CV. (2025, March 25). How many people lie on their resume to get a job? StandOut-CV.
The Interview Guys. (2025, October 15). 83% of companies will use AI resume screening by 2025 (despite 67% acknowledging bias concerns). The Interview Guys.
Time. (2025, August 4). When your job interviewer isn’t human. Time Magazine.
UNC at Chapel Hill. (2024, June 28). The truth about lying on resumes. University of North Carolina Policy Office.
Verified Market Research. (2024). Background screening market size and forecast. Verified Market Research.


AI Hiring Discrimination Is a Design Choice, Not an Accident


Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.


AI hiring discrimination comes from human design choices, not neutral machines
“Autonomous” systems let organizations hide responsibility while deepening bias
Education institutions must demand audited, accountable AI hiring tools that protect fair opportunity

Two numbers highlight the troubling moment we are in. By 2025, more than 65% of recruiters say they have used AI to hire people. However, 66% of adults in the United States say they would not apply for a job if AI were used to decide who gets hired. The hiring process is quietly becoming a black box that many people, especially those already facing bias, are avoiding. At the same time, recent studies reveal how strongly biased tools can influence human judgment. In one extensive study, people who were shown racially biased AI recommendations followed them: they chose majority-white or majority-non-white candidates over 90% of the time, depending on which group the AI favored, whereas without AI their choices were nearly fifty-fifty. The lesson is clear: AI hiring discrimination is not a side effect of neutral automation. It directly results from how systems are designed, managed, and governed.

AI hiring discrimination as a design problem

Much of the public debate still views AI hiring discrimination as an unfortunate glitch in otherwise efficient systems. That viewpoint is comforting, but it is incorrect. Bias in automated screening is not a ghost in the machine. It stems from the data designers choose, the goals they set, and the guidelines they ignore. Recent research on résumé-screening models emphasizes this point. One university study found that AI tools ranked names associated with white candidates ahead of others 85% of the time, even when the underlying credentials were the same. Another large experiment with language-model-based hiring tools showed that leading systems favored female candidates overall while disadvantaging Black male candidates with identical profiles. These patterns are not random noise. They demonstrate that design choices embed old hierarchies into new code, shifting discrimination from the interview room into the hiring infrastructure.

Seen from another angle, autonomy is not a feature of the AI system; it is a story people tell about it. Research on autonomy and AI argues that threats to human choice arise less from mythical “self-aware” systems and more from unclear, poorly governed design. When we label a hiring tool as “autonomous,” we let everyone in the chain disown responsibility for its outcomes. Yet the same experimental evidence that raises concerns about autonomy also shows that people follow biased recommendations with startling compliance. When a racially biased assistant favored white candidates for high-status roles, human screeners chose majority white shortlists more than 90% of the time. When the assistant favored non-white candidates, they switched to majority non-white lists at almost the same rate. The algorithm did not erase human agency; it subtly redirected it.

Figure 1: Recruiters are embracing AI hiring tools at almost the same rate that the public says they would avoid AI-screened jobs, showing that AI hiring discrimination is a design and trust crisis, not just a technical issue.

When autonomy becomes a shield for AI hiring discrimination

This redirection has real legal and social consequences. Anti-discrimination laws in the United States already treat hiring tests and scoring tools as “selection procedures,” regardless of how they are implemented. In 2023, the Equal Employment Opportunity Commission issued guidance confirming that AI-driven assessments fall under Title VII's disparate impact rules. Employers cannot hide behind a vendor or a model. If a screening system filters out protected groups at higher rates and the employer cannot justify the practice, the employer is liable. The EEOC’s first settlement in an AI hiring discrimination case illustrates this point. A tutoring company that used an algorithmic filter to exclude older applicants agreed to pay compensation and change its practices, even though the tool itself appeared simple. The message to the market is clear: AI hiring discrimination is not an accident in the eyes of the law. It is a form of design-mediated bias.
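
For readers unfamiliar with how disparate impact is usually measured, the sketch below applies the common four-fifths (80%) selection-rate guideline to hypothetical screening outcomes. The groups, counts, and cutoff handling are illustrative only; real audits add statistical testing and legal judgment on top of this simple ratio.

```python
from collections import Counter

# Hypothetical screening outcomes: (demographic group, passed the AI screen?)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
selected = Counter(group for group, passed in outcomes if passed)
rates = {group: selected[group] / applied[group] for group in applied}

# Compare each group's selection rate with the highest-rate group; ratios below
# 0.8 are commonly treated as a signal of possible adverse impact.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```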

Figure 2: In recent resume-screening experiments, people chose candidates from all racial groups at roughly 50–50 rates without AI, but when they worked with a severely biased AI recommender, their decisions matched the AI’s preferred group around 90 percent of the time, showing how design choices in AI can override human autonomy.

Yet the rhetoric of autonomy continues to dull the moral implications. When companies say “the algorithm made the decision,” they imply that no human intent was involved and thus bear less blame. This matters for people who already carry the burden of structural bias. If Black or disabled candidates learn that automated video interviews mis-transcribe non-standard accents, with error rates of up to 22% for some groups, many will simply choose not to apply. If students from low-income backgrounds hear that employers use unclear resume filters trained on past “top performers,” they may expect their non-traditional paths to count against them. Surveys indicate that two-thirds of U.S. adults would avoid applying for roles that rely on AI in hiring. Opting out is a rational choice for self-protection. Still, it also pushes people away from mainstream opportunities, undermining decades of effort to increase access.

AI hiring discrimination in education and early careers

These issues are not limited to corporate hiring. Education systems are now at the center of AI hiring discrimination, both as employers and as entry points for students. Universities and school networks increasingly use AI-enabled applicant tracking systems to screen staff and faculty. Recruitment firms specializing in higher education promote automated resume ranking, predictive scoring, and chatbot pre-screening as standard features. In international school recruitment, AI tools are marketed as a way to sift through global teacher pools and reduce time-to-hire. The promise is speed and efficiency, but the risk is quietly excluding the diverse voices that education claims to value. When an algorithm learns from past hiring data, in which certain nationalities, genders, or universities dominate, it can replicate those patterns at scale unless designers intervene.

AI hiring discrimination also affects students long before they enter the job market. Large educational employers and graduate programs are experimenting with AI-based games, video interviews, and written assessments. One Australian study found that hiring AI struggled to handle diverse accents, with speech recognition error rates of 12 to 22% for some non-native English speakers. These errors can affect scores, even when the answers' content is strong. Meanwhile, teacher-training programs like Teach First have had to rethink their recruitment because many applicants now use AI to generate application essays. The hiring pipeline is filled with AI on both sides: candidates rely on it to write, and institutions rely on it to evaluate. Without clear safeguards, that cycle can eliminate nuance, context, and individuality, particularly for first-generation students and international graduates who do not fit the patterns in the training data.

Educators cannot consider this someone else's problem. Career services are now routinely coaching students on how to navigate AI-driven hiring. Business schools are briefing students on what AI-based screening means for job seekers and how to prepare for it. At the same time, higher education recruiters are also adopting AI to shortlist staff and adjunct faculty. This dual exposure means that universities influence both sides of the AI hiring discrimination issue. They help create the systems and send students into them. This position carries a special responsibility. Institutions that claim to support equity cannot outsource hiring to unclear tools, then express surprise when appointment lists and fellowships reflect familiar lines of race, gender, and class.

Designing accountable AI hiring systems

If AI hiring discrimination is a design problem, then design is where policy must take action. The first step is to change the default narrative. AI systems should be treated as supportive tools within a human-led hiring framework, not as autonomous decision-makers. Legal trends are moving in this direction. New state-level laws in the United States, such as recent statutes in Colorado and elsewhere, treat both intentional and unintentional algorithmic harms as subjects of regulation rather than merely technical adjustments. In Europe, anti-discrimination and data-protection rules, along with the emerging AI Act, place clear duties on users to test high-risk systems and document their impacts. The core idea is straightforward: if a tool screens people out of jobs or education, its operators must explain how it works, measure who it harms, and address those harms in practice, not just in code.

Education systems can move faster than general regulators. Universities and school networks can require any AI hiring or admissions tool they use to undergo regular fairness audits, with results reported to governance bodies that include both student and staff representatives. They can prohibit fully automated rejections for teaching roles, scholarships, and student jobs, insisting that a human decision-maker review any adverse outcome from a high-risk model. They can enforce procurement rules that reject systems whose vendors will not disclose essential information about training data, evaluation, and error rates across demographic groups. Most importantly, they can integrate design literacy into their curricula. Computer-science and data-science programs should treat questions of autonomy, bias, and accountability as fundamental, rather than optional.

None of this will be easy. Employers will argue that strict rules on AI hiring discrimination slow down recruitment and hurt competitiveness. Vendors will caution that revealing details of their systems exposes trade secrets. Some policymakers will worry that strong liability rules could drive innovation elsewhere. These concerns deserve consideration, but they should not dominate the agenda. Studies of AI in recruitment already show that well-designed systems can promote fairness when built on diverse, high-quality data and linked to clear accountability. Public trust, especially among marginalized groups, will increase when people see that there are ways to contest decisions, independent audits of outcomes, and genuine consequences when tools fail. In the long run, hiring systems that protect autonomy and dignity are more likely to attract talent than those that treat applicants as mere data points.

The initial numbers show why this debate cannot wait. Most recruiters use AI to help make decisions, while most adults say they would prefer to walk away rather than apply through an AI-driven process. Something fundamental has broken in the social contract around work. The evidence showing how biased recommendations can skew human choices to extremes emphasizes that leaving design unchecked will not only maintain existing inequalities; it will exacerbate them. For education systems, the stakes are even higher. They are preparing the next generation of designers and decision-makers while depending on systems that could shut those students out. The answer is not to ban AI from hiring. Instead, we must recognize that every “autonomous” system is a series of human decisions in disguise. Policy should make that chain visible, traceable, and accountable. Only then can we move from a landscape of AI hiring discrimination to one that genuinely broadens human autonomy.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Aptahire. (2025). Top 10 reasons why AI hiring is improving talent acquisition in education.
Brookings Institution. (2025). AI’s threat to individual autonomy in hiring decisions.
Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment. Humanities and Social Sciences Communications.
DemandSage. (2025). AI recruitment statistics for 2025.
Equal Employment Opportunity Commission. (2023). Assessing adverse impact in software, algorithms, and artificial intelligence used in employment selection procedures under Title VII of the Civil Rights Act of 1964.
Equal Employment Opportunity Commission. (2024). What is the EEOC’s role in AI?
European Commission. (2024). AI Act: Regulatory framework on artificial intelligence.
European Union Agency for Fundamental Rights. (2022). Bias in algorithms – Artificial intelligence and discrimination.
Harvard Business Publishing Education. (2025). What AI-based hiring means for your students.
Hertie School. (2024). The threat to human autonomy in AI systems is a design problem.
Milne, S. (2024). AI tools show biases in ranking job applicants’ names according to perceived race and gender. University of Washington News.
Taylor, J. (2025). People interviewed by AI for jobs face discrimination risks, Australian study warns. The Guardian.
The Guardian. (2025). Teach First job applicants will get in-person interviews after more apply using AI.
Thomson Reuters. (2021). New study finds AI-enabled anti-Black bias in recruiting.
TPP Recruitment. (2024). How is AI impacting higher education recruitment?
TeachAway. (2025). AI tools that are changing how international schools hire teachers.
U.Va. Sloane Lab. (2025). Talent acquisition and recruiting AI index.
Wilson, K., & Caliskan, A. (2025). Gender, race, and intersectional bias in AI resume screening via language model retrieval.


Beyond the 40 Jobs at Risk: AI Job Market Data Educators Can Actually Use


Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.


AI risk lists miss real hiring patterns
Job data shows specific roles shrinking and new AI roles growing
Education and policy must follow real vacancy data, not myths

Every week, new headlines warn that artificial intelligence could disrupt our careers. One Microsoft-backed analysis of workplace chats indicates that for specific roles, up to 90% of daily tasks could potentially be performed by AI tools, with interpreters, historians, and writers at the highest risk. However, AI job market data presents a different picture. An analysis of 180 million recent job postings found that overall demand decreased by a modest 8% year over year, while postings for machine learning engineers increased by 40% following a 78% rise the previous year. Meanwhile, fewer than 1% of advertised roles explicitly require generative AI skills, despite those skills offering higher pay and appearing more frequently on resumes. For educators and policymakers, the clear message is that AI job market data already reveals where work is changing, and it often does not align neatly with the commonly circulated "40 jobs at risk" lists.

AI Job Market Data vs. Headline Risk Lists

The risk lists currently circulating are based on clever but abstract models. The Microsoft study that supports one "40 jobs most at risk" article examined how well AI could perform tasks in different occupations. It produced scores that placed translators, writers, data scientists, and customer service representatives at the top, with some roles deemed 90% automatable. In contrast, manual or relational jobs such as roofing, cleaning, and nursing support rank much lower, with only a small share of their tasks exposed. A separate economic study, using two centuries of patent and job data, reaches a similar conclusion: unlike past automation waves that primarily affected low-paid, low-education jobs, AI is expected to affect more well-paid, degree-intensive, often female-dominated fields.

These models are significant as they help identify which types of thinking and communication overlap most with AI's current strengths in pattern recognition, text generation, and number analysis. They also reveal a crucial structural risk: AI may increase pressure on knowledge workers rather than on routine manual staff, reversing the usual trend of technological disruption. However, these models are fundamentally thought experiments. They treat each occupation as an average set of tasks, assume rapid adoption of tools, and do not fully account for budgets, regulations, and managerial decisions that shape hiring in real companies. They indicate where automation could occur, while AI job market data, derived from tens or hundreds of millions of postings, illustrates where employers are actually focusing their efforts today.

What AI Job Market Data Reveals About Real Demand

The 8% decrease in job postings identified in a recent analysis of 180 million global vacancies establishes the context: labor demand is cooling, but not collapsing. Within this context, specific roles at risk of being replaced by AI stand out. Postings for computer graphic artists dropped by 33% in 2025, while those for photographers and writers fell by 28%, following similar declines the year prior. Jobs for journalists and reporters decreased by 22%, and public relations specialists saw a 21% decline. These are not vague categories; they represent the execution end of creative work, where tasks often involve producing large volumes of images or text based on a brief. The data indicates that when AI reduces the cost of that output, employers require fewer entry-level workers, even as they continue to hire designers and product teams engaged in research, client interaction, and strategic decisions.

The same dataset reveals a second, less expected cluster of losses in regulatory and sustainability roles. Postings for corporate compliance specialists fell by around 29%, while sustainability specialists saw a 28% decline, closely followed by environmental technicians. The drop occurs across all job grades. Sustainability managers and directors decreased by more than 30%, and chief compliance officer roles fell by over a third. Here, AI is not the main factor. Instead, political backlash against environmental, social, and governance rules, along with changing enforcement priorities, appears to be influencing this pullback. The lesson is uncomfortable yet crucial: some of the most significant job losses in the current AI era arise not from automation but from policy decisions that make entire areas of expertise optional. No exposure index based solely on technical abilities can reflect that reality.

Figure 1: Creative executors, compliance staff and medical scribes are shrinking much faster than the overall job market, while most other roles move only slightly.

Healthcare provides a clearer view of AI automation in practice. Jobs for medical scribes, who listen to clinical encounters and create structured notes, have dropped by 20%, even as similar healthcare administrative roles remain relatively stable. Medical coders show no significant change, while medical assistants are slightly below the broader market. This trend aligns with the use of AI-powered documentation tools in consultation rooms to transcribe doctor-patient conversations and generate clinical notes almost instantly. Nevertheless, even in this case, the data indicate a limited scope of substitution rather than a widespread wave of job losses. A specific documentation task has become more cost-effective, while the broader team supporting patients remains intact.

On the demand side, AI job market data presents a contrary trend to the alarming headlines. Postings for machine learning engineers surged by 40% in 2025 after a 78% increase in 2024, making it the fastest-growing job title in the 180-million-job dataset. Robotics engineers, research scientists, and data center engineers also experienced growth in the high single digits. Senior leadership jobs decreased only 1.7% compared to the 8% market baseline, while middle management roles fell 5.7% and individual contributor jobs declined 9%. In marketing, most roles mirrored the overall market. Still, postings for influencer marketing specialists increased by 18.3%, on top of a 10% rise the previous year, signaling significant demand for trusted human figures in a landscape filled with AI-generated content.

New research from Indeed, summarized in an AI skill transformation index, further supports this view. By examining nearly 3,000 work skills in job postings from May 2024 to April 2025, the team estimates that 26% of jobs will undergo significant changes due to AI, 54% will see moderate changes, and 20% will have low exposure to AI. Yet only 0.7% of skills are considered fully replaceable today. Software development, data and analytics, accounting, marketing, and administrative assistance rank among the most affected groups, with two-thirds to four-fifths of their tasks potentially being supported by AI. Still, job postings in many of these categories remain stable or evolve, indicating that employers are redesigning roles instead of eliminating them. When AI job market data and exposure models are analyzed together, the consistent message leans more toward "significant task reshuffling" rather than "mass unemployment."

Figure 2: Indeed’s AI at Work data shows 80% of jobs undergoing moderate or major change, yet less than 1% of skills can be fully automated today.

Using AI Job Market Data to Redesign Education and Policy

For educators, the main risk is not that AI will render entire degrees obsolete overnight. The real danger lies in curricula continuing to follow outdated job titles while the underlying tasks evolve. AI job market data already illustrates that within the same broad field, some roles are declining while others are expanding. In creative industries, execution roles centered on producing content to a brief face pressure. At the same time, strategy, research, and client-facing design work tend to be more robust. In data and software, routine coding and reporting tasks are increasingly performed by tools, while higher-level architecture, problem framing, and governance gain value. Educators who still define careers solely as "graphic designer" or "software developer" risk steering students toward aspects of those jobs that are already being automated.

A more practical approach begins with the signals of skills in job postings. Although ads explicitly seeking generative AI skills remain a small portion of the total, demand for AI-related skills has surged. A study of job vacancies in the UK finds that jobs requiring AI skills pay about 23% more than similar positions without those skills. LinkedIn also reports a 65% year-on-year increase in members listing AI skills. Furthermore, global surveys indicate that AI competence is now a key expectation for nearly half of the positions employers are hiring for. For universities, colleges, and online providers, this implies two responsibilities. First, they must incorporate practical AI literacy—how to use tools to draft, analyze, and prototype—into most degree programs, not just those in computer science. Second, they need to teach students how to interpret AI job market data on their own, helping them parse postings, understand which skills are bundled, and identify where tasks are shifting within their chosen fields.

Administrators and quality assurance teams can also leverage AI job market data to rethink how programs are evaluated. Instead of primarily relying on long-term employment statistics linked to broad occupational codes, they can monitor live vacancy data in collaboration with job platforms and labor market analytics firms. When postings for execution-focused creative roles decline significantly over two years, while postings for creative directors and product designers remain stable, this should prompt a review of how design courses allocate time between production software and client-facing research or strategy. When machine learning engineers, data center engineers, and robotics specialists all experience substantial growth, technical programs should adjust prerequisites, lab time, and final projects to enable students to practice directing and debugging AI systems rather than just manual coding.
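
A minimal sketch of that kind of vacancy monitoring, assuming pandas and hypothetical posting counts: compute the year-over-year change per role and compare it with the overall market baseline, so curriculum reviews are triggered by data rather than anecdote.

```python
import pandas as pd

# Hypothetical posting counts per role and year (in practice, pulled from a
# job-platform or labor-market analytics feed).
postings = pd.DataFrame({
    "role":  ["graphic artist", "graphic artist", "ml engineer", "ml engineer"],
    "year":  [2024, 2025, 2024, 2025],
    "count": [12000, 8000, 9000, 12600],
})

# Year-over-year change per role.
by_role = postings.pivot_table(index="role", columns="year", values="count")
by_role["yoy_change"] = (by_role[2025] - by_role[2024]) / by_role[2024]

# Overall market baseline, so role-level moves can be read relative to it.
totals = postings.groupby("year")["count"].sum()
market_baseline = totals[2025] / totals[2024] - 1
by_role["vs_market"] = by_role["yoy_change"] - market_baseline

print(by_role.sort_values("vs_market"))
```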

For policymakers, the contrast between theoretical exposure lists and AI job market data serves as a caution against broad, one-size-fits-all narratives. If regulatory and sustainability roles are declining because of political decisions rather than technical obsolescence, workers in those fields require a different support package than those in areas where AI is clearly reducing demand for specific tasks. Career-change grants, regional transition funds, and public-sector hiring guidelines should align with real vacancy trends rather than abstract rankings of which jobs are "exposed" to AI. At the same time, growth in high-skill AI infrastructure roles suggests that investments in advanced training must complement industrial policy regarding data centers, cloud infrastructure, and robotics so local education systems can fill the jobs created by capital spending.

The education system also plays a key role in helping new workers interpret the confusing landscape of alarming forecasts and optimistic stories. Students now encounter lists of "40 jobs at risk," projections that a quarter of jobs will be "highly transformed," and examples of medical scribes or junior copywriters losing work to AI tools. Without proper direction, the instinctive response can lead to paralysis. Programs that present AI job market data to learners—showing which roles in their fields are shrinking, which are growing, and which skills command higher pay—can help ground those fears in reality. They can also highlight a recurring pattern in the data: the jobs that withstand these changes are those where humans provide direction, exercise judgment, build trust, and interpret outputs that AI alone cannot safely manage.

In this context, the most helpful question for educators and policymakers is no longer "Which 40 jobs are most at risk?" but rather "Which human-centered tasks does AI increase in value, and how do we train for those?" The current wave of AI job market data provides early answers. It indicates declining demand for repetitive execution, uncertain and politically influenced signals in specific regulatory areas, and significant growth in roles related to design, governance, and infrastructure for building and supervising AI systems. It shows that many jobs will be redefined rather than eliminated, and that skills in directing, critiquing, and contextualizing AI outputs are already commanding higher wages. If institutions treat this evidence as a living syllabus—reviewed each year, openly discussed, and translated into course design—they can move past hypothetical lists and support learners in navigating the jobs AI is actively reshaping today, rather than those it might replace in the future.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bloomberry / Chiu, H. W. (2025, November 14). I analyzed 180M jobs to see what jobs AI is actually replacing today. Bloomberry.
Brookings Institution / Eberly, J. C., Kinder, M., Papanikolaou, D., Schmidt, L. D. W., & Steinsson, J. (2025, November 20). What jobs will be most affected by AI? Brookings.
FinalRoundAI / Saini, K. (2025, September 23). 30 Jobs Most Impacted by AI in 2025, According to Indeed Research. FinalRoundAI.
Indeed Hiring Lab / Recruit Holdings. (2023, December 15). Webinar: Indeed Hiring Lab – Labor Market Insights [Transcript].
Sky News. (2025). The 40 jobs "most at risk" from AI – and 40 it can't touch. Sky News Money.
Tech23 / Dave. (2025, October 13). The 40 Jobs Most at Risk from AI…and What That Means for the Rest of Us. Tech23 Recruitment Blog.
