
Governing at Machine Speed: How to Prevent AI Bank Runs from Becoming the Next Crisis

By Keith Lee

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

AI is accelerating bank-run risk into “AI bank runs”
Supervisors lag far behind banks in AI tools and skills
We need real-time oversight, automated crisis tools, and targeted training now

In March 2023, a mid-sized American bank experienced a day when depositors attempted to withdraw $42 billion. Most of this rush came from simple phone taps, not from long lines outside the bank. Although this scramble did not yet use fully autonomous systems, it demonstrated how quickly fear can spread when money moves as fast as a push notification. Now, imagine an added layer. Corporate treasurers use AI tools to track every rumor. Households rely on chatbots that adjust savings in real time. Trading engines react to each other's moves in milliseconds. The outcome is not a traditional bank run. Instead, it’s the potential for AI-driven bank runs, in which algorithms trigger, amplify, and execute waves of withdrawals and funding cuts long before human supervisors can act.

Why AI Bank Runs Change the Crisis Playbook

Systemic banking crises have always involved everyone rushing for the same exit at the same time. In the past, this coordination relied on rumors, news broadcasts, and human analysis of financial reports spread over hours or days. Today, AI systems can scan markets, social media, and private data streams in seconds. They update risk models and send orders almost immediately. The run that brought down Silicon Valley Bank was one example: customers attempted to withdraw tens of billions of dollars in a single day, highlighting the power of digital channels and online coordination. Future AI bank runs will shorten that timeline even more. Automated systems will learn to respond not only to general news but also to each other’s activities in markets and payment flows. Tools that smooth out minor fluctuations during calm periods can trigger sudden, synchronized movements during stressful periods.

The groundwork for AI bank runs is already established. In developing economies, the percentage of adults making or receiving digital payments grew from 35 percent in 2014 to 57 percent in 2021. In high-income economies, this percentage is nearly universal. Recent Global Findex updates show that 42 percent of adults made a digital payment to a merchant in 2024, up from 35 percent in 2021. At the same time, 86 percent of adults worldwide now own a mobile phone, and 68 percent have a smartphone. On the supply side, surveys conducted by European and national authorities indicate that a clear majority of banks currently use AI for tasks like credit scoring, fraud detection, and trading. This means that the technical ability for AI bank runs is in place today. Highly digital customers, near-instant payments, and widespread AI use among firms are all established. What is lacking is a supervisory framework that can monitor and influence these dynamics before they develop into full-blown crises.

What Emerging Markets Teach Us about AI Bank Runs

The weakest link in this situation is not the sophistication of private sector tools but the readiness of supervisory agencies. A recent survey of 27 financial sector authorities in emerging markets and developing economies shows that most anticipate AI will have a net positive effect. Yet their own use of AI for supervision remains limited. Many authorities are still in pilot mode for essential tasks such as data collection, anomaly detection, and off-site monitoring. Only about a quarter have formal internal policies on AI use. In parts of Africa, the percentage is even lower. The usual barriers are present. Data is often fragmented or incomplete. IT systems may be unreliable. Staff with the right technical skills are scarce. Concerns about data privacy, sovereignty, and vendor risk add further complications. Supervisors face the threat of AI bank runs with inadequate tools, incomplete data, and insufficient personnel.

In contrast, financial institutions in these same regions are advancing more quickly. The survey reveals that where AI is implemented, banks and firms focus on customer service chatbots, fraud detection, and anti-money laundering checks. African institutions are more inclined to use AI for credit scoring and underwriting in markets with limited data. These applications may seem low risk compared to high-frequency trading. However, they can create new pathways for a sudden loss of trust. A failure in an AI fraud detection system can freeze thousands of accounts simultaneously. A flawed credit model can halt lending in entire sectors overnight. The Financial Stability Board has noted that many authorities, including those in advanced economies, struggle to monitor AI-related vulnerabilities using current data-collection methods. Therefore, emerging markets serve as an early warning system. They illustrate how quickly a gap can widen between supervised firms and supervisory agencies. They also show how that gap can evolve into a serious vulnerability if not treated as an essential issue.

Figure 1: AI in emerging-market banks is concentrated in customer service, fraud/AML, and analytics, showing how fast private adoption is racing ahead of supervisory capacity.

Designing Concrete Defenses against AI Bank Runs

If AI bank runs pose a structural risk, then mere "AI-aware supervision" is insufficient. Authorities need a solid framework for prevention and response. The first building block is straightforward but often overlooked. Supervisors need real-time insight into where AI is used within large firms. Some authorities have begun asking banks to disclose whether they use AI in areas such as risk management, credit, and treasury. The results are inconsistent and hard to compare. A more serious method would treat high-impact AI systems like vital infrastructure. Banks would maintain a live register of models that influence liquidity management, deposit pricing, collateral valuation, and payment routing. This register should include key vendors and data sources. Supervisors could then see where similar models or vendors are used across institutions and where synchronized actions might occur under stress. This step is essential for any coherent response to AI bank runs.
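The register idea can be made concrete. A minimal sketch, assuming a hypothetical schema (the field names, vendors, and banks below are illustrative, not a proposed standard): each bank files structured records for its high-impact models, and the supervisor scans across registers for vendors shared by multiple institutions, which is exactly where synchronized behavior under stress could originate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRecord:
    """One entry in a hypothetical register of high-impact AI models."""
    model_id: str
    function: str       # e.g. "deposit pricing", "liquidity management"
    vendor: str
    data_sources: tuple

def shared_vendors(registers):
    """Map each vendor to the set of institutions relying on it,
    keeping only vendors used by more than one institution."""
    usage = {}
    for bank, records in registers.items():
        for rec in records:
            usage.setdefault(rec.vendor, set()).add(bank)
    return {v: banks for v, banks in usage.items() if len(banks) > 1}

registers = {
    "Bank A": [ModelRecord("m1", "deposit pricing", "VendorX", ("core-ledger",))],
    "Bank B": [ModelRecord("m2", "liquidity management", "VendorX", ("payments",))],
    "Bank C": [ModelRecord("m3", "collateral valuation", "VendorY", ("markets",))],
}
print(shared_vendors(registers))  # VendorX is shared by Bank A and Bank B
```

Even this toy version shows the payoff: concentration risk becomes a query, not a quarterly survey.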

The second building block is shared early warning systems that reflect, at least partially, the speed and complexity of private algorithms. Some central banks already conduct interactive stress tests in which firms adapt their strategies as conditions change. Groups of authorities could build on this idea and use methods such as federated learning to jointly train models on local data without sharing raw data. These supervisory models would monitor not just traditional indicators but also the behavior of high-impact AI systems across banks, payment providers, and large non-bank platforms. Signals from these models, combined with insights about vendor concentration, would enable authorities to detect when AI bank runs are developing. They wouldn’t simply observe deposit outflows after the fact.
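The federated idea is that each authority fits a model on its own data and shares only parameters, never raw records. A toy sketch under simplified assumptions, using a sample-weighted average of locally computed statistics as a stand-in for real model training (production federated learning would exchange gradients or weights over many rounds):

```python
def local_update(outflow_rates):
    """Each authority summarizes its own data; only (parameter, count) leaves."""
    return sum(outflow_rates) / len(outflow_rates), len(outflow_rates)

def federated_average(updates):
    """Aggregate local parameters weighted by sample count; raw data is never pooled."""
    total = sum(n for _, n in updates)
    return sum(p * n for p, n in updates) / total

# Toy per-bank outflow rates held privately by three authorities
authority_data = [[0.02, 0.03, 0.04], [0.05, 0.06], [0.01]]
updates = [local_update(d) for d in authority_data]
print(round(federated_average(updates), 3))  # 0.035, matching the pooled mean
```

The weighted aggregate equals what a pooled dataset would give, which is the core appeal: shared signal without shared data.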

The third building block is crisis infrastructure that can act quickly when early warnings appear. Authorities already use tools such as market circuit breakers and standing liquidity facilities. For AI bank runs, these tools must be redesigned with automated triggers. This could mean pre-authorized liquidity lines that activate when particular patterns of outflows are detected across multiple institutions. It might involve temporary restrictions on certain types of algorithmic orders or instant transfer products during extreme stress. None of this eliminates the need for human judgment. It simply acknowledges that by the time a crisis committee gathers, AI-driven actions may have already changed balance sheets. Without pre-set responses linked to well-defined metrics, supervisors will remain bystanders in events they should oversee.
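A pre-set trigger of the kind described might pair a per-institution outflow threshold with a breadth condition, so the facility activates only when stress is synchronized across firms rather than idiosyncratic to one. A hypothetical sketch (the thresholds and banks are illustrative, not calibrated policy parameters):

```python
def check_trigger(outflows, rate_threshold=0.05, breadth=3):
    """Fire only when at least `breadth` institutions breach the per-institution
    outflow-rate threshold in the same observation window."""
    breaching = [bank for bank, rate in outflows.items() if rate >= rate_threshold]
    return len(breaching) >= breadth, breaching

# Hypothetical one-hour deposit-outflow rates as a share of deposits
window = {"Bank A": 0.07, "Bank B": 0.06, "Bank C": 0.02, "Bank D": 0.08}
fired, breaching = check_trigger(window)
print(fired, sorted(breaching))  # True ['Bank A', 'Bank B', 'Bank D']
```

The point is not the arithmetic but the architecture: the rule is agreed, published internally, and evaluated continuously, so the liquidity response does not wait for a committee to convene.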

An Education Agenda for an Era of AI Bank Runs

These defenses will be ineffective if they rely on a small group of technical experts. The survey of emerging markets highlights that internal skill shortages are a significant obstacle to the use of AI in supervision. This issue is also prevalent in many advanced economies, where central bank staff and frontline supervisors often lack practical experience with modern machine learning tools. At the same time, most adults now carry smartphones, send digital payments, and interact with AI-powered systems in their daily lives. The human bottleneck lies within the authorities, not beyond them. Bridging that gap is therefore as much an educational challenge as it is a regulatory one. It requires a new collaboration between financial authorities, universities, and online training providers.

Figure 2: Supervisors plan to use AI first for data and risk analysis, while more complex tasks stay in the medium-to-long-term bucket, revealing a strategic but still cautious adoption path.

That collaboration should start with the reality of AI bank runs, not with vague courses on "innovation." Supervisors, policymakers, and bank risk teams need access to practical programs. These should blend basic coding and data literacy with a solid understanding of liquidity risk, market dynamics, and consumer behavior in digital settings. Scenario labs where participants simulate AI bank runs can be more effective than traditional lectures for building intuition. In these exercises, chatbots, robo-advisers, treasurers, and central bank tools all interact on the same screen. Micro-credentials for board members, regulators, and journalists can spread that knowledge beyond just a small group of experts. Online education platforms can make these resources available to authorities in low-income countries that can’t afford large in-house training programs. The goal isn’t to turn every supervisor into a data scientist. It is to ensure that enough people in the system understand what an AI bank run looks like in practice. Only then can the institutional response be informed, swift, and coordinated.

The window to act is still open. The same global surveys that indicate rapid adoption of AI in banks also show that supervisors’ use of AI remains at an earlier stage. The gap is vast in emerging markets. Today’s digital bank runs, like the surge that caused a mid-sized lender to collapse in 2023, still unfold over hours instead of seconds. However, that margin is decreasing as AI transitions from test projects to essential systems in both finance and everyday life. Authorities can still alter their path. Investments in data infrastructure, shared supervisory models, machine-speed crisis tools, and a serious education agenda can prevent AI bank runs from becoming the major crises of the next decade. If they hesitate, the next systemic event may not begin with a rumor in a line. It may start with a silent cascade of algorithm-driven decisions that no one in charge can detect or understand.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bank of England. 2024. Artificial intelligence in UK financial services 2024. London: Bank of England.
Boeddu, Gian, Erik Feyen, Sergio Jose de Mesquita Gomes, Serafin Martinez Jaramillo, Arpita Sarkar, Srishti Sinha, Yasemin Palta, and Alexandra Gutiérrez Traverso. 2025. “AI for financial sector supervision: New evidence from emerging market and developing economies.” VoxEU, 18 November.
Danielsson, Jon. 2025. “How financial authorities best respond to AI challenges.” VoxEU, 25 November.
European Central Bank. 2024. “The rise of artificial intelligence: benefits and risks for financial stability.” Financial Stability Review, May.
Federal Deposit Insurance Corporation. 2023. “Recent bank failures and the federal regulatory response.” Speech by Chairman Martin J. Gruenberg, 27 March.
Financial Stability Board. 2025. Monitoring adoption of artificial intelligence and related technologies. Basel: FSB.
Financial Times. 2024. “Banks’ use of AI could be included in stress tests, says Bank of England deputy governor.” 6 November.
Visa Economic Empowerment Institute. 2025. “World Bank Global Findex 2025 insight.” 9 October.
World Bank. 2022. The Global Findex Database 2021: Financial inclusion, digital payments, and resilience in the age of COVID-19. Washington, DC: World Bank.
World Bank. 2025a. “Mobile-phone technology powers saving surge in developing economies.” Press release, 16 July.
World Bank. 2025b. Artificial Intelligence for Financial Sector Supervision: An Emerging Market and Developing Economies Perspective. Washington, DC: World Bank.


AI Resume Verification and the End of Blind Trust in CVs



AI resume verification is now essential for hiring
Verified records make algorithmic screening fair and transparent
Without them, AI quietly decides who gets work

Recent surveys show that about two-thirds of U.S. workers admit they have lied on a resume at least once or would consider doing so. At the same time, AI is becoming the primary filter for job applications. A recent study suggests that nearly half of hiring managers already use AI to screen resumes, a share expected to rise to 83% by the end of 2025. Another survey found that 96% of hiring experts use AI at some point in the recruiting process, mainly for resume screening and review. This creates a trust problem: self-reported claims, often overstated or false, are evaluated by algorithms without transparency. AI resume verification is becoming important because the old belief—that an experienced human would catch the lies—is no longer true for most job applicants. If AI decides who gets seen by a human, the information it assesses must be testable and verifiable, and it should be treated as essential labor-market infrastructure, not just an add-on in applicant tracking software.

Why AI Resume Verification Has Become Unavoidable

The push to automate screening is not just a trend. Many employers in white-collar roles now receive hundreds of applications for each position. Some applicant tracking systems report more than 200 applications per job, about three times the pre-2017 level. With this flood of applications, human screeners cannot read every CV, much less investigate every claim. AI tools are used to filter, score, and often reject candidates before a human ever sees their profile. One research review projects that by the end of 2025, 83% of companies will use AI to review resumes, up from 48% today. A growing number of businesses are becoming comfortable allowing AI to reject applicants at various stages. AI resume verification is now essential because the primary hiring process has shifted to rely on algorithmic gatekeepers, even when organizations recognize that these systems can be biased or inaccurate.

Figure 1: AI resume screening is shifting from exception to default, with adoption rising from under half of employers today to more than four in five by 2025.

The problem is that the data entering these systems is often unreliable. One study estimates that around 64% of U.S. workers have lied about personal details, skills, experience, or references on a resume at least once. Another survey indicates that about 70% of applicants have lied or would consider lying to enhance their chances. References, once seen as a safeguard, are also questionable. One poll revealed that a quarter of workers admit to lying about their references, with some using friends, family, or even paid services to impersonate former supervisors. Additionally, recruiters note a rise in AI-generated resumes and deepfake candidates in remote interviews. Vendors are now promoting AI fraud-detection tools that claim to reduce hiring-related fraud by up to 85%. However, these tools are layered on top of systems still reliant on unverified self-reports. AI resume verification is therefore not just another feature; it is the essential backbone of a hiring system that is already automated but lacks reliability.

Designing AI Resume Verification Around Verified Data

A typical response to this issue is to focus on more intelligent algorithms. The idea is that current AI screening tools will be retrained to spot inconsistencies and statistical red flags in resumes. However, recent research indicates that more complex models are not always more trustworthy. A study highlighted by a leading employment law firm found that popular AI tools favored resumes with names commonly associated with white individuals 85% of the time, with a slight bias for male-associated names. Analyses of AI hiring systems also reveal explicit age and gender biases, even when protected characteristics are not explicitly included. If AI learns from historical hiring data and text patterns that already include discrimination and exaggeration, simply adding more parameters won’t address the underlying issues. AI resume verification cannot rely solely on clever pattern recognition or self-reported narratives; it must be based on verified data from the source.

Figure 2: Across major surveys, between a quarter and around two-thirds of candidates admit to lying on their resumes, underlining how fragile self-reported data is for AI screening.

This leads to a second approach: obtaining better data, not just making the model more complex. AI resume verification means connecting algorithms to verified employment and education records that can be checked automatically. The rapid growth of the background-screening industry shows both the demand and existing challenges. One market analysis estimates that the global background-screening market was about USD 3.2 billion in 2023 and is expected to more than double to USD 7.4 billion by 2030. Currently, these checks are slow, manual, and conducted late in the hiring process. In an AI resume verification system, verified information would be prioritized. Employment histories, degrees, major certifications, and key licenses would exist as portable, cryptographically signed records that candidates control but cannot change, allowing platforms to query them via standard APIs. AI would then rank candidates primarily based on these verifiable signals—skills, tenure, and performance ratings, where appropriate—while treating unverified claims and free-text narratives as secondary. A full return to human-only screening is becoming impractical at the current volume of applicants. The real work lies in establishing shared standards for verified data.
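The signed-record idea can be illustrated in a few lines. The sketch below uses an HMAC as a stand-in for a real digital signature (production credential systems would use asymmetric schemes such as Ed25519, so verifiers never hold the issuer's secret); the claim schema and key are hypothetical:

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-only-secret"  # stand-in; real issuers would use asymmetric keys

def issue(claim):
    """Sign a canonical JSON encoding of the claim."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify(record):
    """Recompute the signature; any edit to the claim invalidates it."""
    payload = json.dumps(record["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])

record = issue({"holder": "Jane Doe", "degree": "BSc Economics", "year": 2022})
print(verify(record))                      # True
record["claim"]["degree"] = "PhD Physics"  # tampering breaks verification
print(verify(record))                      # False
```

This is the property that matters for screening: the candidate can carry and present the record, but cannot alter it without the check failing.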

AI Resume Verification and the Future of Education Records

Once hiring moves in this direction, education systems become part of the equation. Degrees, short courses, micro-credentials, and workplace learning all contribute to resumes. Suppose AI resume verification becomes standard in hiring. In that case, the formats and standards of educational records will influence who stands out to employers. However, current educational data is often fragmented and unclear. The OECD’s Skills Outlook 2023 notes that the pace of digital and green transitions is outpacing the capacity of education and skills policies, and that too few adults engage in the formal or informal learning needed to keep up. When learners take short courses or stackable credentials, the results are typically recorded as PDFs or informal badges that many systems cannot reliably interpret. This creates another trust gap: AI may see a degree title and a course name, but it does not capture the skills or performance behind them.

AI resume verification can help fix this, but only if educators take action. Universities, colleges, and training providers can issue digital credentials that are both easy to read and machine-verifiable. These would be structured statements of skills linked to assessments, signed by the institution, and revocable if found to be fraudulent later. Large learning platforms and professional organizations can do the same for bootcamps, MOOCs, and continuing education. Labor-market institutions are already concerned that workers in non-standard or platform jobs lack clear records of their experience. Recent work by the OECD and ILO on measuring platform work shows how some of this labor remains invisible. If these workers could maintain verified records of hours worked, roles held, and ratings received, AI resume verification could highlight their histories rather than overlook them. For education leaders, the challenge is clear: transcripts and certificates must evolve from static documents into living, portable data assets that support both human decisions and AI-driven screening.

Governing AI Resume Verification Before It Governs Us

Having trustworthy data alone will not make AI resume verification fair. People have good reasons to be cautious about AI in hiring. A 2023 Pew Research Center survey found that most Americans expect AI to significantly impact workers, but more believe it will harm workers overall. Recent research in Australia illustrates this concern: an independent study showed that AI interview tools mis-transcribe and misinterpret candidates with strong accents or speech-affecting disabilities, with error rates reaching 22% for some non-native speakers. OECD surveys also indicate that outcomes are better when AI use is combined with consulting and training for workers, not just technology purchases. Therefore, AI resume verification needs governance guidelines alongside technical standards. Candidates should know when AI is used, what verified data is being evaluated, and how to challenge mistakes or outdated information. Regulators can mandate regular bias audits of AI systems, especially when using shared employment and education records that act as public infrastructure.
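One widely used audit heuristic is the four-fifths rule: flag any group whose selection rate falls below 80 percent of the highest group's rate. A minimal sketch with illustrative numbers (not real hiring data) of how such a check could run automatically over a screening tool's outcomes:

```python
def impact_ratios(selected, applied):
    """Selection rate per group, expressed relative to the highest-rate group."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Illustrative screening outcomes, not real data
applied = {"group_a": 100, "group_b": 100}
selected = {"group_a": 60, "group_b": 30}

ratios = impact_ratios(selected, applied)
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths rule
print(flagged)  # ['group_b']
```

A flag from such a check is a prompt for human investigation, not proof of discrimination, which is why the governance layer around the audit matters as much as the computation.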

For educators and labor-market policymakers, the goal is to shape AI resume verification before it quietly determines opportunity for a whole generation. Some immediate priorities are clear. Employers can ensure that automated decisions to reject candidates are never based solely on unverifiable claims. Any negative flags from AI resume verification, such as suspected fraud or inconsistencies, should lead to human review rather than automatic exclusion. Vendors can be required to keep the verification layer separate from the scoring layer, preventing bias in ranking models from affecting the integrity of shared records. Education authorities can support the development of open, interoperable standards for verifiable credentials and employment records, making their use a condition for public funding. Over time, AI resume verification can support fairer hiring and more precise financial aid, better matching workers to retraining, and improved measurement of the benefits from different types of learning.

If two-thirds of workers admit to lying or would lie on a resume, and over four-fifths of employers are moving towards AI screening, maintaining the current system is not feasible. A choice is being made by default: unverified text, filtered by unregulated algorithms, quietly decides who gets an interview and who disappears. The solution is to view AI resume verification as a fundamental public good. This means developing verifiable records of learning and work that can move with people across sectors and borders, designing AI systems that rely on these records instead of guesswork, and enforcing rules to keep human judgment involved where it matters most. For educators, administrators, and policymakers, the urgency to take action is apparent. They must help create an ecosystem in which AI resume verification makes skills and experiences more visible, especially for those outside elite networks. Otherwise, they must accept a future where automated hiring reinforces existing biases. The technology is already available; what is lacking is the commitment to use it to build trust.




References

Business Insider. (2025, July). Companies are relying on aptitude and personality tests more to combat AI-powered job hunters. Business Insider.
Codica. (2025, April 1). How to use AI for fraud detection and candidate verification in hiring platforms. Codica.
Fisher Phillips. (2024, November 11). New study shows AI resume screeners prefer white male candidates. Fisher & Phillips LLP.
HRO Today. (2023). Over half of employees report lying on resumes. HRO Today.
iSmartRecruit. (2025, November 13). Stop recruiting scams: Use AI to identify fake candidates. iSmartRecruit.
OECD. (2023a). The impact of AI on the workplace: Main findings from the OECD AI surveys of employers and workers. OECD Publishing.
OECD. (2023b). OECD skills outlook 2023. OECD Publishing.
OECD & ILO. (2023). Handbook on measuring digital platform employment and work. OECD Publishing / International Labour Organization.
Pew Research Center. (2023, April 20). AI in hiring and evaluating workers: What Americans think. Pew Research Center.
StandOut-CV. (2025, March 25). How many people lie on their resume to get a job? StandOut-CV.
The Interview Guys. (2025, October 15). 83% of companies will use AI resume screening by 2025 (despite 67% acknowledging bias concerns). The Interview Guys.
Time. (2025, August 4). When your job interviewer isn’t human. Time Magazine.
UNC at Chapel Hill. (2024, June 28). The truth about lying on resumes. University of North Carolina Policy Office.
Verified Market Research. (2024). Background screening market size and forecast. Verified Market Research.
