
From Parrots to Partners: A Policy Blueprint for AI-Literate Learning

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

AI use is ubiquitous; current assessments reward fluency over thinking
Grade process, add brief vivas, and require transparent AI-use disclosure
Train teachers, ensure equity, and track outcomes to make AI a partner

Eighty-eight percent of university students report using generative AI to prepare assessed work, up from just over half the year before. Nearly one in five admit to pasting AI-generated text, edited or not, directly into their submissions. At the same time, new PISA results show that about one in five 15-year-olds across OECD countries struggle even with simpler creative-thinking tasks. The gap is widening between the speed at which students can produce plausible text and the slower, harder work of generating ideas and exercising judgment. When we label chatbots "regurgitators," we risk overlooking the real issue: a system that rewards fluent output over clear thinking, tempting students to outsource the very work that learning should reinforce. The goal should not be to ban autocomplete; it should be to make cognitive effort visible and valuable again.

What We Get Wrong About "Regurgitation"

Calling large language models parrots lets institutions off the hook. Students respond to the incentives we create. For two decades, search engines have made copying easy; now language models make it quick to paraphrase and structure ideas. The issue is not that students have suddenly become less honest; it is that many assessments still reward smooth writing and recall over evidence of reasoning. Consider what educators report: a quarter of U.S. K-12 teachers believe AI does more harm than good, yet most lack clear guidance on how to use or monitor it. Teacher training is increasing but remains inconsistent. In fall 2024, fewer than half of U.S. districts had trained teachers on AI; by mid-2025, only about one in five teachers reported that their school had an AI policy. Confusion in the classroom translates into ambiguity for students, who will do what feels normal, quick, and safe.

The common, fast, "safe" use case is ideation and summarization, not careful drafting. UK survey data from early 2025 indicate that the most frequent student uses are explaining concepts and summarizing articles; using AI at some stage of assessment has become the norm rather than the exception. Teen use for schoolwork is rising but still far from universal, suggesting a diffusion pattern in which early adopters set norms that others follow under pressure. If we tell students to "use it like Google for ideas, not as a ghostwriter," we must assess in a way that makes the difference visible. Right now, many students see no practical distinction. And as detection grows less certain—major vendors now withhold low AI "scores" to minimize false positives—policing output alone cannot ensure academic integrity. We need better design upstream.

Figure 1: Most students use AI upstream for understanding and summarizing; a smaller—but policy-critical—minority move AI text into assessed work. The risk concentrates where grades reward fluent output over visible thinking.

The deeper risk isn't just copying; it's cognitive offloading. Several recent studies and evaluations, ranging from classroom surveys to EEG-based laboratory work, suggest that regular reliance on chatbots diverts mental effort away from planning, retrieval, and evaluation processes where learning actually occurs. These findings are still early and not consistent across tasks, but the trend is clear: when we let models draft or make decisions, our own attention and self-reflection can weaken. This doesn't mean AI cannot be helpful; it means we need to create tasks where human input is necessary and valued.

The Evidence—And What It Actually Implies

If 88% of students now use generative tools at some stage of assessment and 18% paste AI-generated text, we need to grasp the patterns behind these numbers. The same UK survey shows that upstream uses dominate: about a quarter of students draft with AI before revising, and far fewer submit unedited text. In short, "regurgitation" is not the average behavior, but it is a visible one—and it becomes tempting in courses that reward speed and surface fluency. A Guardian analysis of misconduct cases in the UK shows that confirmed AI-related cheating rose from 1.6 to 5.1 cases per 1,000 students year-over-year, while traditional plagiarism declined; universities admit that detection tools struggle, and more than a quarter still do not track AI violations separately. Enforcement alone cannot fix what assessment design encourages. (Method note: the Guardian figure combines institutional returns and likely undercounts untracked cases, so it probably understates the actual scale.)

When we compare student abilities, we see the tension. In PISA 2022's first creative-thinking assessment, Singapore led with a mean score of 41/60; the OECD average was 33, and about one in five students couldn't complete simpler ideation tasks. Creative-thinking performance correlates with reading and math, but not as closely as those core areas relate to each other, suggesting that both practice and teaching—not just content knowledge—shape ideation skills. If AI speeds up production but our system does not clearly teach and evaluate creative thinking, students will continue to outsource the very steps we neglect.

Figure 2: Creative-thinking capacity is uneven: on the OECD average, more than one in five students fall below the baseline, while leading systems keep low-performer shares in single digits—evidence that practice and pedagogy matter.

What about the claim that AI is simply making us worse thinkers? Early findings are mixed and depend on context. Lab work from MIT Media Lab indicates reduced brain engagement and weaker recall in writing assisted by LLMs compared to "brain-only" conditions. Additionally, a synthesis notes that students offload higher-order thinking to bots in ways that can harm learning. Yet other studies, especially in structured settings, show improved performance when AI handles the routine workload, allowing students to focus their efforts on analysis. The key factor isn't the tool; it's the task and what earns credit. (Method note: many studies rely on small samples or self-reports; the best assumption is directional rather than definitive.)

Meanwhile, educators and systems are evolving, though unevenly. Duke University's pilot program offers secure campus access to generative tools, enabling the testing of learning effects and policies on a larger scale. Stanford's AI Index chapter on education notes increasing interest among teachers in AI instruction, even as many do not feel prepared to teach it. Surveys through 2025 indicate that teachers using AI save time, and a growing, albeit still minority, share of schools have clear policies in place. In short, the necessary professional framework is developing, but slowly and with gaps. Students experience that gap as mixed messages.

We should also be realistic about detection methods. Turnitin's August 2025 update specifically withholds percentage scores below 20% to reduce false positives, acknowledging that distinguishing between model-written and human-written text can be challenging at low levels. Academic integrity cannot depend on a moving target. Instead of searching for "AI DNA" after the fact, we can create assignments so that genuine thinking leaves evidence while it happens.

A Blueprint for Cognitive Integrity

If the ideal scenario is to use AI like a search tool—an idea partner rather than a ghostwriter—we need policies that make human input visible and valuable. The first step is to grade for process. Require a compact "thinking portfolio" for major assignments: a log of prompts used, a brief explanation of how the tool influenced the plan, the outline or sketch created before drafting, and a quick reflection on what changed after receiving feedback. This does not need to be burdensome: two screenshots, 150 words of rationale, and an outline snapshot would suffice. Give explicit credit for this work—perhaps 30–40% of the grade—so that the best way to succeed is to engage in thinking and demonstrate it. When possible, conclude with a brief viva or defense in class or online: five minutes, with two questions about choices and trade-offs. If a student cannot explain their claim in their own words, the problem lies in learning, not the software. (Method note: for a 12-week course with 60 students, two five-minute defenses per student add roughly 10 staff hours; rotating small panels can help manage this workload.)
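The staffing arithmetic in the method note is easy to adapt to other class sizes. Below is a minimal back-of-envelope sketch in Python; the class size, defense length, and panel size are illustrative assumptions, not prescriptions.

```python
# Back-of-envelope staffing estimate for short oral defenses (illustrative numbers only).
def viva_staff_hours(students: int, defenses_per_student: int = 2,
                     minutes_per_defense: int = 5, panel_size: int = 1) -> float:
    """Total staff hours needed to run brief vivas for one course."""
    total_minutes = students * defenses_per_student * minutes_per_defense * panel_size
    return total_minutes / 60

# The method note's example: 60 students, two five-minute defenses each,
# a single assessor per defense -> roughly 10 staff hours over the term.
print(viva_staff_hours(students=60))  # 10.0
```

If each defense needs two assessors, total hours double; rotating who sits on the panel spreads that load across the teaching team rather than reducing it, which is the point of the rotating-panel suggestion.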

The second step is to reframe tasks so that ungrounded text is not enough. Replace purely expository prompts with "situated" problems that require local data, classroom materials, or course-specific case notes that models will not know. Ask for two alternative solutions with an analysis of their downsides; require one source that contradicts the student's argument and a brief explanation of why it was set aside. Link claims to evidence from the course content, not just to generic literature. These adjustments force students to think within context rather than merely produce fluent prose.

Third, normalize disclosure with a simple classification. "AI-A" means ideation and outlining; "AI-B" refers to sentence-level editing or translation; "AI-C" indicates draft generation with human revision; "AI-X" means prohibited use. Students should state the highest level they used and provide minimal supporting materials. This treats AI like a calculator with memory: allowed in specific ways, with work shown, and banned where the skill being tested would be obscured. It also provides instructors with a common language, enabling departments to compare patterns across courses. (Method note: adoption is most effective when the classification fits on one rubric line, and the LMS provides a one-click disclosure form.)
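To make the taxonomy concrete, here is a minimal sketch of how a one-click disclosure record might be represented. The field names, tier labels, and follow-up rule are hypothetical illustrations, not an existing LMS schema.

```python
from dataclasses import dataclass
from enum import Enum

class AIUse(Enum):
    """Disclosure tiers from the proposed classification."""
    AI_A = "ideation and outlining"
    AI_B = "sentence-level editing or translation"
    AI_C = "draft generation with human revision"
    AI_X = "prohibited use"

@dataclass
class Disclosure:
    """One student's AI-use disclosure for a single assignment (hypothetical schema)."""
    student_id: str
    assignment_id: str
    highest_tier: AIUse
    evidence_note: str = ""  # e.g. prompts used, link to outline snapshot

    def requires_follow_up(self) -> bool:
        # Flag AI-C and AI-X disclosures for a short viva or integrity review.
        return self.highest_tier in (AIUse.AI_C, AIUse.AI_X)

d = Disclosure("s001", "essay-02", AIUse.AI_C, "drafted intro with a model, rewrote it by hand")
print(d.requires_follow_up())  # True
```

Keeping the record this small is deliberate: the taxonomy only works if disclosure costs students a click and a sentence, not a form.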

Fourth, build teacher capacity quickly. Training at the district and campus levels increased in 2024, but it still leaves many educators learning on their own. Prioritize professional development on two aspects: designing tasks for visible thinking and providing feedback on process materials. Time saved by AI for routine preparation—which recent U.S. surveys estimate at around 1–6 hours weekly for active users—should be reinvested into richer prompts, oral evaluations, and targeted coaching. Teacher time is the most limited resource; policies must protect how it is used.

Fifth, address equity directly. Student interviews in the UK reveal concerns that premium models offer an advantage and that inconsistent policies across classes are perceived as unfair. Offer a baseline, institutionally supported tool with privacy safeguards; teach all students how to evaluate outputs; and ensure that those who choose not to use AI are not penalized by tasks that inherently favor rapid bot-assisted work. Gaps in creative thinking based on socioeconomic status indicate that we should prioritize practice that mitigates literacy bottlenecks—through visual expression, structured ideation frameworks, and peer review—so every student can develop the skills AI might distract them from.

Finally, measure what matters. Track the percentage of courses that grade process; the share using short defenses; the distribution of student AI disclosures; and changes in results on assessments that cannot be passed with fluent text alone. Expect initial variation and some resistance. But if we make the human parts of learning visible and valuable, the pressure to outsource will fall on its own. In areas where we still need strict supervision—professional licensure exams, clinical decisions, original research—limit or prohibit generative use and explain the reasoning. The aim is not uniformity but clarity matched to the skills being assessed.
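A department could aggregate these signals with something as small as the sketch below; the record format and the example numbers are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical per-course records collected at the end of a term.
courses = [
    {"grades_process": True,  "uses_viva": True,  "disclosures": ["AI_A", "AI_A", "AI_C"]},
    {"grades_process": True,  "uses_viva": False, "disclosures": ["AI_B", "AI_A"]},
    {"grades_process": False, "uses_viva": False, "disclosures": ["AI_C", "AI_C", "AI_X"]},
]

n = len(courses)
share_grading_process = sum(c["grades_process"] for c in courses) / n
share_using_vivas = sum(c["uses_viva"] for c in courses) / n
disclosure_mix = Counter(tier for c in courses for tier in c["disclosures"])

print(f"Courses grading process: {share_grading_process:.0%}")
print(f"Courses using short defenses: {share_using_vivas:.0%}")
print("Disclosure distribution:", dict(disclosure_mix))
```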

None of this requires waiting for standards bodies to act. Universities can begin this semester; school systems can pilot it in upper-secondary courses right away. Institutions are already moving: secure campus AI portals are being tested in the U.S., and the OECD has published practical guidance on classroom use. Our policies should match that practicality: no panic, no hype, just careful design.

The opening figure—eighty-eight percent—will only increase. We can keep portraying the technology as a parrot and hope to catch the worst offenders afterward, or we can change what earns grades so that the safest and quickest path is to think. The creative-thinking results remind us that many students need practice generating and refining ideas, not just polishing sentences. If we grade for process, hold small oral defenses, and normalize disclosure, we turn AI into the help it should be: a quick way past obstacles, not a ghostwriter in the shadows. This aligns incentives with honest learning. It respects students by asking for their judgment and voice. It values teachers by returning time for deeper feedback. And it reassures the public that when a transcript says "competent," the student actually did the work. The tools will keep improving. Our policies can, too, if we design for visible thinking and treat AI as a partner we guide rather than a parrot we fear.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

AP News. (2025, September 6). Duke University pilot project examining pros and cons of using artificial intelligence in college.

Gallup & Walton Family Foundation. (2025, June 25). The AI dividend: New survey shows AI is helping teachers reclaim valuable time.

Guardian. (2025, June 15). Revealed: Thousands of UK university students caught cheating using AI.

HEPI & Kortext. (2025, February). Student Generative AI Survey 2025.

Hechinger Report. (2025, May 19). University students offload critical thinking, other hard work to AI.

MIT Media Lab. (2025, June 10). Your Brain on ChatGPT: Accumulation of Cognitive Debt from LLM-Assisted Writing.

OECD. (2023, December 13). OECD Digital Education Outlook 2023: Emerging governance of generative AI in education.

OECD. (2024, June 18). New PISA results on creative thinking: Can students think outside the box? (PISA in Focus No. 125).

OECD. (2024, November 25). Education Policy Outlook 2024.

Pew Research Center. (2024, May 15). A quarter of U.S. teachers say AI tools do more harm than good in K-12 education.

Pew Research Center. (2025, January 15). About a quarter of U.S. teens have used ChatGPT for schoolwork—double the share in 2023.

RAND Corporation. (2025, April 8). More districts are training teachers on artificial intelligence.

Stanford HAI. (2025). The 2025 AI Index Report—Chapter 7: Education.

Turnitin. (2025, August 28). AI writing detection in the new, enhanced Similarity Report.


When Algorithms Wobble: AI, Information Cascades, and the New Bank-Run Curriculum


AI accelerates information cascades, turning rumors into rapid bank runs
Stability now hinges on dampening synchronized behavior, not just capital buffers
Build rumor-aware stress tests, fast disclosures, and drill-based curricula

Forty-two billion dollars left a single U.S. bank in one trading day, with another $100 billion queued to go the next morning. That is not a typo; it is the new speed of panic. In March 2023, depositors at Silicon Valley Bank withdrew a quarter of the bank's deposits within hours, and management expected more than half of the remaining balance to leave the following day. The bank failed before the line could finish forming. The episode reminded us that modern finance depends on coordination: when enough people act together, even a well-built structure can fail. It also showed that social media and mobile banking have shortened the distance between rumor and collapse. As generative AI speeds up how provocative claims are created, spread, and accepted, the coordination problem at the heart of financial stability becomes an information-systems problem, and it belongs to educators and administrators as much as to supervisors and traders.

We often use plumbing metaphors to explain systemic risk, such as liquidity pools, transmission channels, and circuit breakers. While those metaphors are helpful, they overlook the role of shared beliefs. When everyone sees the same information and reacts similarly, small shocks can amplify. The reason for action is not just balance-sheet math; it also involves real-time narrative dynamics. By 2025, those dynamics are managed through models—ranking, recommending, summarizing, and increasingly deciding. If the next crisis moves at the speed of content, then classrooms, newsrooms, compliance desks, and central bank dashboards are all part of the same early-warning system.

A well-known engineering story illustrates this concept. On the morning London opened its Millennium Bridge, it wobbled not because of poor construction but because people adjusted their steps in sync as the deck swayed. Each person's slight movement made it more likely the next person would adapt too. After reaching a certain point, the crowd and the bridge created a feedback loop. Engineers fixed the issue by adding dampers, not by retraining pedestrians. Finance has similar thresholds and feedback. While dampers exist—like deposit insurance, lender-of-last-resort, and stress tests—the crowd's rhythm has changed.

From Wobbly Bridges to Wobbly Balance Sheets

The bridge analogy highlights two truths about coordination. First, vulnerability can exist even when design standards are met. Second, thresholds matter: with enough walkers, or enough similarly positioned investors, the system can tip into a different state. The Diamond-Dybvig model formalized this in banking: multiple equilibria mean a self-fulfilling run can happen even when fundamentals are solid. In the 2023 run episodes, the vulnerability was not just interest-rate risk on bank balance sheets; it was also the density and speed of shared information that led depositors to the same conclusion simultaneously. This is why uninsured concentrations were so critical: at year-end 2022, about 88 to 94 percent of SVB's deposits exceeded the insurance limit, and those accounts could—and did—move together in large amounts.
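The self-fulfilling-run logic can be shown with a toy balance sheet. In the sketch below, every number (liquidity, fire-sale discount, run shares) is hypothetical and chosen only to illustrate how the payoff to waiting collapses once enough other depositors withdraw.

```python
# Toy illustration of a self-fulfilling run; the balance sheet is hypothetical, not SVB's.
deposits = 100.0           # total deposits
liquid = 10.0              # cash and reserves available at face value
illiquid = 95.0            # long-dated assets at book value
fire_sale_discount = 0.30  # fraction of value lost if assets are sold today

def payoff_to_waiting(run_share: float) -> float:
    """Assets left per unit of deposit for those who wait, given early withdrawals of run_share."""
    withdrawn = run_share * deposits
    shortfall = max(0.0, withdrawn - liquid)
    assets_sold = shortfall / (1 - fire_sale_discount)  # extra book value must be sold to cover the haircut
    remaining_assets = max(0.0, liquid - withdrawn) + max(0.0, illiquid - assets_sold)
    remaining_deposits = deposits - withdrawn
    return remaining_assets / remaining_deposits if remaining_deposits else 0.0

# Below roughly 1.0, a patient depositor does better by joining the run: the run is self-fulfilling.
for share in (0.1, 0.3, 0.5):
    print(f"run share {share:.0%}: payoff to waiting = {payoff_to_waiting(share):.2f}")
```

With these numbers, waiting pays above par when only 10 percent withdraw but falls below par once roughly a third do. That is the multiple-equilibria point: the same balance sheet supports both calm and collapse, depending on what depositors expect each other to do.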

Speed is now crucial. The Federal Reserve's review recorded more than $40 billion leaving on March 9 alone, with over $100 billion expected on March 10 if conditions did not change. This is much faster than traditional cases. First Republic then revealed over $100 billion in quarterly outflows, with an estimated $25 to $40 billion departing in single days during peak stress. Coordinated messaging among concentrated depositor networks clearly played a role. This is what the bridge taught us: small, rational individual actions, synchronized by feedback, create forces the structure wasn't built to withstand. Dampers must evolve as well.

Figure 1: In hours, SVB lost $42bn and faced $100bn more the next morning; First Republic’s >$100bn quarterly drop shows how coordination risk scales from minutes to months.

AI Turns Rumors into Runs

Two developments since 2023 have strengthened the connection between rumor and withdrawal. The first is empirical: social media activity is associated with increased distress. A multi-author study using extensive Twitter (X) data shows that banks with higher pre-existing exposure on social media experienced larger losses during the run, even when controlling for uninsured deposits and unrealized bond losses. In hourly data, tweet volume about specific banks aligns with stock-price declines, indicating that attention itself amplifies vulnerability. The second is structural: AI changes how attention is produced. It reduces the cost of creating plausible claims at scale and increases the likelihood that many users will simultaneously see and act on the same narrative.

Caution is warranted in interpreting the figures, but we now have early signs on the deposit side, not just prices. A UK study reported in February 2025 found that AI-generated fake content about a bank's health markedly increased respondents' stated intention to move their money out of the bank. The researchers estimated that, in some scenarios, a £10 social ad spend could shift up to £1 million in deposits. These figures depend on context and design. But suppose a small budget can create a widely shared clip that trends for an hour. A local bank with a concentrated corporate clientele could then face large synchronized outflows, forcing emergency liquidity measures. A simple calculation illustrates the scale: for a mid-size bank with $12 billion in deposits and 50 percent uninsured, a 2 percent shift among uninsured accounts equals $120 million. That is enough to trigger negative headlines, collateral haircuts, and further withdrawals. The narrative itself becomes the causal channel.
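Making the back-of-envelope figure explicit (the inputs are the paragraph's illustrative assumptions, not data on any actual bank):

```python
# The paragraph's back-of-envelope calculation, spelled out with its illustrative inputs.
total_deposits = 12e9   # $12 billion mid-size bank
uninsured_share = 0.50  # half of deposits above the insurance limit
rumor_shift = 0.02      # 2% of uninsured balances move in response to a viral rumor

outflow = total_deposits * uninsured_share * rumor_shift
print(f"Rumor-driven outflow: ${outflow/1e6:.0f} million")  # $120 million
```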

Supervisors have taken notice. The BIS's 2024 Annual Economic Report warns that AI can both improve and threaten financial stability—enhancing monitoring while raising the risk that many actors take similar actions or act on shared model errors. The Financial Stability Board's 2024 assessment likewise highlights concentration in common AI models and data, new channels for misinformation, and the need to upgrade supervisory capabilities. In the UK, the Bank of England has discussed including AI risks in its annual stress tests and emphasized governance that accounts for interactions among models. The trend is clear: we are moving from idiosyncratic risks to system-level similarity risks—what engineers would call shared modes of vibration.

A reasonable critique counters that digitalization alone does not cause deposit volatility in regular times. The ECB's recent paper finds that mobile app availability and regular online use don't raise outflow volatility across the euro area by themselves; social media amplification mainly matters in specific stress events. This nuance is crucial. It shows that the risk is not "technology" itself, but rather technology interacting with shared exposure (uninsured deposits), unclear news, and time-compressed coordination. In other words, the bridge does not wobble every day. It sways when many pedestrians adjust together near a threshold—and when there are no dampers to manage the sway.

What We Teach Now Determines What Fails Next

If the coordination problem has shifted into the information landscape, the education agenda needs to adjust. Finance and public policy courses should include "information-cascade drills" alongside liquidity-coverage math. In practical terms, schools and training programs can run live simulations where students act as depositors, treasury teams, risk officers, journalists, and supervisors reacting to a sudden (synthetic) AI-generated rumor about a regional bank. The exercise should incorporate time-stamped social posts, shifting search results, and a model linking core funding loss to rumor intensity. It should also require decision-makers to rehearse emergency liquidity mechanics: positioning collateral, using daylight overdrafts, and accessing the discount window. One concerning data point: a year after SVB's collapse, fewer than half of U.S. banks and credit unions had pre-positioned collateral to borrow from the Federal Reserve's discount window in an emergency, and nearly one in five had no access at all—a readiness gap now under consideration for reform. If graduates cannot execute these steps under pressure, their employers will struggle when every minute counts.

Figure 2: Fewer than half have collateral pre-positioned; most have paperwork but no pledged assets—leaving nearly one in five with no access.
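The drill described above can start from something deliberately simple. The sketch below is one possible toy engine for a rumor-to-run exercise; the functional forms (exponential rumor decay, a linear runoff response) and every parameter are assumptions chosen for illustration, not a calibrated model.

```python
import math

def simulate_rumor_run(hours: int = 24, deposits: float = 12e9, uninsured_share: float = 0.5,
                       peak_rumor: float = 1.0, decay: float = 0.3,
                       runoff_elasticity: float = 0.004):
    """Toy hourly path: rumor intensity fades unless re-amplified; outflows scale with intensity.

    All functional forms and parameters are illustrative and should be re-estimated per exercise.
    """
    remaining_uninsured = deposits * uninsured_share
    path = []
    for h in range(hours):
        intensity = peak_rumor * math.exp(-decay * h)        # rumor decays without re-amplification
        hourly_outflow = runoff_elasticity * intensity * remaining_uninsured
        remaining_uninsured -= hourly_outflow
        path.append((h, intensity, hourly_outflow, remaining_uninsured))
    return path

for h, intensity, out, rem in simulate_rumor_run()[:6]:
    print(f"h={h:2d}  rumor={intensity:.2f}  outflow=${out/1e6:6.1f}m  uninsured left=${rem/1e9:.2f}bn")
```

Students playing the treasury role can then layer in the liquidity mechanics the drill calls for: collateral already positioned at the discount window, intraday overdraft capacity, and the decision point at which to draw on each.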

For supervisors and central banks, the lesson is to add dampers to the information system, not just the balance sheet. The policy discussions already reflect this direction. SUERF authors have suggested that authorities build expertise in AI, establish "AI-to-AI links" to allow supervisory tools to analyze and react to model-driven activity in real-time, and create "triggered facilities" that activate when monitoring signals exceed certain thresholds. The FSB recommends closing data gaps on firms' AI usage, tracking concentration in models and providers, and considering misinformation risks in stability frameworks. Specifically, stress tests should integrate rumor-shock modules that combine deposit flight patterns with content-spreading dynamics. Communication protocols should require banks to publish rapid and verifiable dashboards—detailing liquidity coverage, collateral held at central banks, and deposit mix—alongside pre-approved messaging that can go out within minutes, not hours. These are dampers to counteract the narrative swings that now drive withdrawals.

The private sector also has its tasks. Treasury and communications teams should work together on early-warning signals from public feeds: unusually fast correlations between negative terms and the bank's name, spikes in short-horizon retweet networks, and sudden changes in search-query patterns. The evidence from Cookson et al. indicates that such attention measures carry information at an hourly frequency during crises. Institutions must also learn how to handle questions of content authenticity. Until watermarking and cryptographic provenance become standard, the practical defense is staff trained to debunk quickly with verifiable information. A brief method note can guide decisions: estimate your bank's one-hour runoff elasticity to a 1-sigma spike in social media attention from past incidents; establish a "go-public" trigger that balances the risk of sparking panic against the benefit of preventing it. When that trigger activates, release easy-to-check metrics—available central bank capacity, cash on hand, and the insured-to-uninsured deposit mix—along with links to third-party validation where possible.
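The method note's elasticity-and-trigger idea can be prototyped in a few lines. In the sketch below, the incident data, the through-the-origin fit, and the 25-percent-of-buffer threshold are all hypothetical choices, not supervisory guidance.

```python
# Hypothetical past incidents: (attention spike in sigmas, observed one-hour runoff as % of deposits).
incidents = [(0.5, 0.05), (1.0, 0.12), (2.0, 0.22), (3.0, 0.35)]

# Crude least-squares slope through the origin: runoff % per 1-sigma attention spike.
elasticity = sum(z * r for z, r in incidents) / sum(z * z for z, _ in incidents)
print(f"Estimated one-hour runoff ~ {elasticity:.2f}% of deposits per 1-sigma attention spike")

def should_go_public(attention_z: float, deposits: float, liquidity_buffer: float,
                     horizon_hours: int = 6, buffer_tolerance: float = 0.25) -> bool:
    """Go public when projected runoff over the horizon would consume too much of the buffer."""
    projected_outflow = (elasticity / 100) * attention_z * deposits * horizon_hours
    return projected_outflow > buffer_tolerance * liquidity_buffer

# Example: a 2.5-sigma attention spike at a $12bn-deposit bank with a $0.6bn liquid buffer.
print(should_go_public(attention_z=2.5, deposits=12e9, liquidity_buffer=6e8))  # True
```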

Policymakers should anticipate objections. One is that rumor-aware stress tests are speculative. Another is that "AI-to-AI" supervisory tools may lead to excessive monitoring or moral hazard. A third is that this approach could chill free speech. The correct response is not to suppress content; it is to build resilience against cascades. The ECB's findings support this: technology is not destiny. We can raise the tipping threshold by diversifying deposit bases, capping correlated exposures, and pre-committing to transparent emergency liquidity access. We can add dampers by speeding up supervisory communications and practicing them publicly. We can also slow the most hazardous feedback loops, for example by considering time-limited controls on large, fast corporate transfers when a bank has high liquidity coverage at the central bank, paired with real-time disclosures and strict safeguards. Think of this as temporarily steadying a swaying bridge while the dampers take effect, not a permanent obstacle.

Educators have a crucial role in this redesign. The next generation of risk managers and policy analysts should be skilled in both cash-flow math and attention dynamics. A capstone course may require students to create a simple rumor-to-run model using publicly available data. They would estimate its parameters from past events (such as tweet volume, search trends, and price gaps) and then propose a communications and liquidity strategy, which would be tested in a timed simulation. The goal is not to produce a perfect forecast; it is to enable disciplined action under uncertainty. If schools offer this training, agencies will seek out graduates, and banks will adopt it. The result is a system that views information friction as a key component of financial plumbing rather than an afterthought added to press releases.

Return to that 24-hour window in March 2023: $42 billion out, and another $100 billion lined up. The noteworthy point is not just the amounts; it is the coordination. AI will not alter the human inclination to act in unison, but it will make synchronized actions easier to initiate, faster to spread, and harder to reverse. That is why the next generation of dampers cannot rely solely on capital and collateral. They must also include rapid, credible disclosures; rumor-aware stress design; and hands-on experience with information shocks—in classrooms, in drills, and at the desks where decisions will be made. If the system now sways when narratives align, the goal is to raise the threshold at which it wobbles and to speed up the dampers. We need curricula that treat cascade risk as a first-order concern and supervisory tools that engage directly with models. We should welcome criticism, promote transparency, and rehearse through challenging scenarios. If we do this, the next time the crowd starts to move, the bridge will steady faster, and the line at the virtual teller will be shorter.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References


Bank for International Settlements (2024). Annual Economic Report 2024: Artificial intelligence and the economy—implications for central banks (Ch. III).
Bank of England (2024, June). Financial Stability Report.
Cookson, J. A., Fox, C., Gil-Bazo, J., Imbet, J. F., & Schiller, C. (2023). Social Media as a Bank Run Catalyst. FDIC Working Paper.
Diamond, D. W., & Dybvig, P. (1983). Bank runs, deposit insurance, and liquidity. Journal of Political Economy, 91(3), 401–419. (Classic foundation).
European Central Bank (2025). Wildmann, N. Mind https://www.ecb.europa.eu.
Financial Stability Board (2024). The Financial Stability Implications of Artificial Intelligence. https://www.fsb.org.
Federal Reserve Board, Office of Inspector General (2023). Material Loss Review of Silicon Valley Bank.
Federal Reserve (2023, April). Review of the Federal Reserve’s Supervision and Regulation of Silicon Valley Bank.
FDIC (2023, Mar. 27). Recent Bank Failures and the Federal Regulatory Response (speech).
Reuters (2023, Apr. 24). First Republic Bank deposits tumble more than $100 billion.
Reuters (2025, Feb. 14). AI-generated content raises risks of more bank runs, UK study shows.
Strogatz, S. H., Abrams, D. M., McRobie, A., Eckhardt, B., & Ott, E. (2005). Crowd synchrony on the Millennium Bridge. Nature, 438, 43–44. (See news/summary coverage).
SUERF (2025, May 15). Danielsson, J. How central banks can meet the financial stability challenges arising from artificial intelligence.
The Structural Engineer (2001). Dallard, P., et al. The London Millennium Footbridge (description of synchronous lateral excitation).
