
The Two-Speed Reality of Classroom AI


By David O'Neill

David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.

Generative AI lifts advanced economies faster due to compute, connectivity, and wages
Education faces a widening generative AI productivity gap without solid infrastructure
Front-load broadband and compute, standardize platforms, and train teachers to close it

North America accounts for nearly half of the world’s live data-center capacity, yet approximately three billion people remained offline in 2023. This disparity is not merely an abstract digital divide; it establishes the foundation for a generative AI productivity gap that will shape educational institutions and workforce development over the next decade. Recent analysis from the Bank for International Settlements indicates that generative AI is likely to increase output more significantly in advanced economies than in emerging ones. This outcome is driven by the availability of advanced resources such as computing power, connectivity, and higher wages, which make automation more economically viable. As of 2024, only about 14% of firms in the OECD had adopted AI, with leading organizations advancing while others lag behind. Consequently, the most substantial benefits from classroom AI tools will initially accrue to educational systems capable of investing in robust cloud services, low energy costs, and specialized expertise. Addressing this gap now requires policy intervention alongside technological solutions.

Defining the generative AI productivity gap

We need to clearly define the problem. The generative AI productivity gap is the difference between what AI can achieve in education and what schools can actually implement after accounting for infrastructure and workforce costs. Macro data makes this gap clear. The BIS finds that, on average, generative AI’s short-term growth impact is greater in advanced economies than in emerging ones. This trend is consistent with broader patterns: although AI adoption among OECD firms has approximately doubled in recent years, overall uptake remains low, with leading organizations advancing more rapidly. In the educational context, only well-funded districts and universities are likely to adopt advanced teaching assistants, AI-graded assessments, and automated student support processes in the near term. Other institutions may be limited to basic solutions, such as chatbots that lack full integration, do not connect to secure data, and are vulnerable to network disruptions.

Figure 1: Finance and education sit among the most AI-exposed sectors; this skews early productivity gains toward high-skill, high-wage systems that can absorb AI at scale.

The gap also relates to larger economic factors. Generative AI impacts productivity by automating certain tasks, improving tools, and shifting capital. Preliminary models from the Penn Wharton Budget Model (PWBM) suggest that AI could enhance productivity and GDP growth over the coming decades, with budgetary effects that are genuine but modest in the near term. EY found that business investment in AI-related areas such as software, data centers, and energy infrastructure increased at an annual rate of nearly 18% in early 2025, contributing roughly 1 percentage point to quarterly U.S. GDP growth. However, these investments are not evenly distributed. Cloud spending tends to be concentrated, with major markets capturing most of the migration, optimization efforts, and GPU capacity. In education, this translates to a clear pattern: districts and universities that can afford usage-based cloud services, reserve capacity in data-center regions, and recruit AI-savvy leaders will realize expected learning improvements. Others will face higher costs and slower feedback.

Channels of impact behind the generative AI productivity gap

AI transforms three primary dimensions: the tasks performed by individuals, the capabilities of software, and the allocation of financial resources. Regarding tasks, AI can assist in creating lesson plans, developing assessment options, summarizing feedback, and generating personalized practice materials across various educational roles. However, the productivity gains depend on how time is allocated within these systems. Educational systems with higher teacher salaries and constrained staffing benefit more, as saved hours can reduce budget pressures or enable smaller class sizes. In contrast, systems with lower wages experience smaller savings and have less incentive to modify schedules. Thus, labor market dynamics are a significant factor. On a broader scale, persistent service-sector inflation and wage trends in advanced economies have kept labor costs high relative to pre-pandemic levels, making AI-driven automation and support more attractive in these regions. The benefits increase as institutional leaders implement these changes.

Next, tools improve usability. PWBM notes that AI boosts productivity by enhancing the efficiency of complementary software and hardware. In education, this means better content platforms, learning analytics, and automated workflows that integrate with student information systems. However, these tools rely on stable internet connections, strong identity and access controls, and data-sharing agreements. The global picture is mixed here. The World Bank reports that around 63% of the global population used the internet in 2024. The ITU shows progress in mobile broadband affordability, but fixed broadband—which is crucial for schools streaming video and conducting data analysis—remains expensive in many areas. Where bandwidth is unreliable, AI features may be limited or shut off. This leads to a two-tier adoption pattern: one for high-capacity systems where everything functions well, and another for those where AI functions as a thin layer over fragile networks.

Finally, capital flows to where conditions already favor it. Generative AI requires significant computing power. Data-center market reports show falling vacancy rates in key locations, with new construction constrained by access to power. Knight Frank notes that North America accounts for almost half of global live IT capacity, while new supply in some parts of Asia lags. Gartner predicts that public cloud spending will exceed $700 billion in 2025, with much of that growth driven by AI demand. This concentration matters for schools and universities. If a region is far from a cloud center, latency increases, expenses rise, and access to specialized services becomes limited. While pricing is global, service availability is local. These structural issues favor advanced economies with dense cloud infrastructures, reliable energy connections, and experienced procurement teams.

Why advanced economies pull ahead

The BIS findings offer a broad overview, but the real details emerge at the ground level. Start with computing power. AI training relies on specialized chips and the supporting hardware around them. Supply is limited and concentrated. The World Bank report says Nvidia holds a large share of the data-center GPU market. Export controls and industrial policies affect where top chips go. Recent shifts have moved high-end GPUs to places with fewer policy issues and better energy access. Educational systems in these areas can use advanced applications and keep sensitive data on local infrastructure. Others must manage with limited or delayed access. This is not just about talent. Infrastructure and policy play a significant role.

Figure 2: Advanced economies have a larger share in AI-exposed services (finance, professional, information), while many emerging economies lean toward lower-exposure manufacturing—widening the near-term gap.

Next, let’s look at connectivity. Despite improvements, billions of people are still offline, and many schools rely on fixed broadband that still doesn't meet affordability goals. A system cannot offer uninterrupted tutoring if the network is unreliable. It also cannot send analytics to teachers for every class if uploads fail. Mobile connectivity is improving, but fixed lines remain essential for high-volume teaching and district operations. This is where the reality of the generative AI productivity gap widens. A school in a major city might use intelligent scheduling and AI-assisted individual education plans as the norm, while a rural district might restrict their use to exam periods. The same software can yield vastly different outcomes.

Labor costs further widen the gap. In advanced economies, growth in service-sector wages has remained high enough to enable rapid gains from AI adoption and support. An office that automates document handling, or a university that uses AI for student services, can free skilled workers to handle more complex tasks. The resulting savings can be realized within a year, encouraging further investment and process improvements. In low-wage areas, leaders see fewer short-term financial benefits and tend to operate under rigid roles with limited IT budgets. They may hesitate to adjust schedules and workflows, slowing the compounding effects that lead to real AI productivity gains. The paradox is that wealthier schools are also quicker to invest in AI-ready systems, thereby reducing their costs of further adoption.

Market structure contributes as well. Public cloud spending is heaviest in the United States and Europe. There, universities and educational technology firms can commit to multi-year contracts and standardize on cloud-native systems. Vendors prioritize features based on customer size and compliance needs. Evidence from the OECD shows that early adopters lead the way, with divisions forming along traditional lines such as firm size and sector. In education, this results in a divide between top-tier universities and underfunded colleges, as well as between urban districts with dedicated technology leaders and rural districts with part-time IT support. The outcome is an uneven distribution of benefits: a few systems gain a lot, while most gain little. This pattern aligns with the BIS’s near-term predictions: generative AI increases productivity, but the early benefits are more pronounced in advanced economies with better infrastructure and higher wages.

Policy to narrow the generative AI productivity gap

The objective of this policy is not to impede progress among leading institutions, but rather to accelerate the adoption of comprehensive AI solutions by those currently lagging. The first policy lever involves improving access to computing resources. Ministries and state agencies should establish regional education compute pools that guarantee GPU hours for public schools and teacher colleges. While cloud credits can facilitate this, the critical factor is enforcing service levels, including reserved capacity in proximate regions, low-latency connections, and stable pricing for workloads that require substantial processing during school hours. In areas with limited capacity, governments should coordinate AI rollouts with investments in data centers and power infrastructure. For this reason, data-center vacancy rates, power availability, and regional coverage must be integrated into education technology planning. Regulators should also streamline cross-border data flows for model fine-tuning on anonymized student data, ensuring robust protections. Absent these measures, classroom AI pilots will encounter significant challenges during peak periods such as examinations.

The second lever is ensuring connectivity where it's most needed. Data from the ITU and World Bank show steady improvements, but gaps remain in fixed-line access. Education funds should focus on building reliable campus networks, last-mile fiber connections, and school-specific affordability programs that target fixed-line access, not just mobile packages. In cases where fiber is unrealistic, hybrid models should be planned (sketched below). This could include local inference boxes for essential tasks, scheduled synchronization windows, and cached content. The goal is to keep key AI functions operational throughout the day, even with weak internet connections. Broadband is no longer just a support service; it's a critical part of learning. This perspective can help reclassify network upgrades as instructional spending, opening up new funding channels and donor support.
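To make the hybrid pattern concrete, here is a minimal Python sketch, assuming a cloud tutoring endpoint, a small local model on an on-site inference box, and an overnight synchronization window. The model calls, timeout behavior, and window are illustrative placeholders rather than a reference design.

```python
import queue
import time
from datetime import datetime, time as dtime

# Illustrative only: an overnight, off-peak window for uploading queued analytics.
SYNC_WINDOW = (dtime(18, 0), dtime(6, 0))

pending_uploads: "queue.Queue[dict]" = queue.Queue()

def call_cloud_model(prompt: str) -> str:
    """Placeholder for a hosted model call; here it simulates a weak school network."""
    raise TimeoutError("simulated unreliable connection")

def call_local_model(prompt: str) -> str:
    """Placeholder for a small on-site model that keeps essential tasks running."""
    return f"[local fallback] short hint for: {prompt[:40]}"

def tutor_response(prompt: str) -> str:
    # Prefer the cloud model, but never block the lesson when bandwidth fails:
    # fall back to the local inference box and queue the record for later upload.
    try:
        return call_cloud_model(prompt)
    except (TimeoutError, ConnectionError):
        pending_uploads.put({"prompt": prompt, "ts": time.time()})
        return call_local_model(prompt)

def in_sync_window(now: dtime) -> bool:
    start, end = SYNC_WINDOW
    # The window wraps past midnight, so membership is "after start OR before end".
    return now >= start or now <= end

def sync_pending(now: dtime | None = None) -> int:
    """Upload queued records only inside the scheduled window; returns how many were sent."""
    now = now or datetime.now().time()
    if not in_sync_window(now):
        return 0
    sent = 0
    while not pending_uploads.empty():
        pending_uploads.get()  # a real deployment would send this to the district analytics store
        sent += 1
    return sent

if __name__ == "__main__":
    print(tutor_response("Explain fractions with a pizza example"))
    print("records synced:", sync_pending(dtime(19, 30)))
```

The design choice is simply that instruction never waits on the network: cloud calls are preferred when available, essential functions degrade to the local box, and analytics travel only during the scheduled window.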

The third lever is procurement and capability development. The evidence from the OECD is clear: leaders make progress by standardizing, measuring, and improving. Education systems should adopt similar practices. They should buy complete platforms instead of piecemeal solutions. Require vendors to provide APIs for student information systems, assessment engines, and learning management systems. This way, AI features won’t exist as isolated trials. Contracts should be tied to meaningful outcome metrics, such as time saved for lesson preparation, earlier identification of dropout risks, and improvements in student writing assessed with consistent rubrics. Then, the results should be published. When districts can witness credible, comparable improvements, others will follow. This method allows a small group of leaders to move the entire system forward rather than leaving the laggards further behind.
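For illustration, here is a minimal sketch of how an outcome-metrics clause could be checked; the field names, units, and target values are invented for this example, and any real contract would define its own rubric and measurement protocol.

```python
# Illustrative only: a minimal outcome-metrics record for an AI platform contract.
# Field names and target values are assumptions for this sketch, not a standard.
from dataclasses import dataclass

@dataclass
class OutcomeMetrics:
    prep_hours_saved_per_teacher_week: float   # time saved on lesson preparation
    dropout_flag_lead_time_days: float         # how much earlier at-risk students are identified
    writing_rubric_score: float                # mean score on a consistent district rubric (0-100)

def meets_contract_targets(target: OutcomeMetrics, observed: OutcomeMetrics) -> bool:
    """True only if every observed metric meets or exceeds its contracted target."""
    return (
        observed.prep_hours_saved_per_teacher_week >= target.prep_hours_saved_per_teacher_week
        and observed.dropout_flag_lead_time_days >= target.dropout_flag_lead_time_days
        and observed.writing_rubric_score >= target.writing_rubric_score
    )

if __name__ == "__main__":
    contracted = OutcomeMetrics(2.0, 14.0, 65.0)   # what the vendor commits to
    measured = OutcomeMetrics(2.5, 21.0, 70.0)     # what the district actually observes
    print(meets_contract_targets(contracted, measured))  # True -> publish the result
```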

Finally, it's essential to realign labor incentives. Advanced economies benefit in the short term because the wage structure makes calculations straightforward. Emerging systems can still succeed by directing AI towards specialized expertise rather than economical replacements. Focus on applications that maximize expert time: speech therapy triage, early-warning systems for at-risk students, and personalized support modules in the first year of university. These uses do not rely solely on wage differences. They need high-quality data, practical teacher training, and strong governance. If ministries invest in training-focused deployments—where teachers receive time off, coaching, and opportunities for community practice—the returns improve even if salaries are low. This strategy builds human capital for AI and is not just a cost-cutting effort with education in its title.

Addressing potential critiques, some might argue that open-source models and cheaper chips will level the playing field. While it's true that costs will decrease and access will widen, the immediate situation remains sticky. Data center growth is limited by power constraints in existing hubs, and establishing new regions takes time. Public cloud spending is still concentrated in markets with high enterprise demand. Even if a model license is free, running large-scale inference necessitates stable networks and employed engineers. The BIS findings reflect a short-term outlook. In this short period, the limitations are practical, financial, and institutional. The solution is not to give up; instead, we should create the necessary infrastructure now so that when costs drop, schools can take full advantage quickly.

What educators can do next. University leaders should align their programs with AI channels that actually boost productivity: time saved on routine tasks, higher-quality feedback, and earlier student support. They should implement six-week sprints to redesign one course or service from start to finish using AI, then ensure the gains are secured. District superintendents should pool demand to secure reserved cloud capacity and fixed-line upgrades. They should also require service-level credits when latency affects teaching. Teacher training programs need to incorporate AI-assisted planning and assessment into practical experience, making it a supervised, documented requirement rather than an elective. All of this should be based on reliable measurements rather than flashy demonstrations. We should reward teams that turn AI into lasting changes in schedules, rather than focusing solely on those who run the most impressive pilot projects.

What policymakers must decide. Ministries should view the generative AI productivity gap as a new equity issue that requires the use of old tools: targeted investment, transparent benchmarks, and shared services. They should publish a national infrastructure map for AI in education that outlines compute regions, fiber reach, school network readiness, and teacher training capacity. Grants should be tied to improvements on this map. Ministers should negotiate regional education cloud cooperation across borders where single countries are too small to attract vendors. Budget considerations must remain a priority: early estimates from PWBM indicate that the fiscal effects in national accounts will be genuine but modest in the near term. Hence, the education sector cannot wait for a growth dividend to fund infrastructure. Education systems must invest upfront and capture savings in later years as workflows stabilize.

The outcome to aim for. We will know the gap is closing when three things occur simultaneously. First, schools in secondary cities can conduct reliable AI-assisted instruction all day without bandwidth rationing. Second, the reallocation of teacher time is reflected in schedules, not just in surveys, and attrition drops when AI reduces tedious tasks. Third, vendors can deliver their best features to more languages and regions simultaneously because demand is correctly pooled and service levels are enforced. These are tangible indicators, not just slogans. They suggest a path where emerging systems can benefit more before wages rise or local chip manufacturing comes into play.

Convert a structural head start into shared benefits

The available evidence strongly supports the need for immediate policy action. Analysis from the Bank for International Settlements indicates that the initial benefits of generative AI will accrue primarily to advanced economies, while OECD data confirms that leading institutions are progressing more rapidly. Disparities in cloud services, computing power, and wage structures contribute to this unequal landscape. However, these outcomes are not inevitable. Reports demonstrate that the gap can be narrowed through targeted investment in AI-supportive infrastructure, improved affordability, and collaborative public sector purchasing. Education leaders should prioritize reserving computing resources, ensuring service levels, establishing reliable fixed-line connections, and procuring platforms that offer data visibility and measurable outcomes. Teacher training should be integrated into regular work schedules. Although these measures may lack immediate appeal, they are essential for converting AI’s potential into tangible productivity gains for students. Achieving these objectives will transform the generative AI productivity gap from a significant global challenge into a manageable local issue, enabling disadvantaged systems to realize substantial future gains.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bank for International Settlements (2025). Artificial intelligence and growth in advanced and emerging economies: short-run impact (BIS Working Paper No. 1321).
CBRE (2025). Global Data Center Trends 2025.
EY (2025). AI-powered growth: GenAI spurs US economic performance.
Gartner (2024). Worldwide Public Cloud End-User Spending Forecast, 2025.
International Monetary Fund (2024). World Economic Outlook, October 2024, Chapter 1.
International Telecommunication Union (2025). Affordability of ICT Services 2025.
Knight Frank (2025). Global Data Centres Report 2025.
OECD (2024). Fostering an Inclusive Digital Transformation as AI Spreads among Firms.
OECD (2025). AI Adoption by Small and Medium-Sized Enterprises.
Penn Wharton Budget Model (2025). The Projected Impact of Generative AI on Future Productivity Growth.
World Bank (2025). Digital Progress and Trends Report 2025.
World Bank (2024). Digital Connectivity Scorecard.


California’s AI Safety Bet — and the China Test


By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

SB 53 is AI safety policy that also shapes U.S. competitiveness
If it slows California labs, China’s open-weight ecosystem gains ground
Make it work by pairing clear safety templates with fast evaluation and public compute

In 2024, over 50% of global venture capital for AI and machine-learning startups went to Bay Area companies. This figure shows a dependency: a small area now shapes tools for tutoring bots, writing helpers, learning analytics, and campus support. When rules in California change, the effects quickly spread through supply chains. These changes impact labs that train advanced models, startups that build on them, public schools that depend on vendor plans, and universities that design curricula around rapidly changing tools. The California AI safety law is both a safety measure and a competition decision, made in a place that still sets the global standard.

Senate Bill 53, the California AI safety law, signed on September 29, 2025, aims to reduce catastrophic risks posed by advanced foundation models through transparency, reporting, and protections for workers who raise concerns. The goal is serious, but the situation has changed. The U.S. now competes on speed, cost, and spread, as well as model quality. China is pushing open-source models that can be downloaded, fine-tuned, and used by many people at once. If California creates a complex process that mainly affects U.S. labs, the U.S. may pay for compliance while competitors profit. For education, this risk is real. Slower model cycles can mean slower safety improvements, fewer features, and higher prices for school tools.

What SB 53 does, in plain terms

SB 53, the Transparency in Frontier Artificial Intelligence Act, focuses on “large frontier developers”. A “frontier model” is defined by training compute: over 10^26 floating-point operations. A “large” developer must also have over $500 million in revenue in the prior year. These criteria target a small group of firms with many resources, not the typical edtech vendor. The compute threshold is forward-looking: it is meant to capture models still to come, which matters because policy shapes investment plans early. The California AI safety law also prevents local AI rules that clash with the state framework. It centralizes expectations. SB 53 also authorizes CalCompute, a public computing resource, to broaden access and support safe innovation. Its development depends on later state action.
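As a plain-terms illustration of the two statutory tests described above, here is a minimal sketch; the thresholds are the figures quoted in this article, and the class and function names are illustrative rather than drawn from the bill's text.

```python
# Minimal sketch of the two SB 53 tests quoted above; thresholds are this article's figures,
# and the field/class names are illustrative, not taken from the statute.
from dataclasses import dataclass

FRONTIER_COMPUTE_FLOPS = 1e26          # training-compute test for a "frontier model"
LARGE_DEVELOPER_REVENUE_USD = 500e6    # prior-year revenue test for a "large" developer

@dataclass
class Developer:
    name: str
    largest_training_run_flops: float
    prior_year_revenue_usd: float

def trains_frontier_model(dev: Developer) -> bool:
    return dev.largest_training_run_flops > FRONTIER_COMPUTE_FLOPS

def is_large_frontier_developer(dev: Developer) -> bool:
    # Both tests must hold: frontier-scale training compute and the revenue floor.
    return trains_frontier_model(dev) and dev.prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_USD

if __name__ == "__main__":
    edtech_vendor = Developer("typical edtech vendor", 1e23, 40e6)
    frontier_lab = Developer("frontier lab", 3e26, 2e9)
    print(is_large_frontier_developer(edtech_vendor))  # False: outside SB 53's direct scope
    print(is_large_frontier_developer(frontier_lab))   # True: covered by the framework and reporting duties
```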

Figure 1: The Bay Area captured about 52% of global AI and machine-learning venture funding in 2024. When the hub tightens rules, the cost and speed effects spread down the stack, including education tools.

The law's main requirement is a published “frontier AI framework.” Firms must explain how they identify, test, and reduce catastrophic risk during development and use, and how they apply standards and best practices. It also establishes confidential reporting to the California Office of Emergency Services, as well as a way for employees and the public to report “critical safety incidents”. The law defines catastrophic risk using thresholds, such as incidents that could cause over 50 deaths or $1 billion in damage, not everyday errors. The California Attorney General handles enforcement, with penalties up to $1 million per violation. Reporting is confidential, to encourage disclosure without creating a misuse guide. This is good for safety, but it raises a tradeoff that education buyers will notice.

How the California AI safety law can tilt competitiveness

The first competitive issue is indirect. Most education startups will not train a 10^26-FLOP model. They rent model access and build tools on top of it: tutoring, lesson planning, language support, grading help, and student services. If providers add review steps or slow releases to lower legal risk, products are delayed. The Stanford AI Index says that U.S. job postings needing generative-AI skills rose from 15,741 in 2023 to 66,635 in 2024. Also, large user bases quickly adopt defaults; Reuters says that OpenAI reached 400 million weekly active users in early 2025. Even minor slowdowns in release cycles can change which tools become standard in classrooms.

The second issue is geographic. California has the highest AI demand, accounting for 15.7% of U.S. AI job postings in 2024, ahead of Texas at 8.8% and New York at 5.8%. This does not prove that firms will leave. SB 53 targets large developers, and many startups will not be directly affected. However, ecosystems shift. The question is where the next training team is built, where the next compliance staff is hired, and where the next round of capital is invested. If the California AI safety law introduces legal uncertainty into model development, and other states are more relaxed, the easiest path can shift. Over time, that drift can make relocation look like the path of least resistance.

Figure 2: California alone accounts for nearly twice Texas’s share of AI job postings. That concentration is why the California AI safety law can ripple through the national AI pipeline.

The third issue is the market structure that schools experience. Districts and universities rarely have the staff to check model safety. They depend on vendor promises and shared standards. A good law can improve the market by making safety claims verifiable. A bad one can do the reverse, reducing competition and choice, while everyday harms continue. Those harms include student data leaks, biased feedback that reflects race or disability, incorrect citations in assignments, and risks of manipulation. Critics say SB 53 concerns catastrophic risk, not daily failures. Yet, the same governance choices shape both. If major providers limit features or change terms for education buyers, districts will have fewer options as demand rises.

China’s open-source push changes the baseline

China’s open-source strategy changes what “disadvantage” means. Stanford HAI says that Chinese open-source language models have caught up and may have surpassed others in capability and adoption. The pattern is breadth. Many people are building efficient models for flexible use rather than relying on a single platform. These models travel well, enabling local fine-tuning, private use, and quick adaptation to specific areas, such as education tools that must be private. The ecosystem is not just “one model,” but a system of reuse in which weights and tools spread quickly across firms. A state law that mainly slows a few U.S. labs can still reshape the global field.

Market signals support this, and they matter for education. Reuters reports that DeepSeek became China’s top chatbot, with 22.2 million daily active users, and expanded its open-source reach by releasing code. Reuters also says that Chinese universities launched DeepSeek-based courses in early 2025 to build AI skills. Governance is changing to keep things moving. East Asia Forum says that China removed an AI law from its 2025 agenda, leaning more on pilots. In late December 2025, Reuters reported draft rules targeting AI services that mimic human traits, demonstrating quick, targeted oversight. California uses a single compliance method, while China can adjust controls as adoption changes.

Make safety a strength, not a speed bump

The solution is not to drop the California AI safety law, but to make safety a competitive advantage. Start with transparency for buyers. Districts need disclosures about testing that can be checked. California should advance its “frontier AI framework” toward a template that aligns with risk guidance, such as the NIST AI Risk Management Framework. It should also create a safe space for education pilots that follow privacy rules, so providers are not punished for sharing proof with schools. The federal tone also matters. In January 2025, the White House issued an order to lower barriers to American AI leadership. If Washington signals speed and Sacramento signals difficulty, firms will exploit the split. A template and alignment with norms can lower overlap without lowering standards.

California should also treat compute access as part of safety. SB 53 establishes CalCompute to expand access and support innovation. But much of this depends on a report due by January 1, 2027, and funding. If the state wants to keep research local, speed is essential. Public compute can help universities run checks and stress-test models without relying on vendors. It can also support sharing findings without exposing student data. This shared proof bridges “catastrophic risk” and the risks that harm learners.

The opening statistic is a warning, not a boast. When AI startup capital is in one region, that region’s rules become policy. California can lead on safety and speed, but only if the California AI safety law rewards practice and lowers uncertainty for users. Education shows this tradeoff. Schools will adopt what works, at a price they can afford. If U.S. providers slow, costs rise, and open-source options will spread. The task is to make SB 53 a system for trusted adoption: templates, tests, incident learning, and compute access. That is how a safety law becomes a strategy, not a handicap.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

California State Legislature. (2025). Senate Bill No. 53: Artificial intelligence models: large developers (Chapter 138, Transparency in Frontier Artificial Intelligence Act). Sacramento, CA.
Hu, B., & Au, A. (2025, December 25). China resets the path to comprehensive AI governance. East Asia Forum.
Meinhardt, C., Nong, S., Webster, G., Hashimoto, T., & Manning, C. D. (2025, December 16). Beyond DeepSeek: China’s diverse open-weight AI ecosystem and its policy implications. Stanford Institute for Human-Centered Artificial Intelligence.
National Institute of Standards and Technology. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce.
Reuters. (2025a, September 29). California governor signs bill on AI safety.
Reuters. (2025b, February 21). DeepSeek to share some AI model code, doubling down on open source.
Reuters. (2025c, February 21). Chinese universities launch DeepSeek courses to capitalise on AI boom.
Reuters. (2025d, December 27). China issues draft rules to regulate AI with human-like interaction.
Reuters. (2025e, February 20). OpenAI’s weekly active users surpass 400 million.
Stanford Institute for Human-Centered Artificial Intelligence. (2025). AI Index Report 2025: Work and employment (chapter). Stanford University.
State of California, Office of Governor Gavin Newsom. (2025, September 29). Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry. Sacramento, CA.
White House. (2025, January 23). Removing barriers to American leadership in artificial intelligence. Washington, DC.


Korea’s AI PhD Fast Track Won’t Fix the Talent Gap Unless It Fixes the PhD


By Keith Lee

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

Fast AI PhDs won’t fix hiring if training stays shallow
Firms need proof of real research skills, not faster diplomas
Rigor and outcomes must drive funding and standards

Available data indicate that Korea's artificial intelligence workforce is bigger than people think. The Bank of Korea estimates that about 57,000 AI specialists were employed in Korea in 2024. Yet companies still say they struggle to find qualified people. This gap shows the real test for the AI PhD fast-track program. If more people graduate but hiring remains hard, the issue extends beyond the number of graduates. It relates to the suitability, knowledge, and credibility of potential employees. AI work needs a solid education: systems can fail when the data they run on change. When those in charge of hiring see an AI PhD fast track, they want proof that a person can do cutting-edge work, not just a degree earned quickly.

The thinking behind the fast-track policy is that shortening the time to get a degree will attract more students. It seems simple, and the number of spots can be easily counted. But AI expertise takes time to grow in labs, through coding and learning from mistakes. A PhD involves education, skill development, learning to deal with unforeseen issues, and time spent in a research setting. If the policy views the PhD as just longer schooling, the AI PhD fast track risks valuing speed over good research. The result might look good on paper, but it does not meet companies' needs.

The AI PhD fast track focuses on time, but companies want quality

In November 2025, Korea’s Education Ministry announced a nationwide plan to foster AI talent. A key part allows students to finish bachelor’s, master’s, and doctoral studies in 5.5 years. The plan also aims to increase the number of AI-focused schools from 730 to 2,000 by 2028 and to increase the number of science and special schools offering specialized courses from 14 to 27. The Korea Herald reported that about 1.4 trillion won is being invested. These numbers show the urgency and highlight what’s easiest to measure: time, numbers, and funding. Yet, employers care more about skills, judgment, and good research habits.

It is worth asking what the system seeks. Korea has strong educational results but relies on credentials. OECD data shows 71% of Koreans aged 25 to 34 have completed college, the highest rate among OECD countries. Only 3% of that group have a master’s or doctorate, well below the OECD average. This suggests degrees may be social symbols, while advanced study is treated as an afterthought. For the AI PhD fast-track program, that matters. It could raise the wrong question: "How fast can a student graduate?" Research should ask: "What did the student figure out, and how do we know it is correct?" If time is the primary measure, courses may cut the most challenging material, such as advanced math, statistics, and iterative system building.

Figure 1: High degree completion does not automatically mean deep research training—this gap helps explain why speed-based PhD reforms can miss what employers screen for.

Public money also adds to what is at stake. Korea spends heavily on new ideas, allocating about 4.96% of its GDP to domestic R&D in 2023. With so much public support, inadequate training is more than a personal mistake; it is a loss for the country. Taxpayers pay for labs, grants, and programs to develop talent. If the AI PhD fast track leads to shallow research, companies will pay twice: first through taxes and then to fix failed systems, while graduates see their degrees mean less. Universities may also face reputational issues that are hard to fix. Speed can be helpful when it is earned through real competence and good guidance; otherwise, the program might just help people finish quickly rather than prepare them for meaningful work.

What the AI PhD fast track should provide for jobs

What companies want in AI is becoming clearer, not easier. The OECD says that in Korea, 56.5% of companies using AI have seen it replace parts of some jobs. Many also say AI has increased the types and level of skills needed for current jobs. For small to mid-size Korean businesses, the need for data analysis skills is rising fastest, followed by computing skills. This tells universities designing the AI PhD fast-track that a quicker path only helps if graduates can handle complex data, create tests, and explain their decisions. This skill comes from repeatedly doing research, getting feedback, and building systems that are tested and repaired. It does not come from taking courses alone.

The Bank of Korea’s research on the job market explains why more graduates do not necessarily lead to more hiring. They estimate that about 11,000 Korean AI experts worked outside Korea in 2024, about 16% of the AI workforce.

Their findings also show that AI workers in Korea earned only about a 6% wage premium in 2024, well below the premium paid in the U.S. and other countries. When the premium is low, people tend to look for work elsewhere. It can also change how employers act at home. Companies raise the bar for hiring because it costs them a lot to employ someone who is not ready. They look for proof of ability, focusing on past work and solid modeling skills. In that environment, the AI PhD fast track will be tested by its results. If the degree does not point to someone who will do well on the job, it will lose value.

Estimates of job shortages show why fundamental skills matter. A 2023 forecast projected demand for AI staff to reach about 66,100 by 2027, while only about 53,300 are available, leaving a gap of roughly 12,800. SPRi stated that 81.9% of 2,354 AI firms in Korea had trouble finding enough workers. These numbers support investment but show the risk involved. Companies do not want just anyone with an AI title; they want people who can create and defend their work. One poorly skilled worker can slow a team and create hidden risks. So, the AI PhD fast track needs to raise the percentage of graduates who can contribute to research and new products from the start.

Figure 2: The headline shortage is serious, but the bigger problem is at the R&D level—exactly where a credible PhD signal should reduce risk for employers.

When speed turns into a meaningless degree

No one wants to create a worthless degree. The risk is that standards drop over time. Across the world, low-quality schools often have something in common: they promise a degree very quickly. The Council for Higher Education Accreditation warns that such schools make quick completion a main selling point. The Department of Education warns that these places may look real but fail basic quality checks, and encourages students and companies to verify claims.

Korea’s AI PhD fast track is not one of these degree mills. But local discussion already calls programs that award degrees without real training "degree mills." If 5.5 years is what it is known for and the rules are not consistent, the thinking can become flawed: students pay to save time, institutions sell degrees, and companies and taxpayers suffer when the degree does not indicate that someone is capable.

AI makes it harder to hide a poor education. A bad report might go unnoticed in some subjects, but in AI, it shows up quickly. Teams that do not comprehend testing find it hard to tell what is essential. Teams that do not understand statistics struggle with skewed results. Teams that do not understand modeling struggle to think about failures. That is why companies ask about portfolios and programs, not just grades. An AI PhD fast track can be sound, but it has to rely on proof of skill rather than time. If it takes less time and has no challenging requirements, it trains students to avoid risk, since difficult questions take longer and are more likely to be answered incorrectly. That is the reverse of how research training should be.

There is also a real cost to reputation. AI recruitment is global, and word spreads fast. Companies still judge degrees by what graduates can do, even if they never visit the school. Korea is already seeing skilled AI workers leave the country. In that context, a weak signal hurts everyone, even the best graduates. The market judges the whole system, not just individual departments. If the AI PhD fast track is seen as a waste of time, it can limit the chances for those it is meant to help. Therefore, consistency across institutions is essential. A few weak programs can ruin the signal for the good ones. It takes time to build a good reputation, but it can be lost quickly.

How to make the AI PhD fast track valuable for hiring

Instead of selling speed, the AI PhD fast-track should be about demonstrating skill with clear proof. Students who already have skills can move faster, but those who do not should not be hurried. That requires a solid foundation in math, statistics, and modeling, which many AI research labs consider important. It also needs results that are hard to fake: tested experiments, well-documented programs, and a report that passes outside review. Schools can help by testing problem-solving skills instead of memorization. If a school cannot achieve these conditions, it should not promote itself as an AI PhD fast track.

Rules should match goals. OECD research on quality suggests that outside policies can encourage improvement inside higher education. Some might say the AI PhD fast track is only for great students, so quality will take care of itself. That will not happen on its own when funding and reputation reward throughput across departments. So officials need rules that reward research quality, not just graduate numbers. School leaders need to assign reasonable workloads that allow time for mentoring and feedback. Departments need external examiners who are not aligned with career interests. Concrete outputs, like published work, matter too. These steps protect good students and stop companies from paying for credentials that are not credible.

The job market cannot be ignored. The Bank of Korea’s finding of a low wage premium means Korea is not valuing AI skills like other countries do. Some might argue that companies can teach what programs cannot, but fixing this is expensive. If great skill is not valued, it will be lost. Consequently, the AI PhD fast track needs a matching job plan. That could include stronger partnerships with companies, research standards, guidelines on ownership, and early job tracks that do not require talent to move abroad for good pay. Otherwise, Korea might train people quickly, pay for it publicly, and then see companies in other countries benefit. A quick degree is only one part of the plan; incentives and research environments shape productivity too.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bank of Korea. (2025). Mismatches in Korea’s AI labor market (AI workforce estimate; overseas share; wage premium discussion).
Council for Higher Education Accreditation. (n.d.-a). Degree mills: An old problem and a new threat.
KBS World. (2025, November 10). Gov’t unveils AI education plan to nurture new talent.
Korea Herald. (2025, November 10). S. Korea to foster AI talent across all stages of life.
Korea.net. (2024, December 30). Domestic R&D spending as % of GDP ranked No. 2 in 2023.
Lee, K. (2025a, November 12). AI시대, 고급 교육을 포기한 대학의 미래는 없다 [In the AI era, there is no future for universities that abandon advanced education]. 디 이코노미.
Lee, K. (2025b, November 12). 한국 AI 연구 인력의 실상과 그 배경 [The reality of Korea's AI research workforce and its background] (analysis of math/stat training and AI research labor markets). 디 이코노미.
MK Pulse. (2023, September 1). S. Korea faces shortage of skilled workforce in key technology fields.
OECD. (2025a). Artificial intelligence and the labour market in Korea.
OECD. (2025b). Education at a glance 2025: Korea country note.
OECD. (2025c). Ensuring quality in VET and higher education: Getting quality assurance right.
SPRi (Software Policy & Research Institute). (2024). Media page entry citing results of the 2023 AI industry survey.
The Korea Times. (2025, December 5). Korea sees brain drain of AI talent amid low wage premium: BOK (Yonhap).
U.S. Department of Education. (2025, April 24). Diploma mills and accreditation.
WIPO. (2025). Global Innovation Index 2025: Republic of Korea indicator notes.
