AI Agents in Education Can Double Learning, So Let’s Design for Homes, Not Just Enterprises

By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

AI agents in education boost learning while cutting time
Build home-first workflows for practice, planning, and records
Scale with evidence and guardrails to protect equity and trust

One data point should change our thinking: a recent peer-reviewed study shows that students using an AI tutor learned significantly more, and in less time, than those in an active learning class covering the same material. The AI tutor was built around evidence-based teaching methods, and the gains held up; this is not a marketing claim. If an AI tutor can save time and improve learning in controlled settings, what happens when these “agentic” systems are used in the family home, where most homework takes place? The implications are concrete: families could adopt AI agents in education that assist children each night, coach parents, and allow schools to focus their limited human expertise where it is most needed. In short, AI agents in education are transitioning from novelty to necessity, and the question is how quickly we can adjust to that reality.

We can see the shift in usage data. In 2025, 34% of U.S. adults reported using ChatGPT, roughly double the 2023 figure. Among UK undergraduates, use of generative tools increased from 66% in 2024 to 92% in 2025. Teen adoption is also growing: 26% of U.S. teens reported using ChatGPT for schoolwork in 2024, up from 13% the previous year. None of this guarantees learning improvements, but it shows where attention and effort are being directed. The opportunity now is to turn basic tool use into accountable, agent-led workflows that prioritize families first, schools second, and vendors last. This means creating a dependable “home system” for tutoring, planning, and documentation that works just as well on a Tuesday night as it does on a weekend for test prep.

AI agents in education: from single tools to household systems

An AI agent is more than just a chat window. It is software that can plan tasks, use tools, and work towards a goal. Businesses are already using agents to answer complex questions, manage workflows, and connect different systems. The lesson for schools is clear: what works in business will also work at home. Families need agents that help children study, automatically log progress for teachers, draft emails about accommodations, and schedule therapy sessions without paperwork getting lost. The key points from consulting apply at home: it’s not just about how intelligent the agent is; it’s about redesigning the workflow around what learners and caregivers actually do. Focus on the weekly rhythm of assignments and supports rather than worrying about flashy demonstrations.
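To make the distinction concrete, here is a minimal sketch of what separates an agent from a chat window: a loop that plans, calls tools, and logs every step for caregiver review. The tool names, the fixed planning stub, and the data structures are illustrative assumptions, not any product’s real API.

```python
# Minimal sketch of a household "agent loop": plan -> act via tools -> log.
# All tool names and the fixed planning stub are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable

@dataclass
class Step:
    tool: str      # which tool the agent wants to use
    argument: str  # what it passes to that tool

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, step: Step, result: str) -> None:
        # Timestamped trail that a parent or teacher can review later.
        self.entries.append(
            (datetime.now().isoformat(), step.tool, step.argument, result)
        )

# Allowlisted tools: the agent can only call what the household has approved.
TOOLS: dict[str, Callable[[str], str]] = {
    "fetch_calendar": lambda arg: f"deadlines for {arg}: essay Fri, quiz Mon",
    "draft_email": lambda arg: f"[DRAFT ONLY, needs parent approval] {arg}",
}

def plan(goal: str) -> list[Step]:
    # A real agent would ask an LLM to decompose the goal; this stub is fixed.
    return [Step("fetch_calendar", "this week"),
            Step("draft_email", f"teacher update re: {goal}")]

def run_agent(goal: str, log: AuditLog) -> None:
    for step in plan(goal):
        result = TOOLS[step.tool](step.argument)  # only allowlisted calls work
        log.record(step, result)

log = AuditLog()
run_agent("math homework support", log)
for entry in log.entries:
    print(entry)
```

The point of the sketch is the shape, not the stub: planning, tool use, and an audit trail are what make the weekly workflow redesign possible.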

A clear example of where this is heading comes from a recent story about a parent who used “vibe coding” tools to build an AI tutoring platform for her dyslexic son. She did not wait for a perfect product. She combined research-based prompts, dashboards, and student intake forms to personalize the agent’s guidance, drawing on hundreds of studies about learning differences. The result was not a toy; it was a home setup that adjusted to the child’s motivation and needs. When a parent can create a functioning tutor, we have entered a new era. For context, specialized reading support in the U.S. averages about $108 per hour. An AI agent that can enhance or partially replace some of those human sessions changes the dynamics of time, cost, and access—all while keeping the human specialist for the more complex parts.

What AI agents can do for families now

We should identify specific use cases because they relate to real challenges families face. Start with support for reading and writing. Recent evidence shows that AI tutors can deliver greater gains in less time when designed around effective teaching methods. This makes agents ideal for structured, paced, and responsive nightly practice. Add school logistics: an agent can extract deadlines from learning platforms, generate study plans, and remind both children and caregivers. It can summarize teacher feedback into two specific actions per subject. It can maintain a private, ongoing learning record that parents can share at the next meeting. Because these are agents, not static programs, they can access external tools to retrieve school calendars, assemble forms for accommodations, or draft that email you've been putting off.

Figure 1: Students using the AI tutor spent 49 minutes on task versus 60 minutes in class—about 18% less time—while the same study reports significantly larger learning gains for the AI-tutored group.

Families also need help connecting limited, costly expert support with critical moments. Reading intervention tutors are essential, but capacity and cost are issues. With agents enabling focused practice between sessions, children can arrive ready for human instruction. This does not mean total replacement; it means better use of resources. Additionally, broader tool usage suggests families are becoming comfortable asking AI for help with information and planning. Surveys show that many adults rely on AI to search and summarize, and students report routine use. If we can direct that comfort toward a household agent focused on learning goals, we can reduce back-and-forth communication, minimize wasted time, and make the hours spent with a human expert more effective.

Risks, equity, and the danger of “agent washing”

There is a strong temptation to label every scripted workflow an “agent.” Analysts caution that over 40% of agentic AI projects may be scrapped by 2027 because teams pursue trends rather than results. This warning is essential in education, where trust is the most valuable asset. The safeguard against soaring expectations is precise planning and measurable outcomes: time on task, growth on validated assessments, and teacher-observed application. This also means avoiding unsafe autonomy. Household agents should have limited default functions, operate under strict guidelines, and provide logs that parents and teachers can quickly review. The standard must focus on verifiable improvements, not flashy showcases.
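One way to express “limited default functions” in code is a default-deny policy in which sensitive actions queue for explicit human approval before executing. The sketch below is an assumption about how such a guardrail layer might look, not an established standard; every action name in it is invented.

```python
# Sketch of a default-deny guardrail: every action is blocked unless policy
# explicitly allows it, and "sensitive" actions queue for human approval.
from enum import Enum, auto

class Verdict(Enum):
    ALLOW = auto()           # runs immediately, still logged
    NEEDS_APPROVAL = auto()  # held until a parent or teacher signs off
    DENY = auto()            # never runs

# Hypothetical policy table; anything missing from it is denied by default.
POLICY = {
    "quiz_child": Verdict.ALLOW,
    "summarize_feedback": Verdict.ALLOW,
    "send_email": Verdict.NEEDS_APPROVAL,
    "submit_form": Verdict.NEEDS_APPROVAL,
}

approval_queue: list[str] = []

def guarded_execute(action: str, payload: str) -> str:
    verdict = POLICY.get(action, Verdict.DENY)  # default-deny
    if verdict is Verdict.DENY:
        return f"blocked: {action} is not an approved capability"
    if verdict is Verdict.NEEDS_APPROVAL:
        approval_queue.append(f"{action}: {payload}")
        return f"queued for human approval: {action}"
    return f"executed: {action}({payload})"

print(guarded_execute("quiz_child", "fractions, 10 items"))
print(guarded_execute("send_email", "accommodation request to Ms. Lee"))
print(guarded_execute("transfer_money", "$50"))  # not in policy -> denied
print("awaiting sign-off:", approval_queue)
```

The design choice worth copying is the default: capabilities are off until someone turns them on, which is the opposite of how most demos ship.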

Figure 2: Meeting universal schooling by 2030 requires ~44 million teachers; Sub-Saharan Africa (about 15.0 million) and South-Eastern Asia (about 9.8 million) account for over half of the gap—underscoring why AI agents should target routine workload, not replace scarce experts.

We also need strong protections for equity. Global policy organizations warn that AI's potential will vanish if we ignore access, bias, and data handling. This is not just hand-wringing; it is about practical design. AI agents in education must prioritize human-centered guidance, protect student data, and be implemented with teacher training and clear rules. We should account for low-resource settings—for instance, offline options, simple devices, and multilingual prompts. Remember the staffing challenge: the world needs millions more teachers this decade to achieve universal education goals. Agent systems should help by taking on routine tasks and structured practice, not by replacing irreplaceable specialized roles.

A playbook to make AI agents in education work—at home and at school

Start with small, manageable successes. Pick one subject and one grade level. Use an agent to assign tasks, coach, and check practice aligned with the existing curriculum. Treat the agent as a redesign of the workflow, not an add-on tool. The most effective implementations in industry focus on the process rather than the tool itself; schools should do the same. As usage increases, integrate the agent with gradebooks and learning platforms to automatically populate progress logs and reduce administrative burdens. Monitor fidelity: if the agent’s prompts stray from the curriculum, correct them. The goal is to free up teacher time for valuable feedback and meetings while providing families with a reliable nightly routine.

Then scale based on evidence. Use validated measurements and compare agent-supported practice against standard methods. If the results favor the agent, make it official. If they do not, adapt or stop. Build trust within the community by showing not only that students used an AI tool but that they learned more in less time. That is the standard set by recent research on AI tutoring.
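When comparing agent-supported practice against standard methods, a transparent statistic such as Cohen’s d alongside a Welch t-test keeps the conversation grounded in learning rather than usage. A minimal sketch follows; the score lists are fabricated placeholders for illustration only, not data from any study.

```python
# Sketch: compare post-test scores for agent-supported vs. standard practice
# using Cohen's d and Welch's t-statistic. Score lists are placeholders.
import math
from statistics import mean, stdev

agent_group = [78, 85, 82, 90, 74, 88, 81, 79, 86, 84]    # illustrative only
control_group = [72, 80, 75, 83, 70, 78, 76, 74, 79, 77]  # illustrative only

def cohens_d(a: list[float], b: list[float]) -> float:
    # Pooled-SD effect size: how many standard deviations separate the means.
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                       / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled

def welch_t(a: list[float], b: list[float]) -> float:
    # Welch's t-statistic: robust to unequal variances between groups.
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

print(f"Cohen's d: {cohens_d(agent_group, control_group):.2f}")
print(f"Welch t:   {welch_t(agent_group, control_group):.2f}")
```

An effect size a school board can read in one line does more for trust than a dashboard of engagement metrics.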

Meanwhile, monitor usage trends. Both adult and student use indicates that agents are becoming part of daily life; institutions should meet families where they are by offering approved agent options, clear data policies, and easy-to-follow guides. Finally, align purchasing strategies to avoid “agent washing” by using straightforward criteria for limits on autonomy, logging, and outcomes. This reduces vendor turnover and keeps the focus on learning rather than features.

An AI tutor can lead to much more learning in less time than an active-learning class covering the same content. This single insight serves as a guiding principle for policy. It suggests that the right kind of AI—integrated into a workflow, limited for safety, and measured by outcomes—can help families gain hours back and help schools focus on their valuable human expertise. The consumer trend is moving in the same direction, as adults and students incorporate AI into their everyday tasks. The task ahead is to direct that momentum into accountable agents designed for home use and connected to schools. If we accomplish that, “AI agents in education” will evolve from a buzzword into a dependable part of every household’s learning resources. The key question is simple: do students learn more, faster, without compromising trust or equity? If the answer is yes, we should scale, and we should start now.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Axios. (2025, September 18). AI can support 80% of corporate affairs tasks, new BCG report says.
British Dyslexia Association. (2025). Dyslexia overview.
HEPI & Kortext. (2025). Student Generative AI Survey 2025.
McKinsey & Company. (2025, March 12). The State of AI: Global survey.
McKinsey & Company. (2025, September 12). One year of agentic AI: Six lessons from the people doing the work.
Pew Research Center. (2025, January 15). About a quarter of U.S. teens have used ChatGPT for schoolwork.
Pew Research Center. (2025, June 25). 34% of U.S. adults have used ChatGPT.
Reading Guru. (2024). National reading tutoring cost study.
Reuters. (2025, June 25). Over 40% of agentic AI projects will be scrapped by 2027, Gartner says.
Scientific American. (2025). How one mom used vibe coding to build an AI tutor for her dyslexic son.
Stanford/Harvard research team. (2025). AI tutoring outperforms active learning. Scientific Reports.
Teacher Task Force/UNESCO. (2024, October 2). Fact Sheet for World Teachers’ Day 2024.
UNESCO. (2023, updated 2025). Guidance for generative AI in education and research.
OECD. (2024). The potential impact of AI on equity and inclusion in education.
Oracle. (2025). 23 real-world AI agent use cases.


The AI Cognitive Extensions Gap: Why Korea’s Test-Prep Edge Won’t Survive Generative AI

By Catherine Maguire

Korea excels at teen “creative thinking,” but adults lag in adaptive problem solving
Generative AI automates routine tasks, so value shifts to AI cognitive extensions—framing, modeling, and auditing
Reform exams, classroom routines, and admissions to reward those extensions, or the test-prep edge will fade

Fifteen-year-olds in Korea excel in “creative thinking.” In 2022, 46% reached the top levels on the OECD test, well above the OECD average of 27%. However, Korean adults struggle with flexible, real-world problem-solving. On the OECD Survey of Adult Skills, 37% score at or below Level 1 in adaptive problem solving, and only 1% reach the top level. This gap is significant. School-age achievements in "little-c" creativity do not translate into adult skills in defining problems, synthesizing evidence, and designing solutions alongside AI. Generative tools accelerate this shift in what matters. Randomized studies show that AI improves routine writing quality by about 18% and reduces time spent by around 40%. Customer support agents solve 14-15% more queries with the help of AI. The real value lies in the human steps before and after automation—what we term AI cognitive extensions: scoping, modeling, critiquing, and integrating across sources. Systems that depend heavily on algorithmic training will struggle, while those that teach AI cognitive extensions will thrive.

AI Cognitive Extensions: Paving the Way for a New Era in Education

Generative AI now takes on much of the execution work that schools have valued: standard writing, template coding, routine data transformations, and procedural math. In 10 OECD countries, jobs most affected by AI already prioritize management, collaboration, creativity, and communication. Seventy-two percent of high-exposure job postings require management skills, and demand for social and language skills has increased by 7-8 percentage points over the past decade. As tools handle more tasks, employers appreciate the ability to set up the task, break down challenges, evaluate outputs, and communicate decisions across teams. These are AI cognitive extensions. They are not mere soft skills; they represent crucial judgment skills in a world where basic tasks are automated.

Korea has been moving in this direction on paper. The 2022 national curriculum revision aims to cultivate “inclusive and creative individuals with essential skills for the future,” with a focus on uncertainty and student agency. However, policy intent faces cultural challenges. Private after-school tutoring remains nearly universal and is optimized for high-stakes exams. In 2023-2024, participation was around 78-80%, and spending reached record levels—about ₩27-29 trillion—despite a declining student population. The education system still rewards algorithmic speed under exam pressure, and a system tuned that way cannot deliver AI cognitive extensions at scale.

Figure 1: Management and process skills dominate AI-exposed roles—the very AI cognitive extensions schools must teach and assess.

What the Data Says About Korea’s Strengths—and Weak Links

It is tempting to say the system is already succeeding based on high PISA scores in creative thinking. But a closer look reveals the truth. PISA measures "little-c" creativity in familiar contexts for 15-year-olds, assessing the ability to generate and refine ideas within set tasks. It does not evaluate whether graduates can define tasks themselves, combine evidence from different fields, or check an AI's reasoning under uncertainty. The picture changes for adult skills. Korea’s average literacy, numeracy, and adaptive problem-solving in the PIAAC survey fall below OECD averages, with a notably thick lower range in adaptive problem-solving. This discrepancy creates a labor-market mismatch between the skills graduates possess and the skills employers need, one that schools pass along to universities and employers. It also highlights the need for AI cognitive extensions.

Figure 2: Korea’s top-performer rate is ~2× the OECD average—strong “little-c” creativity at age 15, not yet proof of adult problem-framing power.

AI adoption further emphasizes the importance of framing over mere execution. Randomized studies show that generative tools improve output for routine writing and standard customer support, especially for less-experienced workers on templated tasks. Coding experiments with AI pair programmers exhibit similar time savings. Automation reduces the value of algorithmic execution, the very area that cram schools optimize for. The emphasis shifts to individuals who can write effective prompts, define evaluation criteria, blend domain knowledge with statistical insight, and justify decisions in uncertain situations. Essentially, AI handles more of the “show your work” aspect, and humans must decide which work is worth doing in the first place. This is the gap Korea needs to bridge.

Designing an AI Cognitive Extensions Curriculum

The solution lies in the curriculum, not superficial tweaks. Start by teaching students the tasks that AI cannot easily manage: defining problems, selecting evaluation measures, and justifying trade-offs. Make these actions clear. In math and data science, shift the emphasis from merely solving problems to “model choice and critique”: why choose a logistic link over a hinge loss, what the priors imply, where confounding variables might hide, and how we would detect model drift. In writing, revise assessments from finished essays into decision memos that include evidence maps and counterarguments. In programming, replace bland problem sets with “spec to system” tasks where students must gather requirements, create basic tests, and then use AI to draft and refine while documenting risks. These practices introduce AI cognitive extensions as a series of habits that can be assessed for quality and reproducibility.
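As a classroom artifact, a "model choice and critique" exercise can be as small as fitting a logistic link and a hinge loss on the same data and asking students to defend the metric. A minimal sketch using scikit-learn follows, with synthetic data standing in for a real dataset; the critique prompts at the end are the actual assessment target.

```python
# Sketch of a "model choice and critique" exercise: same features, two losses.
# Students must explain *why* one link/loss suits the problem, not just which
# cross-validated score is higher. Synthetic data stands in for a real set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression  # logistic link
from sklearn.svm import LinearSVC                    # hinge loss
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, flip_y=0.05,
                           random_state=0)

models = {
    "logistic link (calibrated probabilities)": LogisticRegression(max_iter=1000),
    "hinge loss (margin, no calibrated probs)": LinearSVC(max_iter=5000),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")

# Critique prompts for students:
# - If downstream decisions need calibrated probabilities, which model wins?
# - Which assumptions break if the classes are heavily imbalanced?
# - What would model drift look like here, and how would you detect it?
```

The accuracy printout is deliberately the least interesting part; the graded work is the written defense of the choice.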

Korea can move quickly because its education system already rigorously tracks performance. Replace some timed, single-answer tests with short, frequent tasks where students submit a problem frame, an AI-assisted plan, an audit of their model or tool choice, and reflections on their failures. Rubrics should evaluate the clarity of assumptions, the choice of evaluation metrics, and the student’s ability to highlight second-order effects. This approach maintains meritocratic standards while rewarding advanced thinking. National curriculum language that emphasizes adaptable skills allows for such shifts; the challenge is effectively applying those changes in schools still influenced by hagwon culture. Adjusting classroom incentives to promote AI cognitive extensions is essential for turning policy into practice.

The institutional evidence is telling. A selective European program that imported Western, problem-based AI education into Asia found that a large majority of its Asian students struggled to apply theory to open-ended applications and synthesis. This prompted a redesign of admissions and teaching methods. The issue was not a lack of mathematical ability; instead, it highlighted the missing connection between abstract concepts and real-world applications in uncertain situations—the very essence of AI cognitive extensions. While one case can’t replace national data, it shows how quickly high-performing test takers can stumble when tasks change from executing algorithms to creating them. We should view this as a cautionary tale and a guide for improvement.

Accountability That Rewards AI Cognitive Extensions

Assessment must evolve if curricula are going to change. The national exam system offers a straightforward opportunity for reform. Korea’s test reforms have primarily focused on removing “killer questions.” While this may reduce the competition in private tutoring, it does not assess what truly matters. Instead, introduce scored components that cannot be crammed for: scenario briefs with uncertain data, brief oral defenses of plans, and audit notes for an AI-assisted workflow that flag hallucinations, biases, and alignment risks. Assess these using double-masked rubrics and random sampling to ensure fairness and scalability. If the exam rewards AI cognitive extensions, the system will teach them.
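Double-masked scoring with random sampling is easy to prototype: strip identities, assign each sampled response to two raters at random, and keep the code-to-name key separate from what raters see. The sketch below assumes a simple in-memory setup, with invented names throughout.

```python
# Sketch of double-masked scoring: student identities are replaced with
# random codes, each sampled script goes to two raters, and the code-to-name
# key is stored separately from what raters see. All names are invented.
import random
import uuid

random.seed(42)  # reproducible assignment, so the process itself is auditable

students = ["Kim", "Lee", "Park", "Choi", "Jung", "Han"]
raters = ["R1", "R2", "R3", "R4"]
SAMPLE_SIZE = 4  # random sampling keeps scoring scalable at national volume

masked_key = {s: uuid.uuid4().hex[:8] for s in students}  # kept sealed
sampled = random.sample(students, SAMPLE_SIZE)

assignments = {}
for student in sampled:
    code = masked_key[student]
    # Two independent raters per script; neither sees the student's name.
    assignments[code] = random.sample(raters, 2)

for code, pair in assignments.items():
    print(f"script {code} -> raters {pair[0]} and {pair[1]}")
```

At scale the same logic runs inside an exam platform rather than a script, but the fairness properties come from the design, not the software.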

The second area for change is teacher practices. Recent studies of curriculum decentralization show that just giving teachers more freedom does not ensure improvements; they need practical tools and shared routines. Provide curated task banks aligned with the national curriculum’s goals, along with examples of problem framing, evaluation design, and AI auditing. Pair this with quick-cycle professional learning that reflects student tasks; teachers should practice writing prompts, setting acceptance criteria, and conducting red-team reviews. When teachers have heavier workloads, AI should assist with routine paperwork to free up time for coaching higher-order skills. This approach makes AI cognitive extensions a real practice rather than just a slogan.

Lastly, align signals in higher education. Universities and employers should favor admissions and hiring practices that look for portfolios showcasing framed problems, annotated code and data notes, and decision documents with clear evaluation criteria. Analysis of Lightcast data across OECD countries shows growing demand for creativity, collaboration, and management in AI-related roles. If universities publish criteria that require these materials, secondary schools will start teaching them, and this time the signal will stick. Building a strong connection between signals and skills is how Korea can maintain its excellence and update its competitive edge.

The headline figures tell different stories: Korea’s teenagers shine in tested creativity, but many adults struggle with adaptive problem-solving in real-world situations. Generative AI sharpens this divide. It automates much of the task's core and shifts value to the beginning and end: the framing at the start and the audit at the end. These edges constitute AI cognitive extensions. They can be taught, assessed fairly, and scaled when exams, classroom activities, and higher education signals support them. Maintain algorithmic fluency; it still has its place. However, do not mistake fast computation under pressure for the ability to define problems, manage uncertainty, and guide tools with sound judgment. The path forward is not to slow down or ban AI in classrooms. Instead, it involves making the human aspects of the work—the parts that guide AI—visible, regular, and graded. If we act now, Korea can turn its early-stage creative strengths into adult capabilities. If we don’t, students who were prepared for yesterday’s challenges will see their advantages fade. The decision—and the opportunity—are ours.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bae, S.-H. (2024). The cause of institutionalized private tutoring in Korea. Asia Pacific Education Review.
Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at Work. Quarterly Journal of Economics, 140(2), 889–951.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work (NBER Working Paper No. 31161).
Lightcast & OECD. (2024). Artificial intelligence and the changing demand for skills in the labour market. OECD AI Papers.
Noy, S., & Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science.
OECD. (2023). PISA 2022 results (Volumes I & II): Korea country notes.
OECD. (2024). PISA 2022 results (Volume III): Creative thinking—Korea factsheet.
OECD. (2024). Survey of Adult Skills (PIAAC) 2023 Country Note—Korea and GPS Profile: Adaptive problem solving.
Peng, S., et al. (2023). The impact of AI on developer productivity. arXiv:2302.06590.
Seoul/Korean Ministry of Education; Statistics Korea. (2024–2025). Private education expenditures surveys. (News summaries and statistical releases).
SIAI (Swiss Institute of Artificial Intelligence). (2025). Why SIAI failed 80% of Asian students: A Cognitive, Not Mathematical, Problem. SIAI Memo.
UNESCO. (2023). Guidance for generative AI in education and research.


Rethinking AI Energy Demand: Planning for Power in a Bubble-Prone Boom

By David O'Neill

David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.

AI energy demand may surge—but isn’t guaranteed
Nuclear later; near-term: renewables, storage, shifting
Schools should plan for boom or bust with flexible procurement

By 2030, global electricity consumption by data centers is expected to exceed 1,000 TWh, about double 2024 levels and nearly 3% of total global electricity generation. This figure is significant. It reflects both the scale of growth in computing and a deeper tension in policy. If we view AI energy demand as fixed and unavoidable, we risk embedding today's hype into tomorrow's power systems. However, this scenario isn't predetermined. Demand will depend on the pace of model usage, the breadth of business adoption, and the profit margins that justify ongoing expansion. If growth slows or if the AI bubble deflates, projections that assume steady increases will be off target. If growth persists, we may struggle to provide enough clean power quickly. Either way, educators, administrators, and policymakers should not rely solely on averages; they must create adaptable strategies to prepare for both increases and decreases.

The United States serves as a stress test. An assessment supported by the U.S. Department of Energy predicts that data centers will grow from approximately 4.4% of U.S. electricity use in 2023 to between 6.7% and 12% by 2028, driven by AI adoption and efficiency gains. BloombergNEF estimates that meeting AI energy demand could require 345 to 815 TWh of power in the U.S. by 2030, translating to a need for 131 to 310 GW of new generation capacity. These estimates vary widely because the future is genuinely uncertain. They depend on factors such as usage rates, inference intensity, cooling technology, and hardware advancements. This uncertainty should inform public investment and campus procurement strategies: plan for peaks while being prepared for plateaus.
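The relationship between the TWh range and the GW range is worth making explicit: annual energy divided by the hours in a year gives average power, and dividing by an assumed fleet capacity factor gives nameplate capacity. The ~30% capacity factor below is an assumption chosen because it roughly reproduces BloombergNEF's published range; it is not a figure taken from the report.

```python
# Back-of-envelope: convert annual energy demand (TWh) into nameplate
# capacity (GW). The 30% capacity factor is an assumption chosen because it
# roughly reproduces BNEF's published 131-310 GW range; real fleets vary.
HOURS_PER_YEAR = 8760

def capacity_gw(annual_twh: float, capacity_factor: float) -> float:
    average_power_gw = annual_twh * 1000 / HOURS_PER_YEAR  # TWh -> GWh -> GW
    return average_power_gw / capacity_factor

for twh in (345, 815):  # BNEF's low and high U.S. demand scenarios for 2030
    print(f"{twh} TWh/yr -> ~{capacity_gw(twh, 0.30):.0f} GW "
          f"of new capacity at a 30% fleet capacity factor")
```

Running this prints roughly 131 GW and 310 GW, which shows how sensitive the headline capacity numbers are to the assumed mix: an all-nuclear fleet at 82% capacity factor would need far fewer nameplate gigawatts than a solar-heavy one.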

Figure 1: Data-intensive work clusters in major metros, while CO₂ hotspots only partly overlap, revealing a planning gap between where AI demand will grow and where clean power sits.

The standard narrative is straightforward: AI chips consume a lot of power, cloud demand is increasing, and grids are under pressure; thus, we must expand clean power, particularly nuclear energy. This chain of reasoning holds some truth but is incomplete. The missing element is the conditional nature of demand. Much of the anticipated load growth relies on business models—such as autonomous systems taking over many white-collar jobs—that haven't yet proven effective at scale. If those assumptions don't hold, the story about AI energy demand shifts from a steep rise to a plateau. Our goal is to create policies for energy and education that apply to both potential outcomes, avoiding the pitfalls of excessive optimism or pessimism.

Productivity Claims vs. Power Reality

Evidence on productivity in specific situations is encouraging. In a significant field study, providing a generative AI assistant to over 5,000 customer support agents increased the number of issues resolved per hour by about 14% on average, with greater gains for less experienced staff. This indicates real improvements at the task level. However, this alone does not prove that automated systems will replace a large share of jobs or maintain the high inference volumes needed to ensure rapid growth in AI energy demand through 2030. Overall productivity is essential because it funds capital expenditures and keeps servers operational. In 2024, U.S. non-farm business labor productivity grew by about 2.3%. This is good news, but it is not attributable to AI alone, nor is it large enough to settle the energy debate.

Macroeconomic signals are mixed. Analysts have highlighted a notable gap between the capital expenditures for AI infrastructure and current AI revenues. Derek Thompson and others summarize estimates suggesting spending on AI-related data centers could reach $300 to $400 billion in 2025, compared to much lower current revenues—typical indicators of a speculative boom. Even some industry leaders acknowledge bubble-like traits while emphasizing the necessity for a multi-year runway. If revenue growth falls short, usage will shift from exponential to selective, and AI energy demand will plateau rather than climb. This possibility should be considered in grid and campus planning—not as a prediction, but as a scenario with specific procurement and contracting consequences.

The baseline for data centers is also changing rapidly. The International Energy Agency (IEA) projects that global electricity consumption by data centers could double to about 945 TWh by 2030, growing roughly 15% per year, which is over four times the growth rate of total electricity demand. U.S. power agencies anticipate a break from a decade of flat demand, driven in part by AI energy needs. However, the IEA also highlights significant differences in the sources of new supply: nearly half of the additional demand through 2030 could be met by renewables, with natural gas and coal covering much of the rest, while nuclear energy's contribution will increase later in the decade and beyond. This mix highlights the policy dilemma: should we aim for slower demand growth, faster clean supply, or both?

Nuclear's Role: Urgent, but Not Instant

Nuclear power has clear advantages for meeting AI energy demand: high capacity factors, stable output, a small land footprint, and life-cycle emissions comparable to those of wind power. In 2024, global nuclear generation reached a record 2,667 TWh, with average capacity factors around 82%. This reliability is valuable to data center operators. These attributes are crucial for long-term planning. The challenge lies in time. Recent data indicate that median construction times range from approximately 5 to 7 years in China and Pakistan, to 8 to over 17 years in parts of Europe and the Gulf, with some western projects taking well beyond a decade. Small modular reactors, often touted as short-term solutions, have faced cancellations and delays; the flagship project in Idaho was terminated in 2023. In other words, while nuclear is a strategic asset, it alone cannot handle the immediate surge in AI energy demand.

This timing issue is significant because the period from 2025 to 2030 is likely to be critical. Even optimistic timelines for small modular reactors suggest that the first commercial units won't be ready until around 2030; many large reactors now under construction will not connect to the grid before the early to mid-2030s. Meanwhile, wind and solar energy sources are being added at unprecedented rates—around 510 GW of new renewable capacity in 2023 alone—but their variable generation and connection backlogs limit how much of the AI surge they can support without storage and transmission improvements. The practical takeaway is a three-part plan: push for nuclear expansion in the 2030s, accelerate renewable energy and storage investments now, and implement demand-side strategies to manage AI energy demand in the meantime.

Even in a nuclear-forward scenario, careful procurement matters. Deals for data centers near campuses should combine long-term power purchase agreements with enforceable clean-energy guarantees, not just claims about the grid mix. Where nuclear is feasible, offtake contracts can support financing; where it is not, long-duration storage and reliable low-carbon options—including geothermal and hydro updates—should form the basis of the energy strategy. The goal of policy is not to choose winners, but to secure reliable, low-carbon megawatt-hours that align with the hourly demands of AI energy.

What Schools and Systems Should Do Now

Education leaders are in a unique position: they are significant electricity consumers, early adopters of AI, and trusted community conveners. They should respond to AI energy demand with three specific actions. First, treat demand as something that can be shaped. Model the load under two scenarios: ongoing AI-driven growth and a moderated rate if automated systems don't deliver. Align procurement with both scenarios—short-term contracts with flexible volumes for the next three years, and longer-term clean power agreements that expand if usage proves sustainable. Include provisions to reduce usage during peak times, and implement price signals that encourage non-essential tasks to be done during off-peak hours. This same approach should apply to campus research groups: establish scheduling and queuing rules that prioritize daytime solar energy whenever possible and enhance heat recovery from servers for buildings.
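The "scheduling and queuing rules" can start very simply: tag jobs as urgent or deferrable and hold deferrable ones until a solar-rich window. Everything in the sketch below, from the window hours to the job list, is an illustrative assumption rather than a recommendation for any particular campus.

```python
# Sketch of a carbon-aware campus job queue: urgent jobs run immediately;
# deferrable jobs wait for a solar-rich window. Hours and jobs are invented.
from dataclasses import dataclass

SOLAR_WINDOW = range(9, 16)  # assume strong solar output 09:00-15:59 local

@dataclass
class Job:
    name: str
    urgent: bool

def dispatch(jobs: list[Job], hour_now: int) -> tuple[list[str], list[str]]:
    run_now, deferred = [], []
    for job in jobs:
        if job.urgent or hour_now in SOLAR_WINDOW:
            run_now.append(job.name)
        else:
            deferred.append(job.name)  # re-queued for the next solar window
    return run_now, deferred

queue = [Job("grading-pipeline", urgent=True),
         Job("research-training-run", urgent=False),
         Job("nightly-backup-dedup", urgent=False)]

run, wait = dispatch(queue, hour_now=22)  # 10 p.m.: outside the solar window
print("run now:", run)
print("defer to solar window:", wait)
```

A production scheduler would key off real marginal-emissions or price signals rather than a fixed window, but the policy lever is the same: demand that can wait should wait.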

Second, connect education to energy use. Update curricula to make AI energy demand an explicit topic within computer science, data science, and EdTech programs. Teach energy-efficient model design, including quantization, distillation, and retrieval, to reduce inference costs. Introduce foundational grid knowledge—such as capacity factors and marginal emissions—so graduates can develop AI that acknowledges physical limits. Pair this learning with real-world procurement projects: allow students to analyze power purchase agreement terms, claims of additionality, and hourly matching. Future administrators will need this knowledge as much as they require skills in privacy or security.
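As one concrete exercise for such a curriculum, post-training dynamic quantization in PyTorch converts a model's linear-layer weights from 32-bit floats to 8-bit integers, and students can measure the storage saving directly. The toy architecture below is an assumption for illustration, not a recommended model.

```python
# Sketch: dynamic quantization of a toy model's Linear layers to int8,
# then a direct comparison of serialized weight size. Architecture is a toy.
import io
import torch
import torch.nn as nn

def size_mb(model: nn.Module) -> float:
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)  # serialize weights to memory
    return buf.getbuffer().nbytes / 1e6

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 256))

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8  # only Linear layers are quantized
)

print(f"fp32 weights: {size_mb(model):.2f} MB")
print(f"int8 weights: {size_mb(quantized):.2f} MB")  # roughly 4x smaller
```

Smaller weights mean less memory traffic per inference, which is one of the levers behind lower energy per token; a follow-on assignment can measure latency and accuracy trade-offs on a held-out set.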

Third, plan for both growth and decline. If usage skyrockets as anticipated, the U.S. could need between 11% and 26% more electricity by 2030 to support AI computing; campuses should prepare to adjust workloads, invest in storage, and strengthen distribution networks. If the bubble bursts, renegotiate the minimum terms of offtake agreements, direct surplus clean energy toward electrifying fleets and buildings, and retire the least efficient computing resources early. Taking these approaches safeguards budgets and supports climate goals. It is crucial to reject the notion that the only solution is to produce more energy. Good policy must also address demand.

Anticipating the Strongest Critiques

One critique suggests that efficiency will outpace demand. New designs and improved power utilization effectiveness (PUE) could limit AI energy needs. This is a plausible argument. However, history warns of the Jevons paradox: lower costs per token can lead to increased consumption. Even the most positive efficiency projections indicate that overall demand will still rise in the IEA's base case, as user growth outweighs savings from improved efficiency. Others argue that AI could offset its energy costs through enhanced productivity. Studies at the task level show gains, particularly among less experienced workers, and U.S. productivity has improved. Yet, these advancements are not substantial enough at this point to guarantee the revenue streams necessary to support permanent, exponential growth. It is wiser to plan for a range of outcomes than to deny potential issues.

A second critique argues that nuclear energy can meet the rising demand if we "choose to build." We should indeed make that choice—quickly and safely. However, current timelines present challenges. Recent global construction times vary greatly; early adopters of small modular reactors have faced setbacks, and large projects in the West continue to have extensive delays. While nuclear is necessary in the mix for the 2030s, it is not a quick fix for AI energy demands in 2026. This is why procurement strategies must combine short-term renewable and storage solutions with long-term firm sources. Regulators must also expedite transmission and interconnection processes; without proper infrastructure, new clean energy resources cannot reach new demand.

A third critique argues that fears of an "AI bubble" are exaggerated. This may be true. However, even industry insiders recognize bubble-like characteristics in the market while suggesting that any downturn may still be years away. For public institutions, the appropriate response is not to stake everything on either scenario. Instead, they should ensure flexibility: secure adaptable contracts, staged developments, colocated storage, and systems that maximize value from every kilowatt-hour. This strategy works well in both growth and downturn periods.

Design for Uncertainty, Not for Averages

The key numbers are concerning. Global AI energy demand for data centers could surpass 1,000 TWh by 2030. In the U.S., AI could account for 6.7% to 12% of total electricity by 2028, and meeting growth needs by 2030 could require adding 131 to 310 GW of new capacity. These estimates validate the need for urgent action without leading to despair. They also remind us to stay humble about what the future holds: if automated systems fail to generate substantial productivity consistently, usage will decline, and long-term energy investments based on steady growth will fall short. Conversely, if AI continues to expand, every clean megawatt we can create will be essential, with nuclear energy playing a more significant role later in the decade and into the 2030s. The unifying theme is design. Institutions should approach AI energy demand as a variable they can control—shaped through contracts, software, education, and operations. This involves securing flexible offtake today, investing in robust low-carbon supply tomorrow, and maintaining a constant focus on efficiency. It also means graduating students who understand how to work with the existing grid and the new grid we need to build. The call to action is clear: prepare for growth, protect against downturns, and ensure that every incremental terawatt-hour is cleaner than the last.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

BloombergNEF. (2025, October 7). AI is a game changer for power demand.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work (NBER Working Paper No. 31161).
Ember. (2025, April 8). Global Electricity Review 2025.
IEA. (2024–2025). Energy and AI: Energy demand from AI; Energy supply for AI.
IEA. (2024). Electricity 2024 – Executive Summary.
IEA. (2023). Renewables 2023 – Executive Summary.
LBNL/DOE. (2024, December 20). Increase in U.S. electricity demand from data centers. U.S. DOE summary.
Reuters. (2025, February 11). U.S. power use to reach record highs in 2025 and 2026, EIA says.
U.S. BLS. (2025, February 12). Productivity up 2.3 percent in 2024.
World Nuclear Association. (2024–2025). World Nuclear Performance Report; capacity factors and 2024 generation record.
World Nuclear Industry Status Report. (2024). Construction times 2014–2023 (Table).
Thompson, D. (2025, October 2). This Is How the AI Bubble Will Pop.
The Verge. (2025, Aug.). Sam Altman says "yes," AI is in a bubble.
Business Insider. (2025, Oct.). Former Intel CEO Pat Gelsinger says AI is a bubble that won't pop for several years.
