Stop the Cross-Subsidy: AI Data Center Electricity Rates Shouldn’t Raise Household Bills
Ethan McGowan
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AI data centers are pushing grid costs onto households and schools
Create a separate rate class with minimum bills, upfront upgrade payments, and full transparency
Require self-supply or co-located power for very large campuses, with local community benefits
Wholesale electricity prices near major data center clusters have jumped by as much as 267% over the past five years. Those increases flow into utility bills, which in turn strain household budgets. Over the same period, at least $4.4 billion in local transmission upgrades were approved in seven PJM states, costs now imposed on ordinary customers so that hyperscalers can connect more quickly. This practice is unfair. It creates a cross-subsidy from families and schools to some of the world's largest corporations. If demand for AI keeps rising, ordinary customers will bear ever more grid costs that would not exist without these large-scale developments. The answer is not a catchy phrase but careful rate design. AI data center electricity rates must sit in their own class, with clear boundaries that shield households from stranded assets, capacity charges, and local wires built for single tenants. Anything less allows a wealth transfer to go unnoticed.
The unfair cross-subsidy is evident
What has changed is not just the price but also the scale. Grid planners are observing a structural shift in load expectations. PJM, the largest power market in the U.S., forecasts its summer peak will grow from about 150 GW to roughly 220 GW over the next 15 years, mainly due to data-center growth. The grid’s independent market monitor estimates that data center loads contributed $9.33 billion in capacity-market revenues over one year under a scenario in which everything else remains the same. This is a clear cost signal reflected in retail bills. Residential customers cannot protect themselves from this risk; they pay for it.
The global situation mirrors this. The International Energy Agency projects that data-center electricity use will more than double by 2030, reaching about 945 TWh and growing nearly four times faster than overall power demand. Short-term forecasts in the U.S. indicate record electricity consumption in 2025–2026, driven primarily by data centers and AI. Meanwhile, utilities in rapidly growing areas have filed resource plans and capital programs that show significant growth in large loads. These filings suggest that, unless changes are made, households will face higher capacity, transmission, and distribution charges. AI data center electricity rates must reflect the scale and risk of these changes rather than loading the costs onto the retail base.
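To see how fast that trajectory is, the implied growth rate can be checked with one line of arithmetic. A minimal sketch, assuming the IEA base case of roughly 415 TWh in 2024 rising to about 945 TWh in 2030 (the 2024 baseline is an assumption for illustration):

```python
# Back-of-envelope check of the IEA trajectory cited above.
# Assumed baseline for illustration: ~415 TWh in 2024, growing to the
# ~945 TWh base-case figure for 2030 referenced in this article.
base_twh, target_twh = 415.0, 945.0
years = 2030 - 2024

cagr = (target_twh / base_twh) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%}")  # ~14.7%/yr, consistent with the cited ~15% CAGR
```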
Figure 1: Global data-centre demand roughly doubles in four years—local costs rise if rates don’t firewall the load.
The political economy makes things more complicated. Tech companies often request power faster than permits and construction can keep up. Interconnection queues swell with requests well beyond the projects that will actually be built. When utilities build for maximum demand but actual demand is lower or arrives later, ratepayers end up covering the costs of unused assets. The risk is clear: the downside is socialized while the upside is privatized. This isn't an argument against AI; it's an argument for setting AI data center electricity rates based on the actual costs and risks of the demand, before new contracts lock in poor practices for 20 years.
A firewall for AI data center electricity rates
The solution starts with creating a separate rate class and establishing a tariff structure. Some regulators and legislators are beginning to do just that. One major utility has suggested a new rate class requiring long-term commitments (around 14 years) and minimum demand charges across transmission, distribution, and generation to avoid cost shifts if data center usage falls short. Analysts and policy experts recommend similar approaches: separate AI data center electricity rates with monthly minimums linked to contracted capacity; upfront contributions for grid upgrades; extended terms; and exit fees to protect other customers from stranded costs. States like Maryland and Oregon have gone even further, requiring or allowing specific rate schedules for data centers and establishing a separate class for large users. The direction is clear: place costs on those responsible for them.
The firewall tariff should be straightforward and consistent. First, minimum monthly bills should reflect the full, long-term costs of requested capacity, including local wires. Second, require upfront payments or contributions for custom upgrades to ensure ordinary customers aren't shouldering the burden. Third, establish credit standards, contract lengths, and exit fees that align with asset lifespans. These measures are not punitive; they reflect practices in generation interconnection and industrial rate design. They also align with what grid monitors already observe: in PJM, the rise in large-load interconnections and capacity pricing has noticeable effects on consumers. The goal is to make AI data center electricity rates self-sustaining, not to stifle innovation.
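To make the firewall concrete, here is a minimal sketch of how a minimum bill tied to contracted capacity could work. Every parameter below—the contract size, the 85% minimum-take ratio, the $/kW-month rate—is a hypothetical illustration, not a figure from any filed tariff:

```python
# Hypothetical firewall tariff: the customer pays the greater of metered
# demand charges or a floor tied to contracted capacity, so under-used
# capacity is not recovered from other rate classes.
# All parameters are illustrative assumptions, not a filed tariff.

CONTRACT_DEMAND_MW = 300      # capacity reserved at interconnection
MIN_TAKE_RATIO = 0.85         # minimum billable share of contract demand
RATE_PER_KW_MONTH = 18.0      # $/kW-month across transmission, distribution, generation

def monthly_bill(metered_peak_mw: float) -> float:
    """Bill the higher of actual peak demand or the contractual floor."""
    billable_mw = max(metered_peak_mw, MIN_TAKE_RATIO * CONTRACT_DEMAND_MW)
    return billable_mw * 1_000 * RATE_PER_KW_MONTH

# A campus that reserved 300 MW but peaks at only 120 MW still pays for
# 255 MW (85% of contract), covering the wires built on its behalf.
print(f"${monthly_bill(120):,.0f}")   # $4,590,000
print(f"${monthly_bill(290):,.0f}")   # $5,220,000 (actual demand above the floor)
```

The point of the floor is that any revenue shortfall from an over-built interconnection lands on the customer that requested it, not on the residential class.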
Fairness also requires transparency. Utilities should provide a rolling account of data-center-driven capital—by project, substation, and cost category—and track recovery against the data-center tariff instead of general rates. Where state open records laws allow, commissions should mandate deal-level disclosures of capacity reservations and associated community protections. This will empower school districts, city councils, and small-business associations with the information they need to act before costs appear in future rate cases. The choice is between targeted AI data center electricity rates now or widespread financial pain later.
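One hypothetical shape for that ledger, sketched as one record per project so recovery can be tracked against the data-center tariff rather than general rates (all field names and figures are illustrative assumptions):

```python
# Illustrative ledger entry for data-center-driven capital, tracked by
# project, substation, and cost category as proposed above.
# Field names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class DataCenterCapitalEntry:
    project_id: str
    substation: str
    cost_category: str            # e.g., "local transmission", "feeder", "capacity"
    approved_cost_usd: float
    recovered_from_dc_tariff_usd: float

    def unrecovered(self) -> float:
        """Balance that must not migrate into general rates."""
        return self.approved_cost_usd - self.recovered_from_dc_tariff_usd

entry = DataCenterCapitalEntry(
    project_id="TX-2025-014",                 # hypothetical project
    substation="Example 230 kV substation",
    cost_category="local transmission",
    approved_cost_usd=82_000_000,
    recovered_from_dc_tariff_usd=12_500_000,
)
print(f"Unrecovered balance: ${entry.unrecovered():,.0f}")
```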
If you build it, power it yourself
Rate design alone is not sufficient. The most effective way to prevent cross-subsidies is to co-locate large loads with generation or to require self-supply for campuses above a defined threshold. Federal regulators are formally reviewing co-location issues for large loads, such as AI data centers, beginning with PJM. This review is essential. If a campus seeks hundreds of megawatts on a tight timeline, it should not trigger off-site wires and peaking plants funded by the larger community. Better models exist: merchant generation combined with long-term energy service agreements; on-site renewables and storage sized to the load; or gas units near the campus priced within the private contract rather than general rates.
Market players are already moving. Major utilities and investors are forming partnerships to build dedicated gas plants that serve data-center clusters under long-term contracts. While this doesn't settle the climate debate, it does align incentives: those who benefit pay for the asset. These deals should come with strict guardrails. New generation should not enter the regulated rate base unless it supports the broader system, and contracts should include clean-energy transition plans or renewable PPAs that scale with usage.
Most importantly, prohibit the hidden recovery of dedicated campus assets through general riders. If the business case for a 500 MW campus is sound, its owners should carry its energy and capacity costs on their own books. This is how we handle other large loads, and it should apply to AI as well.
Zoning and siting policy should align. Jurisdictions that accept data centers can require community benefits agreements tied to energy usage—funds designated for school energy upgrades, community solar subscriptions, and bill relief in host areas. These payments should be mandatory for projects that trigger new substations or long feeders. They should scale with reserved capacity, not just square footage or headcount. Where regional reliability margins are low, local planners should insist on self-supply or co-location as conditions for approval. This keeps data-center-driven costs from leaking into everyone else's bills.
Translate fairness into rules we can enforce
What should educators, administrators, and policymakers do right now? First, engage early in rate cases and resource plans. When a utility files an integrated resource plan citing "extraordinary" growth driven by data centers, school districts and universities should intervene in the case. They should demand a separate class, minimums, and a ledger of data-center capital—not a commitment to reconcile later. Many states already have filings and press materials predicting extraordinary load and related expansions; the public record is clear enough to justify immediate action.
Figure 2: Large-load growth—data centres included—pushes PJM’s peak from ~150 GW to ~220 GW; without a separate class, households absorb expansion costs.
Second, impose non-bypassable charges on the data-center class for local transmission upgrades. A recent review showed customers in seven PJM states were billed $4.4 billion for local data-center transmission projects approved in just one year. This illustrates the flow of costs when there's a regulatory gap. Commissions can close this by assigning specific costs to the customer responsible—just as they do for generator interconnections. If the tariff needs adjustment, make it now and prospectively.
Third, improve planning practices. Load requests are uncertain; many never materialize. Utilities should not build to the most optimistic scenarios without solid minimum-bill protection. Require long-term commitments, strong credit support, and exit fees that cover the life of local wires. One major utility's proposal accomplishes this—offering a new class for high-energy users with 14-year commitments and minimum demand requirements—and national consultants recommend the same toolkit to protect residential customers. States are starting to legislate in this direction, requiring specific tariffs for data centers with clearly defined financial responsibilities.
Fourth, ensure consumer protection in the short term. Where bills are already rising, immediate assistance should focus on schools and low-income households most affected by increasing grid costs. Data-center hosts can provide local rate relief through impact fees and community benefits linked to reserved megawatts. Regulators can limit pass-throughs from data-center-related projects until a separate class is established. When new capacity auctions or transmission surcharges appear on bills, commissions should require a public breakdown of how much is caused by data centers. People have a right to understand the cause-and-effect relationship through precise numbers.
Finally, make co-location the default pathway. FERC's inquiry into rules for generator-load co-location enables PJM and other RTOs to follow suit. States can set the expectation from the top: any campus above a defined threshold—perhaps 100 MW—must self-supply, co-locate, or sign a full-requirements contract that shields its costs from general rates. Developers that want grid connections must accept AI data center electricity rates that reflect all costs—no exceptions.
The numbers we started with should shape decisions for the next two years. Wholesale prices near data-center hubs are up as much as 267%. Billions in local wires have been approved and charged to customers. Capacity revenues have surged by billions as large loads have suddenly appeared. Left unchanged, this trend will continue: families, schools, and small businesses will pay more for assets they neither wanted nor needed when nearby campuses scale back. We can prevent that future by changing the default. Give data centers their own rate class, with minimum bills reflecting contracted capacity. Require pre-payments for custom wires. Push the largest campuses to co-locate or self-supply, keeping their assets off the regulated rate base. Then ensure transparency so communities can see the accounts in detail. The message is simple and fair: support the AI economy without making neighbors pay for it. The sooner we establish these rules, the sooner we halt the hidden transfer embedded in monthly bills.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bloomberg News. (2025, September 29). AI Data Centers Are Sending Power Bills Soaring (methodology note cites Grid Status and DC Byte). Retrieved November 5, 2025.
Brookings Institution. (2025, October 30). Boom or bust: How to protect ratepayers from the AI bubble. Retrieved November 5, 2025.
Deloitte. (2025, June 24). Can U.S. infrastructure keep up with the AI economy? (Recommends separate rate class, minimum charges, exit fees.)
Dominion Energy. (2025, April 1). Dominion Energy Virginia proposes new rates… (new rate class; 14-year commitments; consumer protections).
FERC. (2025, February 20). FERC orders action on co-location issues related to data centers running AI (PJM focus; reliability and fair costs).
IEA. (2025, April 10). AI is set to drive surging electricity demand from data centres… (base case ~945 TWh by 2030; ~15% CAGR in data-center electricity).
PJM. (2025, January 30). 2025 Long-Term Load Forecast Report Predicts Significant Increase in Electricity Demand (summer peak path to ~220 GW).
PJM Independent Market Monitor. (2025, June 25). Market Monitor Report (capacity revenue impact of data-center load; $9.33 billion scenario).
Reuters. (2025, June 10). Data center demand to push U.S. power use to record highs in 2025, '26, EIA says (STEO record consumption).
Reuters. (2025, July 15). Blackstone and U.S. utility PPL to build gas power plants in JV partnership (dedicated generation for data centers).
Utility Dive. (2025, October 1). Customers in seven PJM states paid $4.4B for data center transmission in 2024: report (UCS findings; regulatory gap).
Virginia Mercury. (2025, September 3). Dominion proposes higher utility rates, new rate class for data centers (details on minimum demand obligations; class scope).
Skills-First Hiring in the Age of Agentic AI: What Schools Must Do Now
David O'Neill
David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.
AI now touches most jobs—about 60% in advanced economies
Hire for verified skills that complement AI, using portfolios, micro-credentials, and apprenticeships
Redesign schooling around agentic AI to widen mobility and prevent exclusion
One number should guide every education and workforce plan this year. In advanced economies, about 60% of jobs are exposed to AI. Roughly half of those jobs could see wages and hiring deteriorate as AI takes over key tasks; the other half may benefit as AI enhances human work. Globally, the figure is close to 40%, but the pressure is greatest in economies built on knowledge jobs. This isn't a vendor's estimate; it comes from the International Monetary Fund. Given this, skills-first hiring isn't just a catchphrase. It's the new standard in the labor market. The stakes are clear. If we miss the mark on skills-first hiring, we risk limiting opportunities and sidelining many learners. If we succeed, agentic AI can improve access, accelerate skill development, and help people remain employed as job tasks change. The future hinges on what schools, colleges, and employers do next.
Redefining skills-first hiring for an agentic era
Skills-first hiring isn't a new concept. Employers have been gradually relaxing degree requirements for years. Before the pandemic, nearly half of middle-skill jobs and about a third of high-skill jobs experienced a "degree reset," as listings dropped the bachelor's requirement. However, the practice varies widely. Many companies updated their job ads but kept the old filters in hiring and promotions. In other words, they changed the job advertisement, but not the job structure. Today's agentic AI makes this gap risky. If AI can handle most observable tasks, simple degree checks will overlook what really counts: the ability to work with AI, turn goals into prompts and workflows, and verify results. Skills-first hiring must transform from "no degree necessary" to "proof of specific, valuable skills that support AI."
The demand signal is changing. Surveys show that three out of five workers fear AI will threaten their job security, and many expect wage pressures in their fields. Yet, significant productivity increases are possible. GenAI might generate between $2.6 and $4.4 trillion in value annually. Employers who adopt skills-first hiring can access broader talent pools and more diverse candidates, including women and candidates without degrees. However, the shift from intent to action remains limited. A recent international survey revealed that only about 13% of employers had edited job postings to drop degree and tenure requirements. The risk is evident. If skills-first hiring stalls while automation speeds up, displacement could exceed mobility. This is a design flaw, not an unchangeable rule.
Agentic AI raises the bar by altering what is considered "skilled." The most valuable employees in a skills-first hiring environment will not only code, analyze, or write. They will break down complex problems, manage AI agents, decide when to trust automation, and communicate results to teams and clients. Those skills can be taught, but they don't fit the old credential model. They are ongoing, observable performances. This is why reform is needed in K-12 schools, community colleges, and universities. They must prepare graduates who excel in working with AI, not just consuming it. Otherwise, skills-first hiring will favor those who are already privileged, and the promise of greater access will disappear.
What the evidence says about skills-first hiring and automation risk
The labor market feedback is mixed yet consistent. The World Economic Forum estimates that nearly a quarter of jobs will change by 2027, with 69 million new roles created and 83 million eliminated. The OECD finds that jobs most at risk of automation make up about 27% of employment across member states. Being exposed doesn't guarantee loss, but it does guarantee changes. When exposure is high, skills-first hiring must be faster, fairer, and based on objective evidence. It should require clear demonstrations of skill rather than proxies. This means using performance tasks, portfolios, and on-the-job projects that showcase problem-solving with AI. Hiring teams also need training to interpret this evidence. Without these elements, skills-first hiring will remain just a slogan.
Figure 1: In rich economies, 60% of jobs face AI—about half likely to augment; half at risk.
Early employer data show the benefits when this approach succeeds. Skills-first hiring can enlarge candidate pools in ways that support equity and growth. Analyses of LinkedIn's Economic Graph reveal more qualified candidates without degrees when companies filter by skills, resulting in meaningful increases in women's representation. Yet, the practice often lags behind policy. Even as job postings remove some formal requirements, hiring decisions frequently revert to traditional credentials. This misalignment damages trust. Applicants realize the new rules are superficial. Faculty assume that industry won't recognize new learning models. The cycle continues. To break this pattern, schools and employers must collaborate to create skill signals that are hard to fake and easy to validate. Digital micro-credentials that include task evidence are one solution. Paid apprenticeships with public data on completion and wage increases are another.
Educational evidence is advancing, which matters for hiring. Randomized trials now show that well-designed AI tutors can produce greater learning gains in less time than conventional in-class active learning. Other studies suggest that AI support for tutors enhances student mastery, particularly where tutor skills are lower. These findings are practical. They highlight a way to make skills-first hiring a reality: drastically reduce the time and cost learners incur to develop tangible skills. If every student can access top-quality practice and feedback after school, the gap between well-resourced and under-resourced learners can narrow. This is a hiring issue, not just an education one. It expands the number of candidates prepared for real work.
Designing schools and colleges for a skills-first hiring world
The main change needed is in the curriculum. Programs should focus on solving problems rather than completing courses. Learners need regular practice that combines subject knowledge with working alongside AI. They should learn to define tasks, choose or develop workflows with agents, verify results, and communicate decisions. Assessments should reveal process evidence. Grading should emphasize clarity, judgment, and handling errors, not just final answers. In this model, skills-first hiring has a dependable dataset. Employers can observe how students use AI to achieve correct and valuable outcomes. Students can demonstrate their progress over time. Faculty can compare groups using shared, public standards—the concept of "job-ready" shifts from a promise to a proof.
Micro-credentials are vital. Europe has adopted a common framework for recognizing short, skill-based learning. This system allows universities and employers to exchange verified qualifications that can cross borders. When these micro-credentials include real work samples, the qualifications become more credible. A badge in "data cleaning and prompt engineering" that comprises code, prompts, logs, and error analysis is more impressive than just a line on a résumé. This is the currency needed for skills-first hiring. It also helps students build on their learning toward degrees without losing progress or support. With agentic AI in the mix, many learners will master job-related workflows more quickly than traditional courses allow. The credentialing system must keep pace.
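A micro-credential of the kind described—one that carries verifiable work samples rather than a bare badge—might be represented as below. The schema is a hypothetical illustration, not the EU framework's actual data model:

```python
# Hypothetical micro-credential record that bundles task evidence
# (code, prompts, logs, error analysis) with the award itself.
# Illustrative schema only; not the EU micro-credential standard.
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    kind: str        # "code", "prompt_log", "error_analysis", ...
    url: str         # link to the work sample
    sha256: str      # hash so a reviewer can confirm it is unaltered

@dataclass
class MicroCredential:
    skill: str
    issuer: str
    learner_id: str
    evidence: list[EvidenceItem] = field(default_factory=list)

badge = MicroCredential(
    skill="data cleaning and prompt engineering",
    issuer="Example Community College",   # hypothetical issuer
    learner_id="learner-0042",
    evidence=[EvidenceItem("code", "https://example.org/repo", "ab12cd34")],
)
print(f"{badge.skill}: {len(badge.evidence)} evidence item(s) attached")
```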
Practical accessibility is as crucial as design. Evidence shows that AI tutors and AI-assisted tutoring can boost achievement and save time. Schools should implement these tools where time is limited: after school, in homework clubs, and in community centers. The aim is not to replace teachers but to bring practice and feedback closer to where students learn. This is how we raise skill standards. When more learners can complete problem sets with quality guidance, more can try out for jobs. This shifts the dynamics for skills-first hiring. It also changes what hiring managers notice when automated systems evaluate applicant skills.
A policy playbook to slow exclusion and boost mobility
K-12 systems should incorporate agentic AI into core subjects like literacy and numeracy rather than treating it as an add-on. Students should draft with AI and then revise the output themselves. They should solve math with AI and explain each step. The aim is structured collaboration, not passive dependency. School districts can publish clear guidelines aligned with UNESCO's standards and follow a simple principle: AI can assist your thinking, but you must show you understand. This approach maintains academic integrity while preparing students for environments where AI is common. It also keeps the door open for skills-first hiring, as it fosters habits of documentation and verification.
Community colleges should drive skills-first hiring efforts. They are closest to local employers and can quickly refresh programs. The model is straightforward. Assess local task demand. Work with employers to design AI workflows. Teach these workflows along with relevant theory. Evaluate using real work examples. Publish results. Each micro-credential should connect to a role with earning potential, and schools should track placement and earnings. Apprenticeships should cover not only trades but also fields such as data, healthcare, logistics, and green jobs. Statistics show this is achievable. Youth apprenticeships have increased in the United States, both in number and percentage of participants, with tens of thousands added since 2020. A skills-first hiring strategy is more effective when paid learning is common, not the exception.
Figure 2: Active apprentices grew from 360k to 667k—now a scalable skills-first pipeline.
Universities should restructure general education to focus on enduring skills for an AI-driven world. Skills like reasoning, modeling, evidence interpretation, and communication should be foundational. Every major should have an "AI + X" studio for students to create and defend workflows using AI in their fields. Capstone projects should be public, searchable, and linked in job applications. Career services should shift from résumé enhancement to evidence collection. Meanwhile, registrars should adopt micro-credentials that align with European recognition standards so international students can take verified qualifications home. To maintain their edge in a skills-first hiring market, universities need to produce graduates ready to lead AI-driven teams from day one.
Employers must also play their part. They need to move beyond revising postings. Clearly define essential skills, publish them, and assess them. Use work samples and job trials early on. Train hiring teams to evaluate portfolios and prompt records accurately. Adjust AI-based screening to prevent old biases from reappearing in new systems. The potential benefits are substantial. Analyses show that skills-first hiring can broaden candidate pools and enhance diversity. It also promotes internal mobility when paired with clear skill frameworks. None of this is charity. In a market where AI rapidly reshapes tasks, companies that hire for adaptability will outpace those that rely on outdated measures.
An obvious concern is that this vision may be overly optimistic about AI's role in learning and work. We shouldn't overlook the risks. While some randomized trials and field studies highlight significant benefits from AI tutoring, others caution that unsupervised use can hinder learning or widen disparities. The lesson isn't to slow down; it's to create safeguards that keep human judgment central. Clear guidelines on data privacy, model bias, and age-appropriate use are increasingly available. The practical solution in classrooms involves strict alignment with standards, transparent prompts, and visible reasoning. The sensible solution in hiring is to assess skills using real tasks and to publish success metrics by pathway. This ensures that skills-first hiring doesn't become just another empty promise.
A second concern is that skills-first hiring might be a distraction—lots of talk without meaningful change. This concern is valid. Even dedicated companies often revert to prestige screens when urgency arises. The countermeasure is public accountability. Regions can link incentives to concrete results: the number of apprenticeships created, retention rates of non-degree hires, wage increases for first-generation graduates, and the time taken to fill critical roles. Transparency is vital. When results improve, others are likely to follow. When they don't, policies should be adjusted. The same accountability should apply to schools and colleges. If programs claim to deliver job-relevant skills, they should provide evidence and placement data to substantiate those claims.
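Those outcome metrics are straightforward to compute once hiring records are published. A minimal sketch, with hypothetical record fields standing in for whatever a region actually collects:

```python
# Hypothetical accountability metric for skills-first hiring programs:
# retention of non-degree hires, computed from published records.
# Field names are assumptions for illustration.

def nondegree_retention(hires: list[dict], months: int = 12) -> float:
    """Share of non-degree hires still employed after `months` months."""
    cohort = [h for h in hires if not h["has_degree"]]
    if not cohort:
        return 0.0
    retained = sum(1 for h in cohort if h["tenure_months"] >= months)
    return retained / len(cohort)

hires = [
    {"has_degree": False, "tenure_months": 18},
    {"has_degree": False, "tenure_months": 7},
    {"has_degree": True,  "tenure_months": 24},
]
print(f"12-month retention, non-degree hires: {nondegree_retention(hires):.0%}")  # 50%
```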
The final critique is darker. What if the need for human labor collapses in many fields, leaving only a small elite of adaptable workers? That risk is real. However, it is not the most likely outcome in the near future. Credible estimates point to job change rather than wholesale collapse. Tasks are shifting, and productivity is rising, with substantial differences across sectors. Even in roles with high exposure, collaboration is possible when people effectively manage AI. Education can improve the odds of that collaboration. Hiring based on skills can turn it into job mobility. Together, these approaches can reduce exclusion and spread the benefits. This is not an idealistic view. It is a practical way to prepare for the uncertainty ahead.
In conclusion, the labor market is entering a phase in which AI systems do more than predict. They start, coordinate, and learn. This change increases the importance of hiring based on skills. The IMF’s exposure figure—60% in advanced economies—should grab our attention. It shows that the risk is widespread and uneven. It also indicates significant opportunities. We can either allow exposure to lead to exclusion, or we can rethink how people learn and how companies hire. The direction is clear. Create curricula that focus on AI collaboration and reasoning. Expand micro-credentials that include evidence of skills. Increase paid apprenticeships. Require hiring teams to evaluate what candidates can actually do. Use AI tutors and AI-supported tutoring where time is limited and feedback is scarce. Share results. Make quick adjustments. If we take these steps, skills-first hiring can become the means through which capable AI enhances human work rather than diminishes it. The wave is already here. Our role is to mold it.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bastani, H., et al. (2025). Generative AI without guardrails can harm learning. Proceedings of the National Academy of Sciences.
Burning Glass Institute. (2022). The Emerging Degree Reset.
Business Insider. (2024, February). Companies vowed to hire more workers without college degrees. But a study says they're not following through.
European Commission. (2024). A European approach to micro-credentials.
IMF. (2024a). AI will transform the global economy. Let's make sure it benefits humanity. International Monetary Fund Blog.
IMF. (2024b). Melina, G., et al. Gen-AI: Artificial Intelligence and the Future of Work (Staff Discussion Note). International Monetary Fund.
Indeed/YouGov. (2025, April). How to take real action on skills-first hiring. Indeed Lead.
Kestin, G., et al. (2025). AI tutoring outperforms in-class active learning. Scientific Reports.
LinkedIn. (2024). Future of Recruiting 2024. LinkedIn Talent Solutions.
LinkedIn Economic Graph. (2025). Skills-Based Hiring (report).
Loeb, S., Wang, R. E., Ribeiro, A. T., Robinson, C. D., & Demszky, D. (2024). Tutor CoPilot: A human-AI approach for scaling real-time expertise (working paper and RCT). Stanford SCALE.
McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier.
OECD. (2023). OECD Employment Outlook 2023. Organisation for Economic Co-operation and Development.
OECD. (2024a). Using AI in the workplace. Organisation for Economic Co-operation and Development.
OECD. (2024b). Lane, M. Who will be the workers most affected by AI? Organisation for Economic Co-operation and Development.
Stanford SCALE. (2024a). AI tutoring outperforms active learning (project page).
U.S. Department of Labor. (2024, November). Trendlines: Youth and women in Registered Apprenticeship. https://www.dol.gov/…/Trendlines_November_2024.html
UNESCO. (2023). Guidance for generative AI in education and research. United Nations Educational, Scientific and Cultural Organization.
World Economic Forum. (2023). The Future of Jobs Report 2023.
AI Agents in Education Can Double Learning, So Let’s Design for Homes, Not Just Enterprises
Catherine Maguire
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI agents in education boost learning while cutting time
Build home-first workflows for practice, planning, and records
Scale with evidence and guardrails to protect equity and trust
One data point should change our thinking: a recent study shows that students using an AI tutor learned significantly more, and in less time, than those in an active-learning class covering the same material. The comparison class used evidence-based active learning, and the AI tutor still came out ahead. This is not a marketing claim; it is supported by peer-reviewed research. If an AI tutor can save time and improve learning in controlled settings, what happens when these "agentic" systems are used in the family home, where most homework takes place? The effects are real. They point to the potential for families to adopt AI agents in education that assist children each night, coach parents, and allow schools to focus their limited human expertise where it is most needed. In short, AI agents in education are transitioning from novelty to necessity, and the question is how quickly we can adjust to that reality.
We can see the shift in usage data. In 2025, 34% of U.S. adults reported using ChatGPT, roughly double the 2023 figure. Among UK undergraduates, use of generative tools increased from 66% in 2024 to 92% in 2025. Teen adoption is also growing: 26% of U.S. teens reported using ChatGPT for schoolwork in 2024, up from 13% the previous year. None of this guarantees learning improvements, but it shows where attention and effort are being directed. The opportunity now is to turn basic tool use into accountable, agent-led workflows that prioritize families first, schools second, and vendors last. This means creating a dependable “home system” for tutoring, planning, and documentation that works just as well on a Tuesday night as it does on a weekend for test prep.
AI agents in education: from single tools to household systems
An AI agent is more than a chat window. It is software that can plan tasks, use tools, and work toward a goal. Businesses are already using agents to answer complex questions, manage workflows, and connect different systems. The lesson for schools is clear: what works in the enterprise will also work at home. Families need agents that help children study, automatically log progress for teachers, draft emails about accommodations, and schedule therapy sessions without paperwork getting lost. The key lesson from enterprise deployments applies at home too: what matters is not how intelligent the agent is but whether the workflow is redesigned around what learners and caregivers actually do. Focus on the weekly rhythm of assignments and supports rather than on flashy demonstrations.
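The difference between a chat window and an agent can be made concrete with a minimal loop: plan toward a goal, call a tool, log the step, check whether the goal is reached. Everything below—the tool names, the fixed "plan" standing in for an LLM's decision—is a hypothetical sketch, not any vendor's API:

```python
# Minimal agent loop: work toward a goal by calling tools and logging
# each step. Tool names and the trivial planner are hypothetical.

def fetch_deadlines() -> list[str]:
    return ["math worksheet (Fri)", "book report (Mon)"]   # stubbed data

def draft_email(topic: str) -> str:
    return f"Draft email about: {topic}"

TOOLS = {"fetch_deadlines": fetch_deadlines, "draft_email": draft_email}

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    log = []
    for step in range(max_steps):
        # A real agent would ask an LLM to pick the next action;
        # a fixed two-step plan stands in for that decision here.
        action = "fetch_deadlines" if step == 0 else "draft_email"
        args = () if action == "fetch_deadlines" else (goal,)
        result = TOOLS[action](*args)
        log.append(f"step {step}: {action} -> {result}")
        if action == "draft_email":        # goal reached: stop early
            break
    return log

for line in run_agent("accommodations meeting"):
    print(line)
```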
A clear example of where this is heading comes from a recent story about a parent who used “vibe coding” tools to build an AI tutoring platform for her dyslexic son. She did not wait for a perfect product. She combined research-based prompts, dashboards, and student intake forms to personalize the agent’s guidance, drawing on hundreds of studies about learning differences. The result was not a toy; it was a home setup that adjusted to the child’s motivation and needs. When a parent can create a functioning tutor, we have entered a new era. For context, specialized reading support in the U.S. averages about $108 per hour. An AI agent that can enhance or partially replace some of those human sessions changes the dynamics of time, cost, and access—all while keeping the human specialist for the more complex parts.
What AI agents can do for families now
We should identify specific use cases because they relate to real challenges families face. Start with support for reading and writing. Recent evidence shows that AI tutors can deliver greater gains in less time when designed around effective teaching methods. This makes agents ideal for structured, paced, and responsive nightly practice. Add school logistics: an agent can extract deadlines from learning platforms, generate study plans, and remind both children and caregivers. It can summarize teacher feedback into two specific actions per subject. It can maintain a private, ongoing learning record that parents can share at the next meeting. Because these are agents, not static programs, they can access external tools to retrieve school calendars, assemble forms for accommodations, or draft that email you've been putting off.
Figure 1: Students using the AI tutor spent 49 minutes on task versus 60 minutes in class—about 18% less time—while the same study reports significantly larger learning gains for the AI-tutored group.
Families also need help connecting limited, costly expert support with critical moments. Reading-intervention tutors are essential, but capacity and cost are constraints. With agents enabling focused practice between sessions, children can arrive ready for human instruction. This does not mean total replacement; it means better use of resources. Additionally, broader tool usage suggests families are becoming comfortable asking AI for help with information and planning. Surveys show that many adults rely on AI to search and summarize, and students report routine use. If we can direct that comfort toward a household agent focused on learning goals, we can reduce back-and-forth communication, minimize wasted time, and make the hours spent with a human expert more effective.
Risks, equity, and the danger of “agent washing”
There is a strong temptation to label every scripted workflow an “agent.” Analysts caution that many agent projects may fail within 2 years because teams pursue trends rather than results. This warning is essential in education, where trust is the most valuable asset. The safeguard against soaring expectations is precise planning and measurable outcomes: time on task, growth on validated assessments, and teacher-observed application. This also means avoiding unsafe autonomy. Household agents should have limited default functions, operate under strict guidelines, and provide logs that parents and teachers can quickly review. The standard must focus on verifiable improvements, not flashy showcases.
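"Limited default functions" and "logs that parents and teachers can quickly review" can be enforced mechanically rather than by promise. A minimal sketch, assuming a hypothetical household agent whose tool access is allowlisted and whose every call is audited:

```python
# Guardrail sketch: the agent may only call allowlisted tools, and every
# attempt—allowed or blocked—is appended to a reviewable log.
# Tool names and the log format are hypothetical.
from datetime import datetime, timezone

ALLOWED_TOOLS = {"quiz_student", "fetch_calendar"}   # narrow defaults
AUDIT_LOG: list[str] = []

def guarded_call(tool: str, fn, *args):
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    if tool not in ALLOWED_TOOLS:
        AUDIT_LOG.append(f"{stamp} BLOCKED {tool}{args}")
        raise PermissionError(f"tool '{tool}' is not allowlisted")
    AUDIT_LOG.append(f"{stamp} CALLED {tool}{args}")
    return fn(*args)

guarded_call("quiz_student", lambda topic: f"3 questions on {topic}", "fractions")
try:
    guarded_call("send_payment", lambda: None)   # outside the allowlist
except PermissionError as err:
    print(err)

print(*AUDIT_LOG, sep="\n")   # the record a parent or teacher reviews
```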
Figure 2: Meeting universal schooling by 2030 requires ~44 million teachers; Sub-Saharan Africa (15,049k) and South-Eastern Asia (9,766k) account for over half of the gap—underscoring why AI agents should target routine workload, not replace scarce experts.
We also need strong protections for equity. Global policy organizations warn that AI's potential will vanish if we ignore access, bias, and data handling. This is not just hand-wringing; it is about practical design. AI agents in education must prioritize human-centered guidance, protect student data, and be implemented with teacher training and clear rules. We should account for low-resource settings—for instance, offline options, simple devices, and multilingual prompts. Remember the staffing challenge: the world needs millions more teachers this decade to achieve universal education goals. Agent systems should help by taking on routine tasks and structured practice, not by replacing irreplaceable specialized roles.
A playbook to make AI agents in education work—at home and at school
Start with small, manageable successes. Pick one subject and one grade level. Use an agent to assign tasks, coach, and check practice aligned with the existing curriculum. Treat the agent as a redesign of the workflow, not an add-on tool. The most effective implementations in industry focus on the process rather than the tool itself; schools should do the same. As usage increases, integrate the agent with gradebooks and learning platforms to automatically populate progress logs and reduce administrative burdens. Monitor fidelity: if the agent’s prompts stray from the curriculum, correct them. The goal is to free up teacher time for valuable feedback and meetings while providing families with a reliable nightly routine.
Then scale based on evidence. Use validated measurements and compare agent-supported practice against standard methods. If the results favor the agent, make it official. If they do not, adapt or stop. Build trust within the community by showing not only that students used an AI tool but that they learned more in less time. That is the standard set by recent research on AI tutoring.
Meanwhile, monitor usage trends. Both adult and student use indicates that agents are becoming part of daily life; institutions should meet families where they are by offering approved agent options, clear data policies, and easy-to-follow guides. Finally, align purchasing strategies to avoid “agent washing” by using straightforward criteria for limits on autonomy, logging, and outcomes. This reduces vendor turnover and keeps the focus on learning rather than features.
An AI tutor can deliver much more learning in less time than an active-learning class covering the same content. This single insight serves as a guiding principle for policy. It suggests that the right kind of AI—integrated into a workflow, limited for safety, and measured by outcomes—can help families gain hours back and help schools focus their valuable human expertise. The consumer trend is moving in the same direction, as adults and students incorporate AI into their everyday tasks. The task ahead is to direct that momentum into accountable agents designed for home use and connected to schools. If that is accomplished, "AI agents in education" will evolve from a buzzword into a dependable part of every household's learning resources. The key question is simple: do students learn more, faster, without compromising trust or equity? If the answer is yes, we should scale—and start now.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Axios. (2025, September 18). AI can support 80% of corporate affairs tasks, new BCG report says.
British Dyslexia Association. (2025). Dyslexia overview.
HEPI & Kortext. (2025). Student Generative AI Survey 2025.
McKinsey & Company. (2025, March 12). The State of AI: Global survey.
McKinsey & Company. (2025, September 12). One year of agentic AI: Six lessons from the people doing the work.
OECD. (2024). The potential impact of AI on equity and inclusion in education.
Oracle. (2025). 23 real-world AI agent use cases.
Pew Research Center. (2025, January 15). About a quarter of U.S. teens have used ChatGPT for schoolwork.
Pew Research Center. (2025, June 25). 34% of U.S. adults have used ChatGPT.
Reading Guru. (2024). National reading tutoring cost study.
Reuters. (2025, June 25). Over 40% of agentic AI projects will be scrapped by 2027, Gartner says.
Scientific American. (2025). How one mom used vibe coding to build an AI tutor for her dyslexic son.
Stanford/Harvard team via Scientific Reports. (2025). AI tutoring outperforms active learning. Scientific Reports.
Teacher Task Force/UNESCO. (2024, October 2). Fact Sheet for World Teachers' Day 2024.
UNESCO. (2023, updated 2025). Guidance for generative AI in education and research.