Make Spatial Intelligence in Education the Next Platform, Not the Next Fad
Spatial intelligence in education measurably boosts maths and STEM outcomes
Use world models, but prioritize curriculum, tasks, and teacher practice
Fund weekly spatial lessons and assess visual reasoning to scale
The most striking education statistic this autumn did not come from a national exam league table. It came from a classroom experiment. In Scotland, a low-cost, 16-lesson course taught children to visualize 3D shapes and map them onto paper. This program boosted maths scores by up to 19% among seven- to nine-year-olds across 80 schools. It also showed measurable improvements in spatial reasoning and computational thinking. That is not a small change; it is a real enhancement in learning abilities that traditional drills rarely achieve. Early university reports indicate that 96% of pupils made progress, and average spatial gains reached around 20%. Plans are in place for system-level rollout to reach two in five primary classrooms by 2028. These are controlled pilots, not marketing slogans. They show us something urgent and straightforward: if we want better maths and broader STEM access, spatial intelligence in education is no longer just an enrichment. It is essential for how students learn and how teachers teach. This success story should inspire optimism about the potential of spatial intelligence in education.
Spatial intelligence in education needs a system, not a slogan
The term “spatial intelligence” sounds appealing at conferences, but it has a straightforward meaning in psychology and education. It refers to the ability to visualize and manipulate objects and relationships in space—mentally rotating, folding, cutting, and changing perspectives. Spatial intelligence is one of the most reliable indicators of success in engineering, computing, and design. It is also more adaptable than many think. Research and recent classroom trials show that targeted practice can improve it, and when that happens, mathematics improves too. The mistake schools often make is treating spatial reasoning as a talent that some students have while others do not, or as a niche skill for CAD labs. It is a general learning skill that helps students understand algebraic structure, interpret graphs, and think about rates, areas, and volumes. Ignoring it is like teaching reading without phonemic awareness.
The second mistake is pursuing tools instead of building a comprehensive system. Tablets, VR headsets, and flashy "world generators" may arrive, but alone, they do not change teaching practices. The Scottish pilots succeeded because they used structured spatial tasks that matched the existing maths curriculum, providing teachers with concrete materials, assessments, and lesson plans. Technology aided visualization, such as letting pupils see block structures or explore a 3D model, but the real progress came from a coherent sequence, not from gadgets. Spatial intelligence in education will only thrive if it becomes a fundamental part of the curriculum, complete with clear learning goals, frequent low-stakes practice, teacher training, and assessments that value visual reasoning as much as number and text skills. The guiding principle for educators is a comprehensive system, not a collection of tools.
From language-first AI to world models: what classrooms actually need
Recent advances in AI make spatial reasoning more applicable in schools. Large language models are not going away; they remain the best tools for drafting, giving feedback, and tutoring. However, the push into “world models” is different. These systems can create and reason about 3D environments and the objects in them. A notable example is Marble, a new platform that transforms text, images, or short videos into persistent, downloadable 3D worlds. It also offers an editor for teachers and students to build a room, a landscape, or a molecule while the AI adds visual details. Pricing starts at free and goes up to paid tiers that include commercial rights and export to Unity or Unreal. This development matters because a classroom can quickly move from a sketch to a navigable scene in minutes, without needing specialized 3D skills. When used effectively, this reallocates time from asset creation to concept exploration.
Still, “street-smart” AI is not just visual. It also involves situational understanding. Good world models allow students to test “what if” scenarios: Does a block tower fall when we change a base? What happens to a light ray in a different medium? How does the flow change with a narrower pipe? The danger lies in thinking that the model thinks for us. It does not. It offers a manipulable platform that is effective only if teachers frame tasks that require reasoning and explanation. Schools should clearly understand the limitations: processing costs, content safety, and accessibility. Platforms are advancing quickly—World Labs raised significant funding to develop spatial intelligence—but schools should adopt technology at a pace that aligns with pedagogy, not hype. In education, the critical question is not whether a model can create a beautiful world, but whether students can explain what happens in that world and why.
Figure 1: Entry cost is near-zero, and even the top tier stays under $100/month—lowering barriers to classroom pilots.
The evidence base for spatial intelligence in education
What does the research tell us if we step back from the headlines? First, the connection between spatial skills and success in STEM fields is powerful and longstanding. Longitudinal studies and modern replications show that spatial ability is a distinct predictor of who persists and excels in STEM, independent of verbal and quantitative skills. This is why screening for spatial strengths identifies students who might be overlooked by assessments that favor only verbal and numerical abilities. Practically, this becomes a tool for increasing diversity: it opens doors for students, often girls and those from disadvantaged backgrounds, whose strengths shine when tasks are demonstrated instead of just explained.
Figure 2: Across 29 studies (N=3,765), spatial training lifts maths performance by about 0.28 SD on average; the 95% CI stays clearly above zero, showing robust gains.
Second, spatial intelligence in education can be developed on a meaningful scale. A 2022 meta-analysis covering 3,765 participants across 29 studies found that spatial training led to an average improvement of 0.28 standard deviations in mathematics outcomes compared to control groups. The benefits were even greater when activities involved physical manipulatives and aligned with the concepts being assessed. This level of improvement, sustained over time, can shift an entire district's progress. Adding work with AR/VR and 3D printing in calculus and engineering in 2024–2025, where controlled studies indicate significant gains in spatial visualization and problem-solving, reinforces this message: when students actively engage with space—whether physical or virtual—their understanding of mathematics deepens.
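To make the 0.28 standard-deviation figure concrete, here is a minimal back-of-envelope sketch (assuming roughly normally distributed maths scores, an assumption made only for this illustration) that converts the effect size into a percentile shift:

```python
from statistics import NormalDist

# Illustrative only: treat control-group maths scores as standard normal.
effect_size = 0.28  # average gain from spatial training, in SD units (Hawes et al., 2022)

# A median student in the control group sits at the 50th percentile.
# Shifting that student up by 0.28 SD lands them at Phi(0.28) of the control distribution.
new_percentile = NormalDist().cdf(effect_size) * 100
print(f"Median student moves from the 50th to roughly the {new_percentile:.0f}th percentile")
# about the 61st percentile under these assumptions
```

Under these assumptions, the typical trained student overtakes roughly one in ten untrained peers, which is why a sustained gain of this size is visible at district level rather than being statistical noise.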
Third, early national pilots reveal a way to scale this approach. The Scottish initiative did not depend on specialized teachers; it used simple training, standard resources, and repeatable tasks. Participation increased from a few dozen schools in 2023 to hundreds in 2025, involving 17 local authorities, and plans to reach 40% of primary classrooms within three years. Those numbers reflect policy-level commitments, not boutique trials. The gains—double-digit improvements in maths and marked increases in spatial skills—suggest that systems can change rapidly when offerings are straightforward, inexpensive, and integrated into existing curriculum time. This practical and scalable approach should reassure policymakers about the feasibility of implementing spatial intelligence in education.
A policy playbook to make spatial intelligence in education durable
View spatial intelligence in education as a broad capability, not a single subject. In primary years, introduce a weekly spatial lesson that connects to current maths topics—arrays during multiplication, nets while studying area and volume, and scale drawings with ratio. In lower secondary, link spatial tasks to science labs and computing modules that require spatial thinking, such as data visualization or basic robotics. In upper secondary and the first year of university, use world models and affordable VR to make multivariable functions, forces, and molecular structures easier to understand. This model does not require extensive new assessments; it asks exam boards to value diagrammatic reasoning and allow students to show their understanding through it.
Teacher development should be practical and short. Most teachers do not need to master 3D software; they need a set of tasks, examples of student work, and quick videos demonstrating how to conduct a 15-minute spatial warm-up. Procurement should emphasize open export and flexible device options to avoid locking schools into a single vendor. If a district adopts a world-generation tool, insist on privacy protections, local moderation options, and the ability to export to standard formats. Pair any digital tool with non-digital manipulatives. Evidence indicates that tangible materials enhance understanding, particularly for younger students and those who have learned to dislike maths. Equity must be a focus: prioritize spatial modules for schools and student groups underrepresented in STEM, and monitor participation and progress over time.
Finally, be realistic about limits and potential critiques. One critique is that spatial training only offers "near transfer" and will not translate to algebra or proof. The evidence does not support that claim; effect sizes on mathematics are typically positive, and the most substantial gains occur when training aligns with the maths being evaluated. Another critique argues that AI-generated worlds might entertain rather than teach students. That risk is real if teachers treat the worlds as passive films, but it lessens when worlds become tools for explanation: students predict, manipulate, measure, and defend their findings. A further critique claims that only wealthy schools can implement spatial technology. The Scottish pilots suggest otherwise: the core elements are practical teaching alongside simple materials, with technology as an aid rather than a barrier. Districts can begin with paper nets, blocks, and sketching before moving to digital environments as budgets permit.
The choice facing schools is not between language-first AI and spatial-first AI. It is between chasing tools and establishing an effective teaching system. Recent compelling evidence comes from classrooms that made spatial intelligence in education an everyday practice rather than an occasional one: weekly tasks, clear objectives, and assessments that value how students visualize just as much as how they solve problems. World models can aid by reducing the time from concept to scene and making unseen structures clear. However, the key to learning remains the student who can examine a world—whether on paper, on a desk, or in a headset—and explain it. The 19% increase in Scottish maths scores is not a limit; it is proof that spatial reasoning is a lever schools can utilize now. If systems invest in training, align the curriculum, and purchase tools that teachers find helpful, this can become a robust agenda for academic improvement. It is time to establish the platform and move past the fad.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Balla, T., Tóth, R., Zichar, M., & Hoffmann, M. (2024). Multimodal approach of improving spatial abilities. Multimodal Technologies and Interaction, 8(11), 99.
Bellan, R. (2025, November 12). Fei-Fei Li’s World Labs speeds up the world model race with Marble, its first commercial product. TechCrunch.
Field, H. (2025, November 13). World Labs is betting on “world generation” as the next AI frontier. The Verge.
Flø, E. E. (2025). Assessing STEM differentiation needs based on spatial ability. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2025.1545603
Hawes, Z. C. K., Gilligan-Lee, K. A., & Mix, K. S. (2022). Effects of spatial training on mathematics performance: A meta-analysis. Developmental Psychology, 58(1), 112–137.
Medina Herrera, L. M., Juárez Ordóñez, S., & Ruiz-Loza, S. (2024). Enhancing mathematical education with spatial visualization tools. Frontiers in Education, 9, 1229126.
Pawlak-Jakubowska, A., et al. (2023). Evaluation of STEM students’ spatial abilities based on a universal multiple-choice test. Scientific Reports, 13.
PYMNTS. (2025, November 13). World Labs launches Marble as spatial intelligence becomes the new AI focus. PYMNTS.com.
Reuters. (2024, September 13). “AI godmother” Fei-Fei Li raises $230 million to launch AI startup focused on spatial intelligence. Reuters.
The Times. (2025, September 16). Primary pupils to learn spatial reasoning skills to improve maths. The Times (Scotland).
University of Glasgow. (2025, September 17). UofG launches Turner Kirk Centre for Spatial Reasoning. University of Glasgow News.
University of Glasgow—Centre for Spatial Reasoning (Press round-up). (2025, October 1). University of Glasgow.
University of Glasgow—STEM SPACE Project. (2025). STEM Space Project (programme description and outcomes).
AI Grief Companion: Why a Digital Twin of the Dead Can Be Ethical and Useful
AI grief companions—digital twins—can ethically support mourning when clearly labeled and consent-based
Recent evidence shows chatbots modestly reduce distress and can augment scarce grief care
Regulate with strong disclosure, consent, and safety standards instead of bans
In 2024, an estimated 62 million people died worldwide. Each death leaves a gap that data rarely captures. Research during the pandemic found that one death can affect about nine close relatives in the United States. Even if that multiplier varies, the human impact is significant. Now consider a sobering fact from recent reviews: around 5% of bereaved adults meet the criteria for prolonged grief disorder, a condition that can last for years. These numbers reveal a harsh reality. Even well-funded health systems cannot meet the need for timely, effective grief care. In this context, an AI grief companion—a clearly labeled, opt-in tool that helps people remember, share stories, and manage emotions—should not be dismissed as inappropriate. It should be tested, regulated, and, where it proves effective, used. The moral choice is not to compare it to perfect therapy on demand. It is to compare it to long waits, late-night loneliness, and, too often, a $2 billion-a-year psychic market offering comfort without honesty.
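As a rough check on the scale claim, and assuming purely for illustration that the US bereavement multiplier of about nine close relatives per death and the roughly 5% prolonged-grief rate held worldwide (neither is established globally), the arithmetic looks like this:

```python
# Back-of-envelope only; both multipliers are assumptions borrowed from the cited US and review figures.
deaths_2024 = 62_000_000        # estimated global deaths in 2024
bereaved_per_death = 9          # COVID-era US "bereavement multiplier" (Verdery et al., 2020)
prolonged_grief_rate = 0.05     # ~5% of bereaved adults (meta-analytic estimate)

bereaved = deaths_2024 * bereaved_per_death
prolonged = bereaved * prolonged_grief_rate
print(f"~{bereaved / 1e6:.0f} million newly bereaved; ~{prolonged / 1e6:.0f} million at risk of prolonged grief")
# ~558 million bereaved and ~28 million at risk, under these illustrative assumptions
```

Even if the true figures are a fraction of that, they dwarf the capacity of existing grief services.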
Reframing the question: from “talking to the dead” to a disciplined digital twin
The phrase “talking to the dead” causes concern because it suggests deception. A disciplined AI grief companion should do the opposite. It must clearly disclose its synthetic nature, use only agreed-upon data, and serve as a structured aid for memory and meaning-making. This aligns with the concept of a digital twin: a virtual model connected to real-world facts for decision support. Digital twins are used to simulate hearts, factories, and cities because they provide quick insights. In grief, the “model” consists of curated stories, voice samples, photos, and messages, organized to help survivors recall, reflect, and manage emotions—not to pretend that those lost are still here. The value proposition is practical: low-cost, immediate, 24/7 access to a tool that can encourage healthy rituals and connect people with support. This is not just wishful thinking. Meta-analyses since 2023 show that AI chatbots can reduce symptoms of anxiety and depression by small to moderate amounts, and grief-focused digital programs can be helpful, especially when they encourage healthy exposure to reminders of loss. Some startups already provide memorial chat or conversational archives. Their existence is not proof of safety, but it highlights feasibility and demand.
Figure 1: Most bereaved people adapt; rates surge after disasters—supporting low-cost, targeted tools like governed AI grief companions.
The scale issue shows why reframing is essential now. The World Health Organization reports a global average of roughly 13 mental-health workers for every 100,000 people, with significant gaps in low- and middle-income countries. In Europe, treatment gaps for common disorders remain wide. Meanwhile, an industry focused on psychic and spiritual services generates about $2.3 billion annually in the United States alone. If we could replace even a fraction of that spending with a transparent AI grief companion held to clinical safety standards and disclosure rules, the ethical response would be to regulate and evaluate the practice, not to ban it.
What the evidence already allows—and what it does not
We should be cautious about our claims. There are currently no large randomized trials of AI grief companions based on a loved one’s data. However, related evidence is relevant. Systematic reviews from 2023 to 2025 show that conversational agents can reduce symptoms of depression and anxiety, with effect sizes comparable to many low-intensity treatments. A 2024 meta-analysis found substantial improvements for chatbot-based support among adults with depressive symptoms. The clinical reasoning is straightforward: guided journaling, cognitive reframing, and behavioral activation can be delivered in small, manageable steps at any time. Grief-specific digital therapy has also progressed. Online grief programs can decrease grief, depression, and anxiety, and early trials of virtual reality exposure for grief show longer-term benefits compared to conventional psychoeducation. When combined with statistics about grief, such as meta-analyses placing prolonged grief disorder around 5% among bereaved adults in general samples, we see a cautious but hopeful inference: a well-designed AI grief companion may not cure complicated grief, but it can reduce distress, encourage help-seeking, and assist with memory work—especially between limited therapy sessions.
Two safeguards are crucial. First, there must be a clear disclosure that the system is synthetic. The European Union’s AI Act requires users to be informed when interacting with AI and prohibits manipulative systems and the use of emotion recognition in schools and workplaces. Second, clinical safety is essential. The WHO’s 2024 guidance on large multimodal models emphasizes oversight, documented risk management, and testing for health use. Some tools already operate under health-system standards. For instance, Wysa’s components have UK clinical-safety certifications and are being assessed by NICE for digital referral tools. These are not griefbots, but they illustrate what “safety first” looks like in practice.
Figure 2: Small but reliable effects on depression and anxiety—useful as an adjunct between scarce therapy sessions.
The ethical concerns most people have are manageable
Three ethical worries dominate public discussions. The first is deception—that people may be fooled into thinking the deceased is “alive.” This can be addressed with mandatory labeling, clear cues, and language that avoids first-person claims about the present. The second concern is consent—who owns the deceased's data? The legal landscape is unclear. The GDPR does not protect the personal data of deceased individuals, leaving regulations to individual states. France, for example, has implemented post-mortem data regulations, but there are inconsistencies. The policy solution is straightforward but challenging to execute: no AI grief companion should be created without explicit consent from the data donor before death, or, if that is not possible, with a documented legal basis using the least invasive data, and allowing next of kin a veto right. The third concern is the exploitation of vulnerability. Italy’s data protection authority previously banned and fined a popular companion chatbot over risks to minors and unclear legal foundations, highlighting that regulators can act swiftly when necessary. These examples, along with recent voice likeness controversies involving major AI systems, demonstrate that consent and disclosure cannot be added later; they must be integrated from the start.
Design choices can minimize ethical risks. Time-limited sessions can prevent overuse. An opt-in “memorial mode” can stop late-night drifts into romanticizing or magical thinking. A locked “facts layer” can prevent the system from creating new biographical claims and rely only on verified items approved by the family. There should never be financial nudges within a session. Each interaction should conclude with evidence-based prompts for healthy behaviors: sleep hygiene, social interactions, and, when necessary, crisis resources. Since grief involves family dynamics, a good AI grief companion should also support group rituals—shared story prompts, remembrance dates, and printable summaries for those who prefer physical copies. None of these features is speculative; they are standard elements of solid health app design and align with WHO’s governance advice for generative AI in care settings.
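To make the locked "facts layer" idea concrete, here is a minimal, hypothetical sketch: the companion may only surface biographical items the family has verified, and it declines anything else rather than inventing it. The class and method names are illustrative assumptions, not a description of any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class FactsLayer:
    """Hypothetical locked facts layer: only family-verified items can be surfaced."""
    verified: dict[str, str] = field(default_factory=dict)

    def add_fact(self, key: str, value: str, approved_by_family: bool) -> None:
        # New biographical claims enter the record only with explicit family approval.
        if approved_by_family:
            self.verified[key] = value

    def recall(self, key: str) -> str:
        # The companion never fabricates biography; it recalls a verified item or declines.
        if key in self.verified:
            return f"According to what the family shared: {self.verified[key]}"
        return ("I don't have a verified memory about that, so I won't guess. "
                "Would you like to add it to the shared record?")

# Usage sketch
layer = FactsLayer()
layer.add_fact("favourite_song", "La Vie en Rose", approved_by_family=True)
print(layer.recall("favourite_song"))
print(layer.recall("last_words"))  # declined rather than invented
```

The same gating pattern extends naturally to session time limits and the end-of-session prompts described above.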
A careful approach to deployment that does more good than harm
If we acknowledge that the alternative is often nothing—or worse, a psychic upsell—what would a careful rollout look like? Begin with a narrow, regulated use case: “memorialized recall and support” for adults in the first year after a loss. The AI grief companion should be opt-in, clearly labeled at every opportunity, and default to text. Voice and video options raise consent and likeness concerns and should require extra verification and, when applicable, proof of the donor’s pre-mortem consent. Training data should be kept to a minimum, sourced from the person’s explicit recordings and messages rather than scraped from the internet, and secured under strict access controls. In the EU, providers should comply with the AI Act’s transparency requirements, publish risk summaries, and disclose their content generation methods. In all regions, they should report on accuracy and safety evaluations, including rates of harmful outputs and incorrect information about the deceased, with documented suppression techniques.
Clinical integration is essential. Large health systems can evaluate AI grief companions as an addition to stepped-care models. For mild grief-related distress, the tool can offer structured journaling, values exercises, and memory prompts. For higher scores on recognized assessments, it should guide users toward evidence-based therapy or group support and provide crisis resources. This is not a distant goal. Health services already use AI-supported intake and referral tools; UK evaluators have placed some in early value assessment tracks while gathering more data. The best deployments will follow this model: real-world evaluations, clear stopping guidelines, and public dashboards.
Critics may argue that any simulation can worsen attachment and delay acceptance. That concern is valid. However, the theory of “continuing bonds” suggests that maintaining healthy connections—through letters, photographs, and recorded stories—can aid in adaptive grieving. Early research into digital and virtual reality grief interventions, when used carefully, indicates advantages for avoidance and meaning-making. The boundary to uphold is clear: no false claims of presence, no fabrications of new life events, and no promises of afterlife communication. The AI grief companion is, at best, a well-organized echo—helpful because it collects, structures, and shares what the person truly said and did. When used mindfully, it can help individuals express what they need and remember what they fear losing.
Anticipate another critique: chatbots are fallible and sometimes make errors or sound insensitive. This is true. That’s why careful design is essential in this area. Hallucination filters should block false dates, diagnoses, and places. A “red flag” vocabulary can guide discussions away from areas where the system lacks information. Session summaries should emphasize uncertainty rather than ignore it. Additionally, the system must never offer clinical advice or medication recommendations. The goal is not to replace therapy. It is to provide a supportive space, gather stories, and guide people toward human connection. Existing evidence from conversational agents in mental health—though not specific to griefbots—supports this modest claim.
There is also a justice aspect. Shortages are most severe where grief is heavy and services are limited. WHO data show stark global disparities in the mental health workforce. Digital tools cannot solve structural inequities, but they can improve access—helping those who feel isolated at 3 AM. For migrants, dispersed families, and communities affected by conflict or disaster, a multilingual AI grief companion could preserve cultural rituals and voices across distances. The ethical risks are real, but so is the moral argument. We should establish regulations that ensure safe access rather than push the practice underground.
The figures that opened this essay will not change soon. Tens of millions mourn each year, and a significant number struggle with daily life. Given this context, a well-regulated AI grief companion is not a gimmick. It is a practical tool that can make someone’s worst year a bit more bearable. The guidelines are clear: disclosure, consent, data minimization, and strict limits on claims. The pathway to implementation is familiar: assess as an adjunct to care, report outcomes, and adapt under attentive regulators using the AI Act’s transparency rules and WHO’s governance guidance. The alternative is not a world free of digital grief support. It is a world where commercial products fill the gap with unclear models, inadequate consent, and suggestive messaging. We can do better. A digital twin based on love and truth—clearly labeled and properly regulated—will never replace a hand to hold. But it can help someone through the night and into the morning. That is a good reason to build it well.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Digital Twin Consortium. (2020). Definition of a digital twin.
Digital Twin Consortium. (n.d.). What is the value of digital twins?
Eisma, M. C., et al. (2025). Prevalence rates of prolonged grief disorder… Frontiers in Psychiatry.
European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law.
Feng, Y., et al. (2025). Effectiveness of AI-Driven Conversational Agents… Journal of Medical Internet Research.
Guardian. (2024, June 14). Are AI personas of the dead a blessing or a curse?
HereAfter and grieftech overview. (2024). Generative Ghosts: Anticipating Benefits and Risks of AI… arXiv.
IBISWorld. (2025). Psychic Services in the US—Industry size and outlook.
Li, H., et al. (2023). Systematic review and meta-analysis of AI-based chatbots for mental health. npj Digital Medicine.
NICE. (2025). Digital front door technologies to gather information for assessments for NHS Talking Therapies—Evidence generation plan.
Our World in Data. (2024). How many people die each year?
Privacy Regulation EU. (n.d.). GDPR Recital 27: Not applicable to data of deceased persons.
Torous, J., et al. (2025). The evolving field of digital mental health: current evidence… Harvard Review of Psychiatry (open-access summary).
Verdery, A. M., et al. (2020). Tracking the reach of COVID-19 kin loss with a bereavement multiplier. PNAS.
WHO. (2024, January 18). AI ethics and governance guidance for large multimodal models.
WHO. (2025, September 2). Over a billion people living with mental health conditions; services require urgent scale-up.
Wysa. (n.d.). First AI mental health app to meet NHS DCB 0129 clinical-safety standard.
Zhong, W., et al. (2024). Therapeutic effectiveness of AI-based chatbots… Journal of Affective Disorders.
AI capital cheapens routine thinking and shifts work toward physical, contact-rich tasks
Gains are strong on simple tasks but stall without investment in real-world capacity
Schools should buy AI smartly, redesign assessments, and fund high-touch learning
Nearly 40% of jobs worldwide are at risk from artificial intelligence. This estimate from the International Monetary Fund highlights a simple fact: the cost of intelligence has decreased so much that software can now handle a greater share of routine thinking. We can think of this software as AI capital—an input that works alongside machines and people. Intelligence tasks are the first to be automated, while human work focuses on tasks that require physical presence. The cost of advanced AI models has dropped sharply since 2023. Additionally, hardware is providing more computing power for each euro spent every year. This trend lowers the effective cost of AI capital, while classroom, lab, and building expenses remain relatively stable. In this environment, shifts in wages and hiring occur not because work is vanishing, but because the mix of production is changing. If educational institutions continue teaching as if intelligence were scarce and physical resources were flexible, graduates will be trained for a labor market that no longer exists. Ensuring equitable access to AI and deliberately reallocating resources will be essential to prevent new disparities.
Reframing AI Capital in the Production Function
The usual story of production—a combination of labor and physical capital—overlooks a third input that we now need to recognize. Let's call it A, or AI capital. This refers to disembodied, scalable intelligence that can perform tasks previously handled by clerks, analysts, and junior professionals. In a production function with three inputs, represented as Y = f(L, K, A), intelligence tasks are the first to be automated because the price of A is dropping faster than that of K. Many cognitive tasks can also be broken down into modules, making them easier to automate. A recent framework formalizes this idea in a two-sector model: intelligence output and physical output combine to produce goods and services. When A becomes inexpensive, the saturation of intelligence tasks increases, but the gains depend on having complementary physical capacity. This leads to a reallocation of human labor toward physical tasks, creating mixed effects on wages: wages may rise initially, then fall as automation deepens. Policies that assume a simple decline in wages miss this complex pattern.
Figure 1: Total labor 𝐿 endogenously splits between intelligence production 𝐼 and physical production 𝑃. As AI capital lowers the cost of intelligence tasks, labor shifts toward physical work while overall output 𝑌 depends on both streams.
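One simple way to write the framework down, offered here as an illustrative sketch rather than the cited paper's exact functional forms, is a two-sector structure in which AI capital and intelligence-sector labor produce cognitive output, physical capital and the remaining labor produce physical output, and the two streams combine:

```latex
% Illustrative sketch; the functional forms and parameters are assumptions.
\begin{aligned}
  I &= \bigl(\alpha A^{\rho} + (1-\alpha)\, L_I^{\rho}\bigr)^{1/\rho}
      && \text{intelligence output: AI capital substitutes for cognitive labor}\\
  P &= K^{\beta} L_P^{\,1-\beta}
      && \text{physical output: capital plus hands-on labor}\\
  Y &= I^{\gamma} P^{\,1-\gamma}, \qquad L_I + L_P = L
      && \text{final output needs both streams}
\end{aligned}
```

Because Y requires both I and P, cheap AI capital raises intelligence output but pushes the scarce margin toward physical work, which is the labor reallocation the figure describes.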
The real question is not whether AI “replaces jobs” but whether adding another unit of AI capital increases output more than hiring one additional person. For tasks that are clearly defined, the answer is already yes. Studies show significant productivity boosts: mid-level writers completed work about 40% faster using a general-purpose AI assistant. In comparison, developers finished coding tasks approximately 56% faster with an AI partner. However, these gains decrease with more complex tasks, where AI struggles with nuances—this reflects the “jagged frontier” many teams are encountering. This pattern supports the argument for prioritizing AI: straightforward cognitive tasks will be automated first. In contrast, complex judgment tasks will remain human-dominated for now. We define “productivity” as time to completion and quality as measured by standardized criteria, noting that effect sizes vary with task complexity and user expertise.
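The "one more unit of AI capital versus one more hire" comparison can be made concrete with a small numerical sketch of the illustrative production structure above; every parameter value is an assumption chosen to show the mechanism, not a calibration.

```python
def output(A, K, L_I, L_P, alpha=0.5, rho=0.6, beta=0.4, gamma=0.5):
    """Illustrative two-sector production: CES intelligence stream, Cobb-Douglas physical stream."""
    I = (alpha * A**rho + (1 - alpha) * L_I**rho) ** (1 / rho)  # intelligence output
    P = K**beta * L_P**(1 - beta)                               # physical output
    return I**gamma * P**(1 - gamma)

# Assumed baseline quantities (not data)
A, K, L_I, L_P = 10.0, 50.0, 20.0, 30.0
base = output(A, K, L_I, L_P)

marginal_ai = output(A + 1, K, L_I, L_P) - base     # add one unit of AI capital
marginal_hire = output(A, K, L_I, L_P + 1) - base   # add one physical-sector worker

print(f"Marginal output of extra AI capital: {marginal_ai:.3f}")
print(f"Marginal output of one extra hire:   {marginal_hire:.3f}")
# Re-run with A = 100: under these assumed parameters the marginal product of AI capital
# falls below that of the extra hire, the diminishing-returns pattern described in the text.
```

The point of the sketch is not the numbers but the crossover: as AI capital accumulates, the binding constraint shifts toward the physical and relational side of production.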
When A expands while K and L do not, the share of labor can decline even when overall output stays constant. In simple terms, the same amount of production can require fewer workers. But this isn't an inevitable outcome. If physical and intellectual outputs complement rather than replace one another, investments in labs, clinics, logistics, and classrooms can help stabilize wages. This points to a critical shift for education systems: focusing on environments and approaches where physical presence enhances what AI alone cannot provide—care, hands-on skill, safety, and community.
Evidence: Falling AI Capital Prices, Mixed Productivity, Shifting Wages
The price indicators for AI capital are clear. By late 2023, API prices for popular models had dropped significantly, and hardware performance improved by about 30% each year. Prices won’t decline uniformly—newer models might be more expensive—but the overall trend is enough to change how businesses operate. Companies that previously paid junior analysts to consolidate memos are now using prompts and templates instead. Policymakers should interpret these signals as they would energy or shipping prices: as active factors influencing wages and hiring. We estimate the “price of A” by looking at published per-token API rates and hardware cost-effectiveness; we do not assume uniform access across all institutions.
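As an illustration of the estimation approach described here, the following sketch turns assumed per-token rates and an assumed task size into an effective cost for one routine cognitive task; all the numbers are placeholders, not current vendor prices.

```python
# Hypothetical inputs; substitute published per-token API rates for a real estimate.
price_per_1k_input_tokens = 0.0025    # assumed USD
price_per_1k_output_tokens = 0.0100   # assumed USD
tokens_in, tokens_out = 3_000, 1_000  # assumed size of one routine drafting task

cost_per_task = (tokens_in / 1_000) * price_per_1k_input_tokens \
              + (tokens_out / 1_000) * price_per_1k_output_tokens

annual_improvement = 0.30  # assumed ~30% more compute per dollar each year (Epoch AI estimate)
cost_in_three_years = cost_per_task / (1 + annual_improvement) ** 3

print(f"Cost per routine task today:             ${cost_per_task:.4f}")
print(f"Implied cost if the trend holds 3 years: ${cost_in_three_years:.4f}")
```

Comparing that per-task figure with the loaded hourly cost of a junior analyst is what turns the price trend into a hiring decision.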
Figure 2: As a larger share of tasks is automated, total output rises and more workers shift into physical, hands-on roles. The gains flatten at high automation, showing why investment in real-world capacity is needed to keep productivity growing.
The productivity evidence is generally positive but varies widely. Controlled experiments show significant improvements in routine content creation and coding. At the same time, observational studies and workforce surveys highlight that integrating AI can be challenging, and the benefits are not always immediate. Some teams waste time fixing AI-generated text or adjusting to new workflows, while others achieve notable speed improvements. The result is an increase in task-level performance coupled with friction at the system level. Sector-specific data supports this: the OECD reports that a considerable number of job vacancies are in roles heavily exposed to AI, even as skill demands shift and many workers lack specialized AI skills. Labor-market rewards have also begun to shift: studies show wage premiums for AI-related skills, typically ranging from 15% to 25%, depending on the market and methodology.
The impact is not evenly distributed. The IMF predicts high exposure to AI in advanced economies where cognitive work predominates. The International Labour Organization (ILO) finds that women are more affected because clerical roles—highly automatable cognitive work—are often filled by women in wealthier countries. There are also new constraints in energy and infrastructure: data center demand could more than double by the end of the decade under specific scenarios, while power grid limitations are already delaying some projects. These issues further reinforce the trend toward prioritizing intelligence, which can outpace the physical capacities needed to support it. As AI capital expands, the potential returns begin to decrease unless physical capacity and skill training keep up. We draw on macroeconomic projections (IMF, IEA) and occupational exposure data (OECD, ILO); however, the uncertainty ranges can be vast and depend on various scenarios.
Managing AI Capital in Schools and Colleges
Education is at the center of this transition because it produces both types of inputs: intelligence and physical capacity. We should consider AI capital as a means to enhance routine thinking and free up human time for more personal work. Early evidence looks promising. A recent controlled trial revealed that an AI tutor helped students learn more efficiently than traditional in-class lessons led by experts. Yet, the adoption of such technologies is lagging. Surveys show low AI use among teachers in classrooms, gaps in available guidance, and limited training for institutions. Systems that address these gaps can more effectively translate AI capital into improved student learning while ensuring that core assessments remain rigorous. The controlled trial evaluated learning outcomes on aligned topics and used standardized results; survey findings are weighted to reflect national populations.
Three policy directions emerge from the focus on AI capital. First, rebalance the investment mix. If intelligence-based content is becoming cheaper and more effective, allocate limited funds to places where human interaction adds significant value, such as clinical placements, maker spaces, science labs, apprenticeships, and supervised practice. Second, raise professional standards for AI use. Train educators to integrate AI capital with meaningful feedback rather than letting the technology replace their discretion. The objective should not be to apply “AI everywhere,” but to focus on “AI where it enhances learning.” Third, promote equity. Given that clerical and low-status cognitive jobs are more vulnerable and tend to involve a higher percentage of women, schools relying too much on AI for basic tasks risk perpetuating gender inequalities. Track access, outcomes, and time used across demographic groups; leverage this data to direct support—coaching, capstone projects, internship placements—toward students who may be disadvantaged by the very tools that benefit others.
Administrators should approach their planning with a production mindset rather than simply relying on app lists. Consider where AI capital takes over, where it complements human effort, and where it may cause distractions. Utilize straightforward metrics. If a chatbot can produce decent lab reports, it can free up time for grading to focus on face-to-face feedback. If a scheduler can create timetables in seconds, invest staff time in mentorship. If a coding assistant helps beginners work faster, redesign tasks to emphasize design decisions, documentation, and debugging under pressure. In each case, the goal is to direct human labor towards the areas—both physical and relational—where value is amplifying.
Policy: Steering AI Capital Toward Shared Benefits
A clear policy framework is developing. Start with transparent procurement that treats AI capital as a utility, establishing clear terms for data use, uptime, and backup plans. Tie contracts to measurable learning outcomes or service results rather than just counting seat licenses. Next, create aligned incentives. Provide time-limited tax breaks or targeted grants for AI implementations that free up staff hours for high-impact learning experiences (like clinical supervision, laboratory work, and hands-on training). Pair these incentives with wage protection or transition stipends for administrative staff who upgrade their skills for student-facing jobs. This approach channels savings from AI capital back into the human interactions that are more difficult to automate.
Regulators should anticipate the obstacles. Growth in data centers and rising electricity needs present real logistical challenges. Education ministries and local governments can collaborate to pool their demand and negotiate favorable computing terms for schools and colleges. They can also publish disclosures regarding the use of AI in curricula and assessments, helping students and employers understand where AI was applied and how. Finally, implement metrics that account for exposure. Track what portion of each program’s assessments comes from physical or supervised activities. Determine how many contact hours each student receives and measure the administrative time freed up by implementing AI. Institutions that manage these ratios will enhance both productivity and the value of education.
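A minimal sketch of the exposure-adjusted metrics suggested here, with hypothetical field names and example values standing in for real program data:

```python
# Hypothetical program record; all names and values are illustrative assumptions.
program = {
    "assessment_share_supervised": 0.35,  # share of marks from physical or supervised activities
    "contact_hours_per_student": 120,     # annual face-to-face hours per student
    "admin_hours_saved_by_ai": 400,       # staff hours freed by AI tools this year
    "admin_hours_reallocated": 250,       # of those, hours redirected to mentoring and feedback
}

reinvestment_rate = program["admin_hours_reallocated"] / program["admin_hours_saved_by_ai"]
print(f"Supervised-assessment share:             {program['assessment_share_supervised']:.0%}")
print(f"Contact hours per student:               {program['contact_hours_per_student']}")
print(f"Freed admin time reinvested in students: {reinvestment_rate:.0%}")
```

Publishing these few ratios per program would let ministries see whether AI savings actually flow back into contact-rich learning.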
Skeptics might question whether the productivity gains are exaggerated and whether new expenses—such as errors, monitoring, and training—cancel them out. They sometimes do. Research and news reports highlight teams whose workloads increased because they needed to verify AI outputs or familiarize themselves with new tools. Others highlight mental health issues arising from excessive tool usage. The solution is not to dismiss these concerns, but to focus on design: limit AI capital to tasks with low error risk and affordable verification; adjust assessments to prioritize real-time performance; measure the time saved and reallocate it to more personal work. Where integration is poorly executed, gains diminish. Where it is effectively managed, early successes are more likely to persist.
Today, one of the most significant labor indicators might be this: intelligence is no longer scarce. The IMF's figure showing 40% exposure reflects the macro reality that AI capital has crossed a price-performance threshold for many cognitive tasks. The risk for education isn't becoming obsolete; it's misallocating resources—spending limited funds on teaching routine thinking skills as if AI capital were still expensive and overlooking the physical and interpersonal work where value is now concentrated. The path forward is clear. Treat AI capital as a standard resource. Use it wisely. Implement it where it enhances routine tasks. Shift human labor to areas where it is still needed most: labs, clinics, workshops, and seminars where people connect and collaborate. Track the ratios; evaluate the trade-offs; protect those who are most at risk. If we follow this route, wages won't simply fall with automation. They will rise alongside complementary investment. Schools will fulfill their mission: preparing individuals for the reality of today's world, not an idealized version of it.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bruegel. (2023). Skills or a degree? The rise of skill-based hiring for AI and beyond (Working Paper 20/2023).
Brookings Institution (Kording, K.; Marinescu, I.). (2025). (Artificial) Intelligence Saturation and the Future of Work (Working paper).
Carbon Brief. (2025). "AI: Five charts that put data-centre energy use and emissions into context."
Epoch AI. (2024). "Performance per dollar improves around 30% each year." Data Insight.
GitHub. (2022). "Research: Quantifying GitHub Copilot's impact on developer productivity and happiness."
IEA. (2024). Electricity 2024: Analysis and Forecast to 2026.
IEA. (2025). Electricity mid-year update 2025: Demand outlook.
IFR (International Federation of Robotics). (2024). World Robotics 2024 Press Conference Slides.
ILO. (2023). Generative AI and Jobs: A global analysis of potential effects on job quantity and quality.
ILO. (2025). Generative AI and Jobs: A Refined Global Index of Occupational Exposure.
IMF (Georgieva, K.). (2024). "AI will transform the global economy. Let's make sure it benefits humanity." IMF Blog.
MIT (Noy, S.; Zhang, W.). (2023). Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence (working paper).
OECD. (2024). Artificial intelligence and the changing demand for skills in the labour market.
OECD. (2024). How is AI changing the way workers perform their jobs and the skills they require? Policy brief.
OECD. (2024). The impact of artificial intelligence on productivity, distribution and growth.
OpenAI. (2023). "New models and developer products announced at DevDay." (Pricing update).
RAND. (2024). Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers and Principals.
RAND. (2025). AI Use in Schools Is Quickly Increasing but Guidance Lags Behind.
Scientific Reports (Kestin, G., et al.). (2025). "AI tutoring outperforms in-class active learning."
University of Melbourne / ADM+S. (2025). "Does AI really boost productivity at work? Research shows gains don't come cheap or easy."