
Education and the AI Bubble: Budgets, Buildouts, and the Real Test of Value

By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI's technical operations, overseeing the institute's IT architecture and supporting its research-to-production pipeline for AI-driven finance.

AI spending is soaring, but unit economics remain weak for education
Rising data-center capex and power costs will push up subscription and utility bills
Schools should buy outcomes, not hype—tie payments to verified learning gains

One alarming number should catch the attention of every education ministry and university boardroom: $6.7 trillion. That is the estimated worldwide investment needed by 2030 to build enough data-center capacity to meet rising demand for computing, primarily driven by AI workloads. Most of this funding will go toward AI-ready facilities rather than teachers, curricula, or teaching itself. Even if the investment arrives on time, electricity demand from data centers is expected to more than double by 2030, straining grids and budgets that also support schools. Meanwhile, leading AI companies report significant revenue growth but continue to lose money as training and inference costs rise. The result is a classic pressure test. If the companies behind the buildout can convert heavy investment into stable revenue before funding runs out, the boom will last. If not, education systems may find themselves locked into long-term contracts for tools that fail to deliver value. It is wise to treat AI like any other unproven investment: require clear results before buying promises.

The AI bubble in numbers

What we see now looks more like a rush to outspend competitors than steady technological progress. Microsoft, Alphabet, Amazon, and Meta all reported record or near-record capital spending in 2025, much of it on chips, servers, and data centers to support AI. Alphabet alone planned capital expenditures of about $91 to $93 billion for the year. Microsoft forecast record quarterly spending, with further increases expected. Amazon signaled it would spend more amid rising cloud demand. These outlays are not small. They reduce free cash flow and raise the break-even point for everyone downstream of the buildout, including public education systems drawn in by flashy AI demonstrations. The reasoning behind the spending is straightforward: if usage grows quickly enough, today's costs become tomorrow's advantage. Yet that assumption needs validation in schools, measured in learning outcomes per euro spent, dollars saved per workflow, and the total cost of AI-driven infrastructure.

While revenue figures look impressive, they do not tell the whole story. OpenAI generated around $4.3 billion in revenue in the first half of 2025 and targeted $13 billion for the full year, despite reports of billions in cash burn to train and run its models. Anthropic's annualized revenue run rate surpassed $5 billion by late summer and approached $7 billion by October, with forecasts of $9 billion by year-end. However, high revenue does not necessarily mean healthy unit economics when compute, energy, and talent costs rise together. At the same time, Nvidia's quarterly revenue reached record levels on demand for AI hardware, a reminder of where profits are accumulating today. For educators, this distinction is crucial. As value accumulates upstream in chips and energy, buyers downstream face higher subscription prices and uncertain returns unless tools yield measurable improvements in learning or productivity.

Why the AI bubble matters for schools and universities


Education budgets are limited. Every euro spent on AI tools or local computing is a euro not available for teachers, tutoring, or student support. The AI bubble intensifies this trade-off. The International Energy Agency projects that data-center electricity use will more than double to about 945 TWh by 2030, partly driven by AI. This demand will affect campus utilities and regional grids that also supply power for labs, dorms, and community services. If energy is scarce or expensive, institutions will face additional budget pressure from higher utility costs and pass-through charges on the cloud services used for AI. The AI bubble, therefore, is not just about valuations; it touches operations and line items that education leaders already understand: energy, bandwidth, device upgrades, and cybersecurity. Any plan to adopt AI must account for these costs before contracts are signed.

Policy signals are shifting but remain unsettled. In July 2025, the U.S. Department of Education released guidance on responsible AI use in schools, emphasizing data protection, bias, and instructional alignment. By October 2025, at least 33 U.S. states and Puerto Rico had issued K-12 guidance. Globally, the OECD warns that AI adoption can either widen or narrow gaps depending on its design and governance. None of these bodies guarantees that generic AI will transform learning on its own, and none endorses vendor claims. Their message is consistent: proceed, but demand proof. Districts and universities should therefore link procurement to evidence of impact and safeguard student data with the same diligence applied to health records. The burden of proof lies with the seller, not with the teacher, who would otherwise have to adapt to a tool that may change prices or terms with the next technology cycle.

Breaking the AI bubble: unit economics that actually work in education

There is promising evidence that some AI tutors can enhance learning. A 2025 peer-reviewed study found that a dedicated AI tutor outperformed traditional active learning in terms of measurable gains, with students learning more in less time. Other assessments of AI helpers, such as Khanmigo pilots, indicated positive experiences among students and teachers and some improvements, though results varied across different contexts. The takeaway is not that AI surpasses classroom instruction, but that targeted systems closely matched to curriculum and assessments can generate value. Proper pricing is crucial. If a district spends more on licenses than it saves in tutoring, remediation, or course completion, the purchase is not worth it. AI that succeeds in terms of unit economics will be narrowly defined, well-integrated into teacher workflows, and not simply added on.
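To make that unit-economics test concrete, here is a minimal sketch of the break-even check a district might run. All figures are illustrative assumptions, not data from the studies cited above, and the metric (cost per student per standard deviation of verified gain) is just one reasonable choice.

```python
# Illustrative unit-economics check for an AI tutoring license.
# All numbers are hypothetical placeholders a district would replace
# with its own contract terms and baseline costs.

def cost_per_verified_gain(license_cost, students, avg_gain_sd):
    """Cost per student per standard deviation of verified learning gain."""
    if students == 0 or avg_gain_sd <= 0:
        return float("inf")  # no verified gain means no defensible unit cost
    return license_cost / (students * avg_gain_sd)

def breaks_even(license_cost, displaced_spending):
    """True only if the license costs less than the spending it replaces."""
    return license_cost < displaced_spending

license_cost = 120_000          # annual district license (assumed)
students = 1_500                # students actively using the tool (assumed)
avg_gain_sd = 0.15              # verified gain vs. baseline, in SD units (assumed)
displaced_spending = 90_000     # tutoring/remediation spending the tool replaces (assumed)

print(f"Cost per verified gain: ${cost_per_verified_gain(license_cost, students, avg_gain_sd):,.2f} per student-SD")
print("Breaks even:", breaks_even(license_cost, displaced_spending))
```

With these placeholder numbers the license does not break even, which is exactly the kind of result that should stop a renewal conversation before it starts.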

Many supporters believe that economies of scale will reduce costs and stabilize the bubble. However, training and deploying cutting-edge models remain costly. Rigorous analyses suggest that the largest training runs could exceed a billion dollars by 2027, with hardware, connectivity, and energy making up the majority of expenses. The necessary infrastructure investment is huge: industry analyses project trillions in data-center capital spending by 2030, with major companies already spending tens of billions each quarter. These realities cut both ways. Prices could fall as infrastructure scales up, but the same investments tie the market to capital-recovery cycles that may force vendors to raise prices or steer customers toward higher-margin products. Schools operate on annual budgets. A pricing model that depends on constant upselling is a risk. Long-term contracts built on untested business plans are an even larger one.

The way forward through the AI bubble is practical and focused. Buy results rather than hype. Link payments to verified improvements in reading, math, or completion, measured against credible baselines. Prefer models that run well on existing devices or low-cost hardware to limit energy and bandwidth costs. Require vendors to produce exportable logs and interoperable data so impact can be independently audited. Run pilot programs with defined exit strategies and clear stop-loss rules in case promised results do not materialize. Ensure that every procurement aligns with published AI guidelines and equity goals, so benefits reach the students most in need. In short, demand that AI prove its value in the classroom through measured improvement. That is how the AI bubble becomes real value for learners instead of a future budget problem.
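One way to operationalize "buy outcomes, not hype" is an outcome-linked payment schedule with an explicit stop-loss. The sketch below is a minimal illustration under assumed contract terms; the thresholds, payment amounts, and gain metric are hypothetical, not drawn from any cited agreement.

```python
# Minimal sketch of an outcome-linked payment with a stop-loss rule.
# Thresholds and amounts are hypothetical contract terms.

BASE_PAYMENT = 20_000       # paid up front for the pilot (assumed)
OUTCOME_PAYMENT = 60_000    # paid only if verified gains clear the threshold (assumed)
GAIN_THRESHOLD = 0.10       # minimum verified gain vs. baseline, in SD units (assumed)
STOP_LOSS_GAIN = 0.02       # below this, the pilot ends and no renewal is offered (assumed)

def pilot_decision(verified_gain_sd: float) -> tuple[int, str]:
    """Return (total payment, decision) for a completed pilot."""
    if verified_gain_sd < STOP_LOSS_GAIN:
        return BASE_PAYMENT, "terminate: stop-loss triggered, do not renew"
    if verified_gain_sd < GAIN_THRESHOLD:
        return BASE_PAYMENT, "hold: no outcome payment, renegotiate or exit"
    return BASE_PAYMENT + OUTCOME_PAYMENT, "renew: outcome payment released"

for gain in (0.01, 0.06, 0.14):
    paid, decision = pilot_decision(gain)
    print(f"verified gain {gain:+.2f} SD -> paid ${paid:,}, {decision}")
```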

A cautious path forward through the AI bubble

The education sector should not attempt to outspend Big Tech. It should outsmart it. Begin with a precise accounting of total ownership costs: software, devices, bandwidth, teacher training, support systems, and energy costs. Connect each AI project to a specific challenge—absences, writing feedback, targeted Algebra I practice, or advising backlogs—and evaluate whether the tool improves that metric at a lower cost than other options. When it works, expand it; when it does not, stop using it. Policy can assist by standardizing evidence requirements across districts and nations, creating a single hurdle for vendors rather than fifty. Researchers should continue to publish prompt, independent assessments that distinguish lasting improvements from fleeting trends. If we keep procurement focused and evidence-driven, we can steer vendors away from speculative capital narratives and toward tools that perform well in classrooms, lecture halls, and advising centers.

Return to the opening figure: $6.7 trillion in projected capital expenditure, alongside the expectation that data-center energy demand will more than double. That is not an education strategy. It is a financial gamble that future revenue will outrun the limits set by energy, prices, and policy. Schools cannot underwrite that gamble. They can, however, insist that AI extend learning time, lighten administrative burdens, and make public funds stretch farther than they do today. The evidence requirement matters because the stakes are personal: student time, teacher effort, and public confidence. If AI companies can meet these criteria, the AI bubble will mature into a sustainable market that serves learners. If they cannot, the bubble will deflate, as bubbles tend to do, and the institutions that demanded evidence will be the ones that kept students safe. We should strive to be those institutions: calm, inquisitive, and unfazed by hype.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Alphabet Inc. “Alphabet hikes capex again after earnings beat on strong ad, cloud demand.” Reuters, Oct. 30, 2025.
International Energy Agency. Electricity 2025 – Executive Summary. 2025.
International Energy Agency. “Energy demand from AI.” 2024–2025.
McKinsey & Company. “The cost of compute: a $7 trillion race to scale data centers.” Apr. 28, 2025.
Microsoft. “Microsoft’s cloud surge lifts revenue above expectations; capex outlook.” Reuters, Oct. 30, 2025.
NVIDIA. “Financial Results for Q4 and Fiscal 2025.” Feb. 26, 2025.
OECD. The Potential Impact of Artificial Intelligence on Equity and Inclusion in Education. Working Paper, 2024.
OpenAI. “OpenAI generates $4.3 billion in revenue in first half of 2025, The Information reports.” Reuters, Oct. 2, 2025.
Stanford-affiliated study (Scientific Reports). “AI tutoring outperforms in-class active learning.” 2025.
U.S. Department of Education. “Guidance on Artificial Intelligence Use in Schools.” July 22, 2025.
Wharton Human-AI Initiative. AI 2024 Report: From Adoption to ROI. Nov. 2024.
“Anthropic aims to nearly triple annualized revenue in 2026.” Reuters, Oct. 16, 2025.


Mind Captioning in Education: Pattern Recognition Needs Policy, Not Hype

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI's expansion into global finance hubs, including oversight of the institute's initiatives in the Middle East and its emerging hedge fund operations.

Mind captioning maps visual brain patterns to text, but it’s pattern recognition—not mind reading
In schools, keep it assistive only, with calibration, human-in-the-loop checks, and clear error budgets
Adopt strict governance first: informed consent, mental-privacy protections, and audited model cards

A machine that can guess your memories should make every educator think twice. In November 2025, researchers revealed that a non-invasive "mind captioning" system could identify which of 100 silent video clips a person was recalling nearly 40% of the time for some individuals, even when language areas were inactive. Chance performance on that task is 1 in 100. The model learned to connect patterns of visual brain activity to the meaning of scenes, then generated sentences describing the remembered content. It is not telepathy, and it is not perfect. But it clearly shows that semantic structure in the brain can be captured and translated into text. If this capability finds its way into school tools, ostensibly to help non-speaking students, assess comprehension, or personalize learning, it will arrive with power, risk, and the temptation to exaggerate what it can do. Our systems excel at pattern recognition; our policies do not yet keep up. That is why engineers and neuroscientists need to build mind captioning together, and why schools need rules before pilots.

Mind Captioning: Pattern Recognition, Not Mind Reading

The core idea of mind captioning is straightforward and transformative: the system maps visual representations in the brain to meaning and then to sentences. In lab settings, fMRI signals from a few participants watching or recalling short clips were turned into text that preserved who did what to whom, rather than just a bag of words. This matters for classrooms because a tool that can retrieve relational meaning could, in theory, help students who cannot speak or who struggle to express what they saw. However, the key term is "pattern recognition," not "mind reading." Models infer likely descriptions from brain activity linked to semantics learned from video captions. They do not pull private thoughts at will; they need consent, calibration, and context. Even the teams behind this research stress those limits and the ethical implications for mental privacy. We should adopt the same restraint and draft our rules accordingly.
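For readers who want a feel for what "pattern recognition, not mind reading" means in code, the sketch below is a deliberately simplified stand-in for this family of decoders: a linear map fitted on per-person calibration data from brain-activity features to a semantic embedding, followed by retrieval of the nearest caption from a fixed candidate set. This is not the published mind-captioning model; all data, dimensions, and captions are synthetic, and the point is only that such a system predicts likely descriptions from learned correlations rather than reading arbitrary thoughts.

```python
import numpy as np

# Simplified sketch: learn a linear map from brain-activity features to a
# semantic embedding, then retrieve the closest caption from a fixed set.
# All data is synthetic; this is not the published mind-captioning model.

rng = np.random.default_rng(0)
n_train, n_voxels, n_sem = 200, 500, 64   # calibration trials, features, embedding dim (assumed)

true_W = rng.normal(size=(n_voxels, n_sem))
X_train = rng.normal(size=(n_train, n_voxels))                               # brain features during calibration
Y_train = X_train @ true_W + rng.normal(scale=5.0, size=(n_train, n_sem))    # semantic targets

# Ridge regression: the per-person "calibration" step.
lam = 10.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_voxels), X_train.T @ Y_train)

# Candidate captions with synthetic embeddings: retrieval, not free-form mind reading.
captions = ["a person opens a door", "a dog runs on a beach", "two people shake hands"]
caption_embeddings = rng.normal(size=(len(captions), n_sem))

def decode(brain_features: np.ndarray) -> str:
    """Map brain features into the semantic space and return the nearest candidate caption."""
    predicted = brain_features @ W
    sims = caption_embeddings @ predicted / (
        np.linalg.norm(caption_embeddings, axis=1) * np.linalg.norm(predicted) + 1e-9
    )
    return captions[int(np.argmax(sims))]

print(decode(rng.normal(size=n_voxels)))
```

The design makes the article's point visible: without calibration data and a constrained candidate set, the decoder has nothing to say, which is why consent and context are built into the science rather than optional extras.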

Performance demands similar attention. Related advances outline the progress and its limits. A 2023 study reconstructed the meaning of continuous language from non-invasive fMRI, showing that semantic content—rather than exact words—can be decoded when participants listen to stories. However, decoding what someone produces is more challenging. In 2025, a non-invasive “Brain2Qwerty” system translated brain activity while people typed memorized sentences, achieving an average character-level accuracy of around 68% with MEG and up to 81% for top participants; EEG was much less effective.

Figure 1: Above-chance decoding: ~50% when viewing clips; up to ~40% top-1 when recalling from 100 options (proficient subjects).

In contrast, invasive implants for inner speech have reported word or sentence accuracy in the 70% range in early trials, highlighting gaps in portability and safety that still need to be addressed. These figures do not support grading or monitoring. They advocate for narrow, assistive use, open error budgets, and interdisciplinary oversight wherever mind captioning intersects with learning.

Mind Captioning in Education: Accuracy, Error Budgets, and Use Cases

If mind captioning enters schools, its first appropriate role is in assistive communication. Imagine a student with aphasia watching a short clip and having a system suggest a simple sentence, which the student can then confirm or edit. This process puts the student in control, not the model. It also aligns with existing science: mind captioning works best with seen or remembered visual content that has a clear structure, is presented under controlled timing, and is calibrated to the individual. The equipment required, fMRI today and MEG that is promising but not yet wearable, keeps this technology in clinics and research labs for now. Claims that it can evaluate reading comprehension on the spot are premature. Even fast EEG setups that decode rapid visual events have limited accuracy, a reminder that "instant" in headlines often means seconds to minutes of processing with many training trials behind the scenes. Tools should never be deployed outside the error margins within which their educational use is known to be safe.

Figure 2: MEG non-invasive decoding averages ~68% character accuracy; best participants ~81%; EEG averages ~33%—keep uses assistive.

Administrators should seek a clear error budget linked to each application. If the goal is to generate a one-sentence summary of a video the class just watched, what rate of false descriptions is acceptable before trust is broken—5%? 10%? For crucial decisions, the answer is zero, which means no high-stakes applications. Developers must disclose how long calibration takes per student, session length, and whether performance remains consistent across days without retraining. Results from fMRI may not apply to EEG in a school environment, and semantic decoders trained on cinematic clips may not work as well with hand-drawn diagrams. A reasonable standard is “human-in-the-loop or not at all.” The model suggests language; the learner approves it; the teacher oversees. Where timing or circumstances make that loop unfeasible, deployment should wait. Mind captioning should be treated as a tool for expression, not a silent judge of thought. This perspective safeguards students and keeps the technology focused on real needs.
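To make the error-budget idea concrete, the sketch below encodes the acceptance test implied above: a use case passes only if its measured false-description rate stays within the budget and a human confirms every output. The thresholds and measured rates are hypothetical placeholders, not figures from the studies cited here.

```python
# Minimal sketch of an error-budget acceptance check for an assistive use case.
# Thresholds and measured rates are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    error_budget: float         # maximum acceptable false-description rate
    measured_error_rate: float  # observed in a supervised pilot
    human_in_the_loop: bool     # learner confirms or edits every output

def approved(case: UseCase) -> bool:
    """A use case passes only if it stays within budget AND keeps a human in the loop."""
    return case.human_in_the_loop and case.measured_error_rate <= case.error_budget

cases = [
    UseCase("assistive one-sentence video summary", error_budget=0.10,
            measured_error_rate=0.07, human_in_the_loop=True),
    UseCase("automated comprehension grading", error_budget=0.00,
            measured_error_rate=0.07, human_in_the_loop=False),
]

for c in cases:
    print(f"{c.name}: {'approve' if approved(c) else 'reject'}")
```

The second case is rejected twice over: the budget for high-stakes decisions is zero, and there is no human in the loop, which is the "human-in-the-loop or not at all" standard expressed as a rule.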

Governance for Mind Captioning and Neural Data in Schools

Governance needs to keep pace with the technology, and it is starting to. In 2024, Colorado passed the first U.S. law to protect consumer "neural data," classifying it as sensitive information and requiring consent, a provision that belongs in any district procurement. In 2025, the U.S. Senate introduced the MIND Act, directing the Federal Trade Commission to study neural data governance and propose a national framework. UNESCO adopted global standards for neurotechnology in November 2025, emphasizing mental privacy and freedom of thought. The OECD's 2025 Neurotechnology Toolkit pushes for flexible regulation aligned with human rights. Together, these create a solid foundation: schools should treat neural signals like biometric health data, not like routine usage logs. Contracts must prohibit secondary use, require deletion by default, and draw a clear line between assistive communication and any form of behavior scoring or monitoring. Without that distinction, public acceptance of mind captioning in education will vanish.

Districts can put this into practice. Require on-device or on-premise processing whenever possible; if cloud processing is necessary, mandate encryption at rest and in transit, strict purpose limitation, and independent audits. Insist on consent forms for students and guardians written in plain local language that teachers can easily explain. Request model cards disclosing training data types, subject counts, calibration needs, demographic performance differences, and failure modes, along with data sheets for each hardware sensor. Define kill-switches and sunset clauses in contracts, with mandatory timelines for data deletion. Make clear that any output used in learning must be verifiable by the student it describes. Finally, run red-team testing before any pilot: adversarial scenarios that probe whether the system infers sensitive traits, leaks private associations, or produces confidently wrong sentences. Policy cannot remain just words. Procurement is policy in action; use it to establish a safe foundation for mind captioning, as sketched below.
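As one way to make that procurement checklist auditable, the sketch below records the disclosure items as a structured record with a simple completeness check. The field names follow this article's checklist rather than any standard model-card schema, and the example values are invented.

```python
# Sketch of a neural-data procurement disclosure record with a completeness check.
# Field names follow the checklist above; this is not a standard model-card schema.

from dataclasses import dataclass, fields

@dataclass
class NeuroToolDisclosure:
    training_data_types: str              # e.g., "fMRI recordings paired with video captions"
    subject_count: int                    # participants behind the reported accuracy
    calibration_minutes_per_student: int
    demographic_performance_gaps: str     # known differences across groups
    failure_modes: str                    # documented ways the system errs
    processing_location: str              # "on-device", "on-premise", or "cloud"
    encrypted_at_rest_and_in_transit: bool
    secondary_use_prohibited: bool
    data_deletion_days: int               # mandatory deletion timeline
    kill_switch_defined: bool
    student_can_verify_outputs: bool

def missing_disclosures(d: NeuroToolDisclosure) -> list[str]:
    """List checklist items that are empty, false, or non-positive."""
    problems = []
    for f in fields(d):
        value = getattr(d, f.name)
        if isinstance(value, bool):
            ok = value
        elif isinstance(value, str):
            ok = bool(value.strip())
        else:
            ok = value > 0
        if not ok:
            problems.append(f.name)
    return problems

example = NeuroToolDisclosure(
    training_data_types="fMRI recordings paired with video captions",
    subject_count=6,
    calibration_minutes_per_student=0,      # not disclosed -> flagged
    demographic_performance_gaps="",        # not disclosed -> flagged
    failure_modes="confidently wrong sentences on unfamiliar content",
    processing_location="on-premise",
    encrypted_at_rest_and_in_transit=True,
    secondary_use_prohibited=True,
    data_deletion_days=30,
    kill_switch_defined=True,
    student_can_verify_outputs=True,
)

print("Missing or inadequate disclosures:", missing_disclosures(example))
```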

Bridging Engineers and Neuroscience: A Curriculum for Evidence

The quickest way for mind captioning to fail in schools is to let engineers release a product without collaborating closely with neuroscientists or to let neuroscientists overlook classroom realities. We can address this by making co-production standard practice. Teacher-education programs should include a short, practical module on neural measurement: what fMRI, MEG, and EEG actually capture; why signal-to-noise ratios and subject variability are significant; and how calibration affects performance. Recent guidance on brain-computer interface design is straightforward: neuroscience principles must inform engineering decisions and user protocols. This means setting clear hypotheses, being transparent about pre-registration where possible, and focusing on how task design affects the decodable signal. The research frontier is changing—visual representation decoders report high top-5 accuracy in lab tasks, and semantic decoders now extract meaning from stories—but classroom tasks are messier. A shared curriculum across education, neuroscience, and human-computer interaction can help maintain realistic expectations and humane experiments.

Critiques deserve responses. One critique suggests the vision is exaggerated: Meta-funded non-invasive decoders achieve only "alphabet-level" accuracy. That misinterprets the trend. The latest non-invasive text-production study using MEG achieves about 68% character accuracy on average and 81% for the top subjects; invasive systems reach similar accuracy for inner speech but at the cost of surgery. Another critique states that privacy risks overshadow benefits. Governance can help here: Colorado’s law, the proposed MIND Act, and UNESCO’s standards allow schools to establish clear boundaries. A third critique claims that decoding structure from the visual system does not equate to understanding. Agreed. That is why we restrict educational use to assistive tasks that require human confirmation and why we measure gains in agency rather than relying on mind-reading theatrics. Meanwhile, the evidence base is expanding: the mind captioning study shows solid mapping from visual meaning to text and, significantly, generalizes to memory without depending on language regions. Use that progress. Do not oversell it.

That "40% from 100" figure is our guiding principle and our cautionary tale. It shows that mind captioning can recover structured meaning from non-invasive brain data at rates well above chance. It also illustrates that the system is imperfect and probabilistic, not a peek into private thoughts. Schools should implement technology only where it is valuable and limited: for assistive purposes with consent, for tasks aligned with the science, and with a human involved. The remaining focus is governance. We should treat neural data as sensitive by default, enforce deletion and purpose limits, and expect public model cards and red-team results before any pilot enters a classroom. We should also support training that allows teachers, engineers, and neuroscientists to communicate effectively. Mind captioning will tempt us to rush the process. We must resist that. Pattern recognition is impressive. Educational policy must be even more careful, protective, and focused on dignity.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Congress of the United States. (2025). Management of Individuals’ Neural Data Act of 2025 (MIND Act). 119th Congress, S.2925. Retrieved from Congress.gov.
Ferrante, M., et al. (2024). Decoding visual brain representations from biomedical signals. Computer Methods and Programs in Biomedicine.
The Guardian. (2025, November 6). UNESCO adopts global standards on the ‘wild west’ field of neurotechnology.
Horikawa, T. (2025). Mind captioning: Evolving descriptive text of mental content from human brain activity. Science Advances.
Live Science. (2025). New brain implant can decode a person’s inner monologue.
Neuroscience News. (2025, November 7). Brain decoder translates visual thoughts into text.
OECD. (2025, July). Neurotechnology Toolkit. Paris: OECD.
Rehman, M., et al. (2024). Decoding brain signals from rapid-event EEG for visual stimuli. Frontiers in Neuroscience.
Reuters. (2024, April 18). First law protecting consumers’ brainwaves signed by Colorado governor.
Scientific American / Nature. (2025, November 6). AI Decodes Visual Brain Activity—and Writes Captions for It.
Tang, J., et al. (2023). Semantic reconstruction of continuous language from non-invasive brain recordings. Nature Neuroscience.
Yang, H., et al. (2025). Guiding principles and considerations for designing a well-grounded brain-computer interface. Frontiers in Neuroscience.
Zhang, A., et al. (2025). Brain-to-Text Decoding: A Non-invasive Approach via Typing (Brain2Qwerty).
