
When the Data Firehose Fails: Why Education Needs Symbolic AI Now


By Natalia Gkagkosi

Natalia Gkagkosi writes for The Economy Research, focusing on Economics and Sustainable Development. Her background in these fields informs her analysis of economic policies and their impact on sustainable growth. Her work highlights the critical connections between policy decisions and long-term sustainability.

Today’s AI is costly and brittle; schools need symbolic AI
Hybrid neuro-symbolic tools show each step, making feedback and grading fair
Policy should fund open subject rules and buy systems that prove their logic

We are in an AI boom that consumes power on the scale of entire countries while still stumbling over basic logic. By 2030, data centers are expected to use about 945 terawatt-hours of electricity per year, roughly what Japan consumes today, with AI the main driver. A single conversational query can consume around three watt-hours, far more than a web search. At this rate, larger models will become more expensive to operate in schools and less reliable in classrooms. Yet increasing scale has not solved the problem of weak reasoning: large models can impress us and still contradict themselves on simple tasks. The solution for education does not lie in more data. It requires a return to a clear concept with a fresh twist: symbolic AI. This approach means systems that represent knowledge through symbols and rules, apply logic to them, and, when appropriate, work alongside neural networks. This shift changes our aim from imitation to genuine understanding. It lets us teach machines to reason the way we expect students to.
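
To make the budget stakes concrete, here is a back-of-envelope sketch in Python using the per-query figure above. The enrollment and usage numbers are illustrative assumptions, not measurements.

```python
# Back-of-envelope energy estimate for classroom AI use.
# The ~3 Wh/query figure comes from the text above; the usage
# pattern below (students, queries per day, school days) is an
# illustrative assumption, not measured data.

WH_PER_QUERY = 3.0      # approximate energy per conversational query
STUDENTS = 1_000        # hypothetical district enrollment
QUERIES_PER_DAY = 20    # assumed queries per student per school day
SCHOOL_DAYS = 180       # typical school year

total_wh = WH_PER_QUERY * STUDENTS * QUERIES_PER_DAY * SCHOOL_DAYS
print(f"Annual energy for AI queries: {total_wh / 1_000:,.0f} kWh")
# -> 10,800 kWh per year for this one district, before training,
#    idle load, or cooling overhead is counted.
```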

What Is Symbolic AI?

Symbolic AI represents the world with human-readable symbols—concepts, relationships, and rules—then uses logic to draw conclusions. It includes the traditions of frames, ontologies, production rules, and theorem proving, and it traces back to the expert systems that encoded medical knowledge or configured complex equipment. Unlike pattern-matching systems, symbolic AI makes its reasoning process explicit. For example, it can explain why it concluded 'measles' by showing the rule 'IF fever AND rash THEN consider measles,' making its reasoning transparent. This transparency is vital for learning. Students, teachers, and auditors need to understand why a model reached a conclusion, not just that it did.
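
To show how small the core machinery can be, here is a minimal sketch of a forward-chaining production-rule system in the spirit of the measles example. The rules and facts are illustrative, not a medical knowledge base.

```python
# A minimal forward-chaining rule engine. Each rule pairs a set of
# conditions with a conclusion; a rule fires when all of its
# conditions are present in the fact base. Rules are illustrative.

RULES = [
    ({"fever", "rash"}, "consider_measles"),
    ({"fever", "stiff_neck"}, "consider_meningitis"),
]

def forward_chain(facts: set[str]) -> list[str]:
    """Apply every rule whose conditions hold; record why it fired."""
    trace = []
    for conditions, conclusion in RULES:
        if conditions <= facts:  # all conditions present in the facts
            facts.add(conclusion)
            trace.append(f"IF {' AND '.join(sorted(conditions))} "
                         f"THEN {conclusion}")
    return trace

for step in forward_chain({"fever", "rash"}):
    print(step)  # -> IF fever AND rash THEN consider_measles
```

The trace is the point: every conclusion arrives with the rule that produced it, which is exactly the auditability the paragraph above describes.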

This approach is not outdated. Symbolic methods are still the preferred way to state constraints, build curriculum graphs, and verify steps in math. They also scale differently from current models—not by gathering more text, but by adding better rules and organized knowledge. Symbolic AI excels at transparent generalization: when a rule applies, it holds wherever its conditions are met, regardless of surface form. This is the kind of logic we teach in schools. The idea is straightforward and practical: if we want AI to help students learn reasoning, we should use systems that can reason symbolically and share their logic in a way students can verify.

Figure 1: The scale problem: even if accuracy improves, cost and energy rise fast—pushing schools toward inference-efficient, rule-checked tools like symbolic AI.

Why Symbolic AI Matters for Learning and Assessment

Education values reliability over novelty. Modern language models can generate fluent answers that obscure gaps in understanding. Studies in 2024 revealed that when tasks require combining simple parts into new structures—classic compositional reasoning—performance drops significantly. These tests aren't unusual; they reflect how math word problems or multi-step lab protocols function. Symbolic AI directly addresses this gap by encoding the relationships and steps that must hold for an answer to be valid. In assessments, a symbolic system can check each step against a rule base. In tutoring, it can pinpoint the exact constraint a student overlooked.
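
A minimal sketch of such step-level checking, assuming a hypothetical worked algebra solution; sympy performs the equivalence test, and the named rules stand in for a curriculum rule base.

```python
# Sketch of step-level checking against a rule base: each transition
# in a student's working must be licensed by a named rule. The worked
# example and rule names are hypothetical; sympy tests equivalence.
import sympy as sp

x = sp.symbols("x")

# Each step: (expression, name of the rule the student claims to apply).
worked = [
    (2 * (x + 3), "start"),
    (2 * x + 6, "distributive law"),
    (2 * x + 9, "combine like terms"),  # deliberate student error
]

def audit(worked):
    """Check each transition; stop at the first rule that fails."""
    report = []
    for (prev, _), (curr, rule) in zip(worked, worked[1:]):
        ok = sp.simplify(prev - curr) == 0
        report.append((rule, "valid" if ok else "INVALID"))
        if not ok:
            break  # pinpoint the exact step the student got wrong
    return report

print(audit(worked))
# -> [('distributive law', 'valid'), ('combine like terms', 'INVALID')]
```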

There is also a fairness argument. If AI will help grade proofs or explanations, schools need a traceable path from premise to conclusion. Symbolic AI provides graders with a record of the rules used and assumptions made. This record can be audited and used for teaching. It also cuts moderation costs. Instead of re-running random prompts and hoping for the same outcome, an assessor can verify whether the proof engine's steps match a curriculum specification. Practically, a district could publish a symbolic rubric for algebra or historical causation, allowing various tools to implement it. Teachers would then review the same logic rather than a multitude of different model quirks.
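
One way such a published rubric might look as machine-readable data; the identifier and criteria here are hypothetical, meant only to show that a district's rubric can be a shared artifact that many tools implement.

```python
# Sketch: a district-published rubric as machine-readable rules that
# any vendor's tool could implement. All entries are illustrative.
ALGEBRA_RUBRIC = {
    "id": "district-12/algebra-1/linear-equations",  # hypothetical ID
    "criteria": [
        {"rule": "isolate_variable",
         "requires": "inverse operation applied to both sides"},
        {"rule": "state_solution",
         "requires": "final line of the form x = <value>"},
        {"rule": "check_solution",
         "requires": "value substituted back into the original equation"},
    ],
}
```

Because every tool consumes the same rubric, graders audit one shared logic instead of a multitude of model quirks.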

Hybrid Paths: Symbolic AI Meets Deep Learning

The choice isn't between symbolic or neural; it's both, by design. The most significant advances on hard reasoning tasks have come from neuro-symbolic systems that combine pattern recognition with formal logic. In 2024, DeepMind's AlphaGeometry merged learned search with a symbolic theorem prover to solve geometry problems from the International Mathematical Olympiad at a near gold-medalist level. In 2025, AlphaGeometry-2 pushed this further, surpassing the average gold medalist on Olympiad geometry problems. The secret wasn't just "more data"; it was better reasoning structures integrated with perception. This pattern belongs in educational tools: use neural networks to interpret diagrams or language, and use symbolic engines to ensure that every step is valid.

Hybrid systems also marry rigor with practicality. A language model can draft a solution while a symbolic checker verifies or corrects it. If a check fails, the system can explain which rule didn't apply and why. This feedback loop is ideal for formative learning. Over time, schools can develop compact domain models—competency maps for fractions, stoichiometry, grammar—that remain stable across curricula and providers. The benefit is resilience: when prompts change, the logic stays the same. And because the symbolic layer is lightweight, it can run on school hardware at predictable cost rather than relying on a cloud model that drains power and budget away from students.
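
A runnable sketch of that draft-and-verify loop under stated assumptions: `draft_solution` and `check` are toy stand-ins (in practice, a language-model call and a real rule engine), built only to show the shape of the feedback cycle.

```python
# Sketch of the hybrid loop: a neural model drafts, a symbolic checker
# verifies, and failures come back as rule-level feedback. Both
# functions are toy stand-ins, not a real model or checker API.

def draft_solution(problem: str, feedback: str = "") -> list[str]:
    """Stand-in for a neural drafter; an LLM call in practice."""
    # Toy behavior: revise the draft once feedback arrives.
    return ["x = 4"] if "failed" in feedback else ["x = 5"]

def check(steps: list[str]) -> dict:
    """Stand-in symbolic checker for the equation 2x + 1 = 9."""
    ok = steps[-1] == "x = 4"
    return {"ok": ok, "rule": "state_solution",
            "why": "final value does not satisfy 2x + 1 = 9",
            "trace": steps}

def solve_with_verification(problem: str, max_rounds: int = 3):
    feedback = ""
    for _ in range(max_rounds):
        steps = draft_solution(problem, feedback)
        verdict = check(steps)              # symbolic, rule-by-rule
        if verdict["ok"]:
            return steps, verdict["trace"]  # auditable record of rules
        # Tell the drafter exactly which rule failed and why.
        feedback = f"rule '{verdict['rule']}' failed: {verdict['why']}"
    return None, feedback

print(solve_with_verification("solve 2x + 1 = 9"))
# -> (['x = 4'], ['x = 4']) after one round of rule-level feedback
```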

Figure 2: When pattern recognition is paired with formal proof, performance jumps—evidence that symbolic structure is the missing lever for reliable reasoning in education tech.

Policy Roadmap for Symbolic AI in Education

Policy should guide procurement and research toward systems that can explain their processes. First, establish a baseline: any AI used for grading or critical feedback should provide a symbolic record of the steps taken, the rules applied, and the knowledge used. This does not rule out neural models; it requires them to work within an auditable logic layer. Second, invest in open, standards-based knowledge graphs for key subjects. Ministries and districts can sponsor modular ontologies—such as algebraic identities, geometric axioms, and lab safety rules—that tools can reuse. These public goods can reduce reliance on vendors and help teachers extend the rules they actually teach.
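
To make the second point concrete, here is a sketch of what a modular subject ontology could look like in code; the topics and prerequisite links are illustrative, not a proposed standard.

```python
# Sketch: a modular curriculum ontology as a small directed graph of
# prerequisite relations that any tool could reuse. Entries are
# illustrative placeholders, not an official ontology.
PREREQUISITES = {
    "solving_linear_equations": ["inverse_operations", "like_terms"],
    "like_terms": ["distributive_law"],
    "inverse_operations": [],
    "distributive_law": [],
}

def learning_order(goal: str, done=None) -> list[str]:
    """Depth-first walk that lists prerequisites before the goal."""
    done = done if done is not None else []
    for prerequisite in PREREQUISITES.get(goal, []):
        learning_order(prerequisite, done)
    if goal not in done:
        done.append(goal)
    return done

print(learning_order("solving_linear_equations"))
# -> ['inverse_operations', 'distributive_law', 'like_terms',
#     'solving_linear_equations']
```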

Third, align AI policy with energy and cost realities. The IEA projects that data-center electricity demand will roughly double by 2030, mainly driven by AI. Districts should insist on inference-efficient architectures: smaller local models paired with symbolic engines and, where cloud resources are necessary, strict transparency on energy use and latency. A clear procurement guide—traceable reasoning, predictable per-student energy consumption and costs, and adherence to established subject ontologies—will shape the market. This isn't about picking favorites; it's about purchasing systems that reinforce classroom logic and fit within public budgets.

The New Attention to Symbolic AI

This shift is not just theoretical. In late 2025, Nature reported that a major trend in AI is the push to combine "good old-fashioned AI" with neural networks to achieve more human-like reasoning. Researchers and practitioners now see symbolic methods as the missing ingredient for logic, safety, and reliability. Even popular science publications have shifted from viewing them as nostalgic to recognizing them as necessary. If we want AI to operate by rules, we must provide it with rules. For education, this attention matters because it signals an expanding set of tools and a research landscape to explore. Our field can lead rather than lag by funding pilot programs and sharing open resources that other fields can adopt.

This surge doesn't silence criticism. Skeptics remember fragile expert systems and costly knowledge engineering. Those are valid concerns. The counterargument comes from practical experience with hybrid systems. We no longer need to code every detail by hand: neural models can generate candidate rules, teachers can validate and refine them, and symbolic engines can enforce them and explain their logic. The outcome is not a rigid framework but a living curriculum structured as constraints, examples, and proofs. When exams change or new standards emerge, we change the rules, not the entire model. That mirrors how education operates now, and it should also be how its AI functions.

The opening figures are a warning and an opportunity. If we continue to prioritize scale alone, AI will consume budgets and energy while failing at what schools value most: clear reasoning. Symbolic AI offers a more educational path. It lets us encode the structures we teach—axioms, rules, causal links—and hold machine answers to the same standards we expect from students. The shift we see in research points the way: perception where needed, logic where essential, and explanations everywhere. Policymakers can support this direction with procurement rules and shared, open subject ontologies. Districts can pilot tools that show their work, not just their results. The goal is clear and urgent: align the intelligence we invest in with the intelligence we cultivate. If we succeed, we will spend less time chasing model quirks and more time nurturing minds—both human and machine—that can articulate their reasoning.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

DeepMind. (2024). AlphaGeometry: An Olympiad-level AI system for geometry. Nature paper and technical blog.
Epoch AI. (2025). How much energy does ChatGPT use? Gradient Update.
International Energy Agency. (2025). AI is set to drive surging electricity demand from data centres; see also Energy and AI base-case projections.
Jones, N. (2025). This AI combo could unlock human-level intelligence. Nature.
Jones, N., & Nature Magazine. (2025). Could symbolic AI unlock human-like intelligence? Scientific American.
Wikipedia contributors. (2025). Symbolic artificial intelligence. Wikipedia.
Yang, H., et al. (2024). Exploring compositional generalization of large language models. NAACL SRW.
Zhao, J., et al. (2024). Exploring the limitations of large language models in compositional reasoning. arXiv.
Trinh, T. H., et al. (2024–2025). AlphaGeometry and AlphaGeometry-2 results on Olympiad geometry. Nature and arXiv.


Sovereign AI Is Becoming Public Infrastructure


By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

Sovereign AI is public infrastructure for education
Countries blend open models and public compute to localize and cut dependence
Schools need shared data, transparent tests, and energy-smart procurement

In late 2025, South Korea announced plans with NVIDIA and local companies to deploy more than 260,000 GPUs across 'sovereign clouds' and AI factories. That figure marks a shift in ambition: this is not a pilot project but national infrastructure. Europe is taking similar steps, expanding a network of public 'AI Factories' built on EuroHPC supercomputers so that startups and universities can access the substantial computing power needed for large models at reduced cost. In Latin America, a regional team is training Latam-GPT to reflect local laws, languages, and cultures rather than relying on defaults from Silicon Valley. The common goal is clear: countries want their education systems, courts, and public services to depend on models they own and can examine. They seek sovereign AI systems that align with local laws, culture, security, and education needs—so that teachers and students can trust what they use.

The case for sovereign AI

Sovereign AI is more than a slogan. It represents a shift from purchasing general cloud services to building controllable AI systems governed by local rules. Switzerland has demonstrated that a high-income democracy can do this openly. In September 2025, ETH Zurich, EPFL, and the Swiss National Supercomputing Centre introduced Apertus, an open, multilingual model trained on public infrastructure and available for broad use. The team presents it as a template for transparent, sovereign foundations that others can adapt. Openness is a policy choice, not just a marketing strategy. It increases accountability and enables schools, ministries, and researchers to verify claims rather than accept black boxes, which is crucial for strategic decision-making and public trust.

Sovereign AI is also necessary where global models do not fit local needs. The collaborative effort behind Latam-GPT, led by Chile’s CENIA, is training the model on regional legal texts, educational materials, and public records to represent Spanish, Portuguese, and Indigenous languages accurately. This approach is not just for show. When models struggle with code-switching or regional expressions, they can make classroom use risky and civic interactions unfair. A regional base model with national adjustments can bridge that gap. The goal is not to outperform the largest U.S. model on English benchmarks; it is to provide answers, grading, and guidance in the language and context students are familiar with.

Build, borrow, or blend: a sovereign AI playbook

There are three main routes to sovereign AI. The first is to build from the ground up, as Switzerland has done. The advantage is control over data, training methods, safety measures, and licensing; the cost is ongoing public funding, a consistent talent pool, and long-term computing resources. The second is to borrow and adapt. Latin America’s initiative illustrates this low-compute option: start with an open-weight model and use lightweight adapter methods such as LoRA to tailor it for local slang, public services, and curricula (a sketch of this route follows). This can be done in weeks on modest clusters, which matters for education budgets. The third is to blend: anchor public policy in open models, contract for private resources when necessary, and form collaborations to reduce costs.
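
A sketch of the adapter route using the Hugging Face peft library; the model name is a placeholder for any open-weight checkpoint, and the target modules vary by architecture, so treat this as the shape of the approach rather than a ready recipe.

```python
# Sketch of "borrow and adapt": wrap an open-weight model with LoRA
# adapters so only a small set of weights is trained on local data.
# The model name is a placeholder; target_modules are model-specific.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "an-open-weight-model"  # placeholder: any open checkpoint
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

config = LoraConfig(
    r=8,                  # low-rank dimension of the adapter update
    lora_alpha=16,        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of weights
```

Because only the adapter weights train, a ministry can fine-tune on curricula and local language data with modest clusters, which is exactly why this route fits education budgets.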

Europe's blended approach is currently the most advanced. The EU's AI Factories initiative leverages EuroHPC to provide AI-optimized supercomputing resources to startups, universities, and small and medium-sized enterprises. The first wave of seven sites was selected in December 2024, with six more added in October 2025, alongside new "AI Factory Antennas" to widen access. Funding mixes EU programs and national contributions. The policy goal is straightforward: reduce reliance on foreign, closed models by creating sovereign AI capacity as a shared resource. In the private sector, Europe’s Mistral continues to release strong open-weight models, strengthening the link between public infrastructure and open tools. This is a "borrow and blend" strategy on a continental scale.

Figure 1: Europe is institutionalizing sovereign AI as public infrastructure, expanding from 7 core sites in 2024 to a wider access layer in 2025

Power, chips, and classrooms: the hidden costs of sovereign AI

Sovereign AI requires significant computing and energy resources. The International Energy Agency expects global data-center electricity consumption to more than double by 2030, reaching about 945 TWh, with AI-focused centers contributing a large share of the increase. Education planners need to treat this figure as both a budget item and a constraint on energy supply. If a ministry wants an AI tutor in every classroom, it must consider where processing occurs, who pays for the compute cycles, and how to align service agreements with school hours. Often the best choice is "nearby but not on-site": public cloud zones governed by domestic law, supported by long-term clean energy contracts, and connected to education networks.

Figure 2: Rapid growth to 620–1,050 TWh by 2026 makes energy-aware procurement a core requirement for education systems.

Another challenge is hardware concentration. Analysts estimate that NVIDIA still accounts for 80-90% of the AI accelerator market. This concentration increases costs and delivery risks for any country scaling sovereign AI. Europe's response is to build public computing resources and diversify hardware options. The EuroHPC/AI Factories model grants access to non-corporate users, while the EU is investing in domestic chip design—such as a €61.6 million grant to Dutch company AxeleraAI for an inference chip aimed at data centers. While schools may not care about tensor cores, they will notice the impact when hardware shortages delay the launch of national reading tutors. Diversification and public access help mitigate these risks, which is vital for education policy and infrastructure planning.

Korea is addressing both challenges head-on. The government has selected five teams to develop a sovereign foundation model by 2027. It is making large GPU purchases, starting with an initial batch of 13,000 units, under a multi-party plan that could scale to 260,000 GPUs across public and private AI factories. The public message is about national control, but the practical lesson for education is procurement design: consolidate demand, negotiate energy contracts that align with school schedules, and reserve computing power for public-interest use. This is how sovereign AI stays affordable for schools.

A practical sovereign AI agenda for education

The first step is to view sovereign AI as curriculum infrastructure. Use public models for essential tasks—such as assessment feedback, lesson planning, and language support—and require vendors to operate on platforms that comply with local laws or use certified AI Factories. This ensures data residency, accountability, and cost management. It also sets a guideline: models utilized on public networks must be open-source, or at least open enough for education authorities to validate safety, content filters, and logging. Europe’s direction shows the way, with AI Factories designed to provide researchers and small businesses open access, along with broader digital partnerships the EU is forming with countries like Korea and Singapore to share standards and best practices.

The second step is to create the data that models need to support classrooms. This involves gathering curated, consented, and representative datasets: textbooks under public licenses, anonymized exam papers, bilingual glossaries, local history archives, and speech data for lesser-known dialects. Latin America’s method—federating national collections under a regional model and then fine-tuning for each country—provides a solid template. If you want sovereign AI that can teach in Quechua or Romansh, you cannot simply rent it; you must create it. Attach data grants to teacher involvement, fund annotation, and collaborate with universities to ensure rigorous quality control.

The third step is building trust. Teachers will only use tools they can trust and understand. Ministries should create open evaluation tracks that any vendor or public lab can enter. Define tasks that mirror classroom needs: give detailed feedback on essays, identify bias in texts, draft unit plans aligned with local standards. Publish the results and let districts choose from a trusted list. The cost of this approach is minimal compared with the expense of a failed national contract; the benefits are real choice and quicker development cycles. Lastly, plan for energy needs. The IEA's projections indicate that demand will rise, so education networks should work with national energy planners to keep peak testing and tutoring loads away from times of grid stress.
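
A toy sketch of what such an open evaluation track could look like; the tasks and the keyword-based scorer are illustrative stand-ins for rubric-based human or hybrid scoring.

```python
# Sketch: an open evaluation track any vendor or public lab can run.
# Tasks mirror classroom needs; the scorer is a toy stand-in for
# real rubric-based scoring. All task content is illustrative.

TASKS = [
    {"id": "essay_feedback",
     "prompt": "Give detailed feedback on this essay.",
     "required_terms": ["thesis", "evidence"]},
    {"id": "unit_plan",
     "prompt": "Draft a unit plan for fractions per the standard.",
     "required_terms": ["objective", "assessment"]},
]

def score(answer: str, task: dict) -> float:
    """Toy scorer: fraction of required rubric terms the answer covers."""
    hits = sum(term in answer.lower() for term in task["required_terms"])
    return hits / len(task["required_terms"])

def evaluate(generate):
    """Run every task through a vendor-supplied `generate` callable."""
    return {t["id"]: score(generate(t["prompt"]), t) for t in TASKS}

# Any system exposing `generate(prompt) -> str` can be listed and compared.
print(evaluate(lambda p: "Thesis is clear; add evidence and an objective."))
# -> {'essay_feedback': 1.0, 'unit_plan': 0.5}
```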

We began with a stark figure: 260,000 GPUs for a single country’s sovereign AI initiative. The exact number matters less than its implication: AI is becoming core infrastructure, and education will be one of its main users. If governments want models that teach in local languages, adhere to national curricula, and respect public law, the answer is not to commit to a single foreign platform. Instead, they should build shared capacity, pool risks, and, where possible, keep the system open. This work is already underway—from Switzerland’s open Apertus to Europe’s AI Factories, Korea’s national rollout, and Latin America’s regional models. The next steps belong to education leaders: treat sovereign AI as a public resource, fund it, assess its effectiveness in classrooms, and keep it open enough to earn trust. The stakes are real. They involve the words, ideas, and feedback we put in front of students every day.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Dutch chipmaker AxeleraAI gets €61.6 million EU grant to develop an AI inference chip. (2025, March 6). Reuters.
Apertus: a fully open, transparent, multilingual language model. (2025, September 2). ETH Zurich/EPFL/CSCS (press release).
Energy and AI—Executive summary & News release. (2025). International Energy Agency. Projections for data-centre electricity demand through 2030.
EU AI Factories—Policy page. (2025, October 30). European Commission, Shaping Europe’s Digital Future.
EuroHPC JU—Selection of the first seven AI Factories. (2024, December 10). EuroHPC Joint Undertaking.
EuroHPC JU—Selects six additional AI Factories. (2025, October 10). EuroHPC Joint Undertaking.
EU–Republic of Korea Digital Partnership Council. (2025, November 27–28). European Commission press corner / Digital Strategy news.
Latam-GPT and the search for AI sovereignty. (2025, November 25). Brookings Institution.
Mistral unveils new models in race to gain edge in “open” AI. (2025, December 3). Financial Times.
NVIDIA, the South Korean government, and industrial giants are building an AI infrastructure and ecosystem. (2025, October 31). NVIDIA (press release).
South Korea accepts the first batch of NVIDIA GPUs under the large-scale AI infrastructure plan. (2025, December 1). The Korea Times.
