Off-Planet Compute, On-Earth Learning: Why "Space Data Centers" Should Begin with Education
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Use space data centers to ease Earth’s compute and water strain—start with education
Run low-latency classroom inference in LEO; keep training and sensitive data on Earth
Pilot with strict SLAs, life-cycle audits, and debris plans before scaling
The key figure we should focus on is this: global data centers could use more than 1,000 terawatt-hours of electricity by 2026. This accounts for a significant share of global power consumption and is increasing rapidly as AI transitions from labs to everyday workflows. While this trend offers tangible benefits, it also creates challenges in areas with limited budgets, particularly in schools and universities. Water-scarce towns now face pressure from new server farms. Power grids can delay connections for years. Campuses see rising costs for cloud-based tools or experience slowdowns when many classes move online simultaneously. The concept of space data centers may sound like science fiction. Still, it addresses a pressing, immediate policy goal: expanding computing resources for education without further straining local water, land, and energy supplies. If we want safe, fair AI in classrooms, the real question is no longer whether the idea is exciting. The question is whether education should drive the initial trials, establish guidelines, and define the procurement rules that follow.
Reframing Feasibility: Space Data Centers as Education Infrastructure
Today's debate tends to focus on engineering bravado—can we lift racks into orbit and keep them cool? This overlooks the actual use case. Education needs reliable, fast, and cost-effective processing for millions of small tasks: grading answers, running speech recognition, translating content, and powering after-school AI tutors. These tasks can be batched, cached, and scheduled. They do not all require the instant response that a Wall Street trader or a competitive gamer demands. The reframing is straightforward: view space data centers as a backup and support layer for public-interest computing. Keep training runs and the most sensitive student data on the ground. Offload bursty, repeatable inference jobs to orbital resources during peak times, nights, or exam periods. Education, not advertising or cryptocurrencies, is the best starting point because it offers high social returns, has predictable demand (at the start of terms and during exam periods), and, if managed well, can tolerate slightly longer processing times in many situations.
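To make that division of labor concrete, here is a minimal scheduling sketch in Python. It is illustrative only: the job names, thresholds, and the routing function are assumptions, not a real orbital scheduler, but they show how deferrable, latency-tolerant education workloads could be steered to an orbital tier while sensitive records and tight real-time loops stay on the ground.

```python
from dataclasses import dataclass

# Hypothetical illustration of the "backup and support layer" idea:
# route education workloads by how deferrable and latency-tolerant they are.
# All names and thresholds are assumptions, not a real scheduler API.

@dataclass
class Job:
    name: str
    latency_budget_ms: int   # how long the classroom can wait for a response
    deferrable: bool         # can it run tonight or off-peak instead of now?
    sensitive_data: bool     # raw student records stay on the ground

def route(job: Job, leo_rtt_ms: int = 80) -> str:
    """Decide where a job runs under the article's division of labor."""
    if job.sensitive_data:
        return "ground"                              # sensitive records never leave Earth
    if job.deferrable:
        return "orbital-batch"                       # overnight or off-peak orbital window
    if job.latency_budget_ms >= leo_rtt_ms + 50:     # round trip plus processing margin
        return "orbital-edge"                        # cached, latency-tolerant inference
    return "ground"                                  # tight real-time loops stay local

jobs = [
    Job("overnight essay scoring", 60_000, True, False),
    Job("live captioning", 500, False, False),
    Job("grade-book update", 200, False, True),
]
for j in jobs:
    print(j.name, "->", route(j))
```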
This viewpoint is critical now because the pressures on Earth-based resources are real. The International Energy Agency predicts that data center electricity use could surpass 1,000 TWh by 2026 and continue to rise toward 2030, even without a fully AI-driven world. Water resources are equally stretched. A medium-sized data center can consume about 110 million gallons of water per year for cooling. In 2023, Google alone reported over 5 billion gallons used across its sites, with nearly a third drawn from areas facing medium to high water shortages. When a district must choose between building a new school or a new power substation, the trade-off becomes significant. Shifting part of the computing layer to continuous, reliable solar power in orbit does not eliminate these trade-offs, but it can alleviate them if initial efforts prioritize public needs and include strict environmental accounting from the outset.
What the Numbers Say About Space Data Centers
Skeptics are right to ask for data, not just metaphors. Launch costs have fallen sharply, and SpaceX now publishes standard rideshare pricing for small payloads. Several firms, including the European ASCEND consortium led by Thales Alenia Space, have studied whether orbital data centers can work and conclude they could bring net environmental benefits under specific conditions. Technology companies are also researching solar-powered prototypes with optical inter-satellite links, such as Google's Project Suncatcher. These projects are at an early stage, but they create an opening for policy planning to proceed alongside the engineering.
Latency is the second metric that matters for classrooms. LEO satellite networks already achieve median latencies in the 45-80 ms range, depending on routing and timing, which is comparable to many terrestrial connections. That is too slow for high-frequency trading but acceptable for most educational technology tasks, such as real-time captioning and adaptive learning, when caching is used effectively. Peer-reviewed tests conducted in 2024-2025 show steady improvements in low-orbit latency and packet loss. The implication is clear: if processing is staged near ground stations and content is cached at the orbital edge, many educational tasks can run without noticeable delays. Training large models will remain on Earth or in a hybrid cloud, where power, maintenance, and compliance are more manageable. However, the inference tier, the part that touches schools directly, can be moved. This is where new capacity offers the most support and causes the least disruption.
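A rough latency budget makes the point. The round-trip figures below come from the 45-80 ms LEO range cited above; the inference time and the interactivity tolerance are assumptions chosen only to illustrate the arithmetic.

```python
# Rough latency-budget arithmetic for an orbital inference tier.
# RTT figures reflect the 45-80 ms LEO range cited in the text;
# the processing and tolerance numbers are illustrative assumptions.

leo_rtt_ms = (45, 80)           # median LEO round-trip range
inference_ms = 120              # assumed on-orbit model inference time
cache_hit_ms = 5                # assumed lookup time for pre-cached content

interactive_tolerance_ms = 300  # assumed limit for captioning / adaptive feedback

for rtt in leo_rtt_ms:
    total = rtt + inference_ms + cache_hit_ms
    verdict = "within" if total <= interactive_tolerance_ms else "over"
    print(f"RTT {rtt} ms -> end-to-end ~{total} ms ({verdict} a {interactive_tolerance_ms} ms budget)")
```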
Figure 1: Global data center electricity use could more than double from 2022 to 2026, tightening budgets and grids that schools rely on; this is the core pressure space data centers aim to relieve.
Latency, Equity, and the Classroom Edge with Space Data Centers
The case for equity is strong. Rural and small-town schools often have limited access to reliable infrastructure. When storms, fires, or heat waves occur, they are the first to lose service and take the longest to recover. Space data centers could serve as a floating edge, keeping essential learning tools operational even when local fiber connections are down or power is limited. A school district could sync lesson materials and assessments with orbital storage in advance. During outages, student devices can connect via any available satellite link and access the cached materials, while updates wait until connections stabilize. For special education and language access, where speech-to-text and translation are critical during class, this buffer can make a major difference. The goal is to design for processing near content, rather than pursuing flashy claims about space training.
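The "floating edge" described here is essentially a store-and-forward pattern. The sketch below, with hypothetical class and method names, shows its shape: pre-position content before the school week, serve cached copies during an outage, and queue student work until any link stabilizes.

```python
# Minimal store-and-forward sketch of the classroom "floating edge".
# Names and methods are hypothetical; the point is the pattern, not an API.

class ClassroomEdgeCache:
    def __init__(self):
        self.cache = {}     # lesson_id -> content synced before the outage
        self.outbox = []    # student work waiting for a stable uplink

    def presync(self, lesson_id: str, content: bytes) -> None:
        """Sync lesson materials and assessments ahead of the school week."""
        self.cache[lesson_id] = content

    def read(self, lesson_id: str):
        """Serve the cached copy even if the local fiber connection is down."""
        return self.cache.get(lesson_id)

    def submit(self, work: dict) -> None:
        """Hold student submissions locally; nothing is lost during an outage."""
        self.outbox.append(work)

    def flush(self, uplink_available: bool) -> int:
        """When any satellite or terrestrial link stabilizes, drain the queue."""
        if not uplink_available:
            return 0
        sent = len(self.outbox)
        self.outbox.clear()
        return sent
```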
Figure 2: Low-Earth-orbit links now deliver classroom-ready median latencies; geostationary paths remain an order of magnitude slower—underscoring why education pilots should target LEO for inference.
The environmental equity argument is also essential. Communities near large ground data centers bear the burden of water usage, diesel backups, and noise. Moving some processing off-planet does not eliminate launch emissions or the risk of space debris, but it can reduce local stressors on vulnerable watersheds. To be credible, however, initial efforts should provide complete reports on carbon and water use throughout the life cycle: emissions from launches, in-space operations, de-orbiting, debris removal, and the avoided local cooling and water use. Educators can enforce this transparency through their purchasing decisions. They can require independent environmental assessments, mandate end-of-life de-orbiting plans, and tie payments to verified ecological performance rather than mere promises. When approached in this manner, space becomes a practical tool for relieving pressure on Earth as we develop more sustainable grids and regulations.
A Policy Roadmap to Test and Govern Space Data Centers
The main recommendation is to launch three targeted, controlled pilot programs over two school years to shape education technology proactively. The first pilot focuses on content caching: national education bodies and open-education providers pre-position high-use reading-support resources on low-orbit satellites, targeting under 100 ms latency and strict privacy controls. The second pilot tests AI inference, evaluating speech recognition, captioning, and formative feedback on orbital nodes, with reliable terrestrial backups and logs retained for bias and error assessment. The third pilot provides emergency continuity during outages or storms, prioritizing students who rely on assistive technology. Each pilot includes a ground-only control group so that actual learning gains and access improvements, not just network metrics, are measured.
Procurement and governance must go hand in hand—take decisive steps to shape them now. Ministries and agencies should immediately design model RFPs that pay only for actual processing, limit data in orbit to 24 hours unless consent is given, and require end-to-end encryption managed on Earth. Insist that providers map education rules like FERPA/GDPR to orbital processes, enforce latency standards, and fully commit to zero-trust security. Demand signed debris-mitigation and de-orbiting plans in every contract and tie payments to verified environmental outcomes. Do not wait for commercial offers: by setting these requirements now, education can become the leader—and the primary beneficiary—in the responsible, innovative adoption of space data center technology.
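One way to keep such requirements enforceable is to express them in machine-readable form so each vendor bid can be checked line by line. The dictionary and check below are a hypothetical illustration of the terms listed above, not an existing procurement standard; the field names and limits are assumptions drawn from the text.

```python
# Hypothetical, machine-readable version of the RFP terms described above.
# Field names and limits mirror the article's list; they are illustrative only.

RFP_REQUIREMENTS = {
    "billing": "pay-per-verified-processing",
    "max_data_retention_in_orbit_hours": 24,      # unless explicit consent is given
    "encryption": "end-to-end, keys managed on Earth",
    "privacy_mapping": ["FERPA", "GDPR"],
    "latency_sla_ms": 100,
    "security_model": "zero-trust",
    "debris_mitigation_plan": True,
    "deorbit_plan": True,
    "payment_tied_to_verified_environmental_outcomes": True,
}

def check_bid(bid: dict) -> list[str]:
    """Return the requirement keys a vendor bid fails to meet."""
    failures = []
    for key, required in RFP_REQUIREMENTS.items():
        offered = bid.get(key)
        if isinstance(required, bool) or not isinstance(required, (int, float)):
            if offered != required:                  # exact-match policy terms
                failures.append(key)
        elif offered is None or offered > required:  # numeric ceilings (hours, ms)
            failures.append(key)
    return failures

# Example: a bid that keeps data in orbit for 72 hours fails one requirement.
bid = dict(RFP_REQUIREMENTS, max_data_retention_in_orbit_hours=72)
print(check_bid(bid))   # ['max_data_retention_in_orbit_hours']
```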
The Market Will Come—Education Should Set the Terms
The commercial competition is intensifying. Blue Origin has reportedly been working on orbital AI data centers, while SpaceX and others are investigating upgrades that could support computing loads. Startups are proposing “megawatt-class” orbital nodes. Tech media often portrays this as a battle among large companies, but the initial steady demand may come from the public sector. Education spends money in predictable cycles, values reliability over sheer speed, and can enter multi-year capacity agreements that reduce risks for early deployments. The ASCEND study indicates feasibility; Google’s team has shared plans for a TPU-powered satellite network with optical links; academic research outlines tethered and cache-optimized designs. None of this guarantees costs will be lower than ground systems in the immediate future. Still, it presents a path for specific, limited tasks where the overall cost, including water and land, is less per learning unit. That should be the key measure guiding us.
What about the common objections? Cost is a genuine concern, but declining launch prices and improved packing densities change the calculation, and a tighter focus on inference and caching means less reliance on constant, high-bandwidth data transfers. Latency is manageable with LEO satellites and intelligent routing; field data now shows median latencies of tens of milliseconds in mature markets. Reliability can be addressed through redundancy, graceful fallback to ground systems, and planning for disaster resilience. Maintenance is a known challenge; small, modular units with planned lifespans and guaranteed de-orbit procedures mitigate that risk. And yes, rocket emissions are significant; this is where complete life-cycle accounting and limits on the number of launches per educational task must come in. Microsoft's underwater Project Natick offers a helpful analogy: its sealed, unattended modules failed far less often than comparable servers on land, showing that careful design for a hard environment can improve reliability. The same discipline should apply to space. If these conditions are met, pilots can advance without greenwashing.
The path to improved learning goes straight through computing. We can continue to argue over permits for substations and water rights, or we can introduce a new layer with different demands and challenges. The opening statistic—more than 1,000 TWh of electricity used by data centers by 2026—is not just a number for a school trying to keep devices charged and cloud tools functioning. It explains rising costs, community pushback, and why outages affect those with the least resources first. Space data centers are not a magic solution. They are a way to increase capacity, reduce local pressures, and strengthen the services students depend on. If education takes the lead in this first round—through small, measurable pilots, strict privacy and debris regulations, and performance-based contracts—we can transform a lofty goal into a grounded policy achievement. The choice is not between dreams in space and crises on Earth. It is about allowing others to dictate the terms or establishing rules that prioritize public education first. Now is the time to draft those rules.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ascend Horizon. (n.d.). Data centres in space.
EESI. (2025, June 25). Data Centers and Water Consumption.
Google. (2025, Nov. 4). Meet Project Suncatcher (The Keyword blog).
IEA. (2024). Electricity 2024 – Executive summary.
IEA. (2025). Energy and AI – Energy demand from AI.
Lincoln Institute of Land Policy. (2025, Oct. 17). Data Drain: The Land and Water Impacts of the AI Boom.
Microsoft. (2020, Sept. 14). Project Natick underwater datacenter results.
Ookla. (2025, June 10). Starlink's U.S. performance is on the rise.
Reuters. (2025, Dec. 10). Blue Origin working on orbital data center technology — WSJ.
Scientific American. (2025, Dec.). Space-Based Data Centers Could Power AI with Solar.
SpaceX. (n.d.). Smallsat rideshare program pricing.
Thales Alenia Space. (2024, June 27). ASCEND feasibility study results on space data centers.
The Verge. (2025, Dec. 11). The scramble to launch data centers into space is heating up.
Vaibhav Bajpai et al. (2025). Performance Insights into Starlink's Latency and Packet Loss (preprint).
Wall Street Journal. (2025, Dec. 11). Bezos and Musk Race to Bring Data Centers to Space.
Stop Hiring on AI Slop: Build Proof of Originality into Education and Work
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AI slop is flooding education and hiring, drowning out real skill
Fix the system by verifying process—observed writing, evidence-linked claims, and a short oral defense
Set provenance standards and incentives so accountable, source-grounded work beats paste
A single number summarizes the threat: Turnitin’s AI-writing tool scanned 200 million student papers in one year; 11% contained at least 20% AI text, and 3% were mostly AI-written. This is a structural issue, not a minor annoyance. AI-generated content undermines how we evaluate skill, knowledge, and trust in hiring, admissions, and publishing. The International Committee of the Red Cross found that chatbots fabricate archival citations. Retraction Watch tracks over 50,000 retractions and 300+ "hijacked" journals, showing how automation threatens research integrity. Google now targets “scaled content abuse,” much of it AI-driven. Unless we verify the originality process, not just the output, we risk losing the ability to certify real skill and knowledge.
AI slop is not spam; it is a structural risk
Spam clogs inboxes; AI slop distorts signals. In education and hiring, we use writing to assess competence and judgment. The old cues (polished grammar, predictable structure, business-like phrasing) were never perfect signals, but they were costly to produce and therefore scarce. Now they are abundant. Half of job seekers report using generative tools to create CVs and cover letters, while many HR teams use AI to sift through the resulting flood. This creates a cycle: generic applications pass generic filters, while candidates with genuine, distinctive work must prove they are not machines. The outcome is less precision, more effort, and increasing cynicism on both sides. The main risk is not the existence of AI but that our evaluation systems were designed for a different cost structure of producing text.
The research ecosystem shows how far the problem can expand if we do nothing. Large-scale AI content has contributed to a market filled with fake or hijacked journals, citation spam, and synthetic references. Retraction Watch’s database now includes over 50,000 retractions, and its Hijacked Journal Checker has more than 300 titles. Even respected institutions are receiving “record requests” for non-existent journals and issues created by chatbots. When gatekeepers rely on surface features rather than verifiable links, slop enters the system. Google’s 2024 policy change—targeting “scaled content abuse”—signals that high-volume AI publishing harms quality at a web scale. Education and hiring are smaller systems, but they share the same mechanics; if we want accurate signals, we must raise the cost of generating unreliable text and lower the cost of checking provenance.
Figure 1: One in nine papers showed notable AI text; high-AI papers still form a hard core that standard essays can’t hide.
From detection to design: a proof-of-originality pipeline for AI slop
The impulse to combat AI slop with AI detectors is understandable and, in some cases, useful. However, detectors cannot do the job alone; their error rates and bias risks are well documented. A better approach is redesign. Replace "trust the document" with "trust the process that produced it." For admissions and hiring, this means three interconnected steps. First, require observed writing: a timed, supervised writing session on a prompt related to the role or course. It can be brief, 45 to 90 minutes, and it becomes the primary writing sample that evaluators score. Second, require evidence-linked claims: every factual claim must carry a working identifier (DOI, ISBN, ISSN, or stable URL), and candidates must submit an evidence table listing these IDs, as in the sketch below. Generic text is acceptable; untraceable claims are not. Third, include a short oral defense within 48 hours. Applicants explain their key choices, sources, and trade-offs. You do not need to guess who wrote the words; you assess who owns the ideas and can navigate the sources.
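As a sketch of the evidence-table check, the Python below validates that each claim carries at least a well-formed identifier. The field names, sample rows, and the fallback rule are assumptions; resolving a DOI or ISBN against a live registry would be a separate, online step.

```python
import re

# Minimal sketch of the "evidence-linked claims" step: verify that every row
# of an applicant's evidence table carries an identifier that is at least
# well-formed. Format checks only; row fields and sample values are placeholders.

DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")   # common shape of a Crossref DOI

def isbn13_valid(isbn: str) -> bool:
    """ISBN-13 checksum: digits weighted 1 and 3 alternately must sum to a multiple of 10."""
    digits = [int(c) for c in isbn.replace("-", "") if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits)) % 10 == 0

def row_has_identifier(row: dict) -> bool:
    """Each claim needs at least one checkable identifier; a bare HTTPS URL is the weakest fallback."""
    if DOI_PATTERN.match(row.get("doi", "")):
        return True
    if isbn13_valid(row.get("isbn", "")):
        return True
    return row.get("url", "").startswith("https://")

evidence_table = [
    {"claim": "placeholder journal claim", "doi": "10.1000/example.123"},
    {"claim": "placeholder textbook claim", "isbn": "978-0-306-40615-7"},
    {"claim": "placeholder web claim", "url": "https://example.org/post"},
]
print([row_has_identifier(r) for r in evidence_table])   # [True, True, True]
```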
These steps are practical. Credibility problems arise when institutions rely solely on static documents and automated scores; when organizations shift to process evidence (timestamps, version histories, reference IDs, brief oral defenses), the advantage of slop collapses. Platforms that faced AI-generated floods moved from banning outputs to revising contribution processes, because moderators could not reliably identify authorship at scale. The lesson for hiring and admissions is to design a proof-of-originality pipeline that is hard to fake and easy to verify. This approach speeds up scoring, reduces uncertainty, and rewards genuine explanation over pasting.
Guardrails without unfair harm: make AI slop costly, not students
We must also avoid a second trap: turning faulty detectors into courtroom evidence. Studies show that popular AI detectors misclassify non-native English writing at high rates, and some universities have paused or adjusted detector-led disciplinary processes because of bias and false positives. K-12 teachers report similar uncertainty; only a minority believe AI tools do more good than harm, and many feel pressured to adopt them despite the integrity risks. The message is clear. Detectors can flag anomalies, but process evidence must determine what those anomalies mean. A flag should initiate a conversation, not a judgment. When we design assessments for resilience, with observed writing, source-linked claims, and swift oral defenses, the need for high-stakes detection decreases.
Fairness requires small but significant changes. Use oral defenses in pairs so no single evaluator controls the outcome. Provide prep windows with open materials to level the playing field for second-language speakers who may think well but write slowly. Normalize assistive tools with provenance—reference managers, code notebooks with execution logs, and note-taking apps that track edits—so students and candidates can clearly show their process. Reserve strict penalties for deceit regarding the process (e.g., submitting bought work) rather than for the presence of AI aids. We will still need escalation paths for severe cases, and we must teach the difference between help and substitution. But if we center incentives around evidence and ownership, the slop engine loses its fuel: there is no reason to submit text you cannot defend with sources you can find.
Incentives and standards: raise the cost of AI slop at the source
AI slop flourishes when institutions reward output volume or surface polish without verifying provenance. We need to change the economics. In publishing, this shift has begun: visible retraction counts, lists of hijacked journals, and improved editorial processes are altering the incentives for authors and publishers. In search, Google’s anti-scaling content abuse policy suppresses low-value, factory-produced pages, encouraging creators to produce referenceable, practical work. Education and hiring should follow suit. Make source-verified writing the standard. Require that any factual claim include a working DOI or similar stable identifier, and that the applicant’s evidence table matches the text. Connect this to random checks by evaluators who actually click through sources. When a claim checks out, the candidate’s score grows; when it fails, the burden remains with the claimant. This is how we maintain speed while restoring trust.
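The "random checks" can be made auditable by seeding the sample with the application ID, so a reviewer or an appeals panel can reproduce exactly which claims were pulled for verification. A hypothetical sketch, with placeholder field names:

```python
import random

# Hypothetical spot-check sampler: pick a reproducible subset of evidence rows
# for an evaluator to click through. Seeding with the application ID lets an
# audit recreate the same sample later.

def spot_check_sample(evidence_rows: list, application_id: str, k: int = 3) -> list:
    rng = random.Random(application_id)      # deterministic per application
    return rng.sample(evidence_rows, min(k, len(evidence_rows)))

rows = [{"claim": f"claim {i}", "doi": f"10.1234/demo.{i}"} for i in range(10)]
for row in spot_check_sample(rows, "APP-2025-0042"):
    print("verify:", row["claim"], "->", row["doi"])
```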
Figure 2: Half of employers now use AI in recruiting, with the heaviest use in writing job ads—exactly where AI slop floods the funnel.
Standards bodies can help. Admissions platforms can add DOI/ISBN fields with live validation. Applicant-tracking systems can support observed-writing modules and securely record brief oral defenses. Journals and universities can publish data on reference quality, i.e., the percentage of claims with valid identifiers, alongside acceptance rates. HR associations already track AI use in hiring; they can issue guidelines that favor tools that log processes and maintain auditable trails over opaque "fit" scores. Because job seekers increasingly rely on AI tools for applications, we should be clear about expectations: using AI for drafts is fine if the candidate can demonstrate understanding of the sources and defend their choices in a brief recorded conversation. The message is not "no AI." The message is "no untraceable text."
What proof of originality requires, starting tomorrow morning
For educators, the primary focus should be on shifting assessments toward evidence-based and oral formats. Retain essays, but make key work come from observed sprints and brief defenses. Require a simple evidence table with DOIs or ISBNs for all factual claims. Provide examples of good “reference hygiene” to help students understand what quality looks like. Reduce reliance on detectors; use them only as preliminary checks. Publish clear policies distinguishing acceptable drafting help from substitution. When students know they must explain their choices, they become engaged in the sources and the reasoning, not just the writing.
For administrators, the emphasis should be on workflow and training. Provide the necessary tools—secure proctoring for short writing sprints, evidence-table templates, and time for 10–15 minute oral defenses. Train staff to evaluate process records and conduct quick, respectful orals to assess ownership, not just memorization. Clearly define appeal processes when detector flags arise, requiring staff to reference process evidence—version history, source checks, oral notes—before making decisions. Communicate this change clearly; applicants will adapt to whatever the system values, as will the coaching industry. If we prioritize provenance and explanation, we reduce unoriginal submissions and increase actual critical thinking.
For policymakers, the request is straightforward: establish requirements for provenance. Encourage accreditation standards that mandate source-linked claims for written assessments and recommend oral defenses for major submissions. Support public-interest tools—open DOI validators, link checkers, evidence-table creators—that simplify the process of doing this correctly. Fund research on fairness in AI detection and on assessment designs that lessen reliance on fragile scoring systems. We should not regulate tools; we should regulate proof.
Anticipating the critiques
One critique is that this creates overhead. It may seem to add steps. In reality, it streamlines the process: fewer ambiguous cases, fewer emails about “who wrote this,” and significantly less time wasted on guesswork. Short, observed writing followed by a ten-minute oral is quicker than days of emails and committee meetings over AI-detector scores that no one trusts. Another critique is that this may disadvantage shy or non-native candidates. This is a valid concern; it’s why orals should be brief, structured, and paired with prep windows and open materials. The aim is to assess ownership, not performance. Evidence tables also aid in this respect; they favor careful readers regardless of fluency or accent.
A third critique is that detectors are improving, so why change assessments? Detectors will become better; they also have limitations. Evidence shows that they often misclassify non-native writers, and universities have paused enforcement of detector flags for this reason. These tools cannot serve as the sole decision-makers. Use them effectively—spotting anomalies in volume—but move the burden of certainty to processes we control: observing work, verifying sources, and discussing reasoning. The wider environment is shifting. Google is reducing the visibility of low-quality content. Editors are becoming stricter with reference integrity. Our classrooms and hiring practices should follow this trend.
Return to that opening number. If 11% of a 200-million-paper corpus shows significant AI-written content, then an unsupervised document, on its own, has lost its value as a signal of skill. This does not mean writing has lost its importance. It means we need to redefine what a piece of writing signifies. A document without provenance conveys little about skill; a document with clear origins, source-linked claims, and an oral defense conveys a great deal. This shift will also improve hiring. If half of applicants are using AI to generate application text, the only way to recover meaningful signals is to value what cannot simply be copied: judgment about sources, the ability to explain decisions, and the skill of linking claims to verifiable evidence. This is not a retreat from technology; it is a restoration of trust. Establish proof of originality at the gates, and AI slop recedes to where it belongs: background noise that no longer determines anyone's future.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
AOL. (2025). AI Slop Is Spurring Record Requests for Imaginary Journals (Scientific American syndication).
Business Insider. (2025). 59% of young people see AI as a threat to jobs (Harvard Youth Poll).
Google. (2024). Core update and new spam policies: scaled content abuse.
Pew Research Center. (2024, 2025). Teachers' views on AI in K-12; Teens' use of ChatGPT doubled.
Retraction Watch. (2024). Retraction Watch Database; Hijacked Journal Checker.
Scientific American. (2025). AI Slop Is Spurring Record Requests for Imaginary Journals.
SHRM. (2024–2025). AI in HR and recruiting: usage statistics.
Stack Exchange / Stack Overflow Meta. (2023). AI-generated content policy and moderation conflict.
Stanford HAI / Liang et al. (2023). GPT detectors are biased against non-native English writers (Patterns).
Turnitin. (2023–2025). AI detection one-year reports and guidance; 200M+ papers reviewed; detection rates.
Wired. (2024). Students are likely writing millions of papers with AI.