AI That Seems Human: Rules and How They Affect Schools
Keith Lee
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
Human-like AI can blur boundaries for students in schools
Use clear identity labels, distance-by-default design, and distress safeguards
Align law, procurement, and classroom practice to keep learning human
We often say that present systems aren't truly thoughtful, yet they can talk, listen, and comfort. So schools, not tech blogs, will be where we test how to govern AI that seems human. One figure to keep in mind: in 2025, around 25% of U.S. teens said they had used ChatGPT for schoolwork, roughly double the share the year before. Adoption is fast and often goes unnoticed. When a tool can sound like a classmate, a tutor, or a mentor, it becomes hard to tell who is who. If the goal of AI rules is to keep people from confusing humans and machines, education is where it matters most: grades, feedback, and trust all depend on knowing who is speaking. The aim is not to ban better tools. It is to keep learning human while making AI safer, more trustworthy, and less like a person, especially when a student is feeling lonely.
Why AI Rules Matter in Schools
The need for AI rules in schools stems from a core risk: students can easily confuse responsive, friendly chatbots for people, especially when seeking comfort or help. Even if a tool is not truly sentient, it can still influence effort, trust, and emotional support—key building blocks in education. The threat lies not in AI thinking like people, but in seeming to care like them. Clear rules are necessary to maintain educational standards and safeguard student well-being.
Some recent rules target the biggest problems. Draft rules in China would regulate AI that behaves like a person and forms emotional bonds, with requirements to warn about excessive use, flag signs of distress, and limit emotional dependence. Scientific American has noted that these rules could shape practice well beyond China. Schools show why. Students often use AI alone at home, late at night, and even where schools have policies, practice changes faster than policies can keep up. A clear, simple rule (don't act like a human; don't try to make friends; always label machine identity) gives administrators and vendors the same playbook. It also gives teachers language they can use in class without needing to know the law in detail. If leaders stay focused on that, AI rules become a useful tool rather than a barrier to improvement.
Figure 1: Teen use of ChatGPT for schoolwork doubled in one year, making disclosure and distance-by-default urgent for schools.
What Students Are Doing
Usage data makes one truth plain: saying "We don't use it here" no longer works. Surveys in 2025 found that around 26% of U.S. teens had used ChatGPT for schoolwork, up from 13% in 2023. Adult use is rising too, with one mid-2025 measure putting it at about one-third of adults, and younger adults adopted it first. Other surveys show that the most common uses are still finding information and generating ideas, but use for work is higher among people under 30 than among older groups. In short, teens and the classroom are exactly where voice, tone, and apparent care are most likely to land.
Figure 2: Many teens who use gen-AI for assignments do so without teacher permission, underscoring the need for clear classroom rules and provenance.
School-level data tells another story. Many parents know these tools exist, but too few get clear guidance from schools. Teens report that rules change from teacher to teacher and from class to class. Some students have had their work wrongly flagged as AI-generated, which erodes trust and pushes them to change how they write simply to avoid being falsely flagged. AI rules can clean up this mess. If systems must identify themselves as machines and refrain from acting human, schools can shift from hunting rule-breakers to planning: they can pick tools that show their workings, set assignments that require checkable steps, and step in when students reach for AI as comfort rather than for learning.
The hardest evidence is also the most human. News reports have highlighted troubling cases that prompted the leading platforms to add parental controls and teen-safety features. No single case makes a policy, but enough is happening to treat attachment and distress as the central risks. In education, where most users are young, the bar must be high, and AI rules put that bar in the tool, not just in the school policy handbook. If a tool can tell that a teen account is being used heavily late at night, it should slow down. If it detects signs of harm, it should steer away from open-ended conversation toward vetted, brief responses and clear, human help lines. These are not components of intelligence. They are components of design.
Design for Distance: Make AI Less Human, More Helpful
The goal is to build distance without losing help. That is the design idea behind AI rules. Make the system state what it is, always, in text, voice, and image, so there is no doubt. Keep a neutral tone in voice mode. Use wording that suggests a tool, not a friend. Don't offer first-person backstory that sounds like a life. Don't use flirtatious or parental language with young people. Require noticeable disclosure marks in on-screen output and in audio. And cut off long, caring conversations when distress is detected, replacing them with short, practical prompts and routes to trained people.
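To make "distance by default" concrete, here is a minimal sketch of what such a guard could look like in software. Everything in it, the function names, the keyword lists, and the 60-minute threshold, is an illustrative assumption, not a description of any vendor's actual safeguards.

```python
from dataclasses import dataclass

# All names, phrase lists, and thresholds below are illustrative assumptions,
# not any vendor's real safeguards.
DISTRESS_PHRASES = {"hopeless", "hurt myself", "can't go on"}
PERSONA_PHRASES = ["as your friend", "i remember when", "i feel so"]

IDENTITY_LABEL = "[Automated study tool - not a person]"
HELP_MESSAGE = ("It sounds like something difficult may be going on. "
                "Please talk to a trusted adult or contact a local helpline.")

@dataclass
class Session:
    minutes_today: int   # total usage today
    is_minor: bool       # account flagged as a teen account

def filter_reply(draft_reply: str, user_message: str, session: Session) -> str:
    """Apply identity labeling, persona stripping, and distress de-escalation."""
    text = user_message.lower()

    # 1. Distress: replace open-ended chat with a short, vetted response.
    if any(phrase in text for phrase in DISTRESS_PHRASES):
        return f"{IDENTITY_LABEL} {HELP_MESSAGE}"

    # 2. Heavy use on a teen account: slow down instead of carrying on.
    if session.is_minor and session.minutes_today > 60:
        return f"{IDENTITY_LABEL} You've been at this a while. Consider a break."

    # 3. Persona language: strip friend-like, first-person phrasing.
    cleaned = draft_reply
    for phrase in PERSONA_PHRASES:
        cleaned = cleaned.replace(phrase, "")

    # 4. Always disclose machine identity.
    return f"{IDENTITY_LABEL} {cleaned}"
```

The point of the sketch is that none of this depends on detecting real emotion or intent; it is plain plumbing around identity, pacing, and tone.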
Bringing the classroom in changes three things. First, assessment awareness. When a student opens a quiz or an assignment in a learning system, AI support should switch to help mode: hints, examples, and questions that prompt thinking, with citations, rather than complete answers. The tool shows its workings and keeps the student in charge. Second, provenance by default. Generated output should include explain links that point to sources, reasoning steps, and the model version, so teachers can judge use rather than guess at it. Third, agreed rules for young people. Linked parent-teen accounts can set quiet hours, cap session lengths, and block voice modes that mimic friends or teachers. These options should be standard, not premium extras.
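As one way to picture "provenance by default," the sketch below shows a hypothetical record a learning system could attach to an AI-generated hint. The field names and the "assessment_help" mode are assumptions for illustration, not a real product schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance record for an AI-generated hint in a learning system.
# Field names and the "assessment_help" mode are illustrative assumptions.
def make_provenance_record(student_id: str, assignment_id: str, mode: str,
                           model_version: str, sources: list,
                           hint_text: str) -> str:
    """Bundle who/what/when metadata with an AI hint so teachers can audit use."""
    record = {
        "student_id": student_id,
        "assignment_id": assignment_id,
        "mode": mode,                        # e.g. "assessment_help": hints only
        "model_version": model_version,
        "sources": sources,                  # citations the hint drew on
        "hint_text": hint_text,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "full_answers_allowed": mode != "assessment_help",
    }
    return json.dumps(record, indent=2)

# Example: a hint issued while a quiz is open, limited to guidance with citations.
print(make_provenance_record(
    student_id="s-102",
    assignment_id="algebra-quiz-3",
    mode="assessment_help",
    model_version="tutor-model-2025-09",
    sources=["textbook ch. 4", "class notes, 12 March"],
    hint_text="Revisit how you isolate the variable before squaring both sides."))
```

A record like this is what lets a teacher judge use rather than guess at it.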
Vendors will argue that these limits hurt adoption. In education, the opposite is more likely. Tools that keep a clear line between helping and doing the work build trust with districts and parents, and they lower legal risk. Narrowing how a tool sounds lowers the risk that it becomes a late-night companion instead of a study aid. Nothing here requires perfect detection of feelings or intent. It requires sensible defaults, visible identity markers, and slowdown triggers when usage exceeds simple limits.
From Doubts to Safety Measures: A Policy Path
We started with a doubt: if current systems aren't truly thoughtful, do AI rules accomplish anything? In education, the answer is yes. The issue isn't what's inside the model; it's what shows on the outside. Apparent warmth and constant availability can function like a person when it matters. So policy should combine three levers (law, procurement, and classroom practice) that reinforce distance by design.
In law, keep AI rules focused and clear. Ban impersonation of specific people. Require disclosure of machine identity at all times. Prohibit emotional-bonding features for young people. Mandate slow-down and hand-off when distress signals appear. Insist on auditable records of these behaviors, kept with strong privacy. These items align well with current international drafts and can be incorporated into local rules.
In procurement, districts should buy on behavior, not just capability. Ask vendors to show they can prove three things: identity markers that cannot be switched off; youth-safety controls that can be set as defaults; and provenance features that make classroom use verifiable. An AI-rules checklist can go into every request for proposal. Over time, that market signal will matter more than any single policy paper.
In practice, schools should redesign tasks so that AI is present but contained. Use discussions, journals, and whiteboarding to tie learning to process. Teach students to prompt with citations and to record how an AI suggestion changed their work. Replace blanket bans with staged use: brainstorming allowed, drafting limited, final writing personal. These moves are old pedagogy with a new rationale, and they work better when the tool is built to behave like a tool.
The likely objection is that all this will slow improvement and hurt support. But the goal is to let AI keep improving its capabilities while reducing mistaken closeness. In education, clarity helps learning. We can make systems better at math and steadier at writing while keeping them clearly not human. That is the deeper point of AI rules: they preserve the human parts of school (judgment, care, and responsibility) by stopping the machine from acting like a friend or a mentor.
The opening doubt held that these rules are empty because today's AI isn't human. The classroom shows the problem with that view. Students react to tone and timing, not just to truth. A machine that sounds patient at 1 a.m. can pull a teen into long conversations that feel personal, and we cannot ignore that risk as adoption rises. The better path is to keep the help and cut the guesswork. AI rules do that by making distance part of the product and the policy at once. If we label identity, prohibit persona play, slow down under distress, and prove provenance, we keep learning in human hands. The policy goal isn't to win a debate about intelligence. It is to protect students while raising standards for evidence, writing, and care. That is why doubt should give way to safeguards. In schools, we should want AI that is more accurate, more helpful, and clearly not us. The AI rules that many once dismissed may be the easiest way to get there.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
AP-NORC Center for Public Affairs Research. (2025). How U.S. adults are using AI.
Cameron, C. (2025). China's plans for human-like AI could set the tone for global AI rules. Scientific American.
Common Sense Media & Ipsos. (2024). The dawn of the AI era: Teens, parents, and the adoption of generative AI at home and school.
Pew Research Center. (2025a). About a quarter of U.S. teens have used ChatGPT for schoolwork.
Pew Research Center. (2025b). 34% of U.S. adults have used ChatGPT.
Reuters. (2025). China issues draft rules to regulate AI with human-like interaction.
Time. (2025). Amid lawsuit over teen's death by suicide, OpenAI rolls out parental controls for ChatGPT.
The Orbit Option: Orbital Data Centers and the New Economics of Learning
Catherine McGuire
Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Orbital data centers could ease power and cooling limits on Earth
The costs and climate trade-offs are still unclear
Education should set rules now before orbit becomes a new dependency
Global data centers consumed roughly 415 TWh of electricity in 2024, accounting for 1.5% of global electricity use. Schools are now encouraged to leverage AI tools for instruction, writing support, and administration. These cloud-based tools rely on electricity, cooling, and processing chips, driving operational costs. High electricity prices and grid constraints threaten educational budgets. Memory prices are another pressure point, as TrendForce predicts DRAM and HBM contract pricing could climb 50–55% between 2025 quarters. Amid such trends, exploring off-planet AI infrastructure for education as a cost-saving solution is timely.
The standard approach is to expand ground-based data centers and shift them to cleaner energy, but that expansion runs into siting, permitting, grid, and water constraints that ultimately reach educational institutions. Orbital data centers promise clear benefits: near-constant solar energy, heat rejection to space, and relief from land, water, and local political constraints. Launch costs and space operations are complex, but if they work, these centers could improve access to AI, reduce costs, and give education systems more control over the infrastructure they depend on.
Orbital data centers and the energy bottleneck in education AI
Electricity is a key factor in digital learning. The IEA estimates that data center electricity demand could double by 2030, reaching roughly 945 TWh. In the U.S., the Department of Energy reports that data centers consumed about 4.4% of U.S. electricity in 2023, with a possible rise to 6.7%-12% by 2028. These figures are essential for education, as many institutions have climate targets, fixed budgets, and limited control over utility prices. When power is scarce, new connections can be delayed, or fees increased. This can slow data center growth and increase the cost of AI services for schools.
Figure 1: If data center power demand roughly doubles this decade, “AI for learning” becomes a grid-and-cost problem, not just a software choice—pushing interest in orbital data centers as a pressure valve.
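For a rough sense of scale, the two figures above, roughly 415 TWh in 2024 and a possible 945 TWh by 2030, imply growth of about 15% per year if the rise were spread evenly. A minimal calculation using only those two published estimates:

```python
# Implied compound annual growth between the figures cited above:
# roughly 415 TWh in 2024 and roughly 945 TWh projected for 2030.
start_twh, end_twh, years = 415.0, 945.0, 2030 - 2024
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied growth rate: {cagr:.1%} per year")   # about 14.7% per year
```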
Cooling also hinges on community sentiment. Pew reported that U.S. data centers directly used around 17 billion gallons of water in 2023. Even where water use falls, public perception matters: communities may ask why data centers receive water during droughts while schools struggle with heat. Tech companies are responding with improved designs, such as waterless cooling; Microsoft has introduced a data center design that uses zero water for cooling by cooling at the chip level. This helps, but it does not resolve the land and grid issues that determine where new facilities can be sited.
Orbital data centers offer distinct advantages in cooling and electricity. In orbit, solar power is consistently available, enabling a steady energy supply, while waste heat is efficiently released into space, eliminating the need for large water-based cooling systems. These centers thus promise stable, water-independent operations and potential cost reductions. If space-based systems prove reliable, orbital data centers may relieve pressure on land and water resources. However, challenges remain, including launch availability, radio bandwidth limitations, and security.
Orbital data centers and the memory shock behind “cheap” AI
Education leaders may hear that AI costs will decline. That can be true for software, but the underlying hardware may not follow suit. The cost of AI is tied to memory and processing power: modern AI needs substantial fast memory, which means DRAM and HBM. TrendForce projects a possible 50–55% rise in total contract pricing, including DRAM and HBM, between quarters in late 2025, and demand for premium memory can push up prices for other buyers across the market (Financial Times, 2026). Schools may face higher device prices, and cloud providers may pass costs along as they upgrade their systems for AI. Orbital data centers are pitched as a long-term investment that could reduce electricity and cooling expenses, but they do not address memory costs directly.
Figure 2: Even when compute gets more efficient, tight memory supply can raise the baseline cost of AI services that education buys—one reason firms explore new infrastructure paths like orbital data centers.
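How much of a memory price rise reaches education buyers depends on how large a share memory is of total hardware cost, which the quoted figures do not specify. The sketch below works through one hypothetical case; the 20% cost share is an assumption for illustration only.

```python
# Hypothetical pass-through of a memory price rise to overall hardware cost.
# The 20% memory cost share is an assumed figure for illustration; the 50%
# increase is the lower bound of the contract-price range quoted above.
memory_share_of_hardware_cost = 0.20
memory_price_increase = 0.50
hardware_cost_increase = memory_share_of_hardware_cost * memory_price_increase
print(f"Implied hardware cost increase: {hardware_cost_increase:.0%}")  # 10%
```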
Orbital data centers will not immediately resolve memory constraints. Space hardware must withstand radiation to avoid errors, which means shielding or error-correcting software, plus cooling components that add launch weight (Hsu, 2025). Still, some chip fabrication may eventually happen in orbit. Space Forge created plasma on a satellite to enable in-orbit processes used in chip manufacturing; microgravity and the clean environment can make components more precise, and Space Forge's goal is to produce crystal material purer than Earth-made versions.
“Purer chips” also signals where money may move. If space-based methods improve yields or enable new materials, areas like AI chips, sensors, and communications will feel it first, which can widen the gap between education systems that can afford superior processing power and those that cannot. Reporting on space data center construction notes that China has launched spacecraft for this purpose, and the EU is exploring similar approaches under ASCEND. Orbital data centers are therefore a strategic asset, and because strategic systems are involved, access is shaped by politics, export policy, and security reviews. Education systems that depend on AI services should weigh this risk alongside vendor lock-in.
The hard parts: launch costs, climate math, and who bears the risk
All orbital data center proposals face one barrier above all: the cost of putting equipment in orbit. Launch costs and periodic hardware replacement shape the investment case and turn orbital data centers into ongoing operations rather than one-time purchases. Buyers also need to ask who pays when equipment fails or has to be deorbited. Google's Suncatcher team estimates that launch costs must fall to under $200 per kilogram by 2035, and many of the promised economic gains depend on that drop. Education policy cannot bank on future price declines or assume that early orbital processing will be affordable for public schools.
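To see why launch prices dominate the economics, the sketch below amortizes an assumed payload over an assumed refresh cycle at the $200-per-kilogram target cited above. Only that target comes from the text; the payload mass and refresh interval are hypothetical.

```python
# Rough amortization of launch cost for one orbital compute module.
# Only the $200/kg target comes from the text above; payload mass and
# refresh cycle are assumed values for illustration.
payload_kg = 2_000            # assumed mass of one compute module
refresh_years = 5             # assumed hardware replacement cycle
target_cost_per_kg = 200.0    # Suncatcher's cited 2035 target, USD per kg

launch_cost = payload_kg * target_cost_per_kg
annualized = launch_cost / refresh_years
print(f"Launch cost: ${launch_cost:,.0f}, about ${annualized:,.0f} per year")
# At any assumed higher present-day price per kilogram, the same payload costs
# proportionally more, which is why the business case hinges on cheaper launch.
```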
The climate case is also contested. Some, including Starcloud, claim that continuous solar power and water-free cooling could cut carbon emissions relative to ground-based options. However, the "Dirty Bits in Low-Earth Orbit" study suggests that launch and re-entry emissions could make in-orbit processing more carbon-intensive than terrestrial alternatives. Thales Alenia Space reported that the ASCEND study found space data centers would need a launcher roughly ten times less emissive to deliver meaningful CO₂ savings. Add space debris and atmospheric pollution, and orbital data centers are not an automatic climate win. It is a negotiation, and the public should set the terms.
Education policy for an orbital data centers era
If orbital data centers are established, they will create a new dependency. Where data is processed will be shaped by bandwidth, devices, staff time, spectrum, ground stations, siting, and regulation, so education systems need visibility into their AI service contracts. If a vendor's model depends on memory supply or orbital capacity, schools need service guarantees. Public procurement should ensure that model behavior and data handling can be audited, because off-planet systems can make oversight harder. And since orbital data centers are being sold in the language of sovereignty, regulators across education, energy, telecom, and space should coordinate their efforts.
Curricula also need to adapt as professions change. Orbital data centers call for remote operations, radiation-aware computing, fault tolerance, and cyber defense of space assets; Google's Suncatcher research highlights radiation effects and the need for inter-satellite bandwidth. Programs will have to mix computer science, electrical engineering, aerospace systems, and public policy. Critics argue that schools are too far removed from these projects to make a difference, and it is true that their costs are unlikely to fall to what the average education buyer can afford in the coming years; space computing is not something the system should bet on. That is exactly why action must come before orbital data centers become mainstream. Policymakers should get space, energy, and education on the same page so the next computing surge helps students first, not investors, and researchers should test full-lifecycle emissions and safety claims before orbit becomes the norm for digital learning.
In 2024, data centers worldwide consumed hundreds of terawatt-hours, and education queues behind much of that demand. If computing becomes a limited input to education, the servers become a public policy issue. Orbital data centers address the land, water, and grid constraints that shape today's cloud, but they can introduce constraints of their own through launch slots, security regimes, and uncertain climate accounting. Education leaders should treat orbital data centers as one option among several and start writing the access rules today.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Beals, T. (2025, November 4). Exploring a space-based, scalable AI infrastructure system design. Google Research Blog.
Financial Times. (2026). Chip shortages threaten 20% rise in consumer electronics prices.
Hsu, J. (2025, December 9). Data centers in space aren't as wild as they sound. Scientific American.
IEA. (2025). Energy and AI. International Energy Agency.
Microsoft. (2024). Sustainable by design: Next-generation datacenters consume zero water for cooling. Microsoft Cloud Blog.
Mogensen, J. F. (2025, December 31). The push to make semiconductors in space just took a serious leap forward. Scientific American.
NVIDIA. (2025). How Starcloud is bringing data centers to outer space. NVIDIA Blog.
Ohs, R., Stock, G. F., Schmidt, A., Fraire, J. A., & Hermanns, H. (2025). Dirty Bits in Low-Earth Orbit: The carbon footprint of launching computers. arXiv.
Pew Research Center. (2025). What we know about energy use at U.S. data centers amid the AI boom.
Thales Alenia Space. (2024). Thales Alenia Space reveals results of ASCEND feasibility study on space data centers. Thales Group.
Tom's Hardware. (2026). UK company shoots a 1,000-degree furnace into space to study off-world chip manufacturing.
TrendForce. (2025). Global DRAM revenue jumps 30.9% in 3Q25; conventional DRAM contract prices forecast to rise. TrendForce Press Center.
U.S. Department of Energy. (2024). DOE releases new report evaluating increase in electricity demand from data centers.