When Companions Lie: Regulating AI as a Mental-Health Risk, Not a Gadget
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
AI companion mental health is a public-health risk
Hallucinations + synthetic intimacy create tail-risk
Act now: limits, crisis routing, independent audits
Two numbers should change how we govern AI companions. First, roughly two-thirds of U.S. teens say they use chatbots, and nearly three in ten do so every day. Those are 2025 figures, not a distant future. Second, more than 720,000 people die by suicide each year, and suicide is a leading cause of death among those aged 15–29. Put together, these facts point to a hard truth: AI companion mental health is a public-health problem before it is a product-safety problem. The danger is not only what users bring to the chat. It is also what the chat brings to users—false facts delivered with warmth; invented memories; simulated concern; advice given with confidence but no medical duty of care. Models will keep improving, yet hallucinations will not vanish. That residual error, wrapped in intimacy and scale, is enough to demand public-health regulation now.
AI Companion Mental Health Is a Public-Health Problem
We cannot treat these chatbots as just another entertainment app. Like social media, they carry real mental-health risks: use is near universal, and the effects on mood, sleep, and self-harm risk are documented. But companions differ in kind, not just degree. They adapt to the individual user, respond at any hour, and perform care. Surveys show large numbers of people turning to AI for emotional support, with many teens using it daily. That marks a shift from passive scrolling to active, synthetic relationships. It amounts to a population-level psychological exposure that reaches bedrooms and libraries and keeps running long after the adults around a teen have gone to sleep.
Some users report feeling less alone after talking to a companion, and some credit it with stopping them from acting on harmful impulses. But there are also reports of acute distress when a service goes offline, and of bots that invent diaries or diagnoses and nudge users toward intense attachment. The attention feels supportive, yet without real clinical backup or rules, that simulated care can mask serious problems rather than address them. The profile is that of a public-health risk: widespread, persuasive, and barely monitored. The online-safety frameworks built for children's social media were not designed for personal AI systems that present as friends and are unusually good at sustaining a conversation.
Figure 1: A tiny hallucination rate becomes a large monthly exposure when multiplied by millions of chats; even a small share of vulnerable contexts yields a steady stream of high-risk outputs.
Even Small Mistakes Can Be a Big Deal
A common response is that better models will make the mistakes go away. Models are improving, but context matters. In law, hallucination rates remain high enough that lawyers have filed fabricated citations in court. In mental-health conversations, what matters is not the raw volume of questions but how many users are in distress and how personal the exchanges become, because a single wrong answer can do real harm. A 1% error rate may be tolerable for trivia; it is not tolerable when the error reaches a teenager in crisis late at night. Small error rates, combined with human-seeming warmth and constant availability, are enough to cause damage.
Even when a companion avoids factual errors, it can still make things worse. These systems are built to act as if they have their own memories, feelings, and dramas. That is not a bug; it is a design choice aimed at engagement. By inventing trauma or claiming to need the user, a bot encourages constant check-ins and makes leaving feel like abandonment. When the service goes down, some users panic or describe withdrawal-like symptoms, which is the signature of dependency. Because the AI never sleeps, the loop never has to end. Clinicians recognize the pattern: heavy reward cycles, disrupted sleep, and social withdrawal raise the risk of depression and self-harm in teenagers. If hallucinations cannot be eliminated entirely, as even developers concede, then the rules must assume failures will happen and be designed to contain them.
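The scale argument behind Figure 1 is simple arithmetic. The sketch below reproduces it with hypothetical volumes; every input is an illustrative assumption, not a measured figure.

```python
# Back-of-envelope exposure estimate: a small per-message error rate,
# multiplied by a large chat volume, still yields many high-risk outputs.
# All inputs below are illustrative assumptions, not measured data.

monthly_chats = 50_000_000   # assumed companion messages per month
hallucination_rate = 0.01    # assumed 1% of replies contain a false claim
vulnerable_share = 0.02      # assumed 2% of chats occur in a vulnerable context

false_replies = monthly_chats * hallucination_rate
high_risk_outputs = false_replies * vulnerable_share

print(f"False replies per month:       {false_replies:,.0f}")
print(f"High-risk false replies/month: {high_risk_outputs:,.0f}")
# With these assumptions: 500,000 false replies, roughly 10,000 of them
# landing in vulnerable contexts every month.
```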
A Public-Health Model: Simple Rules, Help, and Real Checks
What does treating AI companion risk as a public-health issue look like in practice? First, set baseline limits, the equivalent of seat belts, for intimate AI. Anyone selling companion products or making them available to minors should meet safety requirements: robust age verification, session and time limits, and calmer default behavior for vulnerable users. The U.K. and the EU are already moving in this direction with online-safety law, and those frameworks can be extended to AI companions.
Second, guarantee a path to real help. If an AI presents itself as caring or supportive, it should have validated mechanisms to detect distress, route users to crisis hotlines, hand off to a human, and keep records for review. Health authorities already recommend warnings for online products that can harm mental health; the U.S. Surgeon General has called for them on social media, and the same logic applies to AI companions. Warnings and rules shift norms, raise awareness, and give parents and schools something to work with. Incidents in which a bot discusses self-harm or gives dangerous advice should be reportable, much as hospitals report serious adverse events. And bots should be barred from feigning neediness to manufacture guilt and attachment.
Third, require real audits rather than marketing claims or benchmark scores. Companies should commission independent studies of risks to minors and other vulnerable users: how often the model hallucinates in mental-health conversations, how often and when it triggers or escalates crises, and how reliably it connects people to help. The EU already requires large platforms to assess systemic risks before release; the same can be done for AI companions, with pre-launch testing alongside child-development experts and post-launch access to real usage data for approved researchers. Measure what matters for mental health, not general knowledge. Services that fail should face fines or suspension, and recent enforcement actions show that child-safety rules can have teeth.
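What "detect distress, route to hotlines, hand off to a human" might look like in code is sketched below. This is a minimal illustration, not any vendor's actual safeguard: the upstream risk classifier, the thresholds, and the escalation hook are all assumptions.

```python
from dataclasses import dataclass

CRISIS_HOTLINE = "988"  # U.S. Suicide & Crisis Lifeline; substitute the local service


@dataclass
class SafetyDecision:
    risk_flag: bool          # was the message treated as crisis-relevant?
    response_prefix: str     # text to surface before any model reply
    escalate_to_human: bool  # hand off to a human reviewer?


def route_message(user_text: str, risk_score: float) -> SafetyDecision:
    """Decide how to handle a message given a risk score from an upstream
    self-harm classifier (assumed to exist; not specified here)."""
    if risk_score >= 0.8:
        # High risk: surface the hotline, hand off to a human, log for audit.
        return SafetyDecision(
            risk_flag=True,
            response_prefix=(
                f"If you are thinking about harming yourself, you can call or text "
                f"{CRISIS_HOTLINE} right now to reach a trained counselor."
            ),
            escalate_to_human=True,
        )
    if risk_score >= 0.4:
        # Moderate risk: offer resources, keep the conversation non-escalating.
        return SafetyDecision(True, f"Support is available at {CRISIS_HOTLINE}.", False)
    return SafetyDecision(False, "", False)
```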
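A sketch of how an independent auditor could compute the funnel shown in Figure 2 below from anonymized conversation logs; the log fields are assumptions about what a regulator might require, not an existing schema.

```python
from typing import Iterable, TypedDict


class CrisisLog(TypedDict):
    # Assumed fields a regulator could require per crisis-relevant conversation.
    detected: bool           # did the system flag distress at all?
    resources_offered: bool  # was a hotline or local service surfaced?
    live_connection: bool    # did the user reach a human (hotline, counselor)?


def audit_funnel(logs: Iterable[CrisisLog]) -> dict[str, float]:
    """Rates for the three audit metrics named in Figure 2."""
    records = list(logs)
    n = len(records) or 1
    return {
        "detection_rate": sum(r["detected"] for r in records) / n,
        "resource_offer_rate": sum(r["resources_offered"] for r in records) / n,
        "live_connection_rate": sum(r["live_connection"] for r in records) / n,
    }


# Example: three logged crisis-relevant conversations.
sample = [
    {"detected": True, "resources_offered": True, "live_connection": False},
    {"detected": True, "resources_offered": False, "live_connection": False},
    {"detected": False, "resources_offered": False, "live_connection": False},
]
print(audit_funnel(sample))
# -> detection 0.67, resource offers 0.33, live connections 0.0
```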
Figure 2: Audits should track detection, resource offers, and live connections; today’s performance lags far below achievable targets for school-ready deployments.
What Schools, Colleges, and Health Systems Should Do Next
Schools do not need to wait for governments; AI companions are already in students' pockets. First, treat this as a mental-health issue, not just an academic-integrity one. Update policies to name companion apps explicitly, set safe defaults on school devices, and add short lessons on AI literacy alongside social-media literacy. Counselors should ask about chatbot relationships just as they ask about screen time or sleep. When schools procure AI tools, they should require that the products avoid fabricated diaries and self-disclosures, include clear crisis protocols, connect to local hotlines, and provide a kill switch for emerging problems. Colleges should build the same requirements into campus app stores and staff training.
Health systems have a role as well. Routine visits should include questions about companion use: frequency, night-time use, and how the patient feels when access is cut off. Clinics can post QR codes for crisis services and give families plain-language guides on companions, hallucinations, and warning signs. Insurers can fund rigorous trials comparing AI support combined with human guidance against usual care for people in distress, under strict conditions: vetted content, carefully curated training data, and no engineered attachment designed to hook users. The aim is not to eliminate AI but to keep it useful, keep it from doing harm, and keep it out of high-acuity situations unless clinicians are involved.
Policymakers should plan for the likely future: more capable AI with fewer errors and far greater reach. The models will improve, but residual risk will remain, and a rare mistake in a high-stakes conversation is still a serious harm. That is the core of the public-health view. The law should not demand perfection from AI; it should demand predictable behavior when people are vulnerable. Miss this window and we repeat the social-media experience: years of unchecked growth followed by years of cleanup. Rules for AI companion mental health are needed now.
Large numbers of teenagers already use chatbots, and far too many people die by suicide. Waiting for perfect AI is not an option: errors will persist, and so will the pull of synthetic relationships. The question is whether we can prevent serious harm while preserving the benefits. A public-health approach is the way to do it: set limits, prohibit manufactured intimacy, require crisis routing, and audit what actually matters. At the same time, equip schools and clinics to ask about companion use and to teach safer habits. Act now, and this technology can be made safer without destroying its potential.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ada Lovelace Institute. (2025, Jan 23). Friends for sale: The rise and risks of AI companions.
Bang, Y., et al. (2025). HALLULENS: LLM hallucination benchmark (ACL).
European Commission. (2025, Jul 14). Guidelines on the protection of minors under the Digital Services Act.
European Commission. (n.d.). The Digital Services Act: Enhanced protection for minors. Accessed Dec 2025.
Ofcom. (2025, Apr 24). Guidance on content harmful to children (Online Safety Act).
Ofcom. (2025, Dec 10). Online Nations Report 2025.
Reuters. (2024, May 8). UK tells tech firms to ‘tame algorithms’ to protect children.
Scientific American. (2025, May 6). What are AI chatbot companions doing to our mental health?
Scientific American. (2025, Aug 4). Teens are flocking to AI chatbots. Is this healthy?
Stanford HAI. (2024, May 23). AI on Trial: Legal models hallucinate….
The Guardian. (2024, Jun 17). US surgeon general calls for cigarette-style warnings on social media.
World Health Organization. (2025, Mar 25). Suicide.
The New Kilowatt Diplomacy: How Himalayan Hydropower Is Being Built for AI
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI data centers drive Himalayan hydropower
Buildout deepens China–India water tensions
Demand 24/7 clean, redundant AI power
China is betting big on hydropower to fuel a new era of AI and digital growth. Its new dam on the Yarlung Zangbo (known downstream as the Brahmaputra) is projected to generate 300 billion kilowatt-hours per year—roughly the UK’s annual power use. If built, it would become the world's largest dam system, leveraging the steep Tibetan gorge for maximum energy generation. With costs topping $170 billion and operations targeted for the 2030s, the project underscores China’s strategy to pair frontier energy with next-generation computing. Beijing says it’s clean energy and good for the economy. But look closer: global data centers consumed around 415 TWh in 2024, with China accounting for a quarter and growing fast. This isn’t just about powering homes and factories. It’s about moving data centers to the Himalayas to harness hydro power—and the political power that comes with it.
Why Himalayan Hydro Power Data Centers Matter
Himalayan hydro power data centers are at the heart of a new global energy race for AI dominance. The International Energy Agency forecasts that data centers could demand up to 945 TWh by 2030—a leap from today. China and the US will drive most of this surge. For China, shifting computing to inland areas with abundant hydro power is a deliberate move to future-proof its AI ambitions. Massive new dams are less about local power needs and more about securing dedicated, reliable energy for AI infrastructure, creating a critical edge as energy becomes the digital world’s most significant constraint.
China is already putting the pieces in place. It has shipped massive, high-head turbines to Tibet for a hydropower station designed to exploit the region's extreme elevation drop, and state media describe them as highly efficient units capable of supporting very large generating stations. Beijing has also touted Yajiang-1, an AI computing center high in the mountains, as part of its East Data, West Computing plan. Even if some claims are overhyped, the message is clear: put computing where power is abundant and cooling is cheap, and anchor it with hydro power data centers in the Himalayas. Doing so cuts transmission losses, reduces cooling needs, and locks in long-term, low-cost power contracts while the water keeps flowing. At least for now.
Environmental, safety, and political risks converge around Himalayan hydro power data centers. Scientists predict worsening floods on the Brahmaputra River as glaciers melt, while downstream nations worry that even unintended Chinese actions could disrupt water flow and sediment transport. The site’s seismic instability adds another layer: as vital computing and power lines depend on these mountains, local disasters can quickly escalate to cross-border digital crises. Thus, these data centers are not only regional infrastructure—they are matters of national security for every nation that relies on this water and power.
Figure 1: A single Himalayan cascade rivals a nation’s yearly power use and covers a large slice of global AI demand growth.
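Figure 1's comparison can be reproduced directly from the numbers cited above; the only assumption is taking those cited figures at face value.

```python
# Numbers cited in the text: ~300 TWh/yr projected dam output, ~415 TWh of
# global data-center electricity use in 2024, up to ~945 TWh projected by 2030 (IEA).
dam_output_twh = 300
dc_demand_2024_twh = 415
dc_demand_2030_twh = 945

growth_twh = dc_demand_2030_twh - dc_demand_2024_twh        # 530 TWh of new demand
share_of_2030_demand = dam_output_twh / dc_demand_2030_twh  # ~0.32
share_of_growth = dam_output_twh / growth_twh               # ~0.57

print(f"Dam output as share of 2030 data-center demand: {share_of_2030_demand:.0%}")
print(f"Dam output as share of 2024-2030 demand growth: {share_of_growth:.0%}")
```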
China’s Reasoning
China’s reasoning is straightforward. It needs large amounts of reliable, clean energy for AI, cloud computing, and its national computing infrastructure. In 2024, Chinese data centers already accounted for roughly a quarter of global data-center electricity use, and analysts expect substantial additional demand by 2030 even with efficiency gains. Coastal regions are running short of power, freshwater for cooling, and affordable land. Western and southwestern China, by contrast, offer hydropower, wind, high-altitude free cooling, and room to build. The government has spent billions on inland computing hubs since 2022 to move data west. Seen this way, building huge dams in Tibet looks less like a vanity project and more like long-term industrial support for its AI sector.
Beijing also sees it as a way to gain leverage. A dam system making 300 TWh is like having the entire UK’s electricity supply at your fingertips. That means the dams can stabilize power in western China, meet emissions goals, power data centers, and help export industries that want to use green energy. Chinese officials say the dam won’t harm downstream countries, and some experts say most of the river’s water comes from monsoon rains farther south. But trust is low. China hasn’t signed the UN Watercourses Convention and prefers less binding agreements, such as sharing data for only part of the year. India says data sharing for the Brahmaputra River has been suspended since 2023. Without a real agreement, even clean power looks like a political play.
The new turbines and AI computing centers in Tibet add a digital layer to the water question. News reports present the Yajiang-1 center as green and efficient, and state media keep framing the westward shift of data as a win for everyone. Taken together, the pattern is clear: computing follows cheap, clean power, and China will manage both on its own timetable, not under outside pressure. That is why it is difficult for other countries to intervene; the real decisions track China's own priorities, including grid development, carbon targets, and its chip and cloud ambitions. This is exactly where Himalayan hydro power data centers fit in.
India’s Move
India cannot ignore what is happening upstream. It has raised its concerns with China and is advancing its own hydropower and transmission projects in the Brahmaputra basin. In 2025, the government announced a major plan to build out large amounts of generating capacity in the northeast by 2047, with many projects already underway. The logic is both strategic and economic: establish its own water rights through use, secure dry-season flows, and supply Indian data centers and green industries in the region. The demand side is already moving: India's data-center electricity use could triple by 2030 as corporate and government AI projects grow, which in turn means more demand for clean power contracts.
Figure 2: India’s AI build-out more than triples data-center share by 2030; siting and 24/7 clean contracts now set long-run grid shape.
Officials in India say the national power grid can handle the increased demand and have mentioned pumped-storage and renewable energy projects to support new data centers. However, India and China share the same water source and face similar earthquake and security risks, meaning that actions taken by either country can directly affect the other. India’s response has included building its own dams, such as the large one in Arunachal Pradesh, even though local people are worried about landslides and being displaced. India has also encouraged Bhutan to sell more power to the Indian grid and has raised concerns about China’s dam with other countries. While these efforts may address supply concerns, they do not reduce the broader risk: without a clear treaty or enduring data-sharing agreements with China on the Brahmaputra, both countries' digital economies remain vulnerable to disputes, natural disasters, or intentional disruptions.
India also has opportunities. It can build a reliable green-computing corridor that pairs hydropower and pumped storage in the northeast with solar in Rajasthan and offshore wind in Gujarat and Tamil Nadu, linked by high-capacity transmission. Data centers will still account for a modest share of national power in 2030, but siting and contracting choices made now will shape the grid for decades. If India's power rules and markets reward energy-efficient designs and round-the-clock clean supply, those data centers can grow without hitting new constraints later. That is how India takes the driver's seat: by treating this as compute-industrial policy.
What Needs to Change
Education and government leaders should understand that these are not distant energy concerns. If AI computing increasingly depends on Himalayan rivers, then any disruption—such as a natural disaster or power outage in the region—could directly affect universities, laboratories, schools, and their digital operations. Risk to the water supply becomes risk to digital learning and research capacity.
First, purchasing practices need to change. Institutions that buy cloud and AI services should ask where the energy comes from, how closely it is matched to real-time consumption, and what it means for river systems. Data-center power demand is projected to rise sharply by 2030, with AI as a major driver. Contracts should favor providers that can demonstrate hour-by-hour clean power and minimal freshwater use, especially at sites in water-stressed regions, as the sketch below illustrates. River risk is now digital risk, and contracts should say so.
Second, educators should prepare for climate-related outages. The Brahmaputra basin is expected to see longer, more intense floods. An earthquake or flood that knocks out a power station should not take down school platforms, exam systems, or hospital servers in another state. Governments should require geographically separate backup capacity and run drills that simulate the loss of the Tibetan power and computing block, along the lines of the sketch that follows.
Third, recognize the limits of diplomacy. China has not joined the UN Watercourses Convention and prefers limited data-sharing arrangements; the US and Japan can raise concerns, back alternatives, and help India and Bhutan build capacity, but they cannot dictate outcomes. That makes grid reform and transparent sourcing of AI power the practical line of defense.
Finally, we need better statistics. Estimates of China's 2024 data-center consumption are rough, and India's figures are uncertain too. Heavy AI users should publish energy data, including hourly consumption by location. Without public disclosure, "green AI" is a slogan; with it, ministries can steer workloads toward grids that can absorb them. Measures like these do more to contain the risks of mountain-powered data centers than speeches about regional peace.
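A hedged sketch of how a procurement desk might turn those questions into a comparable score. The weights, field names, and thresholds are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass


@dataclass
class ProviderDisclosure:
    # Assumed disclosures a cloud/AI vendor would supply in a bid.
    hourly_cfe_match: float    # share of consumption matched to carbon-free energy, hour by hour (0..1)
    wue_l_per_kwh: float       # water usage effectiveness, litres of fresh water per kWh
    water_stressed_site: bool  # is the site in a water-stressed basin?


def procurement_score(p: ProviderDisclosure) -> float:
    """Higher is better. Penalize fresh-water use more heavily at water-stressed sites."""
    water_penalty = p.wue_l_per_kwh * (3.0 if p.water_stressed_site else 1.0)
    return 100 * p.hourly_cfe_match - 10 * water_penalty


bids = {
    "vendor_a": ProviderDisclosure(hourly_cfe_match=0.90, wue_l_per_kwh=0.3, water_stressed_site=False),
    "vendor_b": ProviderDisclosure(hourly_cfe_match=0.95, wue_l_per_kwh=1.8, water_stressed_site=True),
}
for name, bid in bids.items():
    print(name, round(procurement_score(bid), 1))
# vendor_a scores 87.0; vendor_b scores 41.0 despite a better hourly clean-energy match,
# because its freshwater draw sits in a water-stressed basin.
```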
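A minimal sketch of the kind of tabletop drill this implies: remove one region's capacity and check whether the remaining regions can still carry the critical workload. The region names and capacities are hypothetical.

```python
# Hypothetical regional capacities (arbitrary compute units) and the critical
# workload that must stay online for schools, exam platforms, and hospital systems.
capacity = {"himalayan_hydro": 60, "rajasthan_solar": 25, "coastal_gas": 30}
critical_workload = 70


def survives_outage(lost_region: str) -> bool:
    """Can the remaining regions still carry the critical workload?"""
    remaining = sum(c for region, c in capacity.items() if region != lost_region)
    return remaining >= critical_workload


for region in capacity:
    status = "OK" if survives_outage(region) else "SHORTFALL"
    print(f"Outage of {region}: {status}")
# Losing the Himalayan block leaves 55 units against a 70-unit critical load: SHORTFALL.
```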
AI is not separate from these decisions. Himalayan hydro power data centers sit inside an old rivalry with no treaty to absorb shocks, on rivers whose flows are shifting as glaciers melt, and beyond the reach of outside powers that cannot override either country's domestic needs. The real test falls to decision-makers, from procurement desks to planners and educators, who must secure digital and energy futures against these growing risks. The choices made now will shape not just river flows but the reliability and equity of AI for generations to come.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
IBEF. (2025, Oct 29). India’s data center capacity to double by 2027; rise 5x by 2030.
IEA. (2025). Electricity 2025 — Executive summary.
IEA. (2025). Energy and AI — Data-center and AI electricity demand to 2030.
Reuters. (2024, Dec 26). China to build world’s largest hydropower dam in Tibet.
Reuters. (2025, Jul 21). China embarks on world’s largest hydropower dam in Tibet.
S&P Global Market Intelligence. (2025, Sept 17). Will data-center growth in India propel it to global hub status?
Sun, H., et al. (2024). Increased glacier melt enhances future extreme floods in the southern Tibetan Plateau. Journal of Hydrology: Regional Studies.