Selling Our Sickness: Why Health Data Monetization Is a Dead End

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

Health data monetization is failing because patients do not trust technology firms with sensitive medical records
Turning health data into a commodity ignores consent, governance, and healthcare’s real economics
Without strong safeguards and public oversight, most health data projects will keep breaking down

In the last five years, the health sector has generated more information than in the previous fifty. Every day, billions of data points are collected from electronic medical records, wearable devices, imaging platforms, and pharmacy systems. Yet fewer than half of patients in developed countries are willing to share their medical records with private tech companies, even when the data is anonymized. This gap between the amount of data available and public trust is the real barrier to monetizing health data. Companies may have the tools to collect information at scale, but they lack the public’s permission to use it. This matters because health data is different from clicks, location, or shopping habits. It is personal, permanent, and tied to future risks. When trust disappears, participation drops. The success of monetized medical insights depends less on better algorithms than on whether people believe their data will be used for care, not profit.

Health data monetization and the myth of voluntary scale

Many believe that patients will share health data as easily as they share information in other online markets, especially if doing so is useful or personalized. This is not the case. People see medical information as something they should control, not something to trade. Patients share sensitive data with doctors because it happens within a trusted relationship with clear boundaries. Once the data leaves that setting, things change. Tech companies answer mainly to their shareholders, not patients, and their promises to protect data rest on company policies, not universal rules. This leads to understandable hesitation.

This hesitation is tangible. It shows up as low engagement rates, opt-outs, and incomplete datasets. Health data monetization needs scale to create value, but scale depends on trust. Without broad participation, datasets skew toward early adopters, healthier people, or individuals less concerned about privacy. That bias lowers the quality of insights and limits their usefulness for serious medical or policy work. Businesses frequently end up with large volumes of fragmented, low-signal data that is expensive to clean and validate. The economics of advertising do not translate to medicine, where accuracy, representativeness, and accountability matter more than volume.
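
To see the mechanism, consider a minimal simulation in Python (a hypothetical sketch; the 20% prevalence and the opt-in rates are illustrative assumptions, not survey figures). If healthier people opt in at higher rates, the shared dataset badly understates how common a condition is:

import random

random.seed(0)

# Hypothetical population: 1 = has the condition, 0 = does not.
# Assume a true prevalence of 20%.
population = [1 if random.random() < 0.20 else 0 for _ in range(100_000)]

# Assumption: healthy individuals opt in at 40%, affected ones at 10%.
def opts_in(has_condition):
    return random.random() < (0.10 if has_condition else 0.40)

shared = [p for p in population if opts_in(p)]

true_prevalence = sum(population) / len(population)
observed_prevalence = sum(shared) / len(shared)

print(f"True prevalence:   {true_prevalence:.1%}")      # about 20%
print(f"Opt-in prevalence: {observed_prevalence:.1%}")  # about 6%

Any model trained on the opt-in sample would treat the condition as roughly three times rarer than it is, exactly the kind of distortion serious medical and policy uses cannot tolerate.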

There is also a misunderstanding of what motivates patients. Patients do not see a clear personal benefit from sharing data with commercial platforms. Compared with the perceived long-term risks of exposure, misuse, or resale, small improvements in recommendations or summaries are not enough. The risks feel permanent, while the benefits are vague and dispersed. Health data monetization schemes rarely address this imbalance. Instead, they rely on vague claims about privacy and security, even though data breaches and secondary uses remain common across the digital world. Skepticism is the rational response under such conditions.

Figure 1: Patients are far more willing to share medical data with clinicians than with technology companies, even when anonymization is promised.

Why health data monetization keeps sinking health startups

Digital health is full of big ideas that fail to deliver real value, and there is a common pattern behind the failures. First, there is a payment gap: health systems and insurers, not patients, pay for care, so products built on data insights often have no clear route to revenue. Workflow is another issue. Doctors are already busy, so tools that add extra steps or alerts without making their jobs easier are quickly ignored. Regulation is the third hurdle: compliance with health data rules is expensive, slow, and unforgiving. Many startups underestimate these challenges and run out of money before they gain traction.

Health data monetization compounds these problems. Treating patients as data sources adds ethical risk and long-term liability on top of unsolved adoption issues. Even when businesses say they will not sell personal information, they often make money by licensing data, training models, or selling business insights to others. For patients, these distinctions do not matter. What matters is control. Once their data is uploaded, patients lose much of their say in how it is used.

Corporate instability makes the issue worse. Startups pivot. They merge. They get acquired. They fail. The data, however, persists. When a company changes hands, its datasets are transferred under circumstances users never foresaw, and even solid internal protections can disintegrate in bankruptcy or acquisition. This is not hypothetical; it has happened repeatedly in the health and consumer tech industries. Patients understand the risk instinctively, even where legal systems have not kept pace. Many therefore choose not to participate at all.

The economic argument is weaker than it seems. While the broader data brokerage economy is large, the portion involving high-quality, clinically useful health data is far smaller. Buyers in this market demand legal certainty, provenance, and verification. They prioritize quality over volume, and meeting these standards quickly erodes margins. As a result, health data monetization struggles to deliver venture-scale returns without sacrificing standards, which further fuels public mistrust. This loop explains why so many well-funded health data projects stall or quietly vanish.

Figure 2: The headline data economy is massive, but the market for regulated, clinical-grade health data is comparatively small and slow-moving.

What policy must do now about health data monetization

If market incentives cannot reconcile trust and value, policy should step in, encouraging innovation that protects patients and delivers real public benefit. First, consent alone is not enough; without legal guarantees it has limits. Patients need strong, enforceable promises that their data will be used only for agreed purposes and that these protections will endure even if organizations change hands. Second, transparency should be real, not symbolic. Platforms handling medical data should publish access logs showing who accessed the data, why, and for how long; this is already common in high-security fields and could become standard in health. Third, consent should be granular and easy to withdraw: patients should be able to grant specific uses for set periods and revoke permission later, and moving or deleting their data should be simple.
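
As a rough sketch of what enforceable, time-limited, purpose-bound consent could look like in software (hypothetical Python; the identifiers, purposes, and periods are illustrative, not drawn from any real platform):

from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

def now():
    return datetime.now(timezone.utc)

@dataclass
class ConsentGrant:
    patient_id: str
    purpose: str          # e.g. "diabetes-research-2026" (hypothetical)
    expires_at: datetime  # grants are time-limited by default
    revoked: bool = False

@dataclass
class AccessLog:
    entries: list = field(default_factory=list)

    def record(self, accessor, purpose, duration):
        # Every access is logged: who, why, and for how long.
        self.entries.append((now(), accessor, purpose, duration))

def may_access(grant, requested_purpose):
    # Access is allowed only for the agreed purpose, within the window,
    # and only if the patient has not withdrawn consent.
    return (not grant.revoked
            and grant.purpose == requested_purpose
            and now() < grant.expires_at)

grant = ConsentGrant("patient-042", "diabetes-research-2026",
                     expires_at=now() + timedelta(days=180))
log = AccessLog()
if may_access(grant, "diabetes-research-2026"):
    log.record("research-team-A", "diabetes-research-2026", "90 days")

grant.revoked = True  # withdrawal takes effect immediately
assert not may_access(grant, "diabetes-research-2026")

The design point is that expiry, purpose, and revocation are checked at every access and every access is logged, rather than promised in a policy document.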

Public institutions have a role as well. Health systems, universities, and regulators can create data trusts or public-interest intermediaries that manage datasets under democratic control. These bodies can grant access for approved research while shielding patients from corporate churn. This model treats health data as a shared resource rather than private property, and it aligns incentives toward long-term value and social benefit.

Critics say that stricter rules will slow things down. That can be true in the near term. But the alternative is worse. Weak governance produces scandals, backlash, and blanket regulations that halt entire categories of innovation. Durable progress in medicine has always been deliberate, gradual, and rooted in trust, and health data monetization will be no different. The choice is between careful constraints now and recurring failure later.

Health data monetization is commonly depicted as inevitable. It is not. It is a policy choice shaped by incentives, rules, and values. Treating medical records as a revenue source misunderstands both the economics of healthcare and the architecture of trust. Patients are not data mines. They are partners in care. Without reliable safeguards, most people will opt out, leaving businesses with thin datasets and unfulfilled promises. What to do next is clear. Tie data use to patient control. Ground innovation in public-interest governance. Reward systems that produce results rather than extraction. If lawmakers and institutions act now, health data can support research and care without becoming another failed experiment in digital overreach. If they do not, the industry will keep producing expensive illusions and silent collapses. The data is already showing us what works.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

BBC News. (2026). OpenAI and health data integration coverage.
Euronews Next. (2026). ChatGPT Health feature and medical record integration.
Failory. (2025). Healthcare startup failure analysis.
Grand View Research. (2024). Global data broker market report.
Massively Better Healthcare. (2025). Why healthcare startups fail.
Precedence Research. (2024). Healthcare data monetization solutions market.
Rock Health. (2024). Consumer trust and digital health survey.
Scientific American. (2026). AI, health data, and public trust.

Integrating Physical AI Platforms into Education: A Forward-Thinking Policy Approach

By Catherine McGuire

Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

Physical AI moves intelligence from screens into systems that act in the real world
In education, AI shifts from a tool to shared infrastructure with new governance risks
The policy challenge is managing embodied intelligence at institutional scale

Automation is changing the world quickly: the global stock of industrial robots in factories reached 4.28 million in 2023, a 10% increase in a single year. Now, intelligence is moving from distant data centers into machines that interact directly with the physical world. This shift means education policy must adapt quickly, as integrated systems combining software, sensors, processing, and mechanical components become the norm. The main challenge is ensuring schools understand and address the mix of hardware, local processing, safety, and workforce changes as AI becomes a physical as well as a digital force.

As AI moves from digital tools to physical devices, schools need a new way to bring technology into classrooms.

We need to change our approach. In the past, most discussions of AI in schools focused on software applications such as content moderation, plagiarism detection, and personalized learning. These still matter, but software, hardware, and mechanical parts should now be seen as components of a single system, because spending, risks, and opportunities overlap in how these tools are bought and used. When a school district buys an adaptive learning program, it is not acquiring just a software license; it is also taking on data sent to the cloud, on-site hardware, warranties, and safety procedures for devices that can move, talk, or sense their surroundings. These changes affect budgets, teacher training, and fairness. Hardware depreciates differently from software, so maintenance costs matter but are often overlooked. If schools treat these areas separately, they will misjudge costs and risks.

Figure 1: AI in education is no longer concentrated in the cloud; physical and edge systems now represent a growing share of deployed intelligence.

The numbers show strong growth. In 2023, about 4.28 million industrial robots were in use, with more than half a million new units added each year, a sign that physical systems are becoming common across industries. The market for local, on-device AI processing is also growing fast, with estimates reaching tens of billions of dollars by the mid-2020s. Venture funding for robotics and hardware-based AI has rebounded from its 2022–2023 slump and now reaches billions of dollars each year, mostly going to startups that blend on-device analytics with autonomous features.

Implications for Learning Environments and Curriculum Development

Moving to physical AI platforms changes what schools need to teach and maintain. Hardware skills become essential. Teachers will need to manage devices that interact with students and classrooms, such as voice-activated tools, delivery robots, and environmental sensors. Procurement teams must check warranties, update policies, and manage vendor relationships. Facilities staff should plan for charging stations, storage, and safety zones. Special education teams need to update support plans for robotic tools that assist with movement or sensory needs. Costs also need a fresh look: software scales across many users at little marginal cost, while hardware carries upfront purchase costs, depreciates over time, and requires regular upkeep. Over five years, the total cost of classroom devices can exceed that of software unless schools plan for group purchases, shared services, or local repair centers.
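
A back-of-the-envelope sketch of that five-year comparison (hypothetical Python; every price and rate below is an assumed placeholder, not market data):

# Five-year total cost of ownership: classroom devices vs. a software license.
# All figures are hypothetical planning placeholders, not vendor quotes.

YEARS = 5
STUDENTS = 500

# Software: per-student annual license, near-zero marginal maintenance.
software_license = 20          # USD per student per year (assumed)
software_tco = software_license * STUDENTS * YEARS

# Hardware: upfront purchase, annual maintenance, mid-life parts refresh.
devices = 50                   # shared devices for the cohort (assumed)
unit_cost = 1_200              # USD per device (assumed)
maintenance_rate = 0.15        # 15% of purchase price per year (assumed)
refresh_cost = 200             # per-device refresh in year 3 (assumed)

hardware_tco = devices * (unit_cost
                          + unit_cost * maintenance_rate * YEARS
                          + refresh_cost)

print(f"Software, 5-year TCO: ${software_tco:,}")   # $50,000
print(f"Hardware, 5-year TCO: ${hardware_tco:,}")   # $115,000

The exact figures matter less than the shape of the curve: hardware costs recur whether or not usage grows, which is why group purchasing, shared services, and local repair centers change the calculus.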

AI devices can take over repetitive or routine tasks, letting teachers focus on students and advanced topics. Virtual assistants help with scheduling, grading, and paperwork. However, these benefits require reliable support and maintenance, or pilot programs risk failing. Policies should connect device funding to technical training and regular performance reviews.

Figure 2: As AI becomes physical, education systems face hardware-style cost curves rather than software-style scaling.

Governance, Safety, and Workforce Policies for Physical AI Platforms

Bringing physical AI into schools creates new challenges for rules and oversight. Physical systems can fail in different ways, such as sensor errors, mechanical breakdowns, or poor decisions. Rules designed for software problems are not enough for robots capable of causing real-world harm. Regulators need to set up standard safety checks that test software, stress-test hardware, and look at how people use the systems. These checks should compare different systems directly. For privacy, processing data on-site means less student data goes to the cloud, but it also brings up concerns about data logs, device software, and data sent to vendors. Policies should limit what is recorded on devices, clarify data handling, and require regular external audits.
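
One way to make "limit what is recorded on devices" concrete is a machine-checkable retention policy that an external auditor can validate. A minimal sketch, assuming hypothetical field names and district limits:

from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceDataPolicy:
    # What the device may record locally, and for how long.
    record_audio: bool
    record_video: bool
    retention_days: int          # local logs purged after this window
    cloud_upload_allowed: bool   # raw student data leaving the device
    audit_interval_days: int     # maximum gap between external audits

def validate(policy):
    """Return violations against hypothetical district rules."""
    violations = []
    if policy.record_video:
        violations.append("video recording not permitted in classrooms")
    if policy.retention_days > 30:
        violations.append("local retention exceeds 30-day limit")
    if policy.cloud_upload_allowed:
        violations.append("raw data may not leave the device")
    if policy.audit_interval_days > 180:
        violations.append("external audits required at least twice a year")
    return violations

proposed = DeviceDataPolicy(record_audio=True, record_video=False,
                            retention_days=14, cloud_upload_allowed=False,
                            audit_interval_days=90)
print(validate(proposed) or "policy compliant")

Encoding limits this way lets an auditor test a vendor's proposed configuration directly rather than relying on contract language alone.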

Policymakers also need to focus on workforce development. There will be more need for maintenance workers, safety staff, and curriculum experts who understand both technology and society. Fair access is still key. Without action, gaps in access and support could reduce the benefits of new technology. Policymakers should back shared service centers for repairs and support, use funding that combines startup grants with ongoing payments, and require clear training and worker protection rules.

Evidence-Based Evaluation, Addressing Concerns, and Moving Forward

Concerns about past hardware projects persist: overhyped pilots, costly or unused devices, and incompatibility when support is lacking. The answer is structured pilots and honest evaluation. Schools should track system uptime, learning time saved, support hours, and student outcomes, and report the findings publicly. Hardware-based AI may help underserved schools automate hard-to-staff services, depending on funding. Shared services and vendor accountability can improve equity; without them, gaps may grow.
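
A minimal sketch of the pilot scorecard this implies, computing the four measures named above from hypothetical sample values (all numbers are illustrative):

# Structured pilot evaluation: uptime, learning time saved, support load,
# and outcomes, computed from hypothetical logs for public reporting.

uptime_hours = 1_710            # hours the system was available (assumed)
scheduled_hours = 1_800         # hours it was supposed to be available
teacher_minutes_saved = 22_500  # routine-task time saved in aggregate (assumed)
support_hours = 120             # technician time consumed by the pilot
outcome_delta = 0.04            # change in assessed outcomes vs. control (assumed)

scorecard = {
    "uptime_pct": round(100 * uptime_hours / scheduled_hours, 1),
    "learning_hours_saved": round(teacher_minutes_saved / 60, 1),
    "support_hours": support_hours,
    "outcome_delta": outcome_delta,
}

for metric, value in scorecard.items():
    print(f"{metric}: {value}")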

To lower risks and maximize benefits, policymakers should emphasize four clear policy actions: First, require industry-wide interoperability standards and enforceable warranties, ensuring schools can repair and maintain devices from multiple providers. Second, create and support regional service centers dedicated to device maintenance, software updates, and independent safety checks for school systems. Third, make successful implementation depend on teacher-led training and curriculum integration, rather than on simple device delivery. Fourth, mandate transparent public reporting on system uptime, safety incidents, and learning outcomes for any AI products used in schools. These steps will enable evidence-based decisions and prevent investments driven by novelty rather than impact.

In conclusion, intelligence is evolving beyond software. The rise of autonomous agents and robots puts physical AI at the center of education policy decisions. Policies that treat software, local processing, and physical components as separate purchases invite inefficiency and waste. Policymakers should adopt an integrated approach that coordinates procurement, maintenance, safety, and pedagogy. We need defined standards, institutions that support maintenance, and funding plans that sustain operations. Done right, schools gain tools that expand human potential; ignored, educational technology becomes another source of inequality. Built to last, physical intelligence platforms can serve the public good.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Arm. (2026). The next platform shift: Physical and edge AI, powered by Arm. Arm Newsroom.
Crunchbase News. (2024). Robotics funding remains robust as startups seek to… Crunchbase News.
Grand View Research. (2025). Edge AI market size, share & trends. Grand View Research Report.
IFR — International Federation of Robotics. (2024). World Robotics 2024: Executive summary and press release. Frankfurt: IFR.
PitchBook. (2025). The AI boom is breathing new life into robotics startups. PitchBook Research.
TechTarget. (2024). What is AgentGPT? Definition and overview. TechTarget SearchEnterpriseAI.
The Verge. (2026). AI moves into the real world as companion robots and pets. The Verge.
