
If You Can't Insure It, You Can't Permit It

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AVs must pass an insurance test—no policy, no deployment
Permits should hinge on corridor-specific coverage and quarterly audited claims data
Keep driver-assist and driverless distinct; if it’s not insurable at market rates, it’s not permissible

The most critical number in this debate is not the number of lidar beams or neural-network parameters; it is 39,345. That is the National Highway Traffic Safety Administration's early estimate of U.S. road deaths in 2024. Although the figure is down from 2023, it still works out to about 1.20 fatalities for every 100 million vehicle miles traveled. Even in a so-called "good" year, that is an unacceptable baseline risk, and one we already manage every day through insurance. If self-driving technology is truly safer, it should be easy to prove in the one domain built on quantifiable risk: insurance. If an actuary cannot price your system without public support or legal protection, it does not deserve to operate at scale. Insurability is not a minor detail of regulation; it is the market's test of credibility. If we can't insure it, we shouldn't let it operate.
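The two headline numbers imply a third: total miles driven. Here is a minimal sketch of the arithmetic in Python; the vehicle-miles-traveled total is backed out from NHTSA's two published figures rather than quoted directly, and the 25.3-million-mile corridor is the Swiss Re exposure discussed below.

```python
# Back out the implied vehicle miles traveled (VMT) from NHTSA's two
# published 2024 figures: 39,345 deaths and 1.20 deaths per 100M miles.
deaths_2024 = 39_345
rate_per_100m = 1.20

implied_vmt = deaths_2024 / rate_per_100m * 100e6   # total miles driven
print(f"Implied VMT: {implied_vmt / 1e12:.2f} trillion miles")  # ~3.28

# The expectation any AV corridor must beat on insurable terms:
corridor_miles = 25.3e6
expected_fatalities = corridor_miles / 100e6 * rate_per_100m
print(f"Human-baseline expectation over 25.3M miles: {expected_fatalities:.2f}")
```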

From Liability Theory to Priceable Risk

To make that test concrete, we need to start with liability assignments that can actually be priced. The United Kingdom has progressed furthest: Parliament established a scheme that places primary responsibility on motor insurers while an automated system is in use, allowing insurers to seek recovery from the manufacturer as needed. This clarity, set out in the Automated and Electric Vehicles Act and updated through the Automated Vehicles Act 2024, turns a philosophical debate into a contract the market can handle. If a crash happens while the system is in "automated mode," the policy responds. If a software defect is to blame, the insurer can seek compensation from the developer. This is a key regulatory change because it spells out which losses insurers are accountable for and under what conditions. The United States, by contrast, has an inconsistent mix of tort law and state-level pilot regulations; federal safety reporting exists, but liability clarity is lacking. A reasonable standard is straightforward: no large-scale deployment without a policy that a licensed insurer is willing to underwrite at market rates for the specific automated use case, such as a driverless robotaxi operating within a designated area or highway-only automation with a human supervisor.

Underwriters do not price hopes or ambitions; they price exposure. They care about a company's loss history, not its confidence. A growing set of data sources can help. California's disengagement and mileage datasets, although imperfect, still provide valuable insight into operational reliability. NHTSA's Standing General Order requires timely reporting of crashes involving advanced driver assistance (Level 2) and automated driving systems (Levels 3-5), finally creating a minimum standard of evidence. None of these datasets is flawless, but they feed the credibility calculations actuaries need to determine whether past performance can inform future loss ratios. The aim is not absolute certainty; it is insurability with stated confidence intervals and identifiable exclusions.
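To see what "credibility" means quantitatively, consider a minimal limited-fluctuation credibility sketch. The claim counts, the baseline expectation, and the full-credibility parameters below are illustrative assumptions, not values from the cited datasets.

```python
import math
from statistics import NormalDist

def full_credibility_standard(p: float = 0.90, k: float = 0.05) -> float:
    """Claims needed for full credibility: p-probability of the observed
    frequency landing within k of the true one, assuming Poisson counts."""
    z = NormalDist().inv_cdf((1 + p) / 2)
    return (z / k) ** 2  # ~1,082 claims at 90% / 5%

def credibility_weight(claims: float) -> float:
    return min(1.0, math.sqrt(claims / full_credibility_standard()))

observed = 11 / 25.3e6   # 11 claims over 25.3M miles (illustrative)
baseline = 90 / 25.3e6   # human expectation over equal exposure (assumed)
z = credibility_weight(11)
blended = z * observed + (1 - z) * baseline
print(f"Z = {z:.2f}; blended frequency = {blended * 1e6:.2f} per 1M miles")
```

With roughly a dozen claims, Z stays near 0.10, so the observed corridor experience barely moves the rate away from the human baseline. That is the quantitative sense in which additional exposure literally buys credibility.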

Figure 1: Even in an improving year, the human-driver baseline is 39,345 deaths and 1.20 fatalities per 100M miles—the hurdle any AV corridor must beat on insurable terms.

The Current Signal from Insurance Data Is Mixed

If we look at where insurers focus—on claims, not social media—the signals point in two directions. On the bright side, Swiss Re's analysis of 25.3 million fully driverless miles operated by Waymo in Phoenix, San Francisco, Los Angeles, and Austin reveals significant reductions in liability claims compared to baselines from over 200 billion human-driven miles: about an 88% drop in property damage claims and a 92% drop in bodily injury claims. Nine property-damage claims and two bodily-injury claims over that exposure are a small sample. Still, it represents the type of evidence that can shift underwriters from "maybe" to "what premium reflects that improvement?" This isn't just a marketing claim; substantial data from a major reinsurer support it. The findings may not apply to every technology stack or city, but they demonstrate that risk can be measured and priced when operations are well defined.

Figure 2: In 25.3M fully-driverless miles, liability claim frequencies were far lower than human baselines—about 88% lower for property damage and 92% lower for bodily injury in the studied corridors.
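The headline percentages are straightforward to reproduce from the published counts. In this sketch, the baseline expectations of 75 property-damage and 25 bodily-injury claims over equal exposure are backed out from the reported reductions, so treat them as implied rather than quoted.

```python
MILES = 25.3e6  # fully driverless miles in the Waymo/Swiss Re study

observed = {"property_damage": 9, "bodily_injury": 2}    # published counts
baseline = {"property_damage": 75, "bodily_injury": 25}  # implied by ~88%/92%

for kind, obs in observed.items():
    base = baseline[kind]
    print(f"{kind}: {obs / MILES * 1e6:.3f} vs {base / MILES * 1e6:.3f} "
          f"claims per 1M miles -> {1 - obs / base:.0%} reduction")
```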

However, the negative signals are equally important. During the same period, the most widely recognized "self-driving" brand in the U.S. rebranded its offering as "Full Self-Driving (Supervised)," underlining that a human must remain accountable. Regulators have consistently pursued investigations and scrutinized recalls, with official reports linking the company's Level 2 system to several fatal crashes. This does not end the conversation—Level 2 is fundamentally different from driverless Level 4—but it stresses that not all automation should be treated the same. A supervised assist feature that relies on human fallback is not proof of autonomy for insurance purposes; it is covered by a traditional auto policy, with new risks of misuse and defects layered on. The firm's labeling change is an acknowledgment of this reality and a reminder that insurability hinges on the mode and operational design domain, not branding.

Method matters. Claims-frequency comparisons must account for exposure and be matched on context: time of day, weather, road type, crash-reporting standards, and average annual miles per vehicle. The Waymo-Swiss Re study attempts to address this by benchmarking against both general and "latest-generation" human-driver baselines. Even then, a reinsurer will discount the improvement until the confidence intervals tighten. Meanwhile, NHTSA's 2024 fatality rate is a reminder that the human baseline is not static. If human risk is declining—1.20 fatalities per 100 million miles in 2024, with early 2025 trends looking better—then the standard an AV system must beat rises. This is precisely why a market standard should be adaptable: a moving benchmark that insurers can and must adjust as the human baseline changes.
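Why does a handful of claims force that discount? A rough Poisson rate-ratio interval makes the point. The log-normal approximation and the implied 25-claim baseline are sketch-level assumptions.

```python
import math
from statistics import NormalDist

def rate_ratio_ci(x_av, miles_av, x_base, miles_base, conf=0.95):
    """Approximate CI for the ratio of two Poisson claim rates
    (log-normal approximation; fine for a sketch, not a rate filing)."""
    rr = (x_av / miles_av) / (x_base / miles_base)
    z = NormalDist().inv_cdf((1 + conf) / 2)
    se = math.sqrt(1 / x_av + 1 / x_base)  # SE of the log rate ratio
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Bodily injury: 2 claims in 25.3M AV miles against an implied expectation
# of 25 claims over equal human-driven exposure (assumption, see above).
rr, lo, hi = rate_ratio_ci(2, 25.3e6, 25, 25.3e6)
print(f"Rate ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Two claims leave an interval spanning roughly a 66% to 98% reduction. That spread, not the point estimate, is what the reinsurer prices until exposure accumulates.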

A Practical Insurability Standard for AV Pilots

So what does an "insurance-first" standard look like in practice? It starts with specificity. Policies must be clearly defined for a declared operational design domain (ODD): streets and times, weather conditions, fallback behavior, and whether a paid safety operator is present. Underwriters should provide an expected claims frequency and severity range for that ODD, along with a stated limit and retention, and confirm reinsurance support. An acceptable entry requirement could involve an A-rated insurer ready to issue primary coverage at typical market margins; transferring at least 30% of gross risk to an A-rated reinsurer; and implementing a parametric stop-loss that activates at predefined frequency or severity thresholds based on a human-driver baseline. The policy should consider software updates as endorsements that change risk, necessitating new rates when the technology stack undergoes significant changes. This is not excessive regulation; it ensures capital believes in the safety case enough to price it accurately.
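A gate like this can be made mechanical. The sketch below encodes the entry requirement as a checklist; field names and thresholds mirror the paragraph above but are illustrative, not a statutory schema.

```python
from dataclasses import dataclass

@dataclass
class CoverageFiling:
    insurer_rating: str          # rating of the primary insurer, e.g. "A"
    primary_at_market_rates: bool
    reinsurance_cession: float   # share of gross risk ceded
    reinsurer_rating: str
    parametric_stop_loss: bool   # trigger tied to a human-driver baseline
    odd_declared: bool           # operational design domain on file

def permit_eligible(f: CoverageFiling) -> bool:
    """No policy, no deployment: every condition must hold."""
    return (f.insurer_rating.startswith("A")
            and f.primary_at_market_rates
            and f.reinsurance_cession >= 0.30
            and f.reinsurer_rating.startswith("A")
            and f.parametric_stop_loss
            and f.odd_declared)

print(permit_eligible(CoverageFiling("A", True, 0.35, "A+", True, True)))  # True
```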

The second pillar is data sharing that justifies those prices. The baseline is what California and NHTSA already require: reports on disengagements, miles driven, and crashes with standardized information. The next step is obtaining insurer-grade exposure and loss data: insured miles segmented by ODD, near-miss indicators (hard braking and significant lateral movement beyond set thresholds), third-party claims with injury coding, and repair cost distributions by component. Regulators should mandate that, as a condition for renewing permits, operators publish anonymized quarterly exposure and claims tables verified by an independent party. This framework won't make immature systems safe but will make unsafe systems costly, while rewarding mature systems with lower costs. That is the right incentive.
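What might the published quarterly table contain? A minimal sketch of one row follows; the column names track the paragraph above, but the schema is an assumption for illustration, not a regulatory standard.

```python
import csv
import sys

COLUMNS = ["quarter", "odd_id", "insured_miles", "hard_brakes_per_1k_miles",
           "third_party_claims", "injury_coded_claims", "mean_repair_cost_usd"]
rows = [("2025Q2", "sf-core", 3_100_000, 0.8, 1, 0, 4_200)]

writer = csv.writer(sys.stdout)   # emit the audited table as CSV
writer.writerow(COLUMNS)
writer.writerows(rows)
```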

To align permitting with market signals, cities and states should approve driverless services only when insurers are willing to provide coverage without legal protections beyond standard measures. Public authorities can create "risk corridors" for pilots, co-funded with operators committed to transparency. The UK model, in which insurers act as first payers, offers a valuable template for the U.S.

Concerns about insurers becoming gatekeepers are misdirected: capital already plays that role. Early-stage technologies may lack loss history, but corridor-specific thresholds can supply the needed clarity. Insurance markets will shift over time, yet stable data sharing and structured pilots can maintain standards without relaxing regulation.

Education should emphasize practical risk engineering over theoretical ethics, building skills in areas like safety-case development and actuarial theory. Policymakers should budget for independent data audits as a core element of AV permitting, treating better measurement as a vital subsidy. Messaging must differentiate between levels of automation: Levels 2 and 3 are driver-assistance tools covered by traditional policies, while Level 4 requires distinct insurance tied to specific usage rules.

Improvements in driverless operations documented through 2024 should prompt a shift toward formal insurability testing, with the expectation that by 2026 jurisdictions will mandate clearly priced primary policies, backed by reinsurance, for each operational design domain (ODD). This approach acknowledges where autonomy is safer while maintaining regulatory standards. The insurability test is not about reducing regulation or granting moral exemptions; it ensures accountability. Cities may subsidize premiums in exchange for data collection, but within strict limits. Ethical considerations around mobility and urban planning remain crucial as deployment progresses.

The starting point and the endpoint are the same number. Thirty-nine thousand three hundred forty-five is the baseline we are trying to break. If autonomy can lower that number in real corridors, in real weather, with real claims, insurers will rush to get involved, and regulators will have a strong market signal that allows expansion without drama. If autonomy cannot be insured without public backing, then we have our answer for now: we need more engineering, more measurement, and stricter ODDs until the actuarial math changes. The only rule we need to state—and the one we should enforce strictly—is simple: if you can't insure it, you can't permit it. That rule aligns incentives, protects the public, and gives the technology a fair chance to prove itself where it matters—on the balance sheet of risk.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Association of British Insurers (2024). Insurer requirements for automated vehicles. Retrieved from abi.org.uk (PDF).
Brookings Institution (2025, August 8). Setting the standard of liability for self-driving cars. Retrieved from brookings.edu.
California DMV (2023). Autonomous vehicle disengagement and mileage reports. Retrieved from dmv.ca.gov.
European Commission (2025, March 18). EU road fatalities drop by 3% in 2024, but progress remains slow. Retrieved from transport.ec.europa.eu.
MarketWatch Guides (2025). How will self-driving cars be insured in the future? Retrieved from marketwatch.com.
NHTSA (2025, April). Early estimate of motor vehicle traffic fatalities in 2024 (DOT HS 813 710). Retrieved from crashstats.nhtsa.dot.gov.
NHTSA (n.d.). Standing General Order on crash reporting. Retrieved from nhtsa.gov.
Reuters (2024, August 30). Life on autopilot: Self-driving cars raise liability and insurance questions and uncertainties. Retrieved from reuters.com.
Shoosmiths (2024, June 17). Automated Vehicles Act: spotlight on liability. Retrieved from shoosmiths.com.
Tesla (n.d.). Full Self-Driving (Supervised) subscriptions. Retrieved from tesla.com/support.
VICE (2025, September 9). Tesla is dropping the dream of human-free self-driving cars. Retrieved from vice.com.
Waymo & Swiss Re (2024, December 19). Comparison of liability claims for Waymo driverless operations vs. human baselines (25.3M miles). Waymo blog and technical PDF. Retrieved from waymo.com/safety and storage.googleapis.com.
Zurich Insurance (2025). Driverless vehicles and the future of motor insurance. Retrieved from zurich.co.uk.


Not Guinea Pigs, Not Glass Domes: How to Design AI Toys That Help Babies Learn

By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI exposure for babies is inevitable—focus on smart guardrails, not bans
Mandate strict privacy, proven developmental claims, and designs that boost caregiver–infant serve-and-return
Advance equity with vetted, prompt-only co-play tools in public settings and firm vendor standards

A child born today will likely spend their early years in a home where a voice assistant is always listening. In the United States, about one in three people aged 12 and older owns a smart speaker, and many owners have more than one device placed around the house, where babies nap, babble, and play. At the same time, the global market for connected toys is racing toward tens of billions of dollars by the end of the decade. These facts reveal an uncomfortable reality: whether we like it or not, infants will grow up around artificial intelligence. But exposure itself is not the thing to fear. Our choice is not between exposure and purity. It is between passive gadgets that collect data and impair human interaction, and well-regulated tools that can enhance it. If we want the latter, we must establish rules now that prioritize what matters most in the first thousand days: responsive human conversation and touch.

The Right Question: Replace or Relate?

The main issue is not whether a device is "AI." The problem is whether it replaces or relates. Language and cognitive skills in the early years depend on "serve-and-return" exchanges—those quick conversations between a caregiver and a child. Multiple studies show that the number of exchanges, rather than the total number of words a child hears, is linked to stronger language development and later success. These findings are based on observational audio recordings and neuroimaging studies that link conversational exchanges to both white-matter connectivity and language outcomes, with consistent effects across various research designs. For policy, it is essential to note that a bot talking to a baby is not the same as a tool that encourages a parent to interact with that baby. We should evaluate AI toys based on whether they effectively promote conversations and shared attention between humans, not on their ability to "engage a child."
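Measuring serve-and-return is itself a counting problem. Below is a minimal sketch of turn counting over diarized audio segments, in the spirit of the full-day recordings those studies use; the input format and the five-second response window are assumptions.

```python
# (start_sec, end_sec, speaker) tuples from a diarization pass over a
# full-day home recording; the values here are illustrative.
segments = [
    (10.0, 11.2, "adult"), (13.5, 14.0, "child"),
    (14.8, 16.0, "adult"), (40.0, 41.0, "child"),
]

def count_turns(segs, window=5.0):
    """A turn is a vocalization answered by the other speaker within
    `window` seconds; longer gaps break the exchange."""
    turns = 0
    for (_, end1, spk1), (start2, _, spk2) in zip(segs, segs[1:]):
        if spk1 != spk2 and (start2 - end1) <= window:
            turns += 1
    return turns

print(count_turns(segments))  # 2 -> the 40-second vocalization goes unanswered
```

Counting exchanges rather than words is what lets a trial distinguish a toy that triggers conversation from one that merely fills the room with speech.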

This distinction also explains why screens continue to pose a problem for infants. Pediatric guidance is clear: for children under 18 months, avoid screen media except for video chatting. For toddlers, focus on high-quality content that adults watch with children, and set clear limits. The physiological findings align with this behavioral guidance. Babies learn best from responsive and interactive cues—voices that respond to their sounds, faces that match their expressions, and hands that share visible objects. A static video, no matter how well-made, cannot provide the same feedback. Pediatric recommendations are based on a combination of observational studies, parent coaching trials, and developmental neuroscience, emphasizing context rather than a universal time limit. This is not an argument against all technology. The standard is simple: if a device cannot show that it increases responsive human interaction, it does not belong in an infant's daily routine.

Guardrails That Enable, Not Ban

We should reject the false choice between fear and permissiveness. The way forward is to establish specific rules that distinguish helpful designs from harmful ones, giving parents and caregivers reassurance and giving manufacturers and regulators a workable standard. Privacy and security must come first, because trust is essential. Independent researchers have demonstrated that many "smart toys" transmit behavioral data to companies, often with inadequate encryption. This is unacceptable for any product aimed at children, especially babies. Regulators are making some progress. In the United States, new rules on children's privacy now require stricter consent and limit the monetization of kids' data. In Europe, the AI framework and the existing children's design code emphasize data minimization and best-interest principles, while the Digital Services Act prohibits targeted advertising to minors. These changes do not resolve every question about connected toys, but they establish a legal baseline: no silent data logging, no opaque user profiles, and no manipulative designs that increase screen time for toddlers. Procurement leaders in early-childhood systems should adopt this baseline immediately.

Figure 1: Smart speakers are already common in homes with children, and owners are accumulating multiple devices—raising the stakes for infant-safe defaults.

The second rule concerns claims. If a product advertises a "language boost," it should be able to substantiate that claim. This means independent trials using validated measures: tracking conversational exchanges through full-day audio recordings, measuring parent-reported outcomes against standardized tools, and incorporating stress or sleep measures when relevant. We do not need decade-long studies to act, but we do need studies large enough to rule out placebo effects and publication bias. Devices that monitor health require even stricter standards. Recent regulatory approvals have begun to separate true medical devices from consumer gadgets. That is progress, but pediatric guidance remains straightforward: families should not depend on home monitors to lower the risk of SIDS. The proper path for AI-based infant health tools is narrow and supervised—through clinicians—and claims should be limited to what the device is cleared to do. This emphasis on evidence lets parents, regulators, and other buyers make informed decisions about AI toys for infants.

The third rule is the design for co-play. An AI toy suitable for infants should function more like a "language mirror" than a talking advertisement. It should respond to a child's vocalizations and encourage the parent to identify what the baby is touching or to sing a timely song. It should quickly turn off unless an adult is present. It should work on-device by default, upload nothing without explicit permission for each use, and clearly explain—in simple terms—what data is stored, the purpose, and the duration. These principles are based on pediatric practices that emphasize parent coaching and legal requirements focused on data minimization and the best interests of children. They can support innovation by shifting the goal from "engagement time" to "human interactions per hour."
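A minimal sketch of that prompt-then-silence behavior follows. The event names, prompt texts, and adult-presence signal are illustrative assumptions; a real device would run this entirely on-device, uploading nothing without per-use permission.

```python
# Illustrative prompt table: the toy addresses the nearby adult and never
# performs for the child on its own.
PROMPTS = {
    "babble": "I hear sounds - want to name what baby is holding?",
    "touch_block": "Want to try 'ba-ba-block' together?",
}

def on_event(event: str, adult_present: bool) -> str | None:
    """Emit at most one prompt to the adult, then return to silence."""
    if not adult_present:
        return None               # no adult nearby -> the toy stays quiet
    return PROMPTS.get(event)     # unknown events also stay silent

print(on_event("babble", adult_present=True))
print(on_event("babble", adult_present=False))  # None
```

The design choice worth noting is the return type: the device produces at most one short prompt per event, so "human interactions per hour," not engagement time, is the only metric it can optimize.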

Figure 2: Even as overall ownership plateaus, the average number of smart speakers per owner keeps climbing—meaning more always-on microphones near infants unless products adopt privacy-by-default.

The final rule addresses equity. Wealthier families are more likely to buy structured toys or access parenting classes and speech therapy. If AI products become high-end items—or if "free" versions exploit data while paid versions keep it safe—we risk widening gaps in early language development. Public spaces, such as libraries or community health centers, can change this dynamic by providing supervised "co-play corners" equipped with evaluated devices and caregiver training. The key outcome to monitor is not how cleverly a toy speaks, but whether it helps busy adults communicate more often and effectively with their children.

What We Should Build Next

With these rules in place, we can consider future possibilities. Imagine a soft toy without a screen, equipped with a few touch sensors, and an on-device speech model designed to prompt rather than perform. The toy listens as a baby babbles while holding a soft block. Instead of lecturing, it signals the closest adult: "I hear sounds—want to try 'ba-ba-block' together?" The toy then goes silent. If the adult responds, it records the exchange and periodically offers another prompt based on the infant's movements—a tap song, a peekaboo rhythm, or a shared label for what the child holds. Over time, the parent receives a summary on their phone—not dopamine-driven "streaks," but patterns in their interactions with tips based on research. This represents AI as a conversational enhancer, not a substitute caregiver.

Now imagine a toddler's reading nook in a child care center. A small speaker sits beside board books. When an adult opens a book, the system listens for key words and suggests questions: "Where did the puppy go?" "Can you find the red ball?" The assistant does not narrate the entire story; it only supports interactive reading. Trials with preschoolers show that AI partners can, in specific settings, help with question-asking and engagement in stories at levels close to those with human partners. The transition to infants is not straightforward—infants require gestures and turn-taking more than questions—but the research is emerging. A responsible project would initially conduct pilot studies in high-need communities, designed with input from educators and parents, monitoring conversational exchanges and caregiver stress over several months.

Health devices will pursue a different direction. Here, the value is not "smarter parenting" but clinical supervision and peace of mind in specific circumstances: for a preterm infant just discharged, a baby with respiratory problems, or a family caring for a child on supplemental oxygen. Regulators have started approving infant pulse-oximetry devices for specified uses. That is helpful for those narrow cases. However, for healthy babies, the safest and most effective investments remain unchanged: practicing safe sleep, supporting breastfeeding when possible, ensuring smoke-free environments, and coaching caregivers. Policies should keep these priorities clear. Consumer wearables should not suggest they can prevent tragic outcomes when they cannot.

Educators and administrators have a fundamental role beyond procurement. Early-childhood programs can incorporate "talk-first tech" into staff training. A brief training session can show how to use a prompt-only toy to encourage naming, turn-taking, and gesture games with one-year-olds, while gradually reducing the device's role. Directors can require vendors to provide straightforward "evidence labels" detailing the product's aims, the studies backing those claims, and the data the device collects. State agencies can support this initiative with small grants for independent, community-based assessments, rather than relying solely on vendor-funded trials. Universities can contribute by standardizing outcome measures and making analysis methods public. The goal is to establish a feedback loop that drives product improvement by facilitating meaningful interactions between adults.

Parents also need clarity in a sea of marketing. A simple guideline helps: if a company cannot provide independent evidence that its product increases human interaction or caregiver responsiveness, treat its claims as marketing. Another rule: if most of the device's capabilities do not work without an account, an internet connection, or broad recording permissions, it is not designed with your child's best interests at heart. These guidelines are practical, not punitive. Babies do not need the latest features. They need consistent prompts that turn everyday moments—feeding, changing, bath time—into opportunities for language-rich interaction. The best AI stays in the background and lets human voices take the lead.

Policies can solidify these norms into action. New children's privacy rules in the United States now make it harder to monetize kids' data without explicit parental consent. Europe's AI regulations and the U.K.'s Children's Design Code focus on the best interests and data minimization. Enforcers should prioritize connected toys and baby monitors as early tests; assessments should include code reviews and real-world data analysis, rather than relying solely on checklists. Consumer labels can be helpful, but enforcement is crucial for changing the underlying incentives that drive these behaviors. If companies understand that they must validate their claims and protect their data, products will be developed with a focus on co-play and safety by design, rather than relying on shortcuts for growth.

A Better Ending

Let's return to where we began: a home where a voice assistant is always active and a child is just starting to explore the world. We will not eliminate these devices from our living rooms any more than we removed radios or smartphones from our lives. But we can change their role around babies. We can enforce privacy rules that prevent silent data collection. We can demand proof for both developmental and clinical claims. We can create toys that encourage adults to engage with children rather than interfere with them. We can focus on equity, ensuring that beneficial tools are available first where they are most needed. If one in three households already has a conversational device, the responsible course is to use that presence as an opportunity to deepen human connection, not replace it. The choice is in our hands. Let's develop AI that assists adults in doing what only they can: enriching the early years filled with conversation, touch, and trust.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

American Academy of Pediatrics. (2024, Feb. 1). Screen time for infants. Retrieved from aap.org.
American Academy of Pediatrics. (2024, Feb. 26). The role of the pediatrician in the promotion of healthy digital media use in children. Pediatrics.
American Academy of Pediatrics. (2025, May 22). Screen time guidelines. Retrieved from aap.org.
Edison Research. (2024, Mar. 28). The Infinite Dial 2024.
European Commission. (2022, Oct. 27). The EU's Digital Services Act.
European Parliament. (2025, Feb. 19). EU AI Act: first regulation on artificial intelligence.
Grand View Research. (2024/2025). Connected toys market size, share & trends.
Information Commissioner's Office (U.K.). (n.d.). Age-appropriate design code (Children's Code).
Owlet. (2023, Nov. 9). FDA grants De Novo clearance to the Dream Sock. Contemporary Pediatrics.
Romeo, R. R., et al. (2018). Beyond the 30-million-word gap: Children's conversational experience is associated with language-related brain function. Developmental Cognitive Neuroscience. See also Romeo et al., 2021 review.
University of Basel. (2024, Aug. 26). How smart toys spy on kids: what parents need to know.
U.S. Federal Trade Commission. (2025, Jan. 16). FTC finalizes changes to children's privacy rule.
World Health Organization. (2019, Apr. 24). To grow up healthy, children need to sit less and play more.


After the Google Ruling, Antitrust Became a Blunt Instrument for AI Competition

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Antitrust breakups miss the real battleground: AI assistants, not blue links
Prioritize interoperability and open defaults to keep markets contestable
Track assistant-led discovery, not just search share, to safeguard users and educators

While the government's major search case crawled from filing to remedy over nearly five years, AI assistants shifted from curiosity to daily habit. ChatGPT alone now handles about 2.5 billion prompts each day and logs billions of visits each month. This change in user behavior—where people ask systems to summarize answers rather than serve links—matters more for competition's future than whether a judge forces a company to sell a browser or a mobile operating system. The court's decision not to mandate a breakup landed flat in policy circles, even as Google retained its dominance. By the time the gavel fell, competition had already evolved. Search is becoming a feature of assistants, not the reverse. Yet the legal tools remain trained on past battles. If we keep attacking shrinking link lists while assistants capture user demand, we will miss the market and the public interest it serves.

The market shifted while the law took its time

The court did not require Google to sell Chrome or Android, sparing the economy the shock of a major breakup. But the case highlighted a deeper issue: traditional antitrust remedies are too slow and too narrow for markets that change with model updates and user habits, which can shift in weeks. The legal timeline ran from a 2020 complaint to a 2024 opinion on liability, a spring 2025 trial on remedies, and a September 2025 order that stopped short of a breakup even as it tightened conduct rules. Meanwhile, OpenAI released a web-connected search mode to the public, Microsoft integrated Copilot into Windows and Office, and newcomers like Perplexity turned retrieval-augmented answers into a daily routine. The contest now looks different: it is about who sits between users and the web—who summarizes, cites, and takes action. A remedy designed for default search boxes cannot, on its own, govern this new chokepoint.

The numbers tell two stories. On one hand, Google's global search share remains vast, about 90% across devices, with Bing below 4%, which seems to argue for a structural fix. On the other hand, the desktop segment most relevant to office work and professional research shows Bing creeping toward double digits worldwide, and AI assistants are absorbing queries that used to be traditional searches. A more credible reading is not that the incumbent is collapsing but that the locus of competition has shifted. StatCounter measures shares from page-view data and tends to undercount usage inside assistant interfaces, while Similarweb's "visits" indicate traffic, not unique visitors. In simple terms, if assistants provide answers before links load, their growth will not immediately show up in search-engine shares, even as they capture user intent. Policies that wait for share metrics to change may arrive two product cycles too late.

Figure 1: Bing’s gains are real on desktop, but the contest is migrating to assistants; relying on search-share alone understates where user intent is flowing.

Measuring the real competition: assistants vs. links

A better measure is emerging. Pew reports that the share of U.S. adults using ChatGPT has roughly doubled since 2023, and the Reuters Institute notes that chatbots now register as a source of news discovery for the first time. Additionally, OpenAI's own usage research and independent traffic data indicate that information-seeking is a core use of these assistants, not a side feature. If we count assistant-assisted discovery as part of the search market, then Google's field of competitors includes OpenAI/ChatGPT, Microsoft's Copilot, Perplexity, and specialized assistants, each vying for the first interaction with a question and the final interaction before an action is taken. This changes the discussion around remedies. Default settings on browsers and phones still matter, but control over model inputs, training data, and real-time content licensing determines the speed and quality of responses. Those who secure scalable rights to news, maps, product catalogs, academic abstracts, and local listings—not just access to a search bar—will dictate the pace of competition.

In this context, the September decision appears less as the end of competition policy and more as a verdict on the kind of competition policy we have tried to apply. The court's ruling against a breakup, combined with stricter conduct rules, will not halt the shift from queries to prompts. It does little to lower the significant barriers in AI markets, such as computing costs, access to advanced models, and the cost of acquiring legal data. Regulators outside the U.S. have begun to change their strategies. The EU's Digital Markets Act has mandated choice screens and some unbundling, with early indications of modest gains for browsers. At the same time, the U.K.'s new DMCC framework empowers the CMA to set specific conduct rules for firms with "strategic market status." These tools are proactive and sector-specific. They move faster and can be modified to address assistant-related issues. The U.S. does not need a mirror image of these strategies, but rather a plan that recognizes assistants as a key component of the competitive landscape—not just search boxes.

From punishment to interoperability: a new plan

If "antitrust is dead," it is only because adapting structural fixes rarely changes behavior quickly enough. The alternative is not deregulation; it is making interoperability a policy priority. Begin with data. Courts can compel disclosures in specific areas, but policymakers should establish licensed data-sharing platforms for categories essentially treated as public goods: geography, civic information, public safety, and basic product information. Pair mandatory licensing at fair, transparent rates with open technical "ports"—stable APIs, standardized formats, and audit logs—so any qualified assistant can connect. This would reduce the importance of exclusive vertical integrations and shift the advantage to the interface, not the inputs. The CMA's work on foundational models is a valuable example, citing access to computing and data as major hurdles and proposing principles for fair access. Turning those principles into law and procurement measures would give challengers a chance that doesn't rely on rare and lengthy breakups.

Next, address defaults where they still shape user behavior, but measure success by switching costs, not just "choice screens." The EU has demonstrated that choice screens can help, yet design and implementation are crucial; earlier versions were awkward and had inconsistent effects. Make defaults portable: a user's chosen assistant should follow them across devices and applications unless they opt out. Require one-tap rerouting from any query field to the user's current default assistant, and prohibit contract terms that penalize manufacturers for offering rival options upfront. Importantly, audit the interface when AI summaries appear above links. If Google's summaries lead to fewer downstream clicks, require disclosure of those metrics and parity options for rival answer modules, with clear source labeling. The goal is not to hinder Google's progress but to prevent a single gatekeeper from dominating the interface layer when assistants are designed to be interchangeable.

At the same time, we need to stop acting as if ad tech and discovery are separate entities. The ongoing phase of ad-tech remedies will affect who can finance free assistants on a large scale. A transparent, auditable auction with open interfaces enables competitors to purchase distribution without relying on an incumbent's opaque systems. If, as the DOJ contends, parts of Google's ad structure have been unlawful monopolies, the solution must include not only structural options but also opening auction rules and ensuring third-party assistants have fair access commitments that facilitate commercial traffic. The markets for attention fuel the engines; cutting off the toll booth reduces the incumbents' ability to subsidize exclusive defaults. This approach is more likely to promote competition than splitting Chrome and Android in 2025, especially when the bottleneck has shifted to the ad auction layer and the answer layer.

This reinforces the need for a public-option mindset in which the government acts as a buyer. Educational institutions, healthcare systems, and local governments can set procurement terms that ensure openness: any AI assistant serving students or civil servants must allow users to export conversations, publish its model/version watermark, and accept a standard set of content sources (curricula, research, local services) through APIs that others can use. If implemented at scale—think state systems and school districts—vendors will adapt. Regulators have spent years debating the theory of harm; buyers can establish a theory of change in contracts right now. Brookings has advocated for a forward-looking policy framework following this ruling; the task is to make that framework practical: interoperability instead of punishment, contracts rather than courtrooms, and speed over spectacle.

Figure 2: Rapid adoption signals an assistant-first shift in discovery; policy should track assistant share of sessions, not just search-engine market share.

Finally, let's talk about measurement. Policymakers often wait for changes in search-engine share to signal progress. However, by the time StatCounter shows a "golden cross," assistants will already have won the mindshare that matters. Instead, we should track the share of discovery that involves assistants: how often information-seeking sessions start with an assistant, how many times the assistant's summary ends the journey, and how many clicks to competing sites follow. Early indications point in this direction: ChatGPT is widely available as a search tool, desktop search usage has begun to tilt toward Bing, and chatbots are now a measurable source of news. None of this means the incumbent is doomed. It shows how the monopoly framework of the 2010s underestimates where power resides in the 2020s: the interface that answers first and best. Our regulations should follow user behavior.
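The three metrics in that paragraph are simple to compute once session logs exist. Here is a sketch over hypothetical logs; the field names are assumptions for illustration, and real telemetry would need to be audited and privacy-preserving.

```python
# Hypothetical discovery-session logs.
sessions = [
    {"first_touch": "assistant", "ended_by_summary": True,  "outbound_clicks": 0},
    {"first_touch": "search",    "ended_by_summary": False, "outbound_clicks": 3},
    {"first_touch": "assistant", "ended_by_summary": False, "outbound_clicks": 1},
]

n = len(sessions)
assistant_first = sum(s["first_touch"] == "assistant" for s in sessions) / n
closed_by_summary = sum(s["ended_by_summary"] for s in sessions) / n
clicks_per_session = sum(s["outbound_clicks"] for s in sessions) / n

print(f"assistant-first share:       {assistant_first:.0%}")
print(f"journeys ended by a summary: {closed_by_summary:.0%}")
print(f"outbound clicks per session: {clicks_per_session:.1f}")
```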

The initial statistic—2.5 billion prompts daily—illustrates why the debate over breakups now seems beside the point. The court's decision not to split off Chrome or Android will neither strengthen nor destroy competition; competition shifted to assistants while the case moved through the courts. If lawmakers want competition that benefits students, teachers, and families, they need to stop recycling outdated structural remedies and create new avenues: licensed data-sharing commons, transparent ad auctions, portable defaults, and procurement that demands openness and verifies it. Antitrust, in its traditional, slow manner, will continue to act as a safety net against egregious behavior. The focus, however, should be on rules that make switching easy and integration seamless. If the interface stays contestable, we will not need to read link counts to know whether the market is working. If we continue to focus exclusively on breakups, we will arrive too late, just as the market changes again. Time will not slow down for us; our methods must speed up to keep pace.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Barron's. (2025, September 2). U.S. judge rejects splitting up Google in major antitrust case.
Brookings Institution (Tom Wheeler & Bill Baer). (2025, September). Google decision demonstrates need to overhaul competition policy for AI era.
Competition and Markets Authority (U.K.). (2024, April 11). AI foundation models: Update paper.
European Commission. (n.d.). The Digital Markets Act: Ensuring fair and open digital markets.
Google. (2025, September 2). Our response to the court's decision in the DOJ Search case.
OpenAI. (2024, July 25; updates through February 5, 2025). SearchGPT prototype and Introducing ChatGPT search.
Pew Research Center. (2025, June 25). 34% of U.S. adults have used ChatGPT, about double the share in 2023.
Reuters Institute for the Study of Journalism. (2025, June 17). Digital News Report 2025.
Reuters. (2024, April 10). EU's new tech laws are working; small browsers gain market share.
StatCounter Global Stats. (2025, August). Search engine market share worldwide; Desktop search engine market share worldwide.
U.S. Department of Justice. (2025, September 2). Department of Justice wins significant remedies against Google.
9News/CNN. (2025, September 3). Google will not be forced to sell off Chrome or Android, judge rules in landmark antitrust ruling.
