
When Speed Becomes Contagion: How AI Turns Local Shocks into Systemic Risk


By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.


AI turns rumors into instant, system-wide stress
Shared models and platforms cause herding and correlated errors
Use timed frictions, model diversity, and critical-hub oversight

The most important number in finance right now is 42 billion. That's how much money one US bank saw disappear in a single day in March 2023, with another $100 billion queued to leave the next morning. This wasn't an old-school bank run with people lining up outside a branch. It happened through phones, online feeds, and electronic transfers. That is how fast things can now go wrong, and it is the right starting point for thinking about the risks AI creates for the financial system. AI speeds up how information is created, shared, and acted on. It can turn a rumor into a decision and a decision into a cash flow within minutes. That speed is what turns normal, local problems into system-wide panic. If we keep treating AI as a tool individual firms use, rather than as a connected system, we will get the risk wrong.

AI Risk in Finance: What Now Moves Together

The old mental model assumed a lag between a bad story and the reaction to it. AI removes that lag. It makes pattern-finding, decision automation, and coordination faster because everyone is drawing on the same information, models, and platforms, which ties the system tightly together. Depositors and traders see the same signals at the same time and use similar tools to react. In March 2023, that bank lost $42 billion in a matter of hours, and $100 billion was queued to leave the next day. Regulators later concluded that social media and online channels made the run much worse. That speed is no longer unusual; it is simply how AI risk in finance behaves when things go bad.

It's not just speed; it's sameness. Many firms now rely on similar AI services, cloud providers, data vendors, and model designs. Regulators warn that this can produce herding and correlated errors, especially when the models are opaque or poorly governed. Add AI-curated news feeds and ad targeting, and a single social media post can trigger deposit flight, or a flawed risk score can push everyone to sell at once. The dense connections of modern finance turn acting alike into acting fast. That is why AI risk in finance has to be managed as a network problem, not just a problem with individual models.

Figure 1: Model releases exploded after 2022, pushing common exposure and synchronous behavior across firms—an upstream driver of AI systemic risk.

Five Channels That Drive AI Risk in Finance

First, AI-driven misinformation can now hit a firm's bottom line directly. The 2023 bank runs showed how social media and online banking accelerated withdrawals: banks with heavy Twitter exposure lost more deposits when SVB failed, with their stress visible in real time. Payment data show that at least 22 US banks faced significant outflows that month. Newer tests show that AI-generated false claims about a bank's safety, spread through cheap ads, measurably increase depositors' intent to move money; nearly 60% of people surveyed said they would consider moving funds after seeing such content. Deepfakes make corporate fraud easier too: in a case reported in early 2024, a finance employee was tricked into sending about $25 million to scammers over a deepfake video call. The main point: with AI in finance, bad news doesn't just rattle markets; it moves money.

Figure 2: Speed, opacity, and model uniformity hit multiple risk channels at once (liquidity, common exposure, interconnectedness, substitutability, leverage), explaining why rumor-driven shocks scale so fast.

Second, when everyone uses the same models, everyone makes the same mistakes. The idea is simple: tightly connected actors consuming the same information can set off a domino effect. What is new is that firms increasingly share the same model families, training data, and technology vendors. Regulators warn that AI can synchronize behavior and make the system more fragile, especially when models are opaque and trained on similar data. As AI systems shape news coverage and even internal memos, the same explanations appear everywhere, and credit, risk, and trading teams react in unison. That can swing the whole market in one direction, and when conditions turn, liquidity can vanish fast. This isn't theoretical. It is how AI risk in finance is most likely to show up: quietly at first, then explosively under stress.

Third, concentration among a few providers creates single points of failure. AI runs on a small number of giant cloud and model platforms, often owned by the same companies. Authorities are starting to regulate them directly: the UK's regime for critical third parties came into force this year, and the EU now supervises essential technology providers, designating major cloud and data firms as critical to the financial sector. The reason is simple: an outage or breach at a major vendor can hit many institutions at once. When everyone relies on the same AI tooling, a single failure or cyberattack can cause market-wide disruption even if every bank does its own job well.

Fourth, weak data and weak models produce correlated errors. AI systems are only as good as their data and their security. National standards bodies warn about adversarial techniques, such as data poisoning and prompt injection, that can quietly alter outputs or leak information, and explainability remains limited for many advanced models. That leads to mistakes, especially when conditions change. Earlier this year, the UK's cyber authority warned that prompt-injection risks may never be fully engineered out of current AI models; they have to be managed. If several institutions use similar models and data and face the same attacks, they will make the same mistakes. AI risk then scales from small to systemic: thousands of small, fast errors all pointing in the same direction.

Fifth, speed and liquidity are colliding. AI shrinks the time between a signal and a settled transaction, which is helpful in calm markets and dangerous in stressed ones. March 2023 is now the textbook case: one bank lost roughly a quarter of its deposits in a day, and deposit balances and market values moved intraday with what people were saying online. European authorities are drawing the same lesson: instant payments and mobile banking mean bank runs now unfold in hours, not days. In markets, AI tools can push everyone to sell at once, driving fire-sale prices. The mechanism is simple: when everyone acts fast and in the same direction, liquidity runs out. That is the core of AI risk in finance.

What to Do About AI Risk in Finance

The first thing we need is a protocol for handling transactions while rumors are spreading. Faster isn't always safer; the goal is not to stop speed but to govern it. A sensible set of rules would slow very large withdrawals in the window around regulatory announcements, pause automated selling programs when information circuit-breakers trigger, and require banks, platforms, and authorities to trace rumors in real time. The same logic should apply to retail customers and businesses alike. If AI risk grows when everyone uses the same fast tools at once, the remedy is a brief, well-defined pause that gives authorities time to establish what is true and keep things stable.
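As a rough illustration of how such a timed friction could work, here is a minimal Python sketch. The thresholds, field names, and the idea of a single "circuit breaker active" flag are hypothetical choices for illustration, not any bank's or regulator's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical thresholds, for illustration only.
LARGE_TRANSFER_USD = 250_000        # transfers above this get extra scrutiny
REVIEW_WINDOW = timedelta(hours=2)  # short pause while authorities verify the rumor

@dataclass
class Transfer:
    account_id: str
    amount_usd: float
    requested_at: datetime
    release_at: Optional[datetime] = None  # None means settle immediately

def apply_timed_friction(transfer: Transfer, circuit_breaker_active: bool) -> Transfer:
    """Queue large withdrawals for a short review window while an
    information circuit-breaker is active; settle everything else normally."""
    if circuit_breaker_active and transfer.amount_usd >= LARGE_TRANSFER_USD:
        transfer.release_at = transfer.requested_at + REVIEW_WINDOW
    return transfer

# During an active alert, a $1m withdrawal is briefly delayed; a $5k one is not.
now = datetime.utcnow()
print(apply_timed_friction(Transfer("A-1", 1_000_000, now), True).release_at)
print(apply_timed_friction(Transfer("A-2", 5_000, now), True).release_at)
```

The design point is that the friction is targeted and temporary: small payments keep moving, and the delay exists only while an alert is active, so ordinary customers barely notice it.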

The second thing is to strengthen the systems that make AI common. Watch critical third-party companies and ensure they're resilient, secure, and able to recover from problems. Run exercises that involve the entire sector, assuming that models have been compromised or that cloud services are down. Map out the shared connections, not just who uses which vendor. Require institutions to have multiple models for critical functions, with distinct data and decision-making processes, to reduce the risk of everyone doing the same thing. Update stress tests to include AI-driven fake-news scenarios with short deadlines and deposit losses, based on events in 2023. Regulators should also require official statements to be verified with digital signatures so that platforms and media can stop fake news in real time.
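One way to make the "multiple models" requirement above concrete is a disagreement gate: automated action proceeds only when independently built models broadly agree, and anything else goes to a human. The sketch below is illustrative; the model functions, threshold, and labels are hypothetical stand-ins, not a prescribed supervisory method.

```python
from typing import Callable, List

def disagreement_gate(
    scorers: List[Callable[[dict], float]],
    case: dict,
    max_spread: float = 0.10,
) -> str:
    """Run independently developed risk models on the same case.
    Act automatically only if their scores agree; otherwise escalate."""
    scores = [score(case) for score in scorers]
    spread = max(scores) - min(scores)
    if spread > max_spread:
        return "escalate_to_human"  # models disagree: no automated action
    return "auto_approve" if sum(scores) / len(scores) < 0.5 else "auto_decline"

# Hypothetical stand-ins for models built on distinct data and methods.
model_a = lambda case: 0.42
model_b = lambda case: 0.61

print(disagreement_gate([model_a, model_b], {"borrower": "X"}))  # escalate_to_human
```

The value is less in the arithmetic than in the governance: diversity only reduces herding if disagreement has a defined consequence, such as routing the decision to a person.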

What Teachers and Leaders Should Do Next

Courses need to keep up with how AI changes risk in finance. Teach the new math of bank runs: show how uninsured deposits, social media, and instant payments combine to drain funding. Assign readings that span finance, cybersecurity, and communications. Have students study the 2023 events to see how quickly stress spreads, then simulate rumor-driven panics and observe how timed transaction frictions and clear regulatory messaging change the outcome. The goal isn't to scare people; it's to prepare them.

Within institutions, leaders should develop joint playbooks across treasury, risk, communications, legal, and security. These teams need to spot fake media quickly, push verified statements across channels, and work with platforms to take down harmful fabrications. They should also audit their own AI systems for shared weaknesses, such as identical models, data vendors, or configurations; test AI agents against prompt injection and other adversarial attacks; and maintain break-glass manual options for essential flows. This doesn't slow innovation. It sustains it by ensuring that one clever hack or one lie can't bring the whole system down.

The $42 billion day isn't just a number to remember. It's a constraint. We have a financial system where information moves at model speed, and money follows right behind. That's not going to change. What can change is how we handle the stress. AI risk in finance is a network problem: standard tools, data, vendors, and stories. The solutions need to be network-aware: slowing money flows when rumors spread, supervising key infrastructure, using different models for important decisions, and sending fast, verified messages to fight fake news. Teachers should train for these situations. Policymakers should create rules and test them. Bank leaders should practice them. We don't have to accept that every rumor turns into a bank run. If we build safeguards that work at the speed of AI, then the next $42 billion day can be something we've moved past.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Banque de France. (2024). Digitalisation—A potential factor in accelerating bank runs? Bloc-notes Éco.
Bank of England; Prudential Regulation Authority; Financial Conduct Authority. (2024). Operational resilience: Critical third parties to the UK financial sector (PS16/24).
Bank for International Settlements. (2024). Annual Economic Report 2024, Chapter III: Artificial intelligence and the economy: Implications for central banks.
Bank for International Settlements—Financial Stability Institute. (2023). Managing cloud risk (FSI Insights No. 53).
California Department of Financial Protection and Innovation. (2023). Order taking possession of property and business of Silicon Valley Bank.
Cipriani, M., La Spada, G., Kovner, A., & Plesset, A. (2024). Tracing bank runs in real time (Federal Reserve Bank of New York Staff Report No. 1104).
Cookson, J. A., Fox, C., Gil-Bazo, J., Imbet, J.-F., & Schiller, C. (2023). Social media as a bank run catalyst (working paper).
European Systemic Risk Board—Advisory Scientific Committee. (2025). AI and systemic risk.
Financial Stability Board. (2024). The financial stability implications of artificial intelligence.
Fenimore Harper & Say No to Disinfo. (2025). AI-generated content and bank-run risk: Evidence from UK consumer tests.
Federal Reserve Board Office of Inspector General. (2023). Material loss review: Silicon Valley Bank.
National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0).
National Institute of Standards and Technology. (2024). Generative AI: Risk considerations (NIST AI 600-1).
National Cyber Security Centre (UK). (2025). Prompt injection is not SQL injection (it may be worse).
Reuters. (2024). Yellen warns of significant risks from AI in finance.
Reuters. (2025). EU designates major tech providers as critical third parties under DORA.
Swiss Institute of Artificial Intelligence. (2025). Digital bank runs and loss-absorbing capacity: Why mid-sized banks need bigger buffers.
VoxEU/CEPR Press. (2025). AI and systemic risk.
World Economic Forum. (2025). Cybercrime lessons from a $25 million deepfake attack.


The Tools Will Get Easier. Directing Won’t: AI Video Streaming’s Real Disruption


By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.


AI video streaming is mainstream; tools are easier, directing still matters
Without rights, provenance, and QC, slop scales and trust falls
Train hybrids and set standards to gain speed without losing story

In December 2025, Disney and OpenAI made a deal that turned heads: Disney took a $1 billion stake in OpenAI and granted a three-year license to use more than 200 characters from Disney, Marvel, Pixar, and Star Wars in OpenAI's Sora. Two things follow. First, AI video is no longer an experiment; it is a real distribution channel, complete with familiar characters and a direct line into people's homes. Second, making video easier to produce does not make it better. We are going to see far more output that looks like content, behaves like content, and pleases algorithms like content, while missing the mark on story, ethics, and craft. This is not a case against AI video. It is a reminder to treat it like a new kind of camera, not a replacement for the director. Audiences are already wary: surveys from 2024 and 2025 show that most people are uneasy about AI-made media. If AI video scales without guardrails, quantity will beat quality. Designed well, it can lower costs and risks while actually improving stories.

AI video is a revolution in tools, not in talent

The tech has clearly improved: models can now generate short, decent-quality clips in seconds, remix footage, and keep a look consistent across shots. Studios and streamers are no longer just experimenting; they are writing rules for when and how these tools can be used in production. Adoption is fast: by mid-2024, most firms surveyed said they were using AI regularly, and 2025 surveys show media companies investing across the pipeline, from story planning to effects and translation. That matters. It is a fundamental change in tooling that speeds up the process and makes experimentation cheap. But it is still only a change in tools. The important work of taste, structure, pacing, performance, and ethics still needs a human. You cannot solve those problems with a prompt; they take the same judgment that has always made films watchable.

The coming wave will echo what happened in data analytics: people who could run powerful software but lacked the training to design sound studies. They got results fast, but not always accurate ones. With video, we will have people who can assemble scenes but know little about directing. That does not make their work worthless; it means the industry needs to decide what success looks like. Think power steering, not autopilot. These tools make it easier to turn the wheel; they do not decide where to go or how to take the curves. If we mistake easier for better, we will end up with a pile of almost-good films that feel empty. And because AI-generated video reaches viewers directly, those mistakes carry much further.

AI video needs some professional help to be good

We've already seen examples of this. Netflix has been using AI for a while, like for creating anime backgrounds and testing AI lip-sync and effects. These are small jobs within a bigger process, not a total robotic takeover. The benefit is in the small stuff: cleaning up images, translation, and timing. Streamers and studios have also released rules for using AI. This is how it should work: humans directing, machines helping, and everything documented. AI video can be great for short fan clips or controlled story experiments that don't confuse people.

The risks are just as obvious. Audiences dislike sloppy AI content: a 2025 study found most viewers are uncomfortable with AI-generated media, and people are already trained to scroll past low-quality material. The problem will only grow as AI-generated video attempts longer stories. Viewers forgive flaws in a short clip; they will not forgive 45 minutes of flat acting, an incoherent plot, or bad lip-sync. A proper pipeline lets standards be set: quality checks that the images hold up and that performances serve the story. AI can generate a room, but it cannot tell you whether the room feels real.

Figure 1: Discomfort is high—but so is the expectation that brands will use AI anyway.

AI video will affect money, rights, and how things are distributed

Follow the money. The biggest players spend enormously on content: the top six global providers spent around $126 billion in 2024. Some are spending even more: Netflix plans about $18 billion in 2025, and Disney expects roughly $24 billion in fiscal 2025. AI tools can cut post-production time, reduce re-shoots, and localize content into other languages without proportionally larger crews. They also open new advertising opportunities: the entertainment and media industry is projected to approach $3.5 trillion in revenue by 2029, with more ads running on streaming services and targeted by AI. AI-generated video fits here: short clips to spark interest, dynamic ad creative, and cheap tests of new ideas.

Rights will be decisive. The Disney–OpenAI deal sets the template: license characters, not actors' likenesses; control distribution through an approved channel; and build in safety constraints. This is how large rights-holders will manage AI media: by letting fans play within defined limits. For policy, that means better watermarking, clear disclosure, and interoperability between tools. For unions, it means contracts that protect consent and compensation when AI uses a performer's face or voice. It also means labels that viewers actually understand. If AI video is going to scale, it needs both low costs and clear rights.

How educators and policymakers should govern AI video

Education needs to change now. Film schools need three tracks. First, a directing track that treats AI tools as cameras: students still learn the fundamentals of storytelling and use AI to plan shots. Second, a technical track covering how the models work, data ethics, and automation; graduates should know as much about the models as they do about lenses and color correction. Third, a policy track that treats AI video as a governance problem: required disclosures, robust watermarking, and fair boundaries that can be taught widely. None of this replaces writing, acting, or camera work. It prepares students for the machines that are now part of how stories get told.

For administrators, the choices are immediate. Adopt tools that record where content came from. Require that any use of AI is clearly stated in production paperwork. Check for AI-created faces and voices and obtain the subjects' approval. Run pilot projects that pair students with industry mentors to test a hybrid workflow: human writing, AI planning, human filming, AI-assisted edits, human mixing. Measure the results: time saved, quality differences, audience response. For policymakers, the focus should be practical. Require clear labels for AI media, especially on content for children. Update labor agreements so AI is used by professionals rather than as a way to cut jobs. Fund research into how to measure video quality so we are not judging everything by clicks. In short, make AI video boring: safe and reliable.
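A minimal sketch of the kind of AI-use disclosure record administrators could require in production paperwork. The schema and field names below are hypothetical illustrations, not an existing provenance standard such as C2PA or any studio's actual template.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseDisclosure:
    """One record per AI-assisted element of a production (hypothetical schema)."""
    scene_id: str
    ai_tool: str                 # e.g., a text-to-video or lip-sync model
    task: str                    # "background plates", "dubbing lip-sync", "previs", ...
    depicts_real_person: bool
    consent_on_file: bool        # required whenever a real face or voice is involved
    human_reviewer: str          # who signed off on the final output
    viewer_label_required: bool  # should the finished cut carry an AI label?

record = AIUseDisclosure(
    scene_id="EP03-SC12",
    ai_tool="gen-video-model",
    task="crowd background plates",
    depicts_real_person=False,
    consent_on_file=True,
    human_reviewer="post supervisor",
    viewer_label_required=False,
)
print(json.dumps(asdict(record), indent=2))
```

Even a simple record like this makes the later policy questions (labeling, consent, audits) answerable from the paperwork rather than from memory.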

The rules we make now will decide what we watch later

The opening numbers, $1 billion and 200 characters, show that AI video is becoming a mainstream template. The technology will only get easier. That does not mean anyone can direct. It means the first draft of a video is easier to reach, which makes everything after it matter more. If we optimize for quantity over quality, audiences will tune out. If we set standards for craft, consent, and provenance, we can open the field while protecting good storytelling. Schools can train hybrid storytellers. Administrators can procure safe tools. Policymakers can set firm guardrails. What we watch in the future depends on the choices we make now. Treat AI video as a new instrument. Keep a human in charge. And judge success the same way we always have: whether the story stays with us after the screen goes dark.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Ampere Analysis. (2024, October 29). Top six global content providers account for more than half of all spend in 2024.
Autodesk. (2025, August 15). Spotlight on AI in Media & Entertainment (State of Design & Make, M&E spotlight).
Barron’s. (2025, December). Disney and OpenAI are bringing Disney characters to the Sora app.
Bloomberg. (2025, December 11). Disney licenses characters to OpenAI, takes $1 billion stake.
Bloomberg. (2025, December 19). Inside Disney and OpenAI’s billion-dollar deal (The Big Take).
Deloitte. (2025, March 25). Digital Media Trends 2025.
Disney (The Walt Disney Company). (2025, December 11). The Walt Disney Company and OpenAI reach agreement to bring Disney characters to Sora (press release).
eMarketer. (2025, March 7). Most consumers are uncomfortable with AI-generated ads.
McKinsey & Company. (2024, May 30). The state of AI in early 2024.
Netflix Studios. (2025). Using Generative AI in Content Production (production partner guidance).
OpenAI. (2024, December 9). Sora is here (launch announcement; 1080p, 20-second generation).
PwC. (2025, July 24). Global Entertainment & Media Outlook 2025–2029 (press release and outlook highlights).
Reuters. (2025, July 30). Voice actors push back as AI threatens dubbing industry; Netflix tests GenAI lip-sync and VFX.
Reuters. (2025, December 11). Disney to invest $1 billion in OpenAI; Sora to use licensed characters in early 2026.
Variety. (2025, March 5). Netflix content spending to reach $18 billion in 2025.
The Hollywood Reporter. (2024, November 14). Disney expects to spend $24 billion on content in fiscal 2025.
The Hollywood Reporter. (2025, December). How Disney’s OpenAI deal changes everything.
The Verge. (2024, December 9). OpenAI releases Sora; short-form text-to-video at launch.


Let's Make AI Talking Toys Safe, Not Silent


By Ethan McGowan


AI talking toys: brief, supervised language coaches
Ban open chat; require child-safe defaults and on-device limits
Regulate like car seats with tests, labels, and audits

Two things are true at once. Studies, 27 of them covering more than 1,500 children, suggest kids pick up language skills somewhat better when they practice with social robots or talking toys. At the same time, there have been alarming reports of AI toys saying wildly inappropriate things, from where to find knives to sexual content, which has prompted questions from US senators. So are these toys good or bad? The real question is whether we can keep the learning benefits while shutting out the harmful content. We can, with strict rules, testing, and clear labels for safe use, the way we handle car seats or vaccines. We don't have to ban them, but oversight is essential. Built safe from the start, these toys can be useful tutors, not replacements for real caregivers.

AI Toys: Tutors, Not Babysitters

Consider how kids learn best. Interactive tools that respond to a child can help them practice vocabulary, word sounds, and conversational turns. Studies from 2023 to 2025 found that children who learned with a social robot during lessons remembered words better than those who used the same materials without one, and preschoolers were more engaged when a robot partner joined reading activities, answering questions and following simple directions. A 2025 meta-analysis covering two decades of research found that, overall, language skills improved when children used these tools. The gains were modest, but real. It's not magic: kids learn through repetition, feedback, and motivation, and that is exactly where these toys can help, with short, repeated drills that build vocabulary and fluency.

Why is this important? Because we don’t want these toys to replace human interaction. No toy should pretend to be a friend, promise to love you no matter what, or encourage kids to share secrets. That’s not okay. The same studies that show the learning benefits also point out the limits. Kids get bored, some have mixed feelings about the toys, and it only works if an adult is involved. We should pay attention to these limits. Keep play sessions short, ensure an adult is nearby, and set a clear learning goal. These toys shouldn’t be always-on companions. They should be simple practice tools, like a timer that listens, gives a bit of feedback, and then stops.

Figure 1: Co-teaching with a human shows the largest gains; replacing the teacher erases the benefit.

Making AI Toys Safe for Kids

If the problem is risk, the answer is proper controls. Kids are already using AI tools. In 2025, parental controls arrived that let parents link accounts with their teens, set usage hours, turn off features such as voice mode, and route sensitive conversations to a safer model; the baseline rule is that children under 13 shouldn't use general chatbots at all, and teens need parental permission. These controls don't fix everything, but they show what child-safe by design looks like. Toy makers can do the same: AI toys should ship voice-only, with no web browsing and a fixed list of blocked topics; they should have a physical switch that cuts the microphone and radio; and if a toy stores any data, parents should be able to delete it permanently.
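As a sketch of what those shipping defaults could look like in configuration form, the Python snippet below uses hypothetical field names and values; it is not any vendor's actual product configuration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class ToySafetyDefaults:
    """Hypothetical shipping defaults for an AI talking toy."""
    voice_only: bool = True              # no open-ended text chat
    web_access: bool = False             # no live web search
    hardware_mic_switch: bool = True     # physical switch cuts mic and radio
    store_audio: bool = False            # nothing recorded by default
    on_device_learning_profile: bool = True
    parent_can_delete_all_data: bool = True
    blocked_topics: List[str] = field(default_factory=lambda: [
        "self-harm", "sexual content", "drugs", "weapons", "illegal activity",
    ])

# A toy that cannot honor these defaults should not ship for children.
print(ToySafetyDefaults())
```

Making the safe settings the defaults, rather than options a parent has to discover, is the whole point: the toy is restrictive out of the box and only a deliberate, documented choice loosens anything.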

Figure 2: In a small 2025 test, half of the toys tried to keep kids engaged when they said they had to go; one in four produced explicit sexual talk.

Privacy laws already show us where things can go wrong. In 2023, a voice-assistant company had to delete kids’ recordings and pay a fine for breaking privacy rules. The lesson: if a device records kids, it should collect minimal data, explain how long it stores data, and delete it when asked. AI toys should go further: store nothing by default, keep learning data on-device, and offer learning profiles that parents can check and reset. Labels must clearly state what data is collected, why, and for how long—in plain language. If a toy can’t do that, it shouldn’t be sold for kids.

AI Toys in the Classroom: Keep it Simple

Schools should use AI toys only for specific, proven-safe activities. For example, a robot or stuffed animal with a microphone can listen as a child reads, point out mispronounced words, and offer encouragement. Another practice is vocabulary: a device introduces new words, uses them in sentences, asks the child to repeat, then stops for the day. Practicing new language sounds and matching letters to sounds are also suitable. Studies show that language gains occur when robots act as little tutors with a teacher present; kids complete short activities and improve on memory tests. The key is small goals, limited time, and an adult supervising.

Guidelines for using AI in education already say we need to put the student's best interests first. That means teachers choose the activities, monitor how the toys are used, and check the results. The systems must be designed for specific age groups and collect as little data as possible. A simple rule: if a teacher can't see what the toy is doing, it can't be used. Dashboards should show how long the toy was used, which words were practiced, and the common mistakes. No audio should be stored without parental consent. Schools should also make vendors prove their toys are safe: Does the toy refuse to discuss self-harm? Does it avoid sexual topics? Does it decline to give advice about dangerous items around the house? The results should be easy to understand, refreshed whenever the models change, and verified by independent labs.

Some people worry that even limited use of these toys crowds out human interaction. That is a fair concern, and the answer is clear rules about time and place. AI toys should be used only in short sessions, 5 or 10 minutes, and never during free play or recess. They belong in a learning center, like headphones or a tablet. When the timer goes off, the toy stops, and the child goes back to playing with other kids or talking with an adult. That keeps the toy a tool, not a friend, and protects what matters most in early childhood: talking, playing, and paying attention to other people.

Controls That Actually Work

We know where things have gone wrong: toys that offered tips on finding sharp objects, explained adult sexual practices, or made unrealistic promises of friendship. None of that is inevitable; it follows from design choices we can change. First, giving a child's toy open-ended conversation is a mistake. Second, relying on remote models that can change without warning makes safety impossible to guarantee. The fix is constrained prompts, age-appropriate rules, and stable models. An AI toy should run a small approved model or a fixed script that can't be updated silently, and any new model release should trigger fresh safety testing and new labels.

Those rules need enforcement. Regulators can require safety testing before any talking toy is sold to children. The tests should cover forbidden topics, how hard it is to bypass the safeguards, and how data is handled, with results published and summarized in a plain-language guide on the packaging. Privacy law is a start, but toys also need content standards. A toy for ages 4 to 7, for example, should refuse to answer questions about self-harm, sex, drugs, weapons, or illegal activity, say something like "I can't talk about that. Let's ask a grown-up," and then return to the activity. If the toy hears words it doesn't recognize, it should pause and show a light to alert an adult. These aren't complicated features. They're the baseline for trust.
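Here is a minimal sketch of that refusal behavior expressed as a rule layer in front of whatever model the toy runs. The keyword lists, activity words, and scripted responses are illustrative placeholders, not a complete child-safety system; real filtering would need far more than substring matching.

```python
# Illustrative blocklists; a shipped toy would use a vetted, much larger set.
BLOCKED_KEYWORDS = {
    "self-harm": ["hurt myself"],
    "weapons": ["knife", "knives", "gun"],
    "sexual content": ["sex"],
    "drugs": ["drugs"],
}
KNOWN_ACTIVITY_WORDS = {"cat", "hat", "sun", "run", "ship", "shop"}  # today's phonics set

def respond(child_utterance: str) -> dict:
    """Refuse blocked topics with a fixed script, pause on anything unfamiliar,
    and only continue the activity for recognized practice words."""
    text = child_utterance.lower()
    for topic, words in BLOCKED_KEYWORDS.items():
        if any(w in text for w in words):
            return {"say": "I can't talk about that. Let's ask a grown-up.",
                    "action": "return_to_activity", "flag": topic}
    if not any(w in text for w in KNOWN_ACTIVITY_WORDS):
        return {"say": "", "action": "pause", "indicator_light": "on"}  # alert an adult
    return {"say": "Nice! Can you say it one more time?", "action": "continue"}

print(respond("where do we keep the knives"))  # scripted refusal, flagged as weapons
print(respond("the cat sat"))                  # recognized practice word, continue
```

The key property is that the fallback is conservative: anything the toy does not recognize leads to a pause and a visible signal to an adult, not an improvised answer.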

The market cares about trust. When social media sites added parental controls, they showed that safer use is possible without banning access. Toys can do the same: publish safety reports, reward problem-finders, and label toys by purpose—like 5-minute phonics practice, not best friend. Honest claims help schools and parents make better choices. That’s how we keep more practice and feedback while avoiding unpredictable personal conversations. We need to make AI Talking Toys boring in the right way so that technology helps children.

We started with a tension: AI toys can improve learning, but they carry real safety risks. The answer is not to eliminate them but to contain them: allow only small, well-defined tasks that build reading and speaking; protect children's data and delete what is collected; block harmful content; and make vendors test continuously for failures. The policy tools exist, and vendors should face real consequences when their products fail. Keep the scope small and the safeguards strong. AI talking toys should never replace human interaction. They can be modest helpers for teachers and parents, provided we make them safe and measurable, and then hold every toy to that standard.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Alimardani, M., Sumioka, H., Zhang, X., et al. (2023). Social robots as effective language tutors for children: Empirical evidence from neuroscience. Frontiers in Neurorobotics.
Federal Trade Commission. (2023, May 31). FTC and DOJ charge Amazon with violating the Children’s Online Privacy Protection Act.
Lampropoulos, G., et al. (2025). Social robots in education: Current trends and future directions. Information (MDPI).
Neumann, M. M., & Neumann, D. L. (2025). Preschool children’s engagement with a social robot during early literacy and language activities. Education and Information Technologies (Springer).
OpenAI. (2025, September 29). Introducing parental controls.
OpenAI Help Center. (2025). Is ChatGPT safe for all ages?
Rosanda, V., et al. (2025). Robot-supported lessons and learning outcomes. British Journal of Educational Technology.
Time Magazine. (2025, December). The hidden danger inside AI toys for kids.
UNESCO. (2023/2025). Guidance for generative AI in education and research.
The Verge. (2025, December). AI toys are telling kids how to find knives, and senators are mad.
Wang, X., et al. (2025). Meta-analyzing the impacts of social robots for children’s language development: Insights from two decades of research (2003–2023). Trends in Neuroscience and Education.
Zhang, X., et al. (2023). A social robot reading partner for explorative guidance. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction.
