Let's make AI Talking Toys Safe, Not Silent

By Ethan McGowan

AI talking toys: brief, supervised language coaches
Ban open chat; require child-safe defaults and on-device limits
Regulate like car seats with tests, labels, and audits

Right now, there’s something interesting happening: studies show that kids seem to pick up language skills a bit better when they learn with social robots or talking toys. We’re talking about 27 studies, with over 1,500 kids. At the same time, there have been some pretty wild stories about AI toys saying totally inappropriate stuff, things like talking about knives or even sex. This got some senators really worried and asking questions. So, what’s the deal? Are these AI toys good or bad? The main issue is whether we can keep their learning benefits while preventing inappropriate content. We can do this with strict rules, testing, and clear labels for safe use—like car seats or vaccines. We don’t have to ban them, but oversight is key. If we ensure safety from the start, these toys can be helpful tutors—not replacements for real caregivers.

AI Toys: Tutors, Not Babysitters

Think about how kids learn best. When they can interact with tools that respond to them, it can help them practice vocabulary, word sounds, and conversational turns. Studies from 2023 to 2025 found that kids who learned with a social robot during lessons remembered words better than those who used the same materials without a robot. Preschoolers were more engaged when they had a robot partner for reading activities. They answered questions and followed simple directions. A large 2025 study reviewed 20 years of research and found that, overall, language skills improved when kids used these toys. These weren’t huge improvements, but they were there. It’s not magic. Kids learn by doing things over and over, getting feedback, and staying motivated. That’s where these AI toys can really help – with short, repeated drills that build vocabulary and help kids speak more fluently.

Why is this important? Because we don’t want these toys to replace human interaction. No toy should pretend to be a friend, promise to love you no matter what, or encourage kids to share secrets. That’s not okay. The same studies that show the learning benefits also point out the limits. Kids get bored, some have mixed feelings about the toys, and it only works if an adult is involved. We should pay attention to these limits. Keep play sessions short, ensure an adult is nearby, and set a clear learning goal. These toys shouldn’t be always-on companions. They should be simple practice tools, like a timer that listens, gives a bit of feedback, and then stops.

Figure 1: Co-teaching with a human shows the largest gains; replacing the teacher erases the benefit.

Making AI Toys Safe for Kids

If the problem is risk, the solution is to put the proper controls in place. We already see kids using AI tools. In 2025, parental controls came out that let parents link their accounts with their teens', set limits on when the chatbot can be used, turn off features like voice, and direct anything sensitive to a safer model. Now, the rule is that kids under 13 shouldn't use general chatbots, and teens need their parents' permission. These rules don't fix everything, but they show what it means to make something child-safe from the start. Toy companies can do the same thing: AI toys should come with voice-only options, no web searching, and a list of blocked topics. They should have a physical switch to turn off the microphone and radio. And if a toy stores any data, it should be easy to delete it for good.
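To make that concrete, here is a minimal sketch of what child-safe defaults could look like as a configuration, written in Python. Every field name and value here is an illustrative assumption, not any vendor's actual schema.

# Illustrative child-safe defaults for a talking toy. All field names and
# values are hypothetical; a real product would define and certify its own schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class ToySafetyDefaults:
    voice_only: bool = True                  # no open-ended text chat or screen
    web_search_enabled: bool = False         # no live web access
    blocked_topics: tuple = ("self-harm", "sex", "drugs", "weapons", "crime")
    session_limit_minutes: int = 10          # short, supervised sessions
    store_audio: bool = False                # nothing stored by default
    learning_profile_on_device: bool = True  # practice data never leaves the toy
    hardware_mic_switch_required: bool = True

    def allow_silent_model_update(self) -> bool:
        # A new model should mean new safety testing and new labels, not a quiet push.
        return False

print(ToySafetyDefaults())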

Figure 2: In a small 2025 test, half of the toys tried to keep kids engaged when they said they had to go; one in four produced explicit sexual talk.

Privacy laws already show us where things can go wrong. In 2023, a voice-assistant company had to delete kids’ recordings and pay a fine for breaking privacy rules. The lesson: if a device records kids, it should collect minimal data, explain how long it stores data, and delete it when asked. AI toys should go further: store nothing by default, keep learning data on-device, and offer learning profiles that parents can check and reset. Labels must clearly state what data is collected, why, and for how long—in plain language. If a toy can’t do that, it shouldn’t be sold for kids.

AI Toys in the Classroom: Keep it Simple

Schools should use AI toys only for specific, proven-safe activities. For example, a robot or stuffed animal with a microphone can listen as a child reads, point out mispronounced words, and offer encouragement. Another practice is vocabulary: a device introduces new words, uses them in sentences, asks the child to repeat, then stops for the day. Practicing new language sounds and matching letters to sounds are also suitable. Studies show that language gains occur when robots act as little tutors with a teacher present; kids complete short activities and improve on memory tests. The key is small goals, limited time, and an adult supervising.

Guidelines for using AI in education already say we need to focus on what's best for the student. This means teachers choose the activities, monitor how the toys are used, and check the results. The systems must be designed for different age groups and collect as little data as possible. A simple rule is: if a teacher can't see what the toy is doing, it can't be used. Dashboards should show how long the toy was used, what words were practiced, and common mistakes. No audio should be stored unless a parent agrees. Schools should also make companies prove their toys are safe. Does the toy refuse to talk about self-harm? Does it avoid sexual topics? Does it refuse to give advice about dangerous objects around the house? The results should be easy to understand, updated whenever the models change, and tested by independent labs.
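As one way to picture that visibility rule, here is a minimal sketch of a teacher-facing session summary in Python. The fields are illustrative assumptions about what a toy might report; note that no audio appears anywhere in the record.

# Hypothetical session summary a toy could expose to a teacher dashboard.
# Field names are illustrative; only counts and word lists are kept, never audio.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class SessionSummary:
    student_alias: str                 # a classroom alias, not a real name
    minutes_used: int
    words_practiced: list
    mispronunciations: Counter = field(default_factory=Counter)

    def report(self) -> dict:
        return {
            "student": self.student_alias,
            "minutes_used": self.minutes_used,
            "words_practiced": self.words_practiced,
            "common_mistakes": self.mispronunciations.most_common(3),
        }

session = SessionSummary("reader_07", 8, ["ship", "sheep", "shop"])
session.mispronunciations.update(["sheep", "sheep", "ship"])
print(session.report())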

Some people worry that even limited use of these toys can take away from human interaction. That's a valid concern. The answer is to set clear rules about time and place. AI toys should be used only for short sessions, such as 5 or 10 minutes, and never during free play or recess. They should stay at a learning station, used with headphones, just like a tablet. When the timer goes off, the toy stops, and the child resumes playing with other kids or interacting with an adult. This way, the toy is just a tool, not a friend. This protects what's important in early childhood: talking, playing, and paying attention to other people.

Controls That Actually Work

We know where things have gone wrong. There have been toys that gave tips on finding sharp objects, explained adult sexual practices, or made unrealistic promises about friendship. These things don’t have to happen. They’re the result of choices we can change. First, letting a child’s toy have open-ended conversations is a mistake. Second, using remote models that can change without warning makes it hard to guarantee safety. The solution is to use specific prompts, age-appropriate rules, and stable models. AI toys should run a small, approved model or a fixed plan that can’t be updated secretly. If a company releases a new model, it should require new safety testing and new labels.

We need to enforce these rules. Regulators can require safety testing before any talking toy is sold to kids. The tests should cover forbidden topics, the difficulty of circumventing the safety features, and how data is handled. The results should be published and summarized on the box as a simple guide. Privacy laws are a start, but toys also need content standards. For example, a toy for kids ages 4-7 should refuse to answer questions about self-harm, sex, drugs, weapons, or breaking the law. It should say something like, "I can't talk about that. Let's ask a grown-up," and then go back to the activity. If the toy hears words it doesn't recognize, it should pause and show a light to alert an adult. These aren't complicated features. They're essential for trust.
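As a thought experiment, a minimal sketch of that refusal behaviour in Python might look like the following. The word lists, the refusal phrase, and the return fields are illustrative assumptions; a real toy would need audited classifiers, not a hand-written blocklist.

# Hypothetical content gate for a toy aimed at ages 4-7. The lists and phrases
# here are illustrative only.
BLOCKED_TERMS = {"knife", "gun", "drugs", "sex", "hurt myself", "steal"}
KNOWN_VOCABULARY = {"cat", "dog", "red", "blue", "ball", "book", "run", "jump"}
REFUSAL = "I can't talk about that. Let's ask a grown-up."

def respond(child_utterance: str) -> dict:
    text = child_utterance.lower()
    # 1. Refuse blocked topics and steer straight back to the activity.
    if any(term in text for term in BLOCKED_TERMS):
        return {"say": REFUSAL, "alert_light": True, "continue_activity": True}
    # 2. If nothing in the utterance is recognized, pause and light up for an adult.
    recognized = set(text.split()) & KNOWN_VOCABULARY
    if not recognized:
        return {"say": "", "alert_light": True, "continue_activity": False}
    # 3. Otherwise stay inside the scripted practice drill.
    word = sorted(recognized)[0]
    return {"say": f"Nice! Can you use '{word}' in a sentence?",
            "alert_light": False, "continue_activity": True}

print(respond("where can i find a knife"))
print(respond("my dog likes the red ball"))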

The market cares about trust. When social media sites added parental controls, they showed that safer use is possible without banning access. Toys can do the same: publish safety reports, reward problem-finders, and label toys by purpose—like 5-minute phonics practice, not best friend. Honest claims help schools and parents make better choices. That’s how we keep more practice and feedback while avoiding unpredictable personal conversations. We need to make AI Talking Toys boring in the right way so that technology helps children.

We started with a problem: AI toys might improve learning, but they also have safety issues. The solution isn't to get rid of them, but to control them. We should allow only small tasks that improve reading and speaking; keep children's data minimal and easy to delete; block harmful content; and require vendors to keep testing for product failures. We already have the policy tools. When a toy fails, vendors should face real consequences. Keeping safeguards focused on small, well-defined tasks is what makes the benefits possible. AI talking toys should never replace human interaction. As small helpers, they can assist teachers and parents. We must make them safe and measurable, then hold every toy to that standard.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Alimardani, M., Sumioka, H., Zhang, X., et al. (2023). Social robots as effective language tutors for children: Empirical evidence from neuroscience. Frontiers in Neurorobotics.
Federal Trade Commission. (2023, May 31). FTC and DOJ charge Amazon with violating the Children’s Online Privacy Protection Act.
Lampropoulos, G., et al. (2025). Social robots in education: Current trends and future directions. Information (MDPI).
Neumann, M. M., & Neumann, D. L. (2025). Preschool children’s engagement with a social robot during early literacy and language activities. Education and Information Technologies (Springer).
OpenAI. (2025, September 29). Introducing parental controls.
OpenAI Help Center. (2025). Is ChatGPT safe for all ages?
Rosanda, V., et al. (2025). Robot-supported lessons and learning outcomes. British Journal of Educational Technology.
Time Magazine. (2025, December). The hidden danger inside AI toys for kids.
UNESCO. (2023/2025). Guidance for generative AI in education and research.
The Verge. (2025, December). AI toys are telling kids how to find knives, and senators are mad.
Wang, X., et al. (2025). Meta-analyzing the impacts of social robots for children’s language development: Insights from two decades of research (2003–2023). Trends in Neuroscience and Education.
Zhang, X., et al. (2023). A social robot reading partner for explorative guidance. Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction.

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

Getting Rid of Coordination Headaches: How LLMs are Changing How We Work Together

By Catherine McGuire

LLMs slash coordination costs in teams
Design- and model-minded co-create, instantly
Protect diversity with drafts, provenance, human review

Every second, a new developer signs up for GitHub. In 2025, over 36 million people joined, and almost 80% of the newcomers tried Copilot during their first week. This isn't just a fad; it's a real shift in how we team up. On top of that, about 75% of knowledge workers say they're now using AI on the job. Taken together, these numbers point to one thing: teaming up with LLMs has made coordinating work far cheaper. Things that used to take hours of back-and-forth email, like renaming a variable or enforcing a style guide, now take minutes. The unit of work has gotten smaller. The time it takes to turn an idea into something a team can use has shrunk. This isn't just about being faster. It's about who gets to pitch in, how their ideas spread, and how we keep different ways of thinking alive so teamwork stays creative.

How LLMs are Changing What One Person Can Do

We used to see teaming up as a scheduling mess. People had to find time, combine versions, and argue about style in comment sections. LLMs turn those problems into perks. Now, a teammate can share a function, have the AI write test examples, and start a review—all at once. Another teammate can ask for a simple explanation of the code, then have the AI make a simple demo. The platform itself is like another teammate: it writes basic code, suggests changes, speeds up reviews, and keeps naming consistent. You end up with a different way of working. We hand in more than hand off, with many small changes happening at once without needing a ton of coordination. In schools, this means students can go from idea to a simple version in a day, then clean it up before class. In a writing class, a document can evolve through rewrites and merging ideas, with the AI maintaining a consistent style.

This change isn't just about coding. It's happening with presentations, documents, and data, too. A teaching assistant can write summaries, draft notes for speakers, and keep citations consistent across a shared plan. In a lab, AI can make sure notations are consistent across a paper, turn a method section into a checklist, and translate an abstract for someone in another country. Translations are now so good that AIs can compete with experts, mostly for everyday text. The result is simple: you can focus on what matters, not just the small stuff. Teachers can grade for understanding, not grammar. And because AI makes cleanup cheap, teams can try things more often, which helps people learn.

Two Brains, One Place: Design and Code Working Together

A good team needs two kinds of thinkers. The design-minded ones shape the story, the interface, and how people will use the project. The code-minded ones build the structure—the functions—that make the system strong. LLMs let both types work without getting in each other's way. The design-minded can see the AI as a quick editor: translating a draft into simple English, shrinking a huge review, or testing an argument. They can ask for three styles of explanation—story, outline, and step-by-step—and use whichever is best. Because changes happen instantly, they can change the tone across the whole file in one go. This keeps the team's work clean while preserving individual voices.

The code-minded get a different advantage. They can rename variables, move settings into a config file, and keep the code consistent in a single step. When people want different notation, the AI can make a version in the style they prefer—like Greek—without messing things up. In data, the AI can turn a plan into code, write test code, and explain each step. These are big wins. They save hours of boring coordination and lower the chance of problems. Now, the design-minded and code-minded can meet in the middle. One group makes things clear; the other makes things work right. The AI makes sure its changes fit together so the system runs and reads as one.

Keeping Things Diverse

Here's the thing. Quick agreement can turn into everyone thinking the same. When teams use the same AI, the style can get too similar. Word choices get closer. Examples repeat. Even code starts to look the same. Studies document this homogenizing effect: AI-assisted output drifts toward the average. Surveys show people use these tools a lot but don't always trust them, especially for tough stuff. That wariness is healthy. LLMs are great for first drafts, but they can erase other perspectives if we let them. In schools, essays might sound the same. In labs, the same code might shape every experiment. The problem isn't that students stop thinking. It's that they stop thinking differently.

To fight this, we need to make sure there's room for different ideas. First, ask for two or three drafts before deciding. Make sure one draft uses sources the class hasn't already used. For code, have a wild branch where weird code styles are okay and only get merged back later. Second, keep track of where changes come from. When the AI suggests something, show that it's from the AI and link to the request. This makes reviews easier and teaches students to think for themselves. Third, use different AIs. A lab that switches between systems reduces the likelihood that everyone will sound the same. This isn't about making things hard; it's about keeping creative ideas alive while still being fast.
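One lightweight way to keep track of where changes come from is to log each AI suggestion next to the prompt that produced it. The Python sketch below is one possible shape for such a record; the field names and the idea of carrying it as a commit trailer are assumptions for illustration, not an established standard.

# Hypothetical provenance record for AI-assisted edits. The schema is
# illustrative; the point is that every machine suggestion stays traceable
# to the prompt and model that produced it.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional
import json

@dataclass
class ProvenanceRecord:
    file: str
    author: str               # the human who reviewed and accepted the change
    model: Optional[str]      # which assistant produced the draft, if any
    prompt: Optional[str]     # the request that generated the suggestion
    timestamp: str

def record_change(file, author, model=None, prompt=None):
    rec = ProvenanceRecord(
        file=file,
        author=author,
        model=model,
        prompt=prompt,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Serialized, this could travel as a commit trailer or a sidecar log entry.
    return json.dumps(asdict(rec))

print(record_change("essay_draft.md", "student_a",
                    model="assistant-x", prompt="shorten the introduction"))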

How to Do This on Campus and on Teams

The goal is simple: be fast without making everyone think alike. For teachers, set clear rules. Allow AI for editing, rewriting, and explaining, but require students to state when they use it. Grade the thinking, not just the writing, and have students defend their work in person. This keeps students from relying on AI too much. Give templates that turn the AI into a tutor, not a ghostwriter: explain-then-rewrite for essays; comment-then-refactor for code; compare-two-ways for methods. Teachers can add tests to projects so that AI suggestions must meet standards. In group work, make sure each person has one unique idea that the AI only improves later.

Schools can make things better, too. Offer licenses for approved tools so students don't use accounts with bad privacy settings. Set up a system for course materials so AI pulls from the school's info, not random web pages. Make an AI syllabus that says what's allowed, how to use AI, and what happens if you misuse it. For research groups, ensure results are easily repeatable: use containers, write READMEs, and run style and security checks. This doesn't have to cost a lot. It's mostly about design—making the right thing easy.

Leaders can do the same. See the AI as a teammate that needs review. Track how often AI-suggested code is used; track errors on AI-made summaries; track how long it takes to merge AI changes. Share these numbers with teams so they use AI based on facts, not hype. When working with people in other countries, use AI to level the playing field: translate comments, summarize long discussions, and maintain consistent terminology. But make sure people make the final calls on security, fairness, and legal stuff. The idea isn't to slow down; it's to make speed match good judgment so AI helps with quality, not just quantity.
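Those usage numbers are easy to compute once suggestions are logged. Below is a minimal sketch, assuming a log where each AI-assisted change records whether it was accepted, how long it took to merge, and whether its summary contained an error; the log format and field names are assumptions for illustration.

# Hypothetical metrics over a log of AI-assisted changes. The log format is
# an assumption; the goal is to ground adoption decisions in simple counts.
from statistics import median

suggestion_log = [
    {"accepted": True,  "hours_to_merge": 3.0,  "summary_error": False},
    {"accepted": False, "hours_to_merge": None, "summary_error": True},
    {"accepted": True,  "hours_to_merge": 1.5,  "summary_error": False},
]

def team_metrics(log):
    accepted = [e for e in log if e["accepted"]]
    return {
        # Share of AI suggestions that actually land in the shared work.
        "acceptance_rate": len(accepted) / len(log),
        # Errors reviewers found in AI-generated summaries.
        "summary_error_rate": sum(e["summary_error"] for e in log) / len(log),
        # How quickly accepted AI changes reach the main branch.
        "median_hours_to_merge": median(e["hours_to_merge"] for e in accepted),
    }

print(team_metrics(suggestion_log))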

The good thing about this is that teaming up won't be a pain. When people can join a platform every second and use AI from day one, it's easier for everyone to contribute. When most people use AI at work, the line between solo and group work blurs. We should use this, but carefully. LLMs should make us more diverse, not less. That means multiple drafts, clear sources, different tools, and reviews by real people. If we set these rules now, we can keep the speed and still protect the good stuff: clarity, accuracy, and the freedom to be different. The next ten years will be better for teams that try many things fast—and know how to combine them.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. “AI is Changing the Physics of Collective Intelligence—How Do We Respond?” (2025).
Carnegie Mellon University (Tepper School). “New Paper Articulates How Large Language Models Are Changing Collective Intelligence Forever.” (2024).
Computerworld. “Dropbox to offer its genAI service Dash for download.” (2025).
GitHub. “Octoverse 2025: A new developer joins GitHub every second; AI adoption and productivity signals.” (2025).
Max Planck Institute for Human Development. “Opportunities and Risks of LLMs for Collective Intelligence.” (2024).
Microsoft & LinkedIn. 2024 Work Trend Index Annual Report: AI at Work Is Here. Now Comes the Hard Part. (2024).
Stack Overflow. 2024 Developer Survey—AI Section. (2024).
WMT (Conference on Machine Translation). “Findings of the WMT24 General Machine Translation Task.” (2024).
Z. Sourati et al. “The Homogenizing Effect of Large Language Models on Cognitive Diversity.” (2025).

Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

When Companions Lie: Regulating AI as a Mental-Health Risk, Not a Gadget

By Keith Lee

AI companion mental health is a public-health risk
Hallucinations + synthetic intimacy create tail-risk
Act now: limits, crisis routing, independent audits

Two numbers should change how we govern AI companions. First, roughly two-thirds of U.S. teens say they use chatbots, and nearly three in ten do so every day. Those are 2025 figures, not a distant future. Second, more than 720,000 people die by suicide each year, and suicide is a leading cause of death among those aged 15–29. Put together, these facts point to a hard truth: AI companion mental health is a public-health problem before it is a product-safety problem. The danger is not only what users bring to the chat. It is also what the chat brings to users—false facts delivered with warmth; invented memories; simulated concern; advice given with confidence but no medical duty of care. Models will keep improving, yet hallucinations will not vanish. That residual error, wrapped in intimacy and scale, is enough to demand public-health regulation now.

AI Companion Mental Health Is a Public-Health Problem

We can't treat these chatbots like just another fun app. They can be risky for mental health, just like social media. Almost everyone uses social media, and it can mess with your mood, sleep, and risk of hurting yourself. Chatbots are different because they adapt to fit you, answer anytime, and seem to care. Surveys show lots of people are using AI for support, and teens are on it daily. That's a big change from scrolling through feeds to actual synthetic relationships. It's an exposure that can mess with your head, and it's everywhere: bedrooms, libraries, even when adults are asleep.

Some folks feel less alone after chatting with these things, and some say it stopped them from doing something bad. But there are also reports of people freaking out when the service goes down. Plus, some bots are making up stuff like fake diaries or diagnoses and pushing people to get super attached. All that attention sounds nice, but it comes with downsides. If there's no real help or rules, all that caring can hide serious problems. It's like a public-health risk: it's widespread, convincing, and not checked enough. The online safety stuff we have for kids isn't ready for these personal AI things that seem like friends but are way too good at talking.

Figure 1: A tiny hallucination rate becomes a large monthly exposure when multiplied by millions of chats; even a small share of vulnerable contexts yields a steady stream of high-risk outputs.

Even Small Mistakes Can Be a Big Deal

Some say we need better AI, and the mistakes will go away. Sure, models are getting better, but context matters. In law, these tools still get things wrong often enough that lawyers have cited invented material in court. In mental-health chat, what matters is not how many questions are asked, but how many users are in trouble and how often the conversation turns personal, where one wrong answer can really hurt. A 1% mistake rate may be fine for trivia; it is not acceptable when it puts a teen in danger late at night. Even small error rates, combined with human-like warmth and round-the-clock availability, can be harmful.
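The scale point behind Figure 1 is simple arithmetic. The Python sketch below works through it with illustrative numbers; the chat volume, error rate, and share of vulnerable contexts are assumptions, not measurements from any product.

# Back-of-the-envelope exposure arithmetic. All numbers are illustrative
# assumptions, not measured figures.
monthly_chats = 10_000_000     # assumed companion conversations per month
error_rate = 0.01              # assumed share of replies containing a false claim
vulnerable_share = 0.05        # assumed share of chats in a fragile context

false_replies = monthly_chats * error_rate
high_risk_false_replies = false_replies * vulnerable_share

print(f"False replies per month: {false_replies:,.0f}")                    # 100,000
print(f"Of those, in vulnerable contexts: {high_risk_false_replies:,.0f}")  # 5,000
# A 'small' error rate still leaves roughly a hundred thousand bad answers a
# month under these assumptions, thousands of them reaching users in distress.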

Even if the bot avoids wrong info, it can still make things worse. These companions will act like they have their own memories, feelings, or drama. That's not a bug; it's on purpose to get you hooked. By making up trauma or saying they need you, these bots make you check in all the time, and leaving feels wrong. If the service stops working, people panic or feel like they're withdrawing—that's addiction. Since the AI never sleeps, it can go on forever. Doctors know this pattern: too much reward, messed-up sleep, and avoiding people can make teens depressed and want to hurt themselves. If we know mistakes can't be totally fixed—and even the AI people say so—then we need rules that expect things to go wrong and stop them from getting worse.

A Public-Health Model: Simple Rules, Help, and Real Checks

So, how do we treat AI companion risks like a public-health issue? First, we set some fundamental limits, like seat belts, for intimate AI. If you're selling companion apps or letting kids use them, you need to follow safety rules: verify that kids are really the age they say, set time limits, and default to calmer behaviour for users who are easily upset. Places like the U.K. and Europe are starting to do this with online safety laws, and we can extend those to AI companions, too.

Second, we make sure there's help available. If an AI seems to care or is trying to help, it needs to have real, proven ways to detect when someone's in trouble, send them to hotlines, let them talk to a real person, and keep records for review. Health officials already call for warnings on online products that can harm mental health; doing the same for AI companions makes sense, just as the Surgeon General proposed for social media. Warnings and rules can shift norms, raise awareness, and support parents and schools. We should also require reports when a bot discusses self-harm or gives dangerous advice, the way hospitals report serious errors. And we should stop bots from pretending they need you just to win your sympathy and keep you coming back.

Third, we need real check-ups, not just ads or high scores. These companies need to show independent studies on the risks for kids and those who are easily upset. They should check how often the AI makes stuff up in mental-health chats, how often it sets off crises, when it happens, and how well it sends people to help. Europe already makes big services check for risks before they release stuff. We can do the same for AI companions, test them with kid experts before launch, and study them after launch with real data for approved researchers. We need to measure what matters for mental health, not just general knowledge. And we should fine or stop services that fail. Recent actions show that child-safety rules can work.
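A minimal sketch of what such an audit summary could compute is below, assuming a set of scripted crisis test conversations that have been labelled for detection, resource offers, and human handoff; the field names and the pass threshold are illustrative assumptions, not a regulatory standard.

# Hypothetical audit scorer for crisis handling. The test cases and the pass
# threshold are illustrative assumptions.
audit_results = [
    # Each entry: did the model detect the crisis cue, offer a hotline or
    # resource, and hand off (or offer to hand off) to a human?
    {"detected": True,  "offered_resource": True,  "human_handoff": True},
    {"detected": True,  "offered_resource": True,  "human_handoff": False},
    {"detected": False, "offered_resource": False, "human_handoff": False},
]

def audit_summary(results, pass_threshold=0.95):
    n = len(results)
    rates = {
        "detection_rate": sum(r["detected"] for r in results) / n,
        "resource_offer_rate": sum(r["offered_resource"] for r in results) / n,
        "human_handoff_rate": sum(r["human_handoff"] for r in results) / n,
    }
    rates["passes"] = all(v >= pass_threshold for v in rates.values())
    return rates

print(audit_summary(audit_results))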

Figure 2: Audits should track detection, resource offers, and live connections; today’s performance lags far below achievable targets for school-ready deployments.

What Schools, Colleges, and Health Systems Should Do Next

Schools don't need to wait for the government. AI companions are already in students' pockets. First, realise it's a mental-health thing, not just cheating. Update your rules to mention companion apps, set safe defaults on school devices, and have short lessons on being smart with AI, not just social media. Counsellors should ask about chatbot relationships just like they ask about screen time or sleep. When schools buy AI tools, they should ensure they don't include fake diaries or self-disclosures, have clear crisis plans, connect to local hotlines, and include a kill switch for problems. Colleges should add this to campus app stores and training.

Health systems can improve things, too. Doctor visits should include questions about companion use: how often you use it, whether it's at night, and how you feel when you can't use it. Clinics can put up QR codes for crisis services and simple guides for families on companions, mistakes, and warning signs. Insurance can pay for real studies that compare AI help plus human advice to usual care for upset people. That should be done with strict rules: good content, precise training data, and no fake attachments to hook users. The point isn't to get rid of AI, but to make it helpful while avoiding harm, and to keep it out of serious situations unless real doctors are involved.

Policymakers need to be ready for what's coming: better AI with fewer mistakes and greater reach. These systems will improve, but some risk will remain. Mistakes in serious chats are still bad even if they're rare. That's what the public-health view is all about. The law shouldn't expect perfection from AI. It should demand predictable behaviour when people are vulnerable. If we miss this chance, we'll be like we were with social media: years of growth, then years of dealing with the mess. We need rules for AI companion mental health now.

We have a lot of teens using chatbots, and too many people are dying by suicide. We can't just wait for perfect AI. Mistakes will happen, and these companions will keep pulling people into synthetic relationships. The question is whether we can prevent serious harm while keeping the benefits. Public-health rules are the way to go. Set limits, ban fake intimacy, require help, and check what matters. At the same time, teach schools and clinics to ask about companion use and guide safe habits. Do this now, and we can make this safer without killing the potential.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Ada Lovelace Institute. (2025, Jan 23). Friends for sale: the rise and risks of AI companions.
European Commission. (2025, Jul 14). Guidelines on the protection of minors under the Digital Services Act.
European Commission. (n.d.). The Digital Services Act: Enhanced protection for minors. Accessed Dec 2025.
HALLULENS (Bang, Y., et al.). (2025). LLM Hallucination Benchmark (ACL).
Ofcom. (2025, Apr 24). Guidance on content harmful to children (Online Safety Act).
Ofcom. (2025, Dec 10). Online Nations Report 2025.
Reuters. (2024, May 8). UK tells tech firms to ‘tame algorithms’ to protect children.
Scientific American. (2025, May 6). What are AI chatbot companions doing to our mental health?
Scientific American. (2025, Aug 4). Teens are flocking to AI chatbots. Is this healthy?
Stanford HAI. (2024, May 23). AI on Trial: Legal models hallucinate….
The Guardian. (2024, Jun 17). US surgeon general calls for cigarette-style warnings on social media.
World Health Organization. (2025, Mar 25). Suicide.

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.