AI Chatfishing, AI Companions, and the Consent Gap: Why Disclosure Decides the Harm
Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AI chatfishing hides bots in dating, removing consent and raising risks
Declared AI companions can help, but still need strict guardrails
Require clear disclosure, platform accountability, and education to close the consent gap
When a machine can pass for a person more than 70 percent of the time, the burden of proof shifts. In a 2025 study designed as a modern Turing test, judges who chatted simultaneously with a human and a large language model identified the AI as the human 73 percent of the time. The study, by Jones and Bergen (2025), shows how quickly the technology has advanced. This is not a simple trick; it marks the point where “Are you real?” stops being a fair defense. Move this capability into online dating, and the stakes rise quickly. A wrong guess can carry emotional, financial, and sometimes physical costs. AI chatfishing, the use of AI to impersonate a human in romantic chat, thrives precisely because the line between person and program has blurred. To protect intimacy in the digital age, we need a straightforward rule that both the law and platforms can support: disclose whenever it’s a bot, every time, and make dishonesty costly for those who create the deception.
AI Chatfishing versus AI Companions: Consent, Expectations, and Incentives
AI chatfishing and AI companions are not the same. Both operate on the same technology and can appear warm, attentive, and even sweetly awkward. Yet, the key difference lies in consent. With AI companions, users opt in; they choose a system that openly simulates care. In contrast, AI chatfishing lures someone into a conversation that appears authentic but isn't. This difference alters expectations, power dynamics, and potential harm. It also changes incentives. Companion apps focus on satisfaction and retention by helping users feel better. Deceptive actors prioritize gaining resources—money, images, time, or attention—while concealing their methods. When the true nature of the speaker is hidden, consent is absent, and the risks range from broken trust to fraud and abuse. The distinction is moral, legal, and practical: disclosure turns a trick into a tool, while the lack of disclosure turns a tool into a trap. The rule must be as clear as the chat window itself.
Consequently, clear disclosure is essential. Europe has laid a foundation: the EU AI Act requires that people be informed when they interact with a chatbot so that they can make an informed choice. In October 2025, California went further. A new state law requires “companion chatbots” to state that they are not human, incorporate protections for minors, and submit public safety reports—including data on crisis referrals—so parents, educators, and regulators can see what occurs at scale. If a reasonable user might mistake a chatbot for a human, the system must disclose that it is not, and that reminder must recur for minors. These rules outline the global standard the intimate internet needs: when software communicates like a person in contexts that matter, it must identify itself, and platforms must make that identity unmistakable.
The Growing Role of AI in Dating and Why AI Chatfishing Thrives
Usage is increasing rapidly. Match Group’s 2025 Singles in America study, conducted with Kinsey Institute researchers, reveals that about one in four U.S. singles now uses AI to improve some aspect of dating, with the highest growth among Gen Z. A separate 2025 Norton survey shows that six in ten online daters believe they have had at least one conversation written by AI. Major outlets have noted this shift as platforms introduce AI helpers to enhance profiles and generate conversation starters. At the same time, users adopt tools to improve matches and avoid awkward starts. Meanwhile, role-play and companion platforms keep millions engaged for extended periods—indicating that conversational AI is not just a novelty but a new form of social software. This scale is significant: the more common AI chat becomes, the easier it is for AI chatfishing to blend in unnoticed.
Figure 1: Suspicion of AI far outpaces self-reported use, creating the consent gap where AI chatfishing thrives.
Detection is also challenging in practice. The same Turing-style study explains why: when models adopt a believable persona—young, witty, and adept online—they imitate not only grammar but also timing, tone, and empathy in ways that feel human. Our own psychology contributes, too: we enter new chats with hope and confirmation bias, wanting to see the best. A university lab study in 2024 found that non-experts correctly identified AI-generated text only about 53 percent of the time under test conditions, barely above chance. Platforms are working on this. Bumble’s “Deception Detector” uses machine learning to identify and block fake, spam, and scam profiles; the company says it catches the majority of such accounts before they reach users. That helps, but it remains a platform promise rather than a guarantee, and deceptive actors adapt. The combination of rising usage, human misjudgment, and imperfect filters leaves a persistent space where AI chatfishing can thrive.
Why AI Companions Are Different and Still Risky for the Public Good
Stating that an entity is artificial changes the ethics. AI companions acknowledge that they are software, and users can exit at any time. Many people report real benefits: practice for socially anxious daters, relief during grief, a space to express feelings, or prompts for self-reflection. A widely read first-person essay in 2025 described paying for an AI boyfriend after a painful breakup and finding that the tool offered steady emotional support—listening, prompting, encouraging—enough to help the writer regain confidence. Public radio coverage reinforces the point: for some, companions serve as a bridge back to human connection rather than a replacement.
However, companions still shape the public space. If millions spend hours practicing conversations with tireless software, their tolerance for messy human interaction may decline. If teenagers learn their early scripts about consent, boundaries, and empathy from systems trained on patterns, those scripts may carry into classrooms and dorms. New research warns that some systems can mirror unhealthy behaviors or blur sexual boundaries when protections are weak, especially for minors.
Additionally, because many companion platforms frequently update their models and policies, the “personality” users rely on can change suddenly, undermining relationships and trust. The social consequences accumulate slowly and collectively: what shifts are norms, not just individual experiences. This is why disclosure is vital but not sufficient. We also need age-appropriate design, limits on prolonged sessions, and protocols that route vulnerable users toward human help when conversations turn troubling. These requirements fit a declared, opt-in product; they cannot be enforced on hidden AI chatfishing.
Policy and Practice: A Disclosure-First Standard for the Intimate Internet
We should stop asking whether AI chatfishing or AI companions are “worse” in the abstract and instead align rules with actual harm. Start with a disclosure-first standard for any intimate or semi-intimate setting, including dating apps, private marketplaces, and companion platforms. If a reasonable person might mistake the speaker for a human, the system must say it is not, and it must repeat that cue during longer conversations. Enforce this rule at the platform level, where logs, models, and interface choices can be audited together. Align with existing frameworks: adopt the EU transparency baseline and the new California requirements so that users see consistent messages across regions. Importantly, notices should appear inside the chat box, not in footers or terms links. The goal is to make identity part of the conversation, not a hidden puzzle for users already managing emotional and relational risk.
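To make the standard concrete, the sketch below shows one way a platform might wire an in-chat disclosure check. It is a minimal sketch under stated assumptions: the ChatState fields, the cadence constants, and the should_disclose helper are illustrative inventions, not requirements taken from the EU AI Act or the California statute.

```python
from dataclasses import dataclass

@dataclass
class ChatState:
    """Per-conversation state a platform might track (hypothetical fields)."""
    speaker_is_ai: bool              # platform knows an AI is generating the replies
    user_is_minor: bool              # from age assurance, where available
    disclosed_at_start: bool         # whether the opening notice was already shown
    messages_since_disclosure: int   # messages sent since the last notice

# Assumed cadences: re-disclose periodically, more often for minors.
REDISCLOSE_EVERY_ADULT = 50
REDISCLOSE_EVERY_MINOR = 15

def should_disclose(state: ChatState) -> bool:
    """Return True when an in-chat 'you are talking to an AI' notice is due."""
    if not state.speaker_is_ai:
        return False
    if not state.disclosed_at_start:
        return True  # first notice always goes at the top of the conversation
    cadence = REDISCLOSE_EVERY_MINOR if state.user_is_minor else REDISCLOSE_EVERY_ADULT
    return state.messages_since_disclosure >= cadence

def disclosure_notice(user_is_minor: bool) -> str:
    """The notice belongs inside the chat box, not in a footer or terms link."""
    base = "Reminder: you are chatting with an AI, not a human."
    if user_is_minor:
        base += " If anything feels unsafe, you can ask to reach a person."
    return base
```

The point of the sketch is where the check runs: on every message, so the reminder recurs inside the conversation itself rather than living in the terms of service.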
Figure 2: Even before widespread AI chatfishing, U.S. romance-scam losses stayed above $1B per year—evidence that disclosure and platform enforcement target a high-cost problem.
Markets also need incentives to combat deception that disclosure alone cannot address. Romance scammers extracted at least $1.14 billion from U.S. consumers in 2023, with a median reported loss of $2,000 per person; the actual toll is likely higher because many victims do not report. Platforms should face consequences when they ignore clear signals: rapid message frequency from new accounts, reuse of scripted templates, or coordination among fraud rings across apps. Regulators can facilitate this by measuring meaningful outcomes. They should publish standardized safety dashboards on bot takedown rates, time to block after a user’s first report, and median losses recovered. Require annual third-party audits to investigate evasion tactics and false positives. California’s initiative to mandate public reporting on crisis referrals is a positive step; similar disclosures on deception would help turn safety from a slogan into a measurable factor. For declared companions, require opt-in for minors, along with additional reminders and session limits, as California law requires. These are not theoretical ideals; they are practical measures aimed at reducing the space where AI chatfishing can profit.
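As a rough picture of what a standardized safety dashboard could report, the sketch below computes three of the metrics named above from a hypothetical log of deception reports. The DeceptionReport schema and its field names are invented for the example and do not correspond to any existing reporting format.

```python
from dataclasses import dataclass
from statistics import median
from typing import Optional

@dataclass
class DeceptionReport:
    """One user report of a suspected bot or scam account (hypothetical schema)."""
    reported_at_h: float            # hour (arbitrary clock) of the first user report
    blocked_at_h: Optional[float]   # when the platform blocked the account, if ever
    loss_usd: float                 # reported financial loss, 0 if none
    recovered_usd: float            # amount later recovered for the user

def safety_dashboard(reports: list[DeceptionReport]) -> dict:
    """Compute takedown rate, time to block, and loss metrics from report logs."""
    blocked = [r for r in reports if r.blocked_at_h is not None]
    losses = [r.loss_usd for r in reports if r.loss_usd > 0]
    return {
        "takedown_rate": len(blocked) / len(reports) if reports else 0.0,
        "median_hours_to_block": median(r.blocked_at_h - r.reported_at_h for r in blocked) if blocked else None,
        "median_reported_loss_usd": median(losses) if losses else 0.0,
        "share_of_losses_recovered": sum(r.recovered_usd for r in reports) / sum(losses) if losses else 0.0,
    }

# Example: one account blocked six hours after the first report, one never actioned.
print(safety_dashboard([
    DeceptionReport(reported_at_h=100.0, blocked_at_h=106.0, loss_usd=2000.0, recovered_usd=500.0),
    DeceptionReport(reported_at_h=200.0, blocked_at_h=None, loss_usd=0.0, recovered_usd=0.0),
]))
```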
Education Closes the Loop
K-12 digital citizenship programs should go beyond phishing links to focus on interpersonal authenticity: how to identify AI chatfishing, how to ask for identity confirmation during a chat without feeling ashamed, and how to exit quickly if something feels off. Colleges should update conduct codes and reporting systems to include AI-related harassment and deception, particularly when students use models to manipulate others. Procurement teams can act now by choosing tools that display identity labels by default in the chat interface and offer a one-tap option for verified human support. Faculty can model best practices in office-hour chatbots and course communities by using clear badges and periodic reminders. These small, visible actions help students adopt a norm: in close conversations, being honest about who—and what—we are is the foundation of care.
The question of which is “worse” has a practical answer. Hidden AI in romance is worse because it strips away consent, invites fraud, and exploits intimacy for profit. Declared AI companions still pose risks, but the harms are bounded by choice and design. If we want a healthier future for dating and connection, we should write rules that make disclosure seamless and deceit costly. We must teach people, especially teenagers and young adults, what an AI disclosure looks like and why it matters. Finally, we should measure outcomes rather than just announce features, so platforms that protect users succeed while those that enable AI chatfishing lose ground. The experiments have already shown that machines can impersonate us; the real test now is whether we can remain genuine ourselves.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
BusinessWire. (2024, February 6). Bumble Inc. launches Deception Detector™: An AI-powered shield against spam, scam, and fake profiles.
European Commission. (n.d.). AI Act — transparency obligations.
Federal Trade Commission. (2024, February 13). Love stinks—when a scammer is involved.
Harper’s Bazaar. (2025, January 24). How I learned to stop worrying and love the bot.
Jones, C. R., & Bergen, B. K. (2025). Large language models pass the Turing test. arXiv preprint.
LegiScan. (2025). California SB 243 — Companion chatbots (enrolled text).
Match Group & The Kinsey Institute. (2025, June 10). 14th Annual Singles in America study.
Penn State University. (2024, May 14). Q&A: The increasing difficulty of detecting AI versus human.
Scientific American. (2025, October). The rise of AI “chatfishing” in online dating poses a modern Turing test.
Skadden, Arps. (2025, October 13). New California “companion chatbot” law: Disclosure, safety protocol, and reporting requirements.
The Washington Post. (2025, July 3). How AI is impacting online dating and apps.
WHYY. (2025, May 7). Will AI replace human connection?
Wired. (2025, August). Character.AI pivots to AI entertainment; 20M monthly users.
Building the Third AI Stack: An Airbus-Style Playbook for Public AI Cooperation
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
Build a third AI stack for education
Adopt an Airbus-style consortium for procurement
Prioritize teacher time-savings, multilingual access, and audited safety
A single number highlights the stakes: in 2024, Airbus delivered 766 jetliners, more than twice Boeing's 348. That is a decisive advantage for a European consortium over an American company that once seemed unbeatable. The lesson isn't about airplanes; it's about how countries can combine resources, industrial policies, and standards to shape a critical market. Education is at a similar turning point. If we want safe, affordable, multilingual AI for classrooms, research, and public services, we need the third AI stack: a collaborative layer of compute, data, and safety infrastructure that works alongside corporate platforms and open-source communities. Europe is starting this stack with €7 billion for shared supercomputing, while the United States has launched the NAIRR pilot to provide compute and datasets to researchers. Together, these initiatives sketch an Airbus-style path for AI. The window is narrow. Compute and model scales are rising rapidly, and without coordinated effort the gap between what schools need and what private stacks offer will grow.
Why the third AI stack matters now
The third AI stack is not a distant possibility; it is a present need, driven by market dynamics that are concentrating advanced AI in the hands of a few firms.
The market for advanced AI is concentrating where it counts most: accelerator hardware and the software that keeps users dependent on it. Analysts estimate that training compute for cutting-edge systems has increased by four to five times annually since 2010, and the latest compute-intensive runs have exceeded 10^26 floating-point operations. TrendForce reports that, across all types of AI server accelerators, one vendor holds a dominant market share, nearing 90 percent in GPU sales alone. This isn't a critique of success; it's a warning about bottlenecks and the erosion of bargaining power. When compute, essential frameworks, and curated data reside within a few proprietary stacks, public-interest applications—like multilingual tutoring, teacher planning, and assessment research—get sidelined or priced out. A third AI stack would provide public and cooperative infrastructure to guarantee baseline access to compute, data stewardship that protects privacy, and safety evaluations. Without it, education systems will keep buying isolated solutions while missing the chance to shape the foundational resources themselves.
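The arithmetic behind that trend is worth pausing on: sustained fourfold-to-fivefold annual growth compounds to eight to ten orders of magnitude over the period in question. The snippet below only illustrates that compounding; the growth range is the one quoted above, and no specific baseline is assumed.

```python
# How a 4-5x annual growth rate in training compute compounds over 2010-2024.
years = 2024 - 2010
for growth in (4.0, 5.0):
    multiplier = growth ** years
    print(f"{growth:.0f}x per year for {years} years -> ~{multiplier:.1e}x more compute")
# Prints ~2.7e+08x at 4x/year and ~6.1e+09x at 5x/year, i.e. eight to ten
# orders of magnitude; this is the scale behind frontier runs exceeding 1e26 FLOP.
```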
Figure 1: One vendor dominates GPU supply for AI servers, creating a bargaining bottleneck; a third AI stack can pool procurement to lower risk and price.
The policy framework for this third layer is already forming. The EU's AI Act took effect on August 1, 2024, with general-purpose obligations gradually implemented throughout 2025 and high-risk rules to follow. Simultaneously, the OECD's AI Principles—which many jurisdictions now follow—offer a common language for human-centered, trustworthy AI. Regulation alone can't create capacity, but it clarifies responsibilities and reduces uncertainty. When we combine guidelines with pooled resources—common compute, linked data spaces, and shared testing—we create a model for an industry that meets public needs while scaling globally. For schools and universities, this means reliable access to tools that match curriculum, languages, and community values, rather than only seeing systems that prioritize advertising or sales.
An Airbus-style consortium for public AI
Airbus began in 1970 as a Franco-German consortium, later joined by Spain and the United Kingdom, to challenge a dominant incumbent. It didn't succeed through protectionism alone; it pooled R&D risks, standardized interfaces, and coordinated procurement around long-term goals. The results are clear: after weathering the pandemic, and while its rival dealt with the 737 MAX crisis, Airbus has led global deliveries for six years in a row, delivering 766 aircraft in 2024. No analogy is perfect, but the political economy is relevant. What once seemed like a fixed duopoly was reshaped by a patient coalition that used shared financing and common standards to accelerate progress. A third AI stack can follow this model: a transatlantic, and eventually global, consortium that networks compute, curates open datasets, and establishes safety tests with independent labs, collectively providing affordable access for public users.
Figure 2: Airbus’s consortium model shows how coordinated investment can shift an industry—exactly the kind of cooperation the third AI stack needs.
Elements of that consortium already exist. Europe's EuroHPC Joint Undertaking has a budget of about €7 billion to deploy and connect supercomputers across member states. It is now establishing "AI Factories" in different countries to host fine-tuning, inference, and evaluation workloads. In the United States, the National AI Research Resource (NAIRR) pilot began in January 2024, involving 10 federal agencies and over two dozen private and nonprofit partners to provide advanced computing, datasets, models, and training to researchers and educators. Meanwhile, an international network of AI Safety Institutes is beginning to coordinate evaluation methods and share results. If we bring these efforts together under a governance charter focused on education and science, the Airbus analogy could shift from a mere metaphor to a practical plan.
Financing the third AI stack: pooled compute, open data, shared safety
Financing is crucial. The cost of training next-generation models is approaching $1 billion, and infrastructure plans require multi-year commitments. Public budgets can't—and shouldn't—compete dollar-for-dollar with large tech companies, but they can leverage their buying power. Three actions stand out. First, extend EuroHPC-style pooled procurement to create a joint window for education compute, guaranteeing a set volume of GPU hours for universities, vocational programs, and school districts, with capacity reserved for teacher resources. Second, expand NAIRR into a standing fund with matched contributions from foundations and industry, tied to open licensing for models below a defined capability threshold and protected access to sensitive datasets above it. Third, formalize the International Network of AI Safety Institutes as the neutral testing ground for models intended for public education, with evaluations focused on pedagogy, bias, and child safety. None of this is a moonshot; each step extends a program that already exists.
The regulatory schedule supports this approach. The phased obligations of the EU AI Act for general-purpose and high-risk systems create clear targets for vendors. The OECD's compute frameworks provide governments with a guide to plan capacity along three dimensions: availability and use, effectiveness, and resilience. From the supply side, the market for accelerators is growing rapidly; one forecast predicts AI data-center chips could surpass $200 billion by 2025. As capacity increases, public purchasers should negotiate pricing that favors educational use, exchanging predictability for volume—similar to how Airbus-era launch aid traded capital for long-term orders within WTO rules. The aim isn't to choose winners; it's to ensure that the third AI stack exists and remains affordable.
Open data is the second pillar. Education can't rely on data scraped from the web, which often fails to represent classrooms, vocational training, and non-English-speaking learners. Horizon Europe and Digital Europe are already funding structured data spaces; these should expand to include multilingual educational data, incorporating consent, age-appropriate safeguards, and tracking of data sources. Curated datasets for lesson plans, assessment items, and classroom discussions—licensed for research and public services—could enable smaller models to perform effectively on tasks that matter in schools. The UNESCO Recommendation on AI ethics, accepted by all 194 member states, addresses rights and equity. The third AI stack needs a data governance component that translates these principles into practical rules for generating, auditing, and deleting student data.
Safety standards need to be measurable, not just theoretical. The emerging network of AI Safety Institutes—from the UK to the U.S.—has started publishing pre-deployment evaluations and building shared risk taxonomies. Education regulators can build on this work by adding specific tests for problems such as incorrect citations, harmful stereotypes in feedback, or unsafe experimental suggestions in lab simulations. The goal isn't to certify "safe AI," but to create a system of ongoing evaluation linked to deployment. This means schools would adopt only models that pass basic audits, and providers would commit to retesting after significant updates. The third stack would incorporate red-teaming sandboxes and make evaluation results available to parents and teachers in clear language, turning safety into a common resource.
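One way to picture "ongoing evaluation linked to deployment" is a simple gate that a school system could run before enabling a model, invalidating the last audit whenever the model has had a major version change. The AuditRecord fields, version rule, and thresholds below are hypothetical placeholders, not a schema or benchmark published by any safety institute.

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    """Result of an education-focused evaluation (hypothetical fields)."""
    model_version: str           # version the audit was run against, e.g. "2.3.1"
    citation_accuracy: float     # share of cited sources that exist and match
    stereotype_flag_rate: float  # share of feedback samples flagged for stereotypes
    child_safety_passed: bool

# Assumed thresholds; real ones would come from teacher- and expert-built benchmarks.
MIN_CITATION_ACCURACY = 0.95
MAX_STEREOTYPE_FLAG_RATE = 0.01

def major(version: str) -> str:
    return version.split(".")[0]

def cleared_for_classroom(deployed_version: str, audit: AuditRecord) -> bool:
    """Clear a model only if the audited major version still matches and every
    threshold is met; a major update forces a fresh evaluation."""
    if major(deployed_version) != major(audit.model_version):
        return False  # significant update since the last audit, so retest first
    return (audit.child_safety_passed
            and audit.citation_accuracy >= MIN_CITATION_ACCURACY
            and audit.stereotype_flag_rate <= MAX_STEREOTYPE_FLAG_RATE)

# Example: an audit of 2.3.1 still covers a 2.4.0 point release but not 3.0.0.
audit = AuditRecord("2.3.1", citation_accuracy=0.97, stereotype_flag_rate=0.004, child_safety_passed=True)
print(cleared_for_classroom("2.4.0", audit))  # True
print(cleared_for_classroom("3.0.0", audit))  # False
```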
From cooperation to classroom impact
The Airbus analogy matters most when it benefits teachers on Monday mornings. Evidence is growing that well-guided generative tools can save time. In England's first large-scale trial, teachers using ChatGPT with a brief guide cut planning time by about 31 percent while maintaining quality. The OECD's latest TALIS survey shows that one in three teachers in participating systems already uses AI at work, with higher adoption in Singapore and the UAE. These are early signs, not guarantees, but they indicate where public value lies: planning, differentiation, formative feedback, and translation across languages and reading levels. If the third AI stack lowers the cost of these tasks and builds in safeguards, the benefits will multiply across millions of classrooms.
The policy initiatives are concrete. Ministries and districts can contract for "educator-first" model hosting on AI Factory sites that ensure data residency, fund small multilingual models refined on licensed curriculum content, and develop secure portals that give teachers access without dealing with consumer terms of service. Universities can reserve NAIRR allocations for education departments and labs, speeding up rigorous trials that assess learning improvements and bias reduction, rather than just user satisfaction. Safety institutes can hold annual "education evaluations" in which vendors are tested against shared benchmarks developed by teachers and child development experts. None of this requires building a national chatbot; it requires cooperation across borders, budgets, and standards so that classroom-ready AI is a guaranteed resource, not an optional extra.
Critics may argue that public-private cooperation could reinforce existing players or that governments may act too slowly for an industry that doubles its computing every few quarters. Both points have merit. However, the Airbus story shows that coordinated buyers can influence supply while enhancing safety and interoperability. The pace of change isn't as mismatched as it seems. EuroHPC is setting up AI-focused sites; NAIRR is bringing in partners; the International Network of AI Safety Institutes is synchronizing tests. The crucial missing element is a clear education mandate that unites these efforts for classroom applications and establishes shared goals: hours saved, reduced inequalities, measurable improvements in reading and numeracy, and clear audits of model behavior in school environments.
The alternative is stagnation. If we let the market set its own direction, education will be stuck with sporadic pilots and individual licenses that rarely scale. If we focus solely on regulation, we will create paperwork without the capacity to implement it. The third AI stack represents a middle path: collaborative infrastructure that reduces risk, lowers costs, and empowers educators. Airbus didn't eliminate Boeing; it forced a duopoly to compete on quality and efficiency. A public AI consortium won't replace private stacks; it will push them to compete on public value.
Let's return to the opening figure: an international consortium delivered 766 aircraft last year, more than twice its rival's total, because countries chose to share risk, align standards, and collaborate over decades. Education needs the same long-term perspective for AI. Build the third AI stack now: secure compute for teaching and research on EuroHPC and NAIRR, create multilingual, consented data spaces for pedagogy, and establish safety evaluation as an ongoing public service. Then write large-scale contracts so that every teacher and learner can access trustworthy AI in their language, aligned with their curriculum, and compliant with their laws. This isn't about chasing the latest trend; it's about laying the foundational framework that lowers costs, builds trust, and lets schools choose what works. Airbus shows that practical cooperation can accelerate progress. The next two years will determine whether education sets the terms or merely tries to keep up.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
AP News. (2025, January). Boeing's aircraft deliveries and orders in 2024 reflect the company's rough year. Retrieved from AP News.
Cirium. (2025, January). Shaking out the Airbus and Boeing 2024 delivery numbers. Retrieved from Cirium ThoughtCloud.
Digital Strategy, European Commission. (2024–2025). AI Act: Regulatory framework and application timeline. Retrieved from European Commission.
Education Endowment Foundation. (2024, December). Teachers using ChatGPT can cut planning time by 31%. Retrieved from EEF.
EuroHPC Joint Undertaking. (2021–2027). Discover EuroHPC JU and budget overview. Retrieved from EuroHPC.
European Commission. (2024). AI Factories—Shaping Europe's digital future. Retrieved from European Commission.
OECD. (2023). A blueprint for building national compute capacity for AI. Retrieved from OECD Digital Economy Papers.
OECD. (2024). Artificial intelligence, data and competition. Retrieved from OECD.
OECD. (2025). Results from TALIS 2024. Retrieved from OECD.
OECD.AI. (2019–2024). OECD AI Principles and adherents. Retrieved from OECD.AI.
Reuters. (2025, January). Airbus keeps top spot with 766 jet deliveries in 2024. Retrieved from Reuters.
Simple Flying. (2025, October). Underdog story: How Airbus became part of the planemaking duopoly. Retrieved from Simple Flying.
Stanford HAI. (2025). AI Index 2025—Economy chapter. Retrieved from Stanford HAI.
TrendForce. (2024, July). NVIDIA's market share in AI servers and accelerators. Retrieved from TrendForce.
U.S. National Science Foundation. (2024, January). Democratizing the future of AI R&D: NAIRR pilot launch. Retrieved from NSF.
U.S. Department of Commerce / NIST. (2024, November). Launch of the International Network of AI Safety Institutes. Retrieved from NIST.
Epoch AI. (2024–2025). Training compute trends and model thresholds. Retrieved from Epoch AI.