
AI Grief Companion: Why a Digital Twin of the Dead Can Be Ethical and Useful


By Catherine Maguire

Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

AI grief companions—digital twins—can ethically support mourning when clearly labeled and consent-based
Recent evidence shows chatbots modestly reduce distress and can augment scarce grief care
Regulate with strong disclosure, consent, and safety standards instead of bans

In 2024, an estimated 62 million people died worldwide. Each death leaves a gap that data rarely captures. Research during the pandemic found that one death can affect about nine close relatives in the United States. Even if that multiplier varies, the human impact is significant. Now consider a sobering fact from recent reviews: around 5% of bereaved adults meet the criteria for prolonged grief disorder, a condition that can last for years. These numbers reveal a harsh reality. Even well-funded health systems cannot meet the need for timely, effective grief care. In this context, an AI grief companion—a clearly labeled, opt-in tool that helps people remember, share stories, and manage emotions—should not be dismissed as inappropriate. It should be tested, regulated, and, where it proves effective, used. The moral choice is not to compare it to perfect therapy on demand. It is to compare it to long waits, late-night loneliness, and, too often, a $2 billion-a-year psychic market offering comfort without honesty.

Reframing the question: from “talking to the dead” to a disciplined digital twin

The phrase “talking to the dead” causes concern because it suggests deception. A disciplined AI grief companion should do the opposite. It must clearly disclose its synthetic nature, use only agreed-upon data, and serve as a structured aid for memory and meaning-making. This aligns with the concept of a digital twin: a virtual model connected to real-world facts for decision support. Digital twins are used to simulate hearts, factories, and cities because they provide quick insights. In grief, the “model” consists of curated stories, voice samples, photos, and messages, organized to help survivors recall, reflect, and manage emotions—not to pretend that those lost are still here. The value proposition is practical: low-cost, immediate, 24/7 access to a tool that can encourage healthy rituals and connect people with support. This is not just wishful thinking. Meta-analyses since 2023 show that AI chatbots can reduce symptoms of anxiety and depression by small to moderate amounts, and grief-focused digital programs can be helpful, especially when they encourage healthy exposure to reminders of loss. Some startups already provide memorial chat or conversational archives. Their existence is not proof of safety, but it highlights feasibility and demand.

Figure 1: Most bereaved people adapt; rates surge after disasters—supporting low-cost, targeted tools like governed AI grief companions.

The scale issue shows why reframing is essential now. The World Health Organization reports a global average of roughly 13 mental-health workers for every 100,000 people, with significant gaps in low- and middle-income countries. In Europe, treatment gaps for common disorders remain wide. Meanwhile, an industry focused on psychic and spiritual services generates about $2.3 billion annually in the United States alone. If we could redirect even a fraction of that spending toward a transparent AI grief companion held to clinical safety standards and disclosure rules, the ethical response would be to regulate and evaluate the practice, not to ban it.

What the evidence already allows—and what it does not

We should be cautious about our claims. There are currently no large randomized trials of AI grief companions based on a loved one’s data. However, related evidence is relevant. Systematic reviews from 2023 to 2025 show that conversational agents can reduce symptoms of depression and anxiety, with effect sizes comparable to many low-intensity treatments. A 2024 meta-analysis found substantial improvements for chatbot-based support among adults with depressive symptoms. The clinical reasoning is straightforward: guided journaling, cognitive reframing, and behavioral activation can be delivered in small, manageable steps at any time. Grief-specific digital therapy has also progressed. Online grief programs can decrease grief, depression, and anxiety, and early trials of virtual reality exposure for grief show longer-term benefits compared to conventional psychoeducation. Combined with grief statistics, such as meta-analyses placing prolonged grief disorder at around 5% among bereaved adults in general samples, this evidence supports a cautious but hopeful inference: a well-designed AI grief companion may not cure complicated grief, but it can reduce distress, encourage help-seeking, and assist with memory work—especially between limited therapy sessions.

Two safeguards are crucial. First, there must be a clear disclosure that the system is synthetic. The European Union’s AI Act requires users to be informed when interacting with AI and prohibits manipulative systems and the use of emotion recognition in schools and workplaces. Second, clinical safety is essential. The WHO’s 2024 guidance on large multimodal models emphasizes oversight, documented risk management, and testing for health use. Some tools already operate under health-system standards. For instance, Wysa’s components have UK clinical-safety certifications and are being assessed by NICE for digital referral tools. These are not griefbots, but they illustrate what “safety first” looks like in practice.

Figure 2: Small but reliable effects on depression and anxiety—useful as an adjunct between scarce therapy sessions.

The ethical concerns most people have are manageable

Three ethical worries dominate public discussions. The first is deception—that people may be fooled into thinking the deceased is “alive.” This can be addressed with mandatory labeling, clear cues, and language that avoids first-person claims about the present. The second concern is consent—who owns the deceased's data? The legal landscape is unclear. The GDPR does not protect the personal data of deceased individuals, leaving regulation to individual member states. France, for example, has implemented post-mortem data rules, but approaches remain inconsistent across jurisdictions. The policy solution is straightforward but challenging to execute: no AI grief companion should be created without explicit consent from the data donor before death, or, if that is not possible, with a documented legal basis using the least invasive data, and allowing next of kin a veto right. The third concern is the exploitation of vulnerability. Italy’s data protection authority previously banned and fined a popular companion chatbot over risks to minors and unclear legal foundations, highlighting that regulators can act swiftly when necessary. These examples, along with recent voice likeness controversies involving major AI systems, demonstrate that consent and disclosure cannot be added later; they must be integrated from the start.

Design choices can minimize ethical risks. Time-limited sessions can prevent overuse. An opt-in “memorial mode” can stop late-night drifts into romanticizing or magical thinking. A locked “facts layer” can prevent the system from creating new biographical claims and rely only on verified items approved by the family. There should never be financial nudges within a session. Each interaction should conclude with evidence-based prompts for healthy behaviors: sleep hygiene, social interactions, and, when necessary, crisis resources. Since grief involves family dynamics, a good AI grief companion should also support group rituals—shared story prompts, remembrance dates, and printable summaries for those who prefer physical copies. None of these features is speculative; they are standard elements of solid health app design and align with WHO’s governance advice for generative AI in care settings.
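
To make these constraints concrete, here is a minimal sketch of how such guardrails might be encoded as a per-session policy. The class and field names (SessionPolicy, close_session) are illustrative assumptions, not a description of any existing product.

```python
from dataclasses import dataclass

@dataclass
class SessionPolicy:
    """Illustrative per-session guardrails for an opt-in memorial mode."""
    max_minutes: int = 30                  # time-limited sessions to discourage overuse
    disclose_synthetic: bool = True        # every session is labeled as AI-generated
    allow_new_biography: bool = False      # locked facts layer: no invented life events
    allow_financial_prompts: bool = False  # no upsells or purchases inside a session

def close_session(policy: SessionPolicy) -> str:
    """End each session with disclosure and evidence-based behavioural prompts."""
    lines = []
    if policy.disclose_synthetic:
        lines.append("Reminder: this conversation was generated by an AI system "
                     "from records your family approved.")
    lines.append("Before you go: a short walk, some water, or a message to "
                 "someone you trust can help tonight.")
    lines.append("If you feel unsafe, please contact your local crisis line.")
    return "\n".join(lines)

print(close_session(SessionPolicy()))
```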

A careful approach to deployment that does more good than harm

If we acknowledge that the alternative is often nothing—or worse, a psychic upsell—what would a careful rollout look like? Begin with a narrow, regulated use case: “memorialized recall and support” for adults in the first year after a loss. The AI grief companion should be opt-in, clearly labeled at every opportunity, and default to text. Voice and video options raise consent and likeness concerns and should require extra verification and, when applicable, proof of the donor’s pre-mortem consent. Training data should be kept to a minimum, sourced from the person’s explicit recordings and messages rather than scraped from the internet, and secured under strict access controls. In the EU, providers should comply with the AI Act’s transparency requirements, publish risk summaries, and disclose their content generation methods. In all regions, they should report on accuracy and safety evaluations, including rates of harmful outputs and incorrect information about the deceased, with documented suppression techniques.
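
A sketch of how the consent and data-minimization requirements above could be recorded before any model is built follows; the field names are assumptions for illustration, not a legal template.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ConsentRecord:
    """Hypothetical provenance record checked before a companion is created."""
    donor_consented_pre_mortem: bool
    consent_date: date | None          # None if consent was not obtainable
    legal_basis: str                   # documented alternative basis, if any
    next_of_kin_vetoed: bool = False   # next of kin retain a veto right

@dataclass
class TrainingCorpus:
    """Minimal, explicitly sourced data; nothing scraped from the open web."""
    consent: ConsentRecord
    sources: list[str] = field(default_factory=list)  # e.g. ["voicemails", "letters"]

    def is_buildable(self) -> bool:
        c = self.consent
        has_basis = c.donor_consented_pre_mortem or bool(c.legal_basis)
        return has_basis and not c.next_of_kin_vetoed and bool(self.sources)

corpus = TrainingCorpus(
    ConsentRecord(True, date(2023, 5, 2), "pre-mortem consent"),
    sources=["voicemails", "letters"],
)
print(corpus.is_buildable())  # True only with a consent basis, no veto, and minimal sources
```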

Clinical integration is essential. Large health systems can evaluate AI grief companions as an addition to stepped-care models. For mild grief-related distress, the tool can offer structured journaling, values exercises, and memory prompts. For higher scores on recognized assessments, it should guide users toward evidence-based therapy or group support and provide crisis resources. This is not a distant goal. Health services already use AI-supported intake and referral tools; UK evaluators have placed some in early value assessment tracks while gathering more data. The best deployments will follow this model: real-world evaluations, clear stopping guidelines, and public dashboards.

Critics may argue that any simulation can worsen attachment and delay acceptance. That concern is valid. However, the theory of “continuing bonds” suggests that maintaining healthy connections—through letters, photographs, and recorded stories—can aid in adaptive grieving. Early research into digital and virtual reality grief interventions, when used carefully, indicates advantages for avoidance and meaning-making. The boundary to uphold is clear: no false claims of presence, no fabrications of new life events, and no promises of afterlife communication. The AI grief companion is, at best, a well-organized echo—helpful because it collects, structures, and shares what the person truly said and did. When used mindfully, it can help individuals express what they need and remember what they fear losing.

Anticipate another critique: chatbots are fallible and sometimes make errors or sound insensitive. This is true. That’s why careful design is essential in this area. Hallucination filters should block false dates, diagnoses, and places. A “red flag” vocabulary can guide discussions away from areas where the system lacks information. Session summaries should emphasize uncertainty rather than ignore it. Additionally, the system must never offer clinical advice or medication recommendations. The goal is not to replace therapy. It is to provide a supportive space, gather stories, and guide people toward human connection. Existing evidence from conversational agents in mental health—though not specific to griefbots—supports this modest claim.
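
As an illustration only, a post-generation check along these lines might look like the following sketch; the red-flag terms and verified facts are placeholders, and a real system would need clinically reviewed lists.

```python
import re

# Placeholder red-flag vocabulary: topics the system has no basis to discuss.
RED_FLAGS = ("diagnos", "medication", "dosage", "afterlife", "is still alive")

# Verified, family-approved items; anything beyond these is treated as unverified.
VERIFIED_FACTS = {"birth_year": "1954", "home_town": "Cork"}

def review_reply(reply: str) -> tuple[str, bool]:
    """Return (possibly amended reply, passed flag) under these toy rules."""
    lowered = reply.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        redirected = ("I don't have reliable information about that. "
                      "Would you like to revisit a memory instead?")
        return redirected, False
    # Flag specific-looking claims (years) that are not in the verified set.
    if re.search(r"\b(19|20)\d{2}\b", reply) and not any(
            value in reply for value in VERIFIED_FACTS.values()):
        reply += " (Note: I could not verify this detail against the approved records.)"
    return reply, True

print(review_reply("She retired in 1988."))
```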

There is also a justice aspect. Shortages are most severe where grief is heavy and services are limited. WHO data show stark global disparities in the mental health workforce. Digital tools cannot solve structural inequities, but they can improve access—helping those who feel isolated at 3 AM. For migrants, dispersed families, and communities affected by conflict or disaster, a multilingual AI grief companion could preserve cultural rituals and voices across distances. The ethical risks are real, but so is the moral argument. We should establish regulations that ensure safe access rather than push the practice underground.

The figures that opened this essay will not change soon. Tens of millions mourn each year, and a significant number struggle with daily life. Given this context, a well-regulated AI grief companion is not a gimmick. It is a practical tool that can make someone’s worst year a bit more bearable. The guidelines are clear: disclosure, consent, data minimization, and strict limits on claims. The pathway to implementation is familiar: assess as an adjunct to care, report outcomes, and adapt under attentive regulators using the AI Act’s transparency rules and WHO’s governance guidance. The alternative is not a world free of digital grief support. It is a world where commercial products fill the gap with unclear models, inadequate consent, and suggestive messaging. We can do better. A digital twin based on love and truth—clearly labeled and properly regulated—will never replace a hand to hold. But it can help someone through the night and into the morning. That is a good reason to build it well.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Digital Twin Consortium. (2020). Definition of a digital twin.
Digital Twin Consortium. (n.d.). What is the value of digital twins?
Eisma, M. C., et al. (2025). Prevalence rates of prolonged grief disorder… Frontiers in Psychiatry.
European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law.
Feng, Y., et al. (2025). Effectiveness of AI-Driven Conversational Agents… Journal of Medical Internet Research.
Generative Ghosts: Anticipating Benefits and Risks of AI… (2024). arXiv. (Overview of HereAfter and other grieftech services.)
The Guardian. (2024, June 14). Are AI personas of the dead a blessing or a curse?
IBISWorld. (2025). Psychic Services in the US: Industry size and outlook.
Li, H., et al. (2023). Systematic review and meta-analysis of AI-based chatbots for mental health. npj Digital Medicine.
NICE. (2025). Digital front door technologies to gather information for assessments for NHS Talking Therapies: Evidence generation plan.
Our World in Data. (2024). How many people die each year?
Privacy Regulation EU. (n.d.). GDPR Recital 27: Not applicable to data of deceased persons.
Torous, J., et al. (2025). The evolving field of digital mental health: current evidence… Harvard Review of Psychiatry (open-access summary).
Verdery, A. M., et al. (2020). Tracking the reach of COVID-19 kin loss with a bereavement multiplier. PNAS.
WHO. (2024, January 18). AI ethics and governance guidance for large multimodal models.
WHO. (2025, September 2). Over a billion people living with mental health conditions; services require urgent scale-up.
Wysa. (n.d.). First AI mental health app to meet NHS DCB 0129 clinical-safety standard.
Zhong, W., et al. (2024). Therapeutic effectiveness of AI-based chatbots… Journal of Affective Disorders.

AI Capital and the Future of Work in Education


By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

AI capital cheapens routine thinking and shifts work toward physical, contact-rich tasks
Gains are strong on simple tasks but stall without investment in real-world capacity
Schools should buy AI smartly, redesign assessments, and fund high-touch learning

Nearly 40% of jobs worldwide are exposed to artificial intelligence. This estimate from the International Monetary Fund highlights a simple fact: the cost of intelligence has decreased so much that software can now handle a greater share of routine thinking. We can think of this software as AI capital—an input that works alongside machines and people. Intelligence tasks are the first to be automated, while human work shifts toward tasks that require physical presence. The cost of advanced AI models has dropped sharply since 2023, and hardware delivers more computing power for each euro spent every year. This trend lowers the effective cost of AI capital, while classroom, lab, and building expenses remain relatively stable. In this environment, shifts in wages and hiring occur not because work is vanishing, but because the mix of production is changing. If educational institutions continue teaching as if intelligence were scarce and physical resources were flexible, graduates will be trained for a labor market that no longer exists. Ensuring equitable access to AI, and deliberately reallocating the resources it frees, is essential if the transition is not to widen existing disparities.

Reframing AI Capital in the Production Function

The usual story of production—a combination of labor and physical capital—overlooks a third input that we now need to recognize. Let's call it A, or AI capital. This refers to disembodied, scalable intelligence that can perform tasks previously handled by clerks, analysts, and junior professionals. In a production function with three inputs, represented as Y = f(L, K, A), intelligence tasks are the first to be automated because the price of A is dropping faster than that of K. Many cognitive tasks can also be broken down into modules, making them easier to automate. A recent framework formalizes this idea in a two-sector model: intelligence output and physical output combine to produce goods and services. When A becomes inexpensive, the saturation of intelligence tasks increases, but the gains depend on having complementary physical capacity. This leads to a reallocation of human labor toward physical tasks, creating mixed effects on wages: wages may rise initially, then fall as automation deepens. Policies that assume a simple decline in wages miss this complex pattern.

Figure 1: Total labor L endogenously splits between intelligence production I and physical production P. As AI capital lowers the cost of intelligence tasks, labor shifts toward physical work, while overall output Y depends on both streams.
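
The following is a minimal numerical sketch of that two-sector reading of Y = f(L, K, A). The Cobb-Douglas and CES forms, and all parameter values, are assumptions chosen only to illustrate how cheaper AI capital pulls labor out of intelligence production while output keeps rising at a slowing rate.

```python
import numpy as np

def output(L_I, L_P, K, A, alpha=0.5, rho=0.5):
    """Toy two-sector production: an intelligence stream and a physical stream.

    AI capital A substitutes for labor in the intelligence stream; the two
    streams are then combined with a CES aggregator. Forms are illustrative.
    """
    intelligence = (L_I + A) ** alpha
    physical = (L_P ** alpha) * (K ** (1 - alpha))
    return (intelligence ** rho + physical ** rho) ** (1 / rho)

L, K = 100.0, 50.0
for A in (0.0, 50.0, 200.0):
    splits = np.linspace(0.0, L, 201)            # candidate allocations of labor to L_I
    ys = [output(s, L - s, K, A) for s in splits]
    best = int(np.argmax(ys))
    print(f"A={A:5.0f}  optimal L_I={splits[best]:5.1f}  "
          f"L_P={L - splits[best]:5.1f}  Y={max(ys):6.1f}")
```

Running the sketch shows labor allocated to intelligence work falling toward zero as A grows, while total output rises by progressively less per unit of additional AI capital.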

The real question is not whether AI “replaces jobs” but whether adding another unit of AI capital increases output more than hiring one additional person. For tasks that are clearly defined, the answer is already yes. Studies show significant productivity boosts: mid-level writers completed work about 40% faster using a general-purpose AI assistant, and developers finished coding tasks approximately 56% faster with an AI pair-programming assistant. However, these gains decrease with more complex tasks, where AI struggles with nuance—this reflects the “jagged frontier” many teams are encountering. This pattern supports the argument for prioritizing AI: straightforward cognitive tasks will be automated first, while complex judgment tasks will remain human-dominated for now. We define “productivity” as time to completion and quality as measured by standardized criteria, noting that effect sizes vary with task complexity and user expertise.
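
That hire-or-buy comparison can be made explicit as marginal output per euro. A minimal sketch follows, with a generic Cobb-Douglas and invented prices standing in for real data; none of the numbers come from the studies cited above.

```python
def toy_output(L, A, K=50.0, a=0.4, b=0.2):
    """Illustrative Cobb-Douglas with labor, AI capital, and physical capital."""
    return (L ** a) * (A ** b) * (K ** (1 - a - b))

def marginal_per_euro(f, base, var, unit_price, eps=1e-4):
    """Extra output per euro from a small increase in a single input."""
    bumped = dict(base, **{var: base[var] + eps})
    return (f(**bumped) - f(**base)) / eps / unit_price

base = {"L": 100.0, "A": 20.0}
wage, ai_unit_price = 40.0, 5.0    # invented prices per unit of labor and AI capital
print("labor      :", round(marginal_per_euro(toy_output, base, "L", wage), 4))
print("AI capital :", round(marginal_per_euro(toy_output, base, "A", ai_unit_price), 4))
```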

When A expands while K and L do not, the share of labor can decline even when overall output stays constant. In simple terms, the same amount of production can require fewer workers. But this isn't an inevitable outcome. If physical and intellectual outputs complement rather than replace one another, investments in labs, clinics, logistics, and classrooms can help stabilize wages. This points to a critical shift for education systems: focusing on environments and approaches where physical presence enhances what AI alone cannot provide—care, hands-on skill, safety, and community.

Evidence: Falling AI Capital Prices, Mixed Productivity, Shifting Wages

The price indicators for AI capital are clear. By late 2023, API prices for popular models had dropped significantly, and hardware performance improved by about 30% each year. Prices won’t decline uniformly—newer models might be more expensive—but the overall trend is enough to change how businesses operate. Companies that previously paid junior analysts to consolidate memos are now using prompts and templates instead. Policymakers should interpret these signals as they would energy or shipping prices: as active factors influencing wages and hiring. We estimate the “price of A” by looking at published per-token API rates and hardware cost-effectiveness; we do not assume uniform access across all institutions.
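
As a worked illustration of how a “price of A” can be read off published per-token rates, consider the arithmetic below; the rates and task sizes are placeholders, not actual vendor prices.

```python
def cost_per_task(input_tokens: int, output_tokens: int,
                  usd_per_million_in: float, usd_per_million_out: float) -> float:
    """Effective cost of one routine cognitive task at per-token API rates."""
    return (input_tokens * usd_per_million_in
            + output_tokens * usd_per_million_out) / 1_000_000

# Placeholder rates and task size: consolidating a memo (~3k tokens in, ~1k out).
task = cost_per_task(3_000, 1_000, usd_per_million_in=2.50, usd_per_million_out=10.00)
print(f"~${task:.4f} per task; ~${task * 200 * 22:.2f} per month at 200 tasks per working day")
```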

Figure 2: As a larger share of tasks is automated, total output rises and more workers shift into physical, hands-on roles. The gains flatten at high automation, showing why investment in real-world capacity is needed to keep productivity growing.

The productivity evidence is generally positive but varies widely. Controlled experiments show significant improvements in routine content creation and coding. At the same time, observational studies and workforce surveys highlight that integrating AI can be challenging, and the benefits are not always immediate. Some teams waste time fixing AI-generated text or adjusting to new workflows, while others achieve notable speed improvements. The result is an increase in task-level performance coupled with friction at the system level. Sector-specific data supports this: the OECD reports that a considerable share of job vacancies sit in roles heavily exposed to AI, and that skill demands are shifting even where workers lack specialized AI skills. Labor-market rewards have also begun to shift: studies show wage premiums for AI-related skills, typically ranging from 15% to 25%, depending on the market and methodology.

The impact is not evenly distributed. The IMF predicts high exposure to AI in advanced economies where cognitive work predominates. The International Labour Organization (ILO) finds that women are more affected because clerical roles—highly automatable cognitive work—are often filled by women in wealthier countries. There are also new constraints in energy and infrastructure: data center demand could more than double by the end of the decade under specific scenarios, while power grid limitations are already delaying some projects. These issues further reinforce the trend toward prioritizing intelligence, which can outpace the physical capacities needed to support it. As AI capital expands, the potential returns begin to decrease unless physical capacity and skill training keep up. We draw on macroeconomic projections (IMF, IEA) and occupational exposure data (OECD, ILO); however, the uncertainty ranges can be vast and depend on various scenarios.

Managing AI Capital in Schools and Colleges

Education is at the center of this transition because it produces both types of inputs: intelligence and physical capacity. We should consider AI capital as a means to enhance routine thinking and free up human time for more personal work. Early evidence looks promising. A recent controlled trial revealed that an AI tutor helped students learn more efficiently than traditional in-class lessons led by experts. Yet, the adoption of such technologies is lagging. Surveys show low AI use among teachers in classrooms, gaps in available guidance, and limited training for institutions. Systems that address these gaps can more effectively translate AI capital into improved student learning while ensuring that core assessments remain rigorous. The controlled trial evaluated learning outcomes on aligned topics and used standardized results; survey findings are weighted to reflect national populations.

Three policy directions emerge from the focus on AI capital. First, rebalance the investment mix. If intelligence-based content is becoming cheaper and more effective, allocate limited funds to places where human interaction adds significant value, such as clinical placements, maker spaces, science labs, apprenticeships, and supervised practice. Second, raise professional standards for AI use. Train educators to integrate AI capital with meaningful feedback rather than letting the technology replace their discretion. The objective should not be to apply “AI everywhere,” but to focus on “AI where it enhances learning.” Third, promote equity. Given that clerical and low-status cognitive jobs are more vulnerable and tend to involve a higher percentage of women, schools relying too much on AI for basic tasks risk perpetuating gender inequalities. Track access, outcomes, and time used across demographic groups; leverage this data to direct support—coaching, capstone projects, internship placements—toward students who may be disadvantaged by the very tools that benefit others.

Administrators should approach planning with a production mindset rather than simply relying on app lists. Consider where AI capital takes over, where it complements human effort, and where it may cause distractions. Use straightforward metrics. If a chatbot can produce a passable lab report, shift grading time toward face-to-face feedback. If a scheduler can create timetables in seconds, invest staff time in mentorship. If a coding assistant helps beginners work faster, redesign tasks to emphasize design decisions, documentation, and debugging under pressure. In each case, the goal is to direct human labor toward the areas—both physical and relational—where value is concentrating.

Policy: Steering AI Capital Toward Shared Benefits

A clear policy framework is developing. Start with transparent procurement that treats AI capital as a utility, establishing clear terms for data use, uptime, and backup plans. Tie contracts to measurable learning outcomes or service results rather than just counting seat licenses. Next, create aligned incentives. Provide time-limited tax breaks or targeted grants for AI implementations that free up staff hours for high-impact learning experiences (like clinical supervision, laboratory work, and hands-on training). Pair these incentives with wage protection or transition stipends for administrative staff who upgrade their skills for student-facing jobs. This approach channels savings from AI capital back into the human interactions that are more difficult to automate.

Regulators should anticipate the obstacles. Growth in data centers and rising electricity needs present real logistical challenges. Education ministries and local governments can collaborate to pool their demand and negotiate favorable computing terms for schools and colleges. They can also publish disclosures regarding the use of AI in curricula and assessments, helping students and employers understand where AI was applied and how. Finally, implement metrics that account for exposure. Track what portion of each program’s assessments comes from physical or supervised activities. Determine how many contact hours each student receives and measure the administrative time freed up by implementing AI. Institutions that manage these ratios will enhance both productivity and the value of education.
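
A sketch of how those exposure ratios could be computed from routine program records follows; the field names are illustrative, not a reporting standard.

```python
from dataclasses import dataclass

@dataclass
class ProgramTerm:
    """Illustrative per-term record for one study program."""
    assessment_points_total: float
    assessment_points_supervised: float   # labs, clinics, live presentations, vivas
    contact_hours: float
    enrolled_students: int
    admin_hours_before_ai: float
    admin_hours_after_ai: float

def exposure_ratios(t: ProgramTerm) -> dict[str, float]:
    """The three ratios suggested above, per program and term."""
    return {
        "supervised_assessment_share": t.assessment_points_supervised / t.assessment_points_total,
        "contact_hours_per_student": t.contact_hours / t.enrolled_students,
        "admin_hours_freed": t.admin_hours_before_ai - t.admin_hours_after_ai,
    }

print(exposure_ratios(ProgramTerm(100, 45, 1_800, 120, 900, 620)))
```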

Skeptics might question whether the productivity gains are exaggerated and whether new expenses—such as errors, monitoring, and training—cancel them out. They sometimes do. Research and news reports highlight teams whose workloads increased because they needed to verify AI outputs or familiarize themselves with new tools. Others highlight mental health issues arising from excessive tool usage. The solution is not to dismiss these concerns, but to focus on design: limit AI capital to tasks with low error risk and affordable verification; adjust assessments to prioritize real-time performance; measure the time saved and reallocate it to more personal work. Where integration is poorly executed, gains diminish. Where it is effectively managed, early successes are more likely to persist.

Today, one of the most significant labor indicators might be this: intelligence is no longer scarce. The IMF’s figure of 40% exposure reflects the macro reality that AI capital has crossed a price-performance threshold for many cognitive tasks. The risk for education isn’t becoming obsolete; it’s misallocating resources—spending limited funds on teaching routine thinking as if AI capital were still expensive, while overlooking the physical and interpersonal work where value is now concentrated. The path forward is clear. Treat AI capital as a standard resource. Use it wisely. Deploy it where it enhances routine tasks. Shift human labor to areas where it is still needed most: labs, clinics, workshops, and seminars where people connect and collaborate. Track the ratios; evaluate the trade-offs; protect those who are most at risk. If we follow this route, wages won’t simply fall with automation; they can rise alongside complementary investment. Schools will fulfill their mission: preparing individuals for the reality of today’s world, not an idealized version of it.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bruegel. 2023. Skills or a degree? The rise of skill-based hiring for AI and beyond (Working Paper 20/2023).
Brookings Institution (Kording, K.; Marinescu, I.). 2025. (Artificial) Intelligence Saturation and the Future of Work (Working paper).
Carbon Brief. 2025. “AI: Five charts that put data-centre energy use and emissions into context.”
GitHub. 2022. “Research: Quantifying GitHub Copilot’s impact on developer productivity and happiness.”
IEA. 2024. Electricity 2024: Analysis and Forecast to 2026.
IEA. 2025. Electricity mid-year update 2025: Demand outlook.
IFR (International Federation of Robotics). 2024. World Robotics 2024 Press Conference Slides.
ILO. 2023. Generative AI and Jobs: A global analysis of potential effects on job quantity and quality.
ILO. 2025. Generative AI and Jobs: A Refined Global Index of Occupational Exposure.
IMF (Georgieva, K.). 2024. “AI will transform the global economy. Let’s make sure it benefits humanity.” IMF Blog.
MIT (Noy, S.; Zhang, W.). 2023. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence (working paper).
OECD. 2024. Artificial intelligence and the changing demand for skills in the labour market.
OECD. 2024. How is AI changing the way workers perform their jobs and the skills they require? Policy brief.
OECD. 2024. The impact of artificial intelligence on productivity, distribution and growth.
OpenAI. 2023. “New models and developer products announced at DevDay.” (Pricing update).
RAND. 2025. AI Use in Schools Is Quickly Increasing but Guidance Lags Behind.
RAND. 2024. Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers and Principals.
Scientific Reports (Kestin, G., et al.). 2025. “AI tutoring outperforms in-class active learning.”
Epoch AI. 2024. “Performance per dollar improves around 30% each year.” Data Insight.
University of Melbourne / ADM+S. 2025. “Does AI really boost productivity at work? Research shows gains don’t come cheap or easy.”
