Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AI capital cheapens routine thinking and shifts work toward physical, contact-rich tasks
Gains are strong on simple tasks but stall without investment in real-world capacity
Schools should buy AI smartly, redesign assessments, and fund high-touch learning
Nearly 40% of jobs worldwide are at risk from artificial intelligence. This estimate from the International Monetary Fund highlights a simple fact: the cost of intelligence has fallen so far that software can now handle a greater share of routine thinking. We can think of this software as AI capital—an input that works alongside machines and people. Intelligence tasks are the first to be automated, while human work shifts toward tasks that require physical presence. The cost of advanced AI models has dropped sharply since 2023, and hardware delivers more computing power for each euro spent every year. This trend lowers the effective cost of AI capital, while classroom, lab, and building expenses remain relatively stable. In this environment, shifts in wages and hiring occur not because work is vanishing, but because the mix of production is changing. If educational institutions continue teaching as if intelligence were scarce and physical resources were flexible, graduates will be prepared for a labor market that no longer exists. The policy task is to steer AI adoption and the reallocation of resources so that the gains are shared rather than deepening existing disparities.
Reframing AI Capital in the Production Function
The usual story of production—a combination of labor and physical capital—overlooks a third input that we now need to recognize. Let's call it A, or AI capital. This refers to disembodied, scalable intelligence that can perform tasks previously handled by clerks, analysts, and junior professionals. In a production function with three inputs, represented as Y = f(L, K, A), intelligence tasks are the first to be automated because the price of A is dropping faster than that of K. Many cognitive tasks can also be broken down into modules, making them easier to automate. A recent framework formalizes this idea in a two-sector model: intelligence output and physical output combine to produce goods and services. When A becomes inexpensive, the saturation of intelligence tasks increases, but the gains depend on having complementary physical capacity. This leads to a reallocation of human labor toward physical tasks, creating mixed effects on wages: wages may rise initially, then fall as automation deepens. Policies that assume a simple decline in wages miss this complex pattern.
Figure 1: Total labor L endogenously splits between intelligence production I and physical production P. As AI capital lowers the cost of intelligence tasks, labor shifts toward physical work, while overall output Y depends on both streams.
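A minimal numerical sketch of this two-sector logic, in Python. The functional forms and parameter values below are illustrative assumptions chosen for exposition, not the calibration of the cited framework; the point is only that as AI capital A becomes more abundant per euro, the output-maximizing share of labor in intelligence work falls while total output rises.

```python
import numpy as np

def output(L, K, A, phi):
    """Illustrative two-sector production (toy functional forms).

    Labor L splits into intelligence work (share phi) and physical work.
    AI capital A substitutes for intelligence labor; the physical stream
    needs physical capital K; final output combines both streams, so each
    is required (Cobb-Douglas aggregator)."""
    I = phi * L + A                          # intelligence stream: A substitutes for labor
    P = ((1 - phi) * L) ** 0.5 * K ** 0.5    # physical stream: labor plus physical capital
    return I ** 0.5 * P ** 0.5               # combined goods and services

def best_split(L, K, A):
    """Labor share in intelligence work that maximizes output (coarse grid search)."""
    grid = np.linspace(0.0, 1.0, 1001)
    return max(grid, key=lambda phi: output(L, K, A, phi))

L, K = 100.0, 100.0
for A in (10.0, 50.0, 150.0):                # a falling price of A buys more A per euro
    phi = best_split(L, K, A)
    print(f"A={A:>5.0f}  intelligence-labor share={phi:.2f}  output={output(L, K, A, phi):.1f}")
```

With these toy numbers, the optimal intelligence-labor share falls from roughly 0.63 to 0.17 as A grows, while output keeps rising—the reallocation-with-growth pattern the model describes.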
The real question is not whether AI “replaces jobs” but whether adding another unit of AI capital increases output more than hiring one additional person. For tasks that are clearly defined, the answer is already yes. Studies show significant productivity boosts: mid-level writers completed work about 40% faster using a general-purpose AI assistant, and developers finished coding tasks approximately 56% faster with an AI partner. However, these gains shrink on more complex tasks, where AI struggles with nuance—the “jagged frontier” many teams are encountering. This pattern supports the intelligence-first ordering: straightforward cognitive tasks are automated first, while complex judgment tasks remain human-dominated for now. We define “productivity” as time to completion, with quality measured by standardized criteria, and note that effect sizes vary with task complexity and user expertise.
When A expands while K and L do not, the share of labor can decline even when overall output stays constant. In simple terms, the same amount of production can require fewer workers. But this isn't an inevitable outcome. If physical and intellectual outputs complement rather than replace one another, investments in labs, clinics, logistics, and classrooms can help stabilize wages. This points to a critical shift for education systems: focusing on environments and approaches where physical presence enhances what AI alone cannot provide—care, hands-on skill, safety, and community.
Evidence: Falling AI Capital Prices, Mixed Productivity, Shifting Wages
The price indicators for AI capital are clear. By late 2023, API prices for popular models had dropped significantly, and hardware delivers roughly 30% more performance per dollar each year. Prices won’t decline uniformly—newer models might be more expensive—but the overall trend is enough to change how businesses operate. Companies that previously paid junior analysts to consolidate memos are now using prompts and templates instead. Policymakers should interpret these signals as they would energy or shipping prices: as active factors influencing wages and hiring. We estimate the “price of A” by looking at published per-token API rates and hardware cost-effectiveness; we do not assume uniform access across all institutions.
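As a rough illustration of how such a price signal could be tracked, the sketch below combines a posted per-token API rate with the ~30%-per-year hardware trend into a single effective-price index. The numbers are placeholders, not actual quotes, and the way the two channels are combined is an assumption made for exposition.

```python
def effective_price_index(api_price_per_mtok, base_api_price, hw_improvement=0.30, years=0):
    """Rough effective-price index for AI capital (base period = 1.0; lower = cheaper).

    api_price_per_mtok : current posted per-million-token API rate
    base_api_price     : posted rate in the base period
    hw_improvement     : annual gain in compute per euro (the text cites ~30% per year)
    years              : years elapsed since the base period

    The two channels are combined multiplicatively here purely for illustration;
    a real index would weight them against actual usage data.
    """
    api_component = api_price_per_mtok / base_api_price
    hw_component = 1.0 / ((1.0 + hw_improvement) ** years)
    return api_component * hw_component

# Placeholder numbers: the posted API rate halves and two years of hardware gains accrue.
index = effective_price_index(api_price_per_mtok=5.0, base_api_price=10.0, years=2)
print(f"effective price of A relative to base period: {index:.2f}")  # ~0.30
```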
Figure 2: As a larger share of tasks is automated, total output rises and more workers shift into physical, hands-on roles. The gains flatten at high automation, showing why investment in real-world capacity is needed to keep productivity growing.
The productivity evidence is generally positive but varies widely. Controlled experiments show significant improvements in routine content creation and coding. At the same time, observational studies and workforce surveys highlight that integrating AI can be challenging and that the benefits are not always immediate. Some teams waste time fixing AI-generated text or adjusting to new workflows, while others achieve notable speed improvements. The result is a gain in task-level performance coupled with friction at the system level. Sector-specific data supports this: the OECD reports that a considerable share of job vacancies is in roles heavily exposed to AI, even as skill demands shift for workers who lack specialized AI skills. Labor-market rewards have also begun to shift: studies show wage premiums for AI-related skills, typically in the range of 15% to 25%, depending on the market and methodology.
The impact is not evenly distributed. The IMF predicts high exposure to AI in advanced economies where cognitive work predominates. The International Labour Organization (ILO) finds that women are more affected because clerical roles—highly automatable cognitive work—are often filled by women in wealthier countries. There are also new constraints in energy and infrastructure: data center demand could more than double by the end of the decade under specific scenarios, while power grid limitations are already delaying some projects. These issues further reinforce the trend toward prioritizing intelligence, which can outpace the physical capacities needed to support it. As AI capital expands, the potential returns begin to decrease unless physical capacity and skill training keep up. We draw on macroeconomic projections (IMF, IEA) and occupational exposure data (OECD, ILO); however, the uncertainty ranges can be vast and depend on various scenarios.
Managing AI Capital in Schools and Colleges
Education is at the center of this transition because it produces both types of inputs: intelligence and physical capacity. We should consider AI capital as a means to enhance routine thinking and free up human time for more personal work. Early evidence looks promising. A recent controlled trial revealed that an AI tutor helped students learn more efficiently than traditional in-class lessons led by experts. Yet, the adoption of such technologies is lagging. Surveys show low AI use among teachers in classrooms, gaps in available guidance, and limited training for institutions. Systems that address these gaps can more effectively translate AI capital into improved student learning while ensuring that core assessments remain rigorous. The controlled trial evaluated learning outcomes on aligned topics and used standardized results; survey findings are weighted to reflect national populations.
Three policy directions emerge from the focus on AI capital. First, rebalance the investment mix. If intelligence-based content is becoming cheaper and more effective, allocate limited funds to places where human interaction adds significant value, such as clinical placements, maker spaces, science labs, apprenticeships, and supervised practice. Second, raise professional standards for AI use. Train educators to integrate AI capital with meaningful feedback rather than letting the technology replace their discretion. The objective should not be to apply “AI everywhere,” but to focus on “AI where it enhances learning.” Third, promote equity. Given that clerical and low-status cognitive jobs are more vulnerable and tend to involve a higher percentage of women, schools relying too much on AI for basic tasks risk perpetuating gender inequalities. Track access, outcomes, and time used across demographic groups; leverage this data to direct support—coaching, capstone projects, internship placements—toward students who may be disadvantaged by the very tools that benefit others.
Administrators should approach their planning with a production mindset rather than simply relying on app lists. Consider where AI capital takes over, where it complements human effort, and where it may cause distractions. Use straightforward metrics. If a chatbot can produce decent lab reports, free up grading time for face-to-face feedback. If a scheduler can create timetables in seconds, invest staff time in mentorship. If a coding assistant helps beginners work faster, redesign tasks to emphasize design decisions, documentation, and debugging under pressure. In each case, the goal is to direct human labor toward the areas—both physical and relational—where value is concentrating.
Policy: Steering AI Capital Toward Shared Benefits
A clear policy framework is developing. Start with transparent procurement that treats AI capital as a utility, establishing clear terms for data use, uptime, and backup plans. Tie contracts to measurable learning outcomes or service results rather than just counting seat licenses. Next, create aligned incentives. Provide time-limited tax breaks or targeted grants for AI implementations that free up staff hours for high-impact learning experiences (like clinical supervision, laboratory work, and hands-on training). Pair these incentives with wage protection or transition stipends for administrative staff who upgrade their skills for student-facing jobs. This approach channels savings from AI capital back into the human interactions that are more difficult to automate.
Regulators should anticipate the obstacles. Growth in data centers and rising electricity needs present real logistical challenges. Education ministries and local governments can collaborate to pool their demand and negotiate favorable computing terms for schools and colleges. They can also publish disclosures regarding the use of AI in curricula and assessments, helping students and employers understand where AI was applied and how. Finally, implement metrics that account for exposure. Track what portion of each program’s assessments comes from physical or supervised activities. Determine how many contact hours each student receives and measure the administrative time freed up by implementing AI. Institutions that manage these ratios will enhance both productivity and the value of education.
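A minimal sketch of the exposure-adjusted ratios this paragraph proposes; the record fields and example numbers are hypothetical, and institutions would plug in their own data.

```python
from dataclasses import dataclass

@dataclass
class ProgramTerm:
    """Hypothetical per-term record for one program (field names are illustrative)."""
    name: str
    supervised_assessment_pts: float   # assessment points from physical / supervised activities
    total_assessment_pts: float
    contact_hours: float               # face-to-face hours delivered across the term
    enrolled_students: int
    admin_hours_before_ai: float       # staff administrative hours before AI tools
    admin_hours_after_ai: float        # staff administrative hours after AI tools

def exposure_dashboard(term: ProgramTerm) -> dict:
    """Compute the three ratios suggested in the text."""
    return {
        "supervised_assessment_share": term.supervised_assessment_pts / term.total_assessment_pts,
        "contact_hours_per_student": term.contact_hours / term.enrolled_students,
        "admin_hours_freed": term.admin_hours_before_ai - term.admin_hours_after_ai,
    }

example = ProgramTerm("BSc Nursing, Term 1", 55, 100, 4200, 120, 900, 620)
print(exposure_dashboard(example))
# {'supervised_assessment_share': 0.55, 'contact_hours_per_student': 35.0, 'admin_hours_freed': 280}
```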
Skeptics might question whether the productivity gains are exaggerated and whether new expenses—such as errors, monitoring, and training—cancel them out. They sometimes do. Research and news reports highlight teams whose workloads increased because they needed to verify AI outputs or familiarize themselves with new tools. Others highlight mental health issues arising from excessive tool usage. The solution is not to dismiss these concerns, but to focus on design: limit AI capital to tasks with low error risk and affordable verification; adjust assessments to prioritize real-time performance; measure the time saved and reallocate it to more personal work. Where integration is poorly executed, gains diminish. Where it is effectively managed, early successes are more likely to persist.
Today, one of the most significant labor indicators might be this: intelligence is no longer scarce. The IMF’s figure showing 40% exposure reflects the macro reality that AI capital has crossed a price-performance threshold for many cognitive tasks. The risk for education isn’t becoming obsolete; it’s misallocating resources—spending limited funds on teaching routine thinking as if AI capital were still expensive, and overlooking the physical and interpersonal work where value is now concentrated. The path forward is clear. Treat AI capital as a standard resource. Use it wisely. Implement it where it enhances routine tasks. Shift human labor to the areas where it is still needed most: labs, clinics, workshops, and seminars where people connect and collaborate. Track the ratios; evaluate the trade-offs; protect those who are most at risk. If we follow this route, wages won’t simply fall with automation; they can rise alongside complementary investments. Schools will fulfill their mission: preparing individuals for the reality of today’s world, not an idealized version of it.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bruegel. 2023. Skills or a degree? The rise of skill-based hiring for AI and beyond (Working Paper 20/2023).
Brookings Institution (Kording, K.; Marinescu, I.). 2025. (Artificial) Intelligence Saturation and the Future of Work (working paper).
Carbon Brief. 2025. “AI: Five charts that put data-centre energy use and emissions into context.”
Epoch AI. 2024. “Performance per dollar improves around 30% each year.” Data Insight.
GitHub. 2022. “Research: Quantifying GitHub Copilot’s impact on developer productivity and happiness.”
IEA. 2024. Electricity 2024: Analysis and Forecast to 2026.
IEA. 2025. Electricity Mid-Year Update 2025: Demand Outlook.
IFR (International Federation of Robotics). 2024. World Robotics 2024 Press Conference Slides.
ILO. 2023. Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality.
ILO. 2025. Generative AI and Jobs: A Refined Global Index of Occupational Exposure.
IMF (Georgieva, K.). 2024. “AI will transform the global economy. Let’s make sure it benefits humanity.” IMF Blog.
MIT (Noy, S.; Zhang, W.). 2023. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence (working paper).
OECD. 2024. Artificial Intelligence and the Changing Demand for Skills in the Labour Market.
OECD. 2024. How Is AI Changing the Way Workers Perform Their Jobs and the Skills They Require? (policy brief).
OECD. 2024. The Impact of Artificial Intelligence on Productivity, Distribution and Growth.
OpenAI. 2023. “New models and developer products announced at DevDay” (pricing update).
RAND. 2024. Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers and Principals.
RAND. 2025. AI Use in Schools Is Quickly Increasing but Guidance Lags Behind.
Scientific Reports (Kestin, G., et al.). 2025. “AI tutoring outperforms in-class active learning.”
University of Melbourne / ADM+S. 2025. “Does AI really boost productivity at work? Research shows gains don’t come cheap or easy.”
Algorithmic Targeting Is Not Segregation: Fix Outcomes Without Breaking the Math
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Optimization isn’t segregation
Impose variance thresholds and independent audits
Require delivery reports and fairness controls
The key statistic in the public debate isn't about clicks or conversions. It's the 10% variance cap that U.S. regulators required Meta to meet for most housing ads by December 31, under a court-monitored settlement. This agreement requires the company’s “Variance Reduction System” to reduce the gap between eligible audiences and actual viewers, by sex and estimated race or ethnicity, to 10% or less for most ads, with federal oversight until June 2026. This is an outcome rule, not a moral judgment. It doesn't claim that “the algorithm is racist.” Instead, it states, “meet this performance standard, or fix your system.” As schools and governments debate whether algorithmic targeting in education ads amounts to segregation, we should remember this vital idea. The way forward is through measurable outcomes and responsible engineering, without labeling neutral, math-driven optimization as an act of intent.
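To make the outcome rule concrete, here is a minimal sketch of a variance-style check: compare each group’s share of the eligible audience with its share of the people who actually saw the ad, and flag the campaign when the largest relative gap exceeds the cap. The shares below are hypothetical, and the settlement’s actual Variance Reduction System methodology differs in its details.

```python
def max_demographic_variance(eligible: dict, delivered: dict) -> float:
    """Largest relative gap between a group's share of the eligible audience
    and its share of the people who actually saw the ad. A simplified stand-in
    for the settlement's metric, which is defined in the compliance documents
    and differs in detail."""
    gaps = []
    for group, eligible_share in eligible.items():
        delivered_share = delivered.get(group, 0.0)
        gaps.append(abs(delivered_share - eligible_share) / eligible_share)
    return max(gaps)

# Hypothetical audience shares for one ad.
eligible  = {"group_a": 0.52, "group_b": 0.48}
delivered = {"group_a": 0.57, "group_b": 0.43}

gap = max_demographic_variance(eligible, delivered)
print(f"max variance = {gap:.1%}; within a 10% cap: {gap <= 0.10}")
```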
What algorithmic targeting actually does
Algorithmic targeting has two stages. First, advertisers and the platform define a potential audience using neutral criteria. Then, the platform’s delivery system decides who actually sees each ad based on predicted relevance, estimated value, and budget limits. At the scale of social media, this second stage is the engine. Most ads won't reach everyone in the target group; the delivery algorithm sorts, ranks, and distributes resources. Courts and agencies understand this distinction. In 2023, the Justice Department enforced an outcome standard for housing ads on Meta, requiring the new Variance Reduction System to keep demographic disparities within specific limits and report progress to an independent reviewer. This solution targeted delivery behavior instead of banning optimization or calling it segregation. The lesson is clear: regulate what the system does, not what we fear it might mean.
Critics argue that even neutral systems can lead to unequal results. This is true and has been documented. In 2024, researchers from Princeton and USC ran paired education ads and found that Meta’s delivery favored specific results: ads for some for-profit colleges reached a higher proportion of Black users than ads for similar public universities, even when the ads were neutral. When more “realistic” creatives were used, this racial skew increased. Their method controlled for confounding factors by pairing similar ads and analyzing aggregated delivery reports, a practical approach for investigating a complex system. These findings are essential. They illustrate disparate impact—an outcome gap—not proof of intent. Policy should recognize this difference.
Figure 1: Regulation targets outcomes, not intent: most ads must fall within a 10% demographic variance window.
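The paired-ad audit method described above can be boiled down to a simple statistical comparison of aggregated delivery reports: run two matched ads and test whether the share of impressions reaching a given demographic group differs by more than chance. The sketch below uses a two-proportion z-test with invented impression counts; the published audits use more careful designs and platform-provided aggregate data.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference in proportions.
    x = impressions delivered to the demographic group of interest,
    n = total impressions for that ad."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

# Invented delivery reports for a pair of matched education ads.
diff, z, p = two_proportion_ztest(x1=430, n1=1_000,   # ad for a for-profit college
                                  x2=360, n2=1_000)   # matched ad for a public university
print(f"share gap = {diff:.1%}, z = {z:.1f}, p = {p:.2g}")
```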
The legal line on algorithmic targeting
The case that sparked this debate claims that Meta’s education ad delivery discriminates and that the platform, as a public accommodation in Washington, D.C., provides a different quality of service to different users. In July, the D.C. Superior Court allowed the suit to move forward. It categorized the claims as “archetypal” discrimination under D.C. law and suggested that nondisclosure of how the system operates could constitute a deceptive trade practice. This decision permits the case to continue but does not provide a final ruling. It indicates that state civil rights and consumer protection laws can reach algorithmic outcomes, but it does not resolve the key question: when does optimization become segregation? The answer should rest on intent and on whether protected characteristics (or close proxies) are used as decision inputs, rather than on whether disparities exist at the group level after delivery.
There is a straightforward way to draw this line. Disparate treatment, which refers to the intentional use of race, sex, or similar indicators, should result in liability. Disparate impact, on the other hand, refers to unequal outcomes from neutral processes, which should prompt engineering duties and audits. This is how the 2023 housing settlement operates: it sets numerical limits, appoints an independent reviewer, and allows the platform an opportunity to reduce variance without banning prediction. This is also the approach for other high-risk systems: we require testing and transparency, not blanket condemnations of mathematics. Applying this model of outcomes and audits to education ads would protect students without labeling algorithmic targeting as segregation.
Evidence of bias is objective; the remedy should be audits, not labels
The body of research on delivery bias is extensive. Long before the latest education ad study, audits showed that Meta’s delivery algorithm biased job ads by race and gender, even when the advertiser's targeting was neutral. A notable 2019 paper demonstrated that similar job ads reached very different audiences based on creative choices and platform optimization. Journalists and academics replicated these patterns: construction jobs mainly went to men; cashier roles to women; some credit and employment ads favored men, despite higher female engagement on the platform. We should not overlook these disparities. We should address them by setting measurable limits, exposing system behavior to independent review, and testing alternative scenarios, just as the housing settlement now requires. This is more honest and effective than labeling the process as segregation.
Education deserves special attention. The 2024 audit found that ads for for-profit colleges reached a larger share of Black users than public university ads, aligning with longstanding enrollment differences—about 25% Black enrollment in the for-profit sector versus roughly 14% in public colleges, based on the College Scorecard data used by the researchers. This history helps explain the observed biases but does not justify them. The appropriate response is to require education ad delivery to meet clear fairness standards—perhaps a variance limit similar to housing—and to publish compliance metrics. This respects that optimization is probabilistic and commercial while demanding equal access to information about public opportunities.
Figure 2: Realistic creatives widen demographic reach gaps between for-profit and public college ads
A policy path that protects opportunity without stifling practical math
A better set of rules should look like this. First, prohibit inputs that reveal intent. Platforms and advertisers shouldn't include protected traits or similar indicators in ad delivery for education, employment, housing, or credit. Second, establish outcome limits and audit them. Require regular reports showing that delivery for education ads remains within an agreed range across protected classes, with an independent reviewer authorized to test, challenge, and demand corrections. This is already what the housing settlement does, and it has specific targets and deadlines. Third, require advertiser-facing tools that indicate when a campaign fails fairness checks and automatically adjust bids or ad rotation to bring delivery back within the limits. None of these steps requires labeling algorithmic targeting as segregation. All of them help reduce harmful biases.
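One way to read the third requirement is as a feedback loop layered on top of delivery: when a group’s measured delivery share drifts too far from its share of the eligible audience, the campaign’s pacing is nudged until delivery returns to range. The sketch below is a toy rebalancing step with hypothetical shares and weights; real delivery systems adjust bids and pacing inside an auction, and this only illustrates the direction of the correction.

```python
def rebalance_weights(eligible, delivered, weights, cap=0.10, step=0.05):
    """Toy fairness controller: nudge per-group delivery weights toward the
    eligible-audience shares whenever the relative gap exceeds `cap`."""
    new_weights = dict(weights)
    for group, target in eligible.items():
        gap = (delivered.get(group, 0.0) - target) / target
        if abs(gap) > cap:
            # over-delivered groups get slightly less weight, under-delivered slightly more
            new_weights[group] = weights[group] * (1 - step if gap > 0 else 1 + step)
    total = sum(new_weights.values())                   # renormalize to sum to one
    return {group: w / total for group, w in new_weights.items()}

eligible  = {"group_a": 0.52, "group_b": 0.48}          # hypothetical shares
delivered = {"group_a": 0.60, "group_b": 0.40}
weights   = {"group_a": 0.50, "group_b": 0.50}
print(rebalance_weights(eligible, delivered, weights))
# {'group_a': 0.475, 'group_b': 0.525}
```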
The state and local landscape is moving toward a compliance-focused model. New York City’s Local Law 144 mandates annual bias audits for automated employment decisions and public reporting. Several state attorneys general have started using existing consumer protection and civil rights laws to monitor AI outcomes in hiring and other areas. These measures do not prohibit optimization; they demand evidence that the system operates fairly. National policymakers can adapt this framework for education ads: documented audits, standardized variance metrics, and safe harbors for systems that meet the standards. This approach avoids the extremes of “anything goes” and “everything is segregation,” aligning enforcement with what courts are willing to oversee: performance, not metaphors.
What educators and administrators should require now
Education leaders can take action without waiting for final court rulings. When purchasing ads, insist on delivery reports that show audience composition and on tools that promote fairness-aware bidding. Request independent audit summaries in RFPs, not just audience estimates. If platforms do not provide variance metrics, allocate more funding to those that do. Encourage paired-ad testing, a low-cost method used by research teams to uncover biases while controlling for confounding factors. The goal isn't to litigate intent; it's to ensure that students from all backgrounds see the same opportunities. This is a practical approach, not a philosophical one. It enables us to turn a heated label into a standard that improves access where it matters: public colleges, scholarships, apprenticeships, and financial aid.
Policymakers can assist by clarifying safe harbors. A platform that clearly excludes protected traits, releases a technical paper detailing its fairness controls, and meets a defined variance threshold for education ads should receive safe-harbor treatment and a defined period to rectify any issues flagged in audits. In contrast, a platform that remains opaque or uses protected traits or obvious proxies should face penalties, including damages and injunctions. This distinction acknowledges a crucial ethical point: optimization driven by data can be lawful when it respects clear limits and transparency, and it becomes unlawful when it bypasses those constraints. The DOJ’s housing settlement demonstrates how to create rules that engineers can implement and courts can enforce.
The 10% figure is not a minor detail. It serves as a guide to regulating algorithmic targeting without turning every disparity into a moral judgment. Labeling algorithmic targeting as segregation obscures the critical distinction between intent and impact. It also hampers the tools that help schools reach the right students and aid students in finding the right schools. We do not need metaphors from a bygone era. We need measurable requirements, public audits, and independent checks that ensure fair delivery while allowing optimization to function within strict limits. If courts and agencies insist on this approach, platforms will adapt, research teams will continue testing, and students—especially those who historically have had fewer opportunities—will receive better information. Avoid sweeping labels. Keep the rules focused on outcomes. Let the math work for everyone, with transparency.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). How Facebook’s ad delivery can lead to biased outcomes. Proceedings of CSCW.
Brody, D. (2025, November 13). Equal Rights Center v. Meta is the most important tech case flying under the radar. Brookings Institution.
Imana, B., Korolova, A., & Heidemann, J. (2024). Auditing for racial discrimination in the delivery of education ads. ACM FAccT ’24.
Northeastern University/Upturn. (2019). How Facebook’s Ad Delivery Can Lead to Biased Outcomes (replication materials).
Reuters. (2025, October 24). Stepping into the AI void in employment: Why state AI rules now matter more than federal policy.
U.S. Department of Justice. (2023/2025 update). Justice Department and Meta Platforms Inc. reach key agreement to address discriminatory delivery of housing advertisements (press release; compliance targets and VRS).
Washington Lawyers’ Committee & Equal Rights Center. (2025, February 11). Equal Rights Center v. Meta (Complaint, D.C. Superior Court).
Generative AI for Older Adults: Lessons from the Internet Age
David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.
Older adults are missing out on generative AI
Used well, it can boost independence and wellbeing
Policy must make these tools senior-friendly
In 2000, only 14% of Americans aged 65 and older were online. By 2024, that number had risen to 90%. This shift is so significant that it's easy to forget how unfamiliar the internet once seemed to older adults. Today, many people in this age group video call their grandchildren, manage their bank accounts on smartphones, and consider YouTube their main TV channel. However, when we transition from browsing the web to using large language models, we see a regression. By mid-2025, only 10% of Americans aged 65 and older had ever used ChatGPT, compared with 58% of adults under 30. Data from Italy shows a similar trend: while three-quarters of adults are aware of generative AI, regular use remains concentrated among younger, more educated people. Generative AI for older adults is now in a position similar to the internet at the turn of the century: visible and popular, but mostly overlooked by seniors.
Most discussions of this gap view it as a job-market issue. Evidence from Italian household surveys indicates that using generative AI is linked to a 1.8% to 2.2% increase in earnings—comparable to the return on about half a year of additional schooling and roughly one-tenth of the wage benefit seen with basic computer use in the 1990s. From this perspective, younger, tech-savvy workers benefit first while older workers fall behind. While this interpretation isn’t wrong, it is limited. For those in their 60s and 70s, generative AI is less about income and more about independence, health, and social connections. The better comparison isn’t early spreadsheets or email, but how the internet and smartphones changed well-being in later life once they became accessible and valuable. If we overlook this comparison, we risk repeating a 20-year delay that older adults cannot afford.
Generative AI for Older Adults and the New Adoption Gap
Recent Italian survey data highlight how significantly age influences the use of these tools. In April 2024, 75.6% of Italians aged 18 to 75 reported awareness of generative AI tools like ChatGPT, yet only 36.7% had used them at least once in the past year, and just 20.1% were monthly users. Age and education create a clear divide: adults aged 18 to 34 were 11 percentage points more likely to know about generative AI than those 65 and older, and among those aware of it, they were 30 percentage points more likely to use it. These are significant differences that reflect well-documented patterns in the "digital divide," where older adults see fewer benefits from new technologies and face steeper learning curves and greater perceived risks. Consequently, generative AI for older adults exists, but it is mostly outside their everyday activities.
Figure 1: Older adults show high awareness of generative AI but very low usage, widening the digital gap.
Evidence from other countries shows that Italy is not an anomaly. A module in the U.S. Federal Reserve’s Survey of Consumer Expectations finds that awareness of generative AI now exceeds 80% among adults. Usage rates are slightly higher than those in Italy, but the same pronounced divides by age, education, and gender persist. Pew Research Center estimates that by June 2025, 34% of U.S. adults had used ChatGPT. The difference by age is stark: 58% of adults under 30 compared to 25% of those aged 50 to 64 and just 10% of those 65 and older. Across the EU, the Commission’s Joint Research Centre reports that about 30% of workers now use some form of AI, with adoption highest among younger, better-educated groups. Generative AI for older adults is thus developing within a framework of established digital inequality: seniors have achieved near-universal internet access, yet they are once again marginalized by a new general-purpose technology.
Figure 2: Internet adoption among older adults surged over two decades, showing how fast late-life uptake can accelerate once technology becomes accessible.
This situation would be less concerning if the gains from adoption were solely financial. Estimates from Italy suggest that generative AI use provides only a modest earnings boost, much smaller than the benefits received from basic computer skills during the early computer age. Yet older adults interact with health systems, social services, and financial providers that are quickly integrating AI. If generative AI for older adults remains uncommon, the risk extends beyond reduced income; it also includes diminished ability to navigate services influenced by algorithms. The Italian data highlight another vital aspect: social engagement strongly predicts the use of generative AI, even after considering education and income. This finding mirrors decades of research on the internet, where social connections and perceived usefulness determine whether late adopters continue to use these tools. Understanding generative AI through this lens matters because it shifts the focus from “teaching seniors to code with chatbots” to integrating these technologies into the social and service settings they already trust.
What the Internet Era Taught Us About Late-Life Technology Adoption
The history of the web and smartphones illustrates how quickly older adults can close a gap once technologies become simpler and more relevant. In the United States, only 14% of those 65 and older used the internet in 2000; by 2024, that number reached 90%, just nine percentage points lower than the youngest age group. Home broadband and smartphone ownership reflect a similar trend: as of 2021, 61% of people aged 65 and older owned a smartphone, and 64% had broadband at home, up from single-digit levels in the mid-2000s. Even YouTube—initially considered a platform for teenagers—has seen use grow among older adults, with the percentage of Americans aged 65 and older using it rising from 38% to 49% between 2019 and 2021. In other words, older adults did not grow up digital, but once devices became touch-based, constantly connected, and integrated into social life, they adopted these technologies at scale.
This access brought about not just convenience but also improved well-being. A study of adults aged 50 and older found that using the internet for communication, information, practical tasks, and leisure positively affected life satisfaction and, in terms of task performance and leisure, negatively correlated with symptoms of depression. An analysis of older Japanese adults revealed that frequent internet users enjoyed better physical and cognitive health, stronger social connections, and healthier behaviors than those who didn't use the internet, even after controlling for initial differences. Studies in England and other aging societies also show a link between regular internet use among seniors and higher quality-of-life scores. Overall, this research suggests that when older adults successfully incorporate digital tools into their daily lives, they often experience greater autonomy, social ties, and psychological resilience.
However, the evidence cautions against being overly optimistic. A recent quantitative study of older adults in a European country, using European Social Survey data, found that daily internet use is negatively associated with self-reported happiness, even while it is positively related to social life indicators. A 2025 analysis from China described a “dark side”: internet use is associated with improved overall subjective well-being, yet it also creates new vulnerabilities, with hope acting as a key psychological factor. The takeaway isn’t that older adults should disconnect; rather, what matters is the intensity and purpose of their digital interactions. Well-designed tools that foster communication, meaningful learning, and practical problem-solving tend to enhance late-life well-being, while aimless browsing and exposure to scams or misinformation do not. Generative AI for older adults will follow the same pattern unless it is thoughtfully designed and regulated.
Designing Generative AI for Older Adults as a Well-Being Tool
If we view generative AI for older adults as an extension of digital infrastructure, its most impactful uses will be straightforward and practical. Older adults already interact with AI-driven systems when seeking public benefits, scheduling medical appointments, or navigating banking apps. Conversational agents based on large language models could transform these interactions into two-way support: breaking down forms into simple language, drafting letters to landlords or insurers, or helping prepare questions for doctors. Research on health and wellness chatbots shows that older adults are willing to use them for medication reminders, lifestyle coaching, and appointment help if the interfaces are user-friendly and trust is established over time. Early qualitative studies indicate seniors appreciate chatbots that are patient, non-judgmental, and aware of local context—not those filled with jargon or pushy prompts.
Labor market evidence suggests that the most significant benefit of generative AI for older adults may not be financial. Data from Italian households reveal that the earning boost associated with generative AI use is real but modest. For retirees or those nearing retirement, this boost may not matter. What is crucial is whether these tools can help maintain independence—allowing someone to stay in their home longer, manage a chronic condition more effectively, or remain active in community groups. Findings from England’s longitudinal aging study and similar research suggest that using the internet for communication and information improves quality of life and reduces loneliness among older adults. A growing body of research indicates that AI companions and assistants can help combat isolation, although the quality of this evidence varies. If generative AI for older adults can focus on these high-value functions, its social benefits may significantly outweigh its direct economic contributions.
Design decisions will shape this future. Surveys show that around 60% of Americans aged 65 and older have serious concerns about the integration of AI into everyday products and services. Classes offered by organizations like Senior Planet in the United States highlight this: participants are eager to learn, but they worry about scams, misinformation, and hidden data collection. For generative AI for older adults, "accessible design" has at least three aspects. First, interfaces must accommodate slow typing, hearing, or vision impairments, and interruptions; voice input and clear visual feedback can help. Second, safety features—such as prompts about scams, easy-to-follow source links, and skepticism regarding financial or health claims—should be built into the systems rather than added later. Third, tailoring matters: advice on pensions, care systems, or tenant rights must be specific to national regulations, not generic templates. Each of these elements lessens cognitive load and increases the chances that older adults will see AI as helpful rather than threatening.
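As a purely illustrative sketch, the three design aspects above could sit as a thin wrapper around whatever chatbot backend a deployment uses. Everything here is an assumption: the `ask_model` function is a placeholder for a real model call, and the scam-keyword list and wording are examples, not a vetted safety policy.

```python
SCAM_FLAGS = ("gift card", "wire transfer", "act now", "verify your password")  # placeholder list

def ask_model(prompt: str) -> str:
    """Placeholder for whatever chatbot backend a deployment uses."""
    return "Placeholder answer. Sources: example.org"

def senior_friendly_reply(user_text: str) -> str:
    """Wrap a chatbot with the three design aspects discussed above:
    plain-language output, built-in scam caution, and visible sourcing."""
    # 1) Reduce cognitive load: ask for plain language and short sentences up front.
    prompt = ("Answer in plain language, in short sentences, without jargon.\n"
              f"Question: {user_text}")
    answer = ask_model(prompt)
    # 2) Safety: warn when the user's message matches a known scam pattern.
    if any(flag in user_text.lower() for flag in SCAM_FLAGS):
        answer = ("Caution: requests like this are a common scam pattern. "
                  "Consider checking with your bank or a trusted person first.\n\n") + answer
    # 3) Sourcing: make it obvious when no source is attached to the answer.
    if "sources:" not in answer.lower():
        answer += "\n(No source was provided for this answer; treat it as unverified.)"
    return answer

print(senior_friendly_reply("I got an email asking me to pay a fee with a gift card."))
```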
Policy for Inclusive Generative AI for Older Adults
The European Union’s "Digital Decade" strategy aims to ensure that 80% of adults have at least basic digital skills by 2030. This goal should now expand to include proficiency in using generative AI to enhance well-being rather than detract from it. The most effective delivery channels are those already trusted by seniors. Public libraries, community centers, trade unions, and universities for older adults can host short, practical workshops where participants practice asking chatbots to rewrite scam emails, summarize medical documents, or generate questions for consultations. In Italy and other aging societies, adult education programs can pair tech-savvy students with older learners to explore AI tools together, turning social engagement—already a key factor in adoption—into a foundational design principle. Importantly, this training should not be framed as a crash course in “future-proofing your CV,” but as a toolkit for engaging with public services, managing finances, and maintaining social connections.
Governments and regulators also play a role in shaping the market for generative AI for older adults. Health and welfare agencies can create “public option” chatbots that provide answers based on verified information and acknowledge uncertainty, rather than pushing older adults toward less transparent private tools. Consumer protection authorities can mandate that AI systems used in pension advice, insurance, or credit scoring provide accessible explanations and clear appeal paths. Given the established links between internet use and better subjective well-being in later life, the onus should be on providers to demonstrate that their tools do not systematically mislead or exploit older users. Labor market policy is also essential. As AI becomes integrated into workplace software, employers should offer targeted training for older workers, recognizing that even modest earnings gains from generative AI can help extend productive careers for those who want to continue working.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024a). The gen AI gender gap. Economics Letters, 241, 111814.
Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024b). Survey evidence of gen AI and households: Job prospects amid trust concerns. BIS Bulletin, 86.
Bick, A., Blandin, A., & Deming, D. (2024). The rapid adoption of generative AI. VoxEU.
European Commission. (2025). Impact of digitalisation: 30% of EU workers use AI. Joint Research Centre.
Gambacorta, L., Jappelli, T., & Oliviero, T. (2025). Generative AI: Uneven adoption, labour market returns, and policy implications. VoxEU.
Lifshitz, R., Nimrod, G., & Bachner, Y. G. (2018). Internet use and well-being in later life: A functional approach. Aging & Mental Health, 22(1), 85–91.
Nakagomi, A., Shiba, K., Kawachi, I., et al. (2022). Internet use and subsequent health and well-being in older adults: An outcome-wide analysis. Computers in Human Behavior, 130, 107156.
Pew Research Center. (2022). Share of those 65 and older who are tech users has grown in the past decade.
Pew Research Center. (2024). Internet/Broadband Fact Sheet.
Pew Research Center. (2025). 34% of U.S. adults have used ChatGPT, about double the share in 2023.
Suárez-Álvarez, A., & Vicente, M. R. (2023). Going “beyond the GDP” in the digital economy: Exploring the relationship between internet use and well-being in Spain. Humanities and Social Sciences Communications, 10(1), 582.
Suárez-Álvarez, A., & Vicente, M. R. (2025). Internet use and the well-being of the elders: A quantitative study in an aged country. Social Indicators Research, 176(3), 1121–1135.
Washington Post. (2025, August 19). How America’s seniors are confronting the dizzying world of AI.
Yu, S., et al. (2024). Understanding older adults’ acceptance of chatbots in health contexts. International Journal of Human–Computer Interaction.
Zhang, D., et al. (2025). The dark side of the association between internet use and subjective well-being among older adults. BMC Geriatrics.