Algorithmic Targeting Is Not Segregation: Fix Outcomes Without Breaking the Math

Catherine Maguire
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summers in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

Optimization isn’t segregation
Impose variance thresholds and independent audits
Require delivery reports and fairness controls

The key statistic in the public debate isn't about clicks or conversions. It's the 10% variance cap that U.S. regulators required Meta to meet for most housing ads by December 31, under a court-monitored settlement. This agreement requires the company’s “Variance Reduction System” to reduce the gap between the eligible audience for an ad and the people who actually see it, measured by sex and by estimated race or ethnicity, to 10% or less for most ads, with federal oversight until June 2026. This is an outcome rule, not a moral judgment. It doesn't claim that “the algorithm is racist.” Instead, it says: meet this performance standard, or fix your system. As schools and governments debate whether algorithmic targeting in education ads amounts to segregation, we should keep this idea in view. The way forward is through measurable outcomes and responsible engineering, without labeling neutral, math-driven optimization as an act of intent.
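To make the outcome rule concrete, here is a minimal sketch of how a variance check of this kind could be computed from an aggregated delivery report. The settlement's actual methodology is more involved and is not reproduced here; the group labels, counts, and simple share-gap measure below are illustrative assumptions only.

```python
# Minimal sketch of an outcome-style variance check, assuming the metric is the
# gap between each group's share of the eligible audience and its share of the
# people who actually saw the ad. The real Variance Reduction System uses its
# own methodology; the numbers and the 10% cap below are illustrative only.

def variance_gaps(eligible_counts: dict, viewer_counts: dict) -> dict:
    """Per-group gap, in percentage points, between eligible and viewer shares."""
    eligible_total = sum(eligible_counts.values())
    viewer_total = sum(viewer_counts.values())
    gaps = {}
    for group, count in eligible_counts.items():
        eligible_share = count / eligible_total
        viewer_share = viewer_counts.get(group, 0) / viewer_total
        gaps[group] = 100 * abs(viewer_share - eligible_share)
    return gaps

def within_cap(gaps: dict, cap_pct: float = 10.0) -> bool:
    """True if every group's gap stays inside the cap."""
    return all(gap <= cap_pct for gap in gaps.values())

# Hypothetical aggregated delivery report for a single housing ad.
eligible = {"group_a": 600_000, "group_b": 400_000}   # who could have seen it
viewers = {"group_a": 72_000, "group_b": 28_000}      # who actually saw it

gaps = variance_gaps(eligible, viewers)
print(gaps, "compliant:", within_cap(gaps))           # 12-point gaps -> not compliant
```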

What algorithmic targeting actually does

Algorithmic targeting has two stages. First, advertisers and the platform define a potential audience using neutral criteria. Then, the platform’s delivery system decides who actually sees each ad based on predicted relevance, estimated value, and budget limits. At the scale of social media, this second stage is the engine. Most ads won't reach everyone in the target group; the delivery algorithm sorts, ranks, and distributes resources. Courts and agencies understand this distinction. In 2023, the Justice Department enforced an outcome standard for housing ads on Meta, requiring the new Variance Reduction System to keep demographic disparities within specific limits and report progress to an independent reviewer. This solution targeted delivery behavior instead of banning optimization or calling it segregation. The lesson is clear: regulate what the system does, not what we fear it might mean.
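The two-stage picture can be sketched in a few lines: a facially neutral eligibility filter set by the advertiser, followed by a delivery step that ranks eligible users by a predicted-value score under a budget cap. The field names and the scoring rule below are assumptions made for illustration, not a description of Meta's actual system.

```python
# Toy model of the two stages described above. Stage 1 applies the advertiser's
# neutral targeting criteria; stage 2 (the delivery engine) decides who actually
# sees the ad by ranking on a predicted-value score and spending a fixed budget.

from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    region: str
    predicted_relevance: float   # platform's estimate that the user will engage
    estimated_value: float       # platform's estimate of value per impression

def eligible_audience(users: list, target_regions: set) -> list:
    """Stage 1: advertiser-defined, facially neutral targeting."""
    return [u for u in users if u.region in target_regions]

def deliver(audience: list, budget_impressions: int) -> list:
    """Stage 2: rank by score and show the ad only until the budget runs out."""
    ranked = sorted(audience,
                    key=lambda u: u.predicted_relevance * u.estimated_value,
                    reverse=True)
    return ranked[:budget_impressions]

users = [
    User("u1", "DC", predicted_relevance=0.9, estimated_value=1.2),
    User("u2", "DC", predicted_relevance=0.4, estimated_value=0.8),
    User("u3", "MD", predicted_relevance=0.8, estimated_value=1.0),
]

audience = eligible_audience(users, target_regions={"DC"})
shown = deliver(audience, budget_impressions=1)
print([u.user_id for u in shown])   # the budget, not the targeting, decides who sees it
```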

Critics argue that even neutral systems can lead to unequal results. This is true and has been documented. In 2024, researchers from Princeton and USC ran paired education ads and found that Meta’s delivery skewed along racial lines: ads for some for-profit colleges reached a higher proportion of Black users than ads for similar public universities, even when the targeting was neutral. When more “realistic” creatives were used, this racial skew increased. Their method controlled for confounding factors by pairing similar ads and analyzing aggregated delivery reports, a practical approach for investigating a complex system. These findings matter. They illustrate disparate impact—an outcome gap—not proof of intent. Policy should recognize this difference.
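For readers who want to see the audit logic itself, here is a rough sketch of the paired-ad comparison under simplifying assumptions: because both ads in a pair share the same targeting and eligible audience, a persistent gap in the demographic composition of their aggregated delivery reports points at the delivery system rather than the advertiser. The counts below are invented for illustration and are not the study's data.

```python
# Rough sketch of a paired-ad audit: run two comparable ads with identical
# targeting, pull the platform's aggregated delivery reports, and compare the
# demographic composition of who actually saw each ad.

def composition(report: dict) -> dict:
    """Convert raw viewer counts into per-group shares."""
    total = sum(report.values())
    return {group: count / total for group, count in report.items()}

def paired_skew(report_a: dict, report_b: dict) -> dict:
    """Per-group difference in delivery share between the paired ads (percentage points)."""
    comp_a, comp_b = composition(report_a), composition(report_b)
    return {group: round(100 * (comp_a[group] - comp_b[group]), 1) for group in comp_a}

# Invented delivery reports for one pair: a for-profit college ad and a
# public-university ad run with the same targeting criteria.
for_profit_ad = {"black_users": 46_000, "other_users": 54_000}
public_univ_ad = {"black_users": 31_000, "other_users": 69_000}

print(paired_skew(for_profit_ad, public_univ_ad))
# {'black_users': 15.0, 'other_users': -15.0} -> a 15-point delivery skew
```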

Figure 1: Regulation targets outcomes, not intent: most ads must fall within a 10% demographic variance window.

The legal line on algorithmic targeting

The current case that sparked this debate claims that Meta’s education ad delivery discriminates and that the platform, as a public accommodation under Washington, D.C. law, provides different users with a different quality of service. In July, the D.C. Superior Court allowed the suit to move forward. It categorized the claims as “archetypal” discrimination under D.C. law. It suggested that nondisclosure of how the system operates could constitute a deceptive trade practice. This decision permits the case to continue but does not provide a final ruling. It indicates that state civil rights and consumer protection laws can address algorithmic outcomes. Still, it does not resolve the key question: when does optimization become segregation? The answer should rely on intent and whether protected characteristics (or close proxies) are used as decision inputs, rather than on whether disparities exist at the group level after delivery.

There is a straightforward way to draw this line. Disparate treatment, which refers to the intentional use of race, sex, or similar indicators, should result in liability. Disparate impact, on the other hand, refers to unequal outcomes from neutral processes, which should prompt engineering duties and audits. This is how the 2023 housing settlement operates: it sets numerical limits, appoints an independent reviewer, and allows the platform an opportunity to reduce variance without banning prediction. This is also the approach for other high-risk systems: we require testing and transparency, not blanket condemnations of mathematics. Applying this model of outcomes and audits to education ads would protect students without labeling algorithmic targeting as segregation.

The evidence of bias is real; the remedy should be audits, not labels

The body of research on delivery bias is extensive. Long before the latest education ad study, audits showed that Meta’s algorithm skewed the delivery of job ads by race and gender, even when the advertiser's targeting was neutral. A notable 2019 paper demonstrated that similar job ads reached very different audiences based on creative choices and platform optimization. Journalists and academics replicated these patterns: construction jobs mainly went to men; cashier roles to women; some credit and employment ads favored men, despite higher female engagement on the platform. We should not overlook these disparities. We should address them by setting measurable limits, exposing system behavior to independent review, and testing alternative scenarios, just as the housing settlement now requires. This is more honest and effective than labeling the process as segregation.

Education deserves special attention. The 2024 audit found that ads for for-profit colleges reached a larger share of Black users than public university ads, aligning with longstanding enrollment differences—about 25% Black enrollment in the for-profit sector versus roughly 14% in public colleges, based on data from the College Scorecard used by the researchers. This history helps clarify the observed biases but does not justify them. The appropriate response is to require education ad delivery to meet clear fairness standards—perhaps a variance limit similar to housing—and to publish compliance metrics. This respects that optimization is probabilistic and commercial while demanding equal access to information about public opportunities.

Figure 2: Realistic creatives widen demographic reach gaps between for-profit and public college ads

A policy path that protects opportunity without stifling practical math

A better set of rules should look like this. First, prohibit inputs that reveal intent. Platforms and advertisers shouldn't include protected traits or similar indicators in ad delivery for education, employment, housing, or credit. Second, establish outcome limits and audit them. Require regular reports showing that delivery for education ads remains within an agreed range across protected classes, with an independent reviewer authorized to test, challenge, and demand corrections. This is already what the housing settlement does, and it has specific targets and deadlines. Third, require advertiser-facing tools that indicate when a campaign fails fairness checks and automatically adjust bids or ad rotation to bring delivery back within the limits. None of these steps requires labeling algorithmic targeting as segregation. All of them help reduce harmful biases.
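The third step can be made concrete with a small sketch: a fairness control that compares delivered shares against eligible shares and nudges bid multipliers for under-served groups until delivery drifts back inside the limit. The threshold, step size, and group labels are illustrative assumptions, not a prescribed mechanism.

```python
# Sketch of an advertiser-facing fairness control: flag groups whose delivered
# share trails their eligible share by more than the cap, then raise their bid
# multipliers slightly so the delivery system shows them the ad more often.

def fairness_check(eligible_share: dict, delivered_share: dict, cap: float = 0.10) -> dict:
    """Return under-served groups and the size of their shortfall."""
    return {group: eligible_share[group] - delivered_share[group]
            for group in eligible_share
            if eligible_share[group] - delivered_share[group] > cap}

def adjust_bids(bid_multipliers: dict, flagged: dict, step: float = 0.05) -> dict:
    """Raise bids for flagged groups by a small step, capped at 2x the base bid."""
    adjusted = dict(bid_multipliers)
    for group in flagged:
        adjusted[group] = min(adjusted[group] + step, 2.0)
    return adjusted

# Hypothetical mid-flight campaign report for an education ad.
eligible = {"group_a": 0.50, "group_b": 0.50}     # shares of the eligible audience
delivered = {"group_a": 0.65, "group_b": 0.35}    # shares of actual viewers so far
bids = {"group_a": 1.0, "group_b": 1.0}

flagged = fairness_check(eligible, delivered)      # group_b is 15 points under-served
print(flagged, adjust_bids(bids, flagged))
```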

The state and local landscape is moving towards a compliance-focused model. New York City’s Local Law 144 mandates annual bias audits for automated employment decisions and public reporting. Several state attorneys general have started using existing consumer protection and civil rights laws to monitor AI outcomes in hiring and other areas. These measures do not prohibit optimization; they demand evidence that the system operates fairly. National policymakers can adapt this framework for education ads: documented audits, standardized variance metrics, and safe harbors for systems that meet the standards. This approach avoids the extremes of “anything goes” and “everything is segregation,” aligning enforcement with what courts are willing to oversee: performance, not metaphors.

What educators and administrators should require now

Education leaders can take action without waiting for final court rulings. When purchasing ads, insist on delivery reports that show audience composition and on tools that promote fairness-aware bidding. Request independent audit summaries in RFPs, not just audience estimates. If platforms do not provide variance metrics, allocate more funding to those that do. Encourage paired-ad testing, a low-cost method used by research teams to uncover biases while controlling for confounding factors. The goal isn't to litigate intent; it's to ensure that students from all backgrounds see the same opportunities. This is a practical approach, not a philosophical one. It enables us to turn a heated label into a standard that improves access where it matters: public colleges, scholarships, apprenticeships, and financial aid.

Policymakers can assist by clarifying safe harbors. A platform that clearly excludes protected traits, releases a technical paper detailing its fairness controls, and meets a defined variance threshold for education ads should receive that safe harbor, along with a defined period to rectify any issues flagged in audits. In contrast, a platform that remains opaque or uses protected traits or obvious proxies should face penalties, including damages and injunctions. This distinction acknowledges a crucial ethical point: optimization driven by data can be lawful when it respects clear limits and transparency, and it becomes unlawful when it bypasses those constraints. The DOJ’s housing settlement demonstrates how to create rules that engineers can implement and courts can enforce.

The 10% figure is not a minor detail. It serves as a guide to regulating algorithmic targeting without turning every disparity into a moral judgment. Labeling algorithmic targeting as segregation obscures the critical distinction between intent and impact. It also hampers the tools that help schools reach the right students and aid students in finding the right schools. We do not need metaphors from a bygone era. We need measurable requirements, public audits, and independent checks that ensure fair delivery while allowing optimization to function within strict limits. If courts and agencies insist on this approach, platforms will adapt, research teams will continue testing, and students, especially those who historically have had fewer opportunities, will receive better information. Avoid sweeping labels. Keep the rules focused on outcomes. Let the math work for everyone, with transparency.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). How Facebook’s ad delivery can lead to biased outcomes. Proceedings of CSCW.
Brody, D. (2025, November 13). Equal Rights Center v. Meta is the most important tech case flying under the radar. Brookings Institution.
Imana, B., Korolova, A., & Heidemann, J. (2024). Auditing for racial discrimination in the delivery of education ads. ACM FAccT ’24.
Northeastern University/Upturn. (2019). How Facebook’s Ad Delivery Can Lead to Biased Outcomes (replication materials).
Reuters. (2025, October 24). Stepping into the AI void in employment: Why state AI rules now matter more than federal policy.
U.S. Department of Justice. (2023/2025 update). Justice Department and Meta Platforms Inc. reach key agreement to address discriminatory delivery of housing advertisements (press release; compliance targets and VRS).
Washington Lawyers’ Committee & Equal Rights Center. (2025, February 11). Equal Rights Center v. Meta (Complaint, D.C. Superior Court).


Generative AI for Older Adults: Lessons from the Internet Age

David O'Neill
David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.

Older adults are missing out on generative AI
Used well, it can boost independence and wellbeing
Policy must make these tools senior-friendly

In 2000, only 14% of Americans aged 65 and older were online. By 2024, that number had risen to 90%. This shift is so significant that it's easy to forget how unfamiliar the internet once seemed to older adults. Today, many people in this age group video call their grandchildren, manage their bank accounts on smartphones, and consider YouTube their main TV channel. However, when we transition from browsing the web to using large language models, we see a regression. By mid-2025, only 10% of Americans aged 65 and older had ever used ChatGPT, compared with 58% of adults under 30. Data from Italy shows a similar trend: while three-quarters of adults are aware of generative AI, regular use remains concentrated among younger, more educated people. Generative AI for older adults is now in a position similar to the internet at the turn of the century: visible and popular, but mostly overlooked by seniors.

Most discussions of this gap view it as a job-market issue. Evidence from Italian household surveys indicates that using generative AI is linked to a 1.8% to 2.2% increase in earnings, roughly the return to half a year of additional schooling and about one-tenth of the wage premium associated with basic computer use in the 1990s. From this perspective, younger, tech-savvy workers benefit first while older workers fall behind. While this interpretation isn’t wrong, it is limited. For those in their 60s and 70s, generative AI is less about income and more about independence, health, and social connections. The better comparison isn't early spreadsheets or email, but how the internet and smartphones changed well-being in later life once they became accessible and valuable. If we overlook this comparison, we risk repeating a 20-year delay that older adults cannot afford.

Generative AI for Older Adults and the New Adoption Gap

Recent Italian survey data highlight how significantly age influences the use of these tools. In April 2024, 75.6% of Italians aged 18 to 75 reported awareness of generative AI tools like ChatGPT, yet only 36.7% had used them at least once in the past year, and just 20.1% were monthly users. Age and education create a clear divide: adults aged 18 to 34 were 11 percentage points more likely to know about generative AI than those 65 and older, and among those aware of it, they were 30 percentage points more likely to use it. These are significant differences that reflect well-documented patterns in the "digital divide," where older adults see fewer benefits from new technologies and face steeper learning curves and greater perceived risks. Consequently, generative AI for older adults exists, but it is mostly outside their everyday activities.

Figure 1: Older adults show high awareness of generative AI but very low usage, widening the digital gap.

Evidence from other countries shows that Italy is not an anomaly. A module in the U.S. Federal Reserve’s Survey of Consumer Expectations finds that awareness of generative AI now exceeds 80% among adults. Usage rates are slightly higher than those in Italy, but the same pronounced divides by age, education, and gender persist. Pew Research Center estimates that by June 2025, 34% of U.S. adults had used ChatGPT. The difference by age is stark: 58% of adults under 30 compared to 25% of those aged 50 to 64 and just 10% of those 65 and older. Across the EU, the Commission’s Joint Research Centre reports that about 30% of workers now use some form of AI, with adoption highest among younger, better-educated groups. Generative AI for older adults is thus developing within a framework of established digital inequality: seniors have achieved near-universal internet access, yet they are once again on the margins of a new general-purpose technology.

Figure 2: Internet adoption among older adults surged over two decades, showing how fast late-life uptake can accelerate once technology becomes accessible.

This situation would be less concerning if the gains from adoption were solely financial. Estimates from Italy suggest that generative AI use provides only a modest earnings boost, much smaller than the benefits received from basic computer skills during the early computer age. Yet older adults interact with health systems, social services, and financial providers that are quickly integrating AI. If generative AI for older adults remains uncommon, the risk extends beyond reduced income; it also includes diminished ability to navigate services influenced by algorithms. The Italian data highlight another vital aspect: social engagement strongly predicts the use of generative AI, even after considering education and income. This finding mirrors decades of research on the internet, where social connections and perceived usefulness determine whether late adopters continue to use these tools. Understanding generative AI through this perspective is crucial, as it shifts the focus from “teaching seniors to code with chatbots” to integrating these technologies into the social and service settings they trust, thereby illuminating the true potential of AI for older adults.

What the Internet Era Taught Us About Late-Life Technology Adoption

The history of the web and smartphones illustrates how quickly older adults can close a gap once technologies become simpler and more relevant. In the United States, only 14% of those 65 and older used the internet in 2000; by 2024, that number reached 90%, just nine percentage points lower than the youngest age group. Home broadband and smartphone ownership reflect a similar trend: as of 2021, 61% of people aged 65 and older owned a smartphone, and 64% had broadband at home, up from single-digit levels in the mid-2000s. Even YouTube—a platform initially considered for teenagers—has seen use grow among older adults, with the percentage of Americans aged 65 and older using it rising from 38% to 49% between 2019 and 2021. In other words, older adults did not grow up digital, but once devices became touch-based, constantly connected, and integrated into social life, they adopted them at scale.

This access brought about not just convenience but also improved well-being. A study of adults aged 50 and older found that using the internet for communication, information, practical tasks, and leisure was positively associated with life satisfaction, and that uses for practical tasks and leisure were linked to fewer depressive symptoms. An analysis of older Japanese adults revealed that frequent internet users enjoyed better physical and cognitive health, stronger social connections, and healthier behaviors than those who didn't use the internet, even after controlling for initial differences. Studies in England and other aging societies also show a link between regular internet use among seniors and higher quality-of-life scores. Overall, this research suggests that when older adults successfully incorporate digital tools into their daily lives, they often experience greater autonomy, social ties, and psychological resilience.

However, the evidence cautions against being overly optimistic. A recent quantitative study of older adults in a European country, using European Social Survey data, found that daily internet use is negatively associated with self-reported happiness, even while it is positively related to indicators of social life. A 2025 analysis from China described a "dark side": internet use is associated with improved overall subjective well-being, but it also creates new vulnerabilities, with hope acting as a key psychological factor. The takeaway isn't that older adults should disconnect; rather, it is that the intensity and purpose of their digital interactions matter. Well-designed tools that foster communication, meaningful learning, and practical problem-solving tend to enhance late-life well-being. In contrast, aimless browsing and exposure to scams or misinformation do not have the same effect. Generative AI for older adults will follow this same trend unless it is thoughtfully created and regulated.

Designing Generative AI for Older Adults as a Well-Being Tool

If we view generative AI for older adults as an extension of digital infrastructure, its most impactful uses will be straightforward and practical. Older adults already interact with AI-driven systems when seeking public benefits, scheduling medical appointments, or navigating banking apps. Conversational agents based on large language models could transform these interactions into two-way support: breaking down forms into simple language, drafting letters to landlords or insurers, or helping prepare questions for doctors. Research on health and wellness chatbots shows that older adults are willing to use them for medication reminders, lifestyle coaching, and appointment help if the interfaces are user-friendly and trust is established over time. Early qualitative studies indicate seniors appreciate chatbots that are patient, non-judgmental, and aware of local context—not those filled with jargon or pushy prompts.
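As a sketch of what such an assistant could look like in practice, the snippet below wires a plain-language, scam-aware system prompt into two helper functions. The call_llm wrapper is a hypothetical placeholder for whichever chat-completion API a service actually uses, and the prompts and function names are assumptions rather than a description of any deployed product.

```python
# Illustrative sketch only: the system prompt, helper functions, and the
# call_llm placeholder are assumptions, not a reference to a real product.

SYSTEM_PROMPT = (
    "You are a patient assistant for older adults. Explain documents in short, "
    "plain sentences, avoid jargon, and flag anything that looks like a scam "
    "or asks for money or personal details."
)

def call_llm(system: str, user: str) -> str:
    """Hypothetical wrapper around whatever chat-completion API the service uses."""
    raise NotImplementedError("Plug in the provider's client library here.")

def explain_form(form_text: str) -> str:
    """Turn an official form or letter into a plain-language summary with next steps."""
    prompt = f"Explain this document and list what I need to do next:\n\n{form_text}"
    return call_llm(SYSTEM_PROMPT, prompt)

def draft_doctor_questions(symptoms: str) -> str:
    """Prepare a short list of questions to bring to a medical appointment."""
    prompt = f"I have these symptoms: {symptoms}. Suggest five questions to ask my doctor."
    return call_llm(SYSTEM_PROMPT, prompt)
```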

Labor market evidence suggests that the most significant benefit of generative AI for older adults may not be financial. Data from Italian households reveal that the earnings boost associated with generative AI use is real but modest. For retirees or those nearing retirement, this boost may not matter. What is crucial is whether these tools can help maintain independence—allowing someone to stay in their home longer, manage a chronic condition more effectively, or remain active in community groups. Findings from England’s longitudinal aging study and similar research suggest that using the internet for communication and information improves quality of life and reduces loneliness among older adults. A growing body of research indicates that AI companions and assistants can help combat isolation, although the quality of this evidence varies. If generative AI for older adults focuses on these high-value functions, its social benefits may significantly outweigh its direct economic contributions.

Design decisions will shape this future. Surveys show that around 60% of Americans aged 65 and older have serious concerns about the integration of AI into everyday products and services. Classes offered by organizations like Senior Planet in the United States highlight this: participants are eager to learn, but they worry about scams, misinformation, and hidden data collection. For generative AI for older adults, "accessible design" has at least three aspects. First, interfaces must accommodate slow typing, hearing or vision impairments, and interruptions; voice input and clear visual feedback can help. Second, safety features—such as prompts about scams, easy-to-follow source links, and skepticism regarding financial or health claims—should be built into the systems rather than added later. Third, tailoring matters: advice on pensions, care systems, or tenant rights must be specific to national regulations, not generic templates. Each of these elements lessens cognitive load and increases the chances that older adults will see AI as helpful rather than threatening.

Policy for Inclusive Generative AI for Older Adults

The European Union’s "Digital Decade" strategy aims to ensure that 80% of adults have at least basic digital skills by 2030. This goal should now expand to include proficiency in using generative AI to enhance well-being rather than detract from it. The most effective delivery channels are those already trusted by seniors. Public libraries, community centers, trade unions, and universities for older adults can host short, practical workshops where participants practice asking chatbots to rewrite scam emails, summarize medical documents, or generate questions for consultations. In Italy and other aging societies, adult education programs can pair tech-savvy students with older learners to explore AI tools together, turning social engagement—already a key factor in adoption—into a foundational design principle. Importantly, this training should not be framed as a crash course in “future-proofing your CV,” but as a toolkit for engaging with public services, managing finances, and maintaining social connections.

Governments and regulators also play a role in shaping the market for generative AI for older adults. Health and welfare agencies can create “public option” chatbots that provide answers based on verified information and acknowledge uncertainty, rather than pushing older adults toward less transparent private tools. Consumer protection authorities can mandate that AI systems used in pension advice, insurance, or credit scoring provide accessible explanations and clear appeal paths. Given the established links between internet use and better subjective well-being in later life, the onus should be on providers to demonstrate that their tools do not systematically mislead or exploit older users. Labor market policy is also essential. As AI becomes integrated into workplace software, employers should offer targeted training for older workers, recognizing that even modest earnings gains from generative AI can help extend productive careers for those who want to continue working.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024a). The gen AI gender gap. Economics Letters, 241, 111814.
Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024b). Survey evidence of gen AI and households: Job prospects amid trust concerns. BIS Bulletin, 86.
Bick, A., Blandin, A., & Deming, D. (2024). The rapid adoption of generative AI. VoxEU.
European Commission. (2025). Impact of digitalisation: 30% of EU workers use AI. Joint Research Centre.
Gambacorta, L., Jappelli, T., & Oliviero, T. (2025). Generative AI: Uneven adoption, labour market returns, and policy implications. VoxEU.
Lifshitz, R., Nimrod, G., & Bachner, Y. G. (2018). Internet use and well-being in later life: A functional approach. Aging & Mental Health, 22(1), 85–91.
Nakagomi, A., Shiba, K., Kawachi, I., et al. (2022). Internet use and subsequent health and well-being in older adults: An outcome-wide analysis. Computers in Human Behavior, 130, 107156.
Pew Research Center. (2022). Share of those 65 and older who are tech users has grown in the past decade.
Pew Research Center. (2024). Internet/Broadband Fact Sheet.
Pew Research Center. (2025). 34% of U.S. adults have used ChatGPT, about double the share in 2023.
Suárez-Álvarez, A., & Vicente, M. R. (2023). Going “beyond the GDP” in the digital economy: Exploring the relationship between internet use and well-being in Spain. Humanities and Social Sciences Communications, 10(1), 582.
Suárez-Álvarez, A., & Vicente, M. R. (2025). Internet use and the well-being of the elders: A quantitative study in an aged country. Social Indicators Research, 176(3), 1121–1135.
Washington Post. (2025, August 19). How America’s seniors are confronting the dizzying world of AI.
Yu, S., et al. (2024). Understanding older adults’ acceptance of chatbots in health contexts. International Journal of Human–Computer Interaction.
Zhang, D., et al. (2025). The dark side of the association between internet use and subjective well-being among older adults. BMC Geriatrics.
