Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
AI capital cheapens routine thinking and shifts work toward physical, contact-rich tasks
Gains are strong on simple tasks but stall without investment in real-world capacity
Schools should buy AI smartly, redesign assessments, and fund high-touch learning
Nearly 40% of jobs worldwide are at risk from artificial intelligence. This estimate from the International Monetary Fund highlights a simple fact: the cost of intelligence has fallen so far that software can now handle a greater share of routine thinking. We can think of this software as AI capital, an input that works alongside machines and people. Intelligence tasks are the first to be automated, while human work shifts toward tasks that require physical presence. The cost of advanced AI models has dropped sharply since 2023, and hardware delivers more computing power for each euro spent every year. This trend lowers the effective cost of AI capital, while classroom, lab, and building expenses remain relatively stable. In this environment, shifts in wages and hiring occur not because work is vanishing, but because the mix of production is changing. If educational institutions keep teaching as if intelligence were scarce and physical capacity were flexible, they will prepare graduates for a labor market that no longer exists. It is also crucial to ensure equitable access to AI and deliberate reallocation of resources, so that these shifts do not widen existing disparities.
Reframing AI Capital in the Production Function
The usual story of production, a combination of labor (L) and physical capital (K), overlooks a third input that we now need to recognize. Let’s call it A, or AI capital: disembodied, scalable intelligence that can perform tasks previously handled by clerks, analysts, and junior professionals. In a production function with three inputs, Y = f(L, K, A), intelligence tasks are the first to be automated because the price of A is dropping faster than that of K. Many cognitive tasks can also be broken down into modules, making them easier to automate. A recent framework formalizes this idea in a two-sector model: intelligence output and physical output combine to produce goods and services. When A becomes inexpensive, the saturation of intelligence tasks increases, but the gains depend on having complementary physical capacity. This leads to a reallocation of human labor toward physical tasks, creating mixed effects on wages: wages may rise initially, then fall as automation deepens. Policies that assume a simple decline in wages miss this complex pattern.
Figure 1: Total labor L endogenously splits between intelligence production I and physical production P. As AI capital lowers the cost of intelligence tasks, labor shifts toward physical work while overall output Y depends on both streams.
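To make the mechanism concrete, the sketch below simulates a toy version of this two-sector setup: AI capital substitutes for labor inside the intelligence stream, the physical stream still needs people and equipment, and the labor split is chosen to maximize output. The functional forms, parameter values, and CES aggregator are illustrative assumptions, not the calibration of any published model.

```python
# Illustrative two-sector sketch: output combines an intelligence stream and a
# physical stream; cheaper AI capital (A) shifts labor from intelligence work
# to physical work. All functional forms and parameters are assumptions made
# for illustration only.

def output(L_I, L_P, K, A, alpha=0.5, rho=0.5):
    """Total output Y = f(L, K, A) with total labor L split into L_I + L_P."""
    intelligence = (L_I + A) ** alpha                # AI capital substitutes for intelligence labor
    physical = (L_P ** alpha) * (K ** (1 - alpha))   # physical stream needs people and capital
    # CES aggregator of the two streams; lower rho makes them harder to substitute.
    return (0.5 * intelligence ** rho + 0.5 * physical ** rho) ** (1.0 / rho)

def best_split(L, K, A, steps=200):
    """Choose the labor split that maximizes output, by brute-force grid search."""
    return max(((output(L * s / steps, L * (1 - s / steps), K, A), s / steps)
                for s in range(steps + 1)), key=lambda t: t[0])

L, K = 100.0, 100.0
for A in (0.0, 50.0, 200.0, 800.0):
    Y, share_I = best_split(L, K, A)
    print(f"A={A:6.0f}  Y={Y:6.2f}  labor share in intelligence tasks={share_I:.2f}")
```

In this toy run, the optimal share of labor in intelligence tasks falls from roughly a fifth to zero as A grows, while output keeps rising at a slowing pace, which is the qualitative pattern behind Figure 1.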
The real question is not whether AI “replaces jobs” but whether adding another unit of AI capital increases output more than hiring one additional person. For tasks that are clearly defined, the answer is already yes. Studies show significant productivity boosts: mid-level writers completed work about 40% faster using a general-purpose AI assistant, and developers finished coding tasks approximately 56% faster with an AI pair programmer. However, these gains shrink on more complex tasks, where AI struggles with nuance; this is the “jagged frontier” many teams are encountering. The pattern supports the argument for prioritizing AI capital where it works: straightforward cognitive tasks will be automated first, while complex judgment tasks remain human-dominated for now. We define “productivity” as time to completion and quality as measured by standardized criteria, noting that effect sizes vary with task complexity and user expertise.
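As a back-of-the-envelope illustration of that marginal comparison, the snippet below computes the output gain per euro from one more worker versus one more unit of AI capital. The production function, prices, and input levels are illustrative assumptions; none of the numbers comes from the studies cited above.

```python
# Back-of-the-envelope marginal comparison: does one more unit of AI capital
# raise output more per euro than one more worker? All numbers are assumptions.

def Y(L, K, A):
    # Cognitive stream uses workers or AI interchangeably (L + A);
    # the physical stream still needs people (L) and capital (K).
    return ((L + A) ** 0.4) * (K ** 0.3) * (L ** 0.3)

L, K, A = 100.0, 100.0, 50.0
price_worker = 45_000.0   # assumed annual cost of one additional hire (EUR)
price_ai_unit = 2_000.0   # assumed annual cost of one more "unit" of AI capital (EUR)

dY_dL = Y(L + 1, K, A) - Y(L, K, A)   # marginal product of labor (finite difference)
dY_dA = Y(L, K, A + 1) - Y(L, K, A)   # marginal product of AI capital

print(f"output gain per euro, extra worker : {dY_dL / price_worker:.6f}")
print(f"output gain per euro, extra AI unit: {dY_dA / price_ai_unit:.6f}")
```

With these made-up prices, the extra unit of AI capital yields roughly ten times more output per euro than the extra hire; changing the assumed prices or exponents changes the answer, which is exactly why the comparison has to be run task by task.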
When A expands while K and L do not, the share of labor can decline even when overall output stays constant. In simple terms, the same amount of production can require fewer workers. But this isn't an inevitable outcome. If physical and intellectual outputs complement rather than replace one another, investments in labs, clinics, logistics, and classrooms can help stabilize wages. This points to a critical shift for education systems: focusing on environments and approaches where physical presence enhances what AI alone cannot provide—care, hands-on skill, safety, and community.
Evidence: Falling AI Capital Prices, Mixed Productivity, Shifting Wages
The price indicators for AI capital are clear. By late 2023, API prices for popular models had dropped significantly, and hardware performance improved by about 30% each year. Prices won’t decline uniformly—newer models might be more expensive—but the overall trend is enough to change how businesses operate. Companies that previously paid junior analysts to consolidate memos are now using prompts and templates instead. Policymakers should interpret these signals as they would energy or shipping prices: as active factors influencing wages and hiring. We estimate the “price of A” by looking at published per-token API rates and hardware cost-effectiveness; we do not assume uniform access across all institutions.
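One way to turn those signals into a usable number is a simple price index for AI capital that blends an assumed per-token API rate with an assumed yearly gain in hardware performance per euro. The figures and weights below are placeholders chosen only to show the mechanics, not measured prices.

```python
# Rough "price of A" index: blend an assumed per-token API rate with an assumed
# ~30%/year improvement in hardware performance per euro. All figures are
# placeholders for illustration; they are not measured values.

def effective_price_index(api_price_per_mtok, hw_gain_per_year, years, base_year_price):
    """Index the cost of a fixed bundle of cognitive work relative to a base year."""
    api_component = api_price_per_mtok / base_year_price      # relative cost of hosted inference
    hw_component = 1.0 / ((1.0 + hw_gain_per_year) ** years)  # cheaper self-hosted compute per euro
    # Equal weights are arbitrary; they stand in for the mix of hosted API use
    # and self-hosted compute in an institution's workload.
    return 0.5 * api_component + 0.5 * hw_component

# Hypothetical trajectory: API price per million tokens falls from 30 to 5
# while hardware improves about 30% per year over two years.
for year, api_price in enumerate([30.0, 12.0, 5.0]):
    idx = effective_price_index(api_price, hw_gain_per_year=0.30,
                                years=year, base_year_price=30.0)
    print(f"year {year}: effective price of AI capital = {idx:.2f} (base year = 1.00)")
```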
Figure 2: As a larger share of tasks is automated, total output rises and more workers shift into physical, hands-on roles. The gains flatten at high automation, showing why investment in real-world capacity is needed to keep productivity growing.
The productivity evidence is generally positive but varies widely. Controlled experiments show significant improvements in routine content creation and coding. At the same time, observational studies and workforce surveys highlight that integrating AI can be challenging and that the benefits are not always immediate. Some teams waste time fixing AI-generated text or adjusting to new workflows, while others achieve notable speed improvements. The result is an increase in task-level performance coupled with friction at the system level. Sector-specific data supports this: the OECD reports that a considerable share of job vacancies are in occupations heavily exposed to AI, and that skill demands in those roles are shifting even for workers without specialized AI skills. Labor-market rewards have also begun to shift: studies show wage premiums for AI-related skills, typically ranging from 15% to 25%, depending on the market and methodology.
The impact is not evenly distributed. The IMF predicts high exposure to AI in advanced economies where cognitive work predominates. The International Labour Organization (ILO) finds that women are more affected because clerical roles—highly automatable cognitive work—are often filled by women in wealthier countries. There are also new constraints in energy and infrastructure: data center demand could more than double by the end of the decade under specific scenarios, while power grid limitations are already delaying some projects. These issues further reinforce the trend toward prioritizing intelligence, which can outpace the physical capacities needed to support it. As AI capital expands, the potential returns begin to decrease unless physical capacity and skill training keep up. We draw on macroeconomic projections (IMF, IEA) and occupational exposure data (OECD, ILO); however, the uncertainty ranges can be vast and depend on various scenarios.
Managing AI Capital in Schools and Colleges
Education is at the center of this transition because it produces both types of inputs: intelligence and physical capacity. We should consider AI capital as a means to enhance routine thinking and free up human time for more personal work. Early evidence looks promising. A recent controlled trial revealed that an AI tutor helped students learn more efficiently than traditional in-class lessons led by experts. Yet, the adoption of such technologies is lagging. Surveys show low AI use among teachers in classrooms, gaps in available guidance, and limited training for institutions. Systems that address these gaps can more effectively translate AI capital into improved student learning while ensuring that core assessments remain rigorous. The controlled trial evaluated learning outcomes on aligned topics and used standardized results; survey findings are weighted to reflect national populations.
Three policy directions emerge from the focus on AI capital. First, rebalance the investment mix. If intelligence-based content is becoming cheaper and more effective, allocate limited funds to places where human interaction adds significant value, such as clinical placements, maker spaces, science labs, apprenticeships, and supervised practice. Second, raise professional standards for AI use. Train educators to integrate AI capital with meaningful feedback rather than letting the technology replace their discretion. The objective should not be to apply “AI everywhere,” but to focus on “AI where it enhances learning.” Third, promote equity. Given that clerical and low-status cognitive jobs are more vulnerable and tend to involve a higher percentage of women, schools relying too much on AI for basic tasks risk perpetuating gender inequalities. Track access, outcomes, and time used across demographic groups; leverage this data to direct support—coaching, capstone projects, internship placements—toward students who may be disadvantaged by the very tools that benefit others.
Administrators should approach planning with a production mindset rather than simply relying on app lists. Consider where AI capital takes over, where it complements human effort, and where it creates distraction. Use straightforward metrics. If a chatbot can produce decent first drafts of lab-report feedback, redirect the grading time it frees toward face-to-face feedback. If a scheduler can create timetables in seconds, invest the staff time saved in mentorship. If a coding assistant helps beginners work faster, redesign tasks to emphasize design decisions, documentation, and debugging under pressure. In each case, the goal is to direct human labor toward the areas, both physical and relational, where value is concentrating.
Policy: Steering AI Capital Toward Shared Benefits
A clear policy framework is developing. Start with transparent procurement that treats AI capital as a utility, establishing clear terms for data use, uptime, and backup plans. Tie contracts to measurable learning outcomes or service results rather than just counting seat licenses. Next, create aligned incentives. Provide time-limited tax breaks or targeted grants for AI implementations that free up staff hours for high-impact learning experiences (like clinical supervision, laboratory work, and hands-on training). Pair these incentives with wage protection or transition stipends for administrative staff who upgrade their skills for student-facing jobs. This approach channels savings from AI capital back into the human interactions that are more difficult to automate.
Regulators should anticipate the obstacles. Growth in data centers and rising electricity needs present real logistical challenges. Education ministries and local governments can collaborate to pool their demand and negotiate favorable computing terms for schools and colleges. They can also publish disclosures regarding the use of AI in curricula and assessments, helping students and employers understand where AI was applied and how. Finally, implement metrics that account for exposure. Track what portion of each program’s assessments comes from physical or supervised activities. Determine how many contact hours each student receives and measure the administrative time freed up by implementing AI. Institutions that manage these ratios will enhance both productivity and the value of education.
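A sketch of what tracking those ratios could look like in practice is below. The field names, programs, and figures are hypothetical; a real dashboard would draw these values from timetabling, assessment, and workload records.

```python
# Sketch of exposure-adjusted program metrics: share of assessment weight that
# is physical or supervised, contact hours per student, and admin hours freed
# by AI and reinvested in student-facing work. All names and figures are
# hypothetical examples.

from dataclasses import dataclass

@dataclass
class ProgramMetrics:
    supervised_assessment_weight: float  # 0-1 share of marks from supervised/physical work
    contact_hours_per_student: float     # scheduled face-to-face hours per student per term
    admin_hours_freed: float             # staff hours freed by AI tools per term
    admin_hours_reinvested: float        # of those, hours redirected to student-facing work

    def reinvestment_rate(self) -> float:
        if self.admin_hours_freed == 0:
            return 0.0
        return self.admin_hours_reinvested / self.admin_hours_freed

programs = {
    "Nursing": ProgramMetrics(0.70, 120.0, 300.0, 240.0),
    "Business analytics": ProgramMetrics(0.35, 60.0, 500.0, 150.0),
}

for name, m in programs.items():
    print(f"{name}: supervised share={m.supervised_assessment_weight:.0%}, "
          f"contact hrs={m.contact_hours_per_student:.0f}, "
          f"reinvestment rate={m.reinvestment_rate():.0%}")
```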
Skeptics might question whether the productivity gains are exaggerated and whether new expenses, such as errors, monitoring, and training, cancel them out. They sometimes do. Research and news reports highlight teams whose workloads increased because they needed to verify AI outputs or familiarize themselves with new tools. Others highlight mental-health concerns arising from excessive tool use. The solution is not to dismiss these concerns, but to focus on design: limit AI capital to tasks with low error risk and affordable verification; adjust assessments to prioritize real-time performance; measure the time saved and reallocate it to more personal work. Where integration is poorly executed, gains diminish. Where it is effectively managed, early successes are more likely to persist.
Today, one of the most significant labor indicators might be this: intelligence is no longer scarce. The IMF’s figure showing 40% exposure reflects the macro reality that AI capital has crossed a price-performance threshold for many cognitive tasks. The risk for education isn’t becoming obsolete; it’s misallocation: spending limited funds teaching routine thinking as if AI capital were still expensive, while overlooking the physical and interpersonal work where value is now concentrated. The path forward is clear. Treat AI capital as a standard input. Buy it wisely. Deploy it where it enhances routine tasks. Shift human labor to where it is still needed most: labs, clinics, workshops, and seminars where people connect and collaborate. Track the ratios; evaluate the trade-offs; protect those who are most at risk. If we follow this route, wages won’t simply fall with automation; they can rise alongside complementary investment. Schools will fulfill their mission: preparing individuals for the reality of today’s world, not an idealized version of it.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Brookings Institution (Kording, K.; Marinescu, I.). 2025. (Artificial) Intelligence Saturation and the Future of Work (working paper).
Bruegel. 2023. Skills or a Degree? The Rise of Skill-Based Hiring for AI and Beyond (Working Paper 20/2023).
Carbon Brief. 2025. “AI: Five charts that put data-centre energy use and emissions into context.”
Epoch AI. 2024. “Performance per dollar improves around 30% each year.” Data Insight.
GitHub. 2022. “Research: Quantifying GitHub Copilot’s impact on developer productivity and happiness.”
IEA. 2024. Electricity 2024: Analysis and Forecast to 2026.
IEA. 2025. Electricity Mid-Year Update 2025: Demand Outlook.
IFR (International Federation of Robotics). 2024. World Robotics 2024 Press Conference Slides.
ILO. 2023. Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality.
ILO. 2025. Generative AI and Jobs: A Refined Global Index of Occupational Exposure.
IMF (Georgieva, K.). 2024. “AI will transform the global economy. Let’s make sure it benefits humanity.” IMF Blog.
MIT (Noy, S.; Zhang, W.). 2023. Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence (working paper).
OECD. 2024. Artificial Intelligence and the Changing Demand for Skills in the Labour Market.
OECD. 2024. How Is AI Changing the Way Workers Perform Their Jobs and the Skills They Require? (policy brief).
OECD. 2024. The Impact of Artificial Intelligence on Productivity, Distribution and Growth.
OpenAI. 2023. “New models and developer products announced at DevDay.” (Pricing update.)
RAND. 2024. Uneven Adoption of Artificial Intelligence Tools Among U.S. Teachers and Principals.
RAND. 2025. AI Use in Schools Is Quickly Increasing but Guidance Lags Behind.
Scientific Reports (Kestin, G., et al.). 2025. “AI tutoring outperforms in-class active learning.”
University of Melbourne / ADM+S. 2025. “Does AI really boost productivity at work? Research shows gains don’t come cheap or easy.”
Algorithmic Targeting Is Not Segregation: Fix Outcomes Without Breaking the Math
Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
Optimization isn’t segregation
Impose variance thresholds and independent audits
Require delivery reports and fairness controls
The key statistic in the public debate isn't about clicks or conversions. It's the 10% variance cap that U.S. regulators required Meta to meet for most housing ads by December 31, under a court-monitored settlement. This agreement requires the company’s “Variance Reduction System” to reduce the gap between eligible audiences and actual viewers, by sex and estimated race or ethnicity, to 10% or less for most ads, with federal oversight until June 2026. This is an outcome rule, not a moral judgment. It doesn't claim that “the algorithm is racist.” Instead, it states, “meet this performance standard, or fix your system.” As schools and governments debate whether algorithmic targeting in education ads amounts to segregation, we should remember this vital idea. The way forward is through measurable outcomes and responsible engineering, without labeling neutral, math-driven optimization as an act of intent.
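As a simplified illustration of what such an outcome rule measures, the sketch below compares the demographic composition of an eligible audience with the composition of the people who actually saw an ad and flags the largest relative gap against a 10% cap. This is not the settlement’s actual variance formula, and the group names and shares are hypothetical.

```python
# Simplified illustration of an outcome rule: compare the demographic shares of
# the eligible audience with the shares of actual viewers, and flag the largest
# relative gap against a 10% cap. This is NOT the settlement's actual variance
# formula; it only illustrates measuring delivery against eligibility.

def max_relative_gap(eligible_shares, delivered_shares):
    """Largest relative gap between eligible and delivered shares across groups."""
    gaps = []
    for group, eligible in eligible_shares.items():
        delivered = delivered_shares.get(group, 0.0)
        if eligible > 0:
            gaps.append(abs(delivered - eligible) / eligible)
    return max(gaps) if gaps else 0.0

eligible = {"group_a": 0.40, "group_b": 0.60}    # hypothetical eligible-audience shares
delivered = {"group_a": 0.34, "group_b": 0.66}   # hypothetical actual-viewer shares

gap = max_relative_gap(eligible, delivered)
print(f"max relative gap = {gap:.1%} -> {'within' if gap <= 0.10 else 'exceeds'} the 10% cap")
```

With these hypothetical shares, group_a is under-delivered by 15% relative to its eligible share, so the ad would be flagged for correction.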
What algorithmic targeting actually does
Algorithmic targeting has two stages. First, advertisers and the platform define a potential audience using neutral criteria. Then, the platform’s delivery system decides who actually sees each ad based on predicted relevance, estimated value, and budget limits. At the scale of social media, this second stage is the engine. Most ads won't reach everyone in the target group; the delivery algorithm sorts, ranks, and distributes resources. Courts and agencies understand this distinction. In 2023, the Justice Department enforced an outcome standard for housing ads on Meta, requiring the new Variance Reduction System to keep demographic disparities within specific limits and report progress to an independent reviewer. This solution targeted delivery behavior instead of banning optimization or calling it segregation. The lesson is clear: regulate what the system does, not what we fear it might mean.
Critics argue that even neutral systems can lead to unequal results. This is true and has been documented. In 2024, researchers from Princeton and USC ran paired education ads and found that Meta’s delivery skewed along racial lines: ads for some for-profit colleges reached a higher proportion of Black users than ads for similar public universities, even when the targeting was neutral. When more “realistic” creatives were used, this racial skew increased. Their method controlled for confounding factors by pairing similar ads and analyzing aggregated delivery reports, a practical approach for investigating a complex system. These findings are essential. They illustrate disparate impact, an outcome gap, not proof of intent. Policy should recognize this difference.
Figure 1: Regulation targets outcomes, not intent: most ads must fall within a 10% demographic variance window.
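A stripped-down version of that paired-ad comparison might look like the sketch below: given aggregated delivery counts for two ads with matched budgets and targeting, test whether a group’s share of viewers differs. The counts are invented, and the two-proportion z-test is a simplification of the published audit methodology.

```python
# Sketch of a paired-ad audit check: given aggregated delivery counts for two
# paired ads (same budget, same targeting), test whether a demographic group's
# share of viewers differs. The counts are made up, and this two-proportion
# z-test is a simplification of the published audit methodology.

from math import sqrt
from statistics import NormalDist

def two_proportion_z(successes_1, n_1, successes_2, n_2):
    p1, p2 = successes_1 / n_1, successes_2 / n_2
    pooled = (successes_1 + successes_2) / (n_1 + n_2)
    se = sqrt(pooled * (1 - pooled) * (1 / n_1 + 1 / n_2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1, p2, z, p_value

# Hypothetical aggregated reports: viewers of an ad for a for-profit college
# versus a paired ad for a public university.
p1, p2, z, p = two_proportion_z(successes_1=4_300, n_1=10_000,   # group share, ad A
                                successes_2=3_600, n_2=10_000)   # group share, ad B
print(f"share ad A={p1:.1%}, share ad B={p2:.1%}, z={z:.2f}, p={p:.4f}")
```

A real audit aggregates many such pairs and adjusts for multiple comparisons; the sketch only shows the shape of the test.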
The legal line on algorithmic targeting
The case that sparked this debate claims that Meta’s education ad delivery discriminates and that the platform, as a public accommodation in Washington, D.C., provides a different quality of service to different users. In July, the D.C. Superior Court allowed the suit to move forward, characterizing the claims as “archetypal” discrimination under D.C. law and suggesting that nondisclosure of how the system operates could constitute a deceptive trade practice. This decision permits the case to continue but does not provide a final ruling. It indicates that state civil rights and consumer protection laws can reach algorithmic outcomes. Still, it does not resolve the key question: when does optimization become segregation? The answer should rest on intent and on whether protected characteristics (or close proxies) are used as decision inputs, rather than on whether disparities exist at the group level after delivery.
There is a straightforward way to draw this line. Disparate treatment, which refers to the intentional use of race, sex, or similar indicators, should result in liability. Disparate impact, on the other hand, refers to unequal outcomes from neutral processes, which should prompt engineering duties and audits. This is how the 2023 housing settlement operates: it sets numerical limits, appoints an independent reviewer, and allows the platform an opportunity to reduce variance without banning prediction. This is also the approach for other high-risk systems: we require testing and transparency, not blanket condemnations of mathematics. Applying this model of outcomes and audits to education ads would protect students without labeling algorithmic targeting as segregation.
Evidence of bias is objective; the remedy should be audits, not labels
The body of research on delivery bias is extensive. Long before the latest education ad study, audits showed that Meta’s system skewed the delivery of job ads by race and gender, even when the advertiser’s targeting was neutral. A notable 2019 paper demonstrated that similar job ads reached very different audiences based on creative choices and platform optimization. Journalists and academics replicated these patterns: construction jobs mainly went to men; cashier roles to women; some credit and employment ads favored men, despite higher female engagement on the platform. We should not overlook these disparities. We should address them by setting measurable limits, exposing system behavior to independent review, and testing alternative scenarios, just as the housing settlement now requires. This is more honest and effective than labeling the process as segregation.
Education deserves special attention. The 2024 audit found that ads for for-profit colleges reached a larger share of Black users than public university ads, aligning with longstanding enrollment differences (about 25% Black enrollment in the for-profit sector versus roughly 14% in public colleges, based on the College Scorecard data used by the researchers). This history helps explain the observed skew but does not justify it. The appropriate response is to require education ad delivery to meet clear fairness standards, perhaps a variance limit similar to housing, and to publish compliance metrics. This respects that optimization is probabilistic and commercial while demanding equal access to information about public opportunities.
Figure 2: Realistic creatives widen demographic reach gaps between for-profit and public college ads
A policy path that protects opportunity without stifling practical math
A better set of rules should look like this. First, prohibit inputs that reveal intent. Platforms and advertisers shouldn't include protected traits or similar indicators in ad delivery for education, employment, housing, or credit. Second, establish outcome limits and audit them. Require regular reports showing that delivery for education ads remains within an agreed range across protected classes, with an independent reviewer authorized to test, challenge, and demand corrections. This is already what the housing settlement does, and it has specific targets and deadlines. Third, require advertiser-facing tools that indicate when a campaign fails fairness checks and automatically adjust bids or ad rotation to bring delivery back within the limits. None of these steps requires labeling algorithmic targeting as segregation. All of them help reduce harmful biases.
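The third requirement could be sketched as a naive feedback loop: check delivered shares against eligible shares and nudge pacing weights until the gap is back inside the cap. This is an illustration of the policy idea only, not how any platform’s delivery system actually works, and all names and numbers are hypothetical.

```python
# Naive sketch of a fairness check with automatic correction: if a group's
# delivered share drifts from its eligible share, adjust that group's pacing
# weight until the relative gap is back inside the cap. This illustrates the
# policy idea only; it is not how any real delivery system works, and all
# names and numbers are hypothetical.

def rebalance(eligible, delivered, cap=0.10, step=0.05, max_rounds=50):
    weights = {g: 1.0 for g in eligible}
    adjusted = dict(delivered)
    for _ in range(max_rounds):
        # Toy model of pacing: reweight the observed delivery and re-normalize.
        raw = {g: delivered[g] * weights[g] for g in eligible}
        total = sum(raw.values())
        adjusted = {g: raw[g] / total for g in eligible}
        gaps = {g: (adjusted[g] - eligible[g]) / eligible[g] for g in eligible}
        if max(abs(v) for v in gaps.values()) <= cap:
            break
        # Boost under-delivered groups, trim over-delivered ones.
        for g, gap in gaps.items():
            weights[g] *= (1.0 + step) if gap < 0 else (1.0 - step)
    return weights, adjusted

eligible = {"group_a": 0.40, "group_b": 0.60}    # hypothetical eligible-audience shares
delivered = {"group_a": 0.30, "group_b": 0.70}   # hypothetical initial delivery shares

weights, adjusted = rebalance(eligible, delivered)
print("pacing weights :", {g: round(w, 2) for g, w in weights.items()})
print("adjusted shares:", {g: round(s, 2) for g, s in adjusted.items()})
```

A production system would make the correction inside the auction, adjusting bids or rotation rather than reweighting after the fact; the point is only that the correction can be automated against a measurable target.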
The state and local landscape is moving towards a compliance-focused model. New York City’s Local Law 144 mandates annual bias audits for automated employment decisions and public reporting. Several state attorneys general have started using existing consumer protection and civil rights laws to monitor AI outcomes in hiring and other areas. These measures do not prohibit optimization; they demand evidence that the system operates fairly. National policymakers can adapt this framework for education ads: documented audits, standardized variance metrics, and safe harbors for systems that meet the standards. This approach avoids the extremes of “anything goes” and “everything is segregation,” aligning enforcement with what courts are willing to oversee: performance, not metaphors.
What educators and administrators should require now
Education leaders can take action without waiting for final court rulings. When purchasing ads, insist on delivery reports that show audience composition and on tools that promote fairness-aware bidding. Request independent audit summaries in RFPs, not just audience estimates. If platforms do not provide variance metrics, allocate more funding to those that do. Encourage paired-ad testing, a low-cost method used by research teams to uncover biases while controlling for confounding factors. The goal isn't to litigate intent; it's to ensure that students from all backgrounds see the same opportunities. This is a practical approach, not a philosophical one. It enables us to turn a heated label into a standard that improves access where it matters: public colleges, scholarships, apprenticeships, and financial aid.
Policymakers can assist by clarifying safe harbors. A platform that clearly excludes protected traits, releases a technical paper detailing its fairness controls, and meets a defined variance threshold for education ads should receive safe-harbor treatment and a defined cure period to rectify any issues flagged in audits. In contrast, a platform that remains opaque, or that uses protected traits or obvious proxies, should face penalties, including damages and injunctions. This distinction acknowledges a crucial ethical point: optimization driven by data can be lawful when it respects clear limits and transparency, and it becomes unlawful when it bypasses those constraints. The DOJ’s housing settlement demonstrates how to create rules that engineers can implement and courts can enforce.
The 10% figure is not a minor detail. It serves as a guide to regulating algorithmic targeting without turning every disparity into a moral judgment. Labeling algorithmic targeting as segregation obscures the critical distinction between intent and impact. It also hampers the tools that help schools reach the right students and aid students in finding the right schools. We do not need metaphors from a bygone era. We need measurable requirements, public audits, and independent checks that ensure fair delivery while allowing optimization to function within strict limits. If courts and agencies insist on this approach, platforms will adapt, research teams will keep testing, and students, especially those who have historically had fewer opportunities, will receive better information. Avoid sweeping labels. Keep the rules focused on outcomes. Let the math work for everyone, with transparency.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ali, M., Sapiezynski, P., Bogen, M., Korolova, A., Mislove, A., & Rieke, A. (2019). How Facebook’s ad delivery can lead to biased outcomes. Proceedings of CSCW.
Brody, D. (2025, November 13). Equal Rights Center v. Meta is the most important tech case flying under the radar. Brookings Institution.
Imana, B., Korolova, A., & Heidemann, J. (2024). Auditing for racial discrimination in the delivery of education ads. ACM FAccT ’24.
Northeastern University / Upturn. (2019). How Facebook’s Ad Delivery Can Lead to Biased Outcomes (replication materials).
Reuters. (2025, October 24). Stepping into the AI void in employment: Why state AI rules now matter more than federal policy.
U.S. Department of Justice. (2023; 2025 update). Justice Department and Meta Platforms Inc. reach key agreement to address discriminatory delivery of housing advertisements (press release; compliance targets and VRS).
Washington Lawyers’ Committee & Equal Rights Center. (2025, February 11). Equal Rights Center v. Meta (complaint, D.C. Superior Court).