
Beyond the Robot Hype: An Education Strategy for Narrow Automation


Keith Lee

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

AI and robotics remain narrow tools, excelling only in tightly defined tasks
Human versatility—handling exceptions, combining roles, and adapting to context—remains the decisive advantage
Education policy must prioritize training for this versatility, turning automation into complement rather than substitute

The recent rush to declare a labor apocalypse misses a stubborn fact: even at the technology frontier, most work resists complete substitution. The International Monetary Fund estimates that about 40% of jobs globally are “exposed” to AI. Yet exposure in low-income countries is closer to 26%—and exposure is not the same as displacement. It is an index of tasks that could be affected, for better or worse, by specific tools under specific conditions. That spread—40 versus 26—frames a deeper truth about education and labor policy in developing economies: the bottleneck is not a looming general artificial intelligence erasing human work, but the limited generalization of today’s systems and the uneven ability to apply them productively. Where tasks are standardized and context is controlled, automation advances quickly. Where the job hinges on recognizing edge cases, juggling multiple goals, and handling messy environments, human versatility remains decisive. The question for education is not how to train people for replacement, but how to cultivate that versatility as a comparative advantage. That makes task redesign, not wholesale substitution, the practical frontier, and it gives educators a proactive role in shaping how automation enters work and learning.

From Narrow Automation to Broad Capability: Why Generalization Still Falters

What is commonly referred to as AI in much of the industry is best described as automation plus pattern recognition, tightly bound to the data and environments in which it is trained. This distinction is significant for schools. A robot can repeat a calibrated pick-and-place in a fixed cell with remarkable speed; however, it will not, on its own, infer what to do when a supplier swaps packaging, a part is slightly bent, or the ambient light changes the camera’s calibration. Surveys of “general-purpose” robotics make the state of play clear: despite impressive progress using foundation models and large, diverse datasets, robust performance under distribution shift—the move from training conditions to the real world—remains a significant challenge. We should anticipate capable systems in well-scoped niches, rather than broad agents that seamlessly transition among tasks. Therefore, education policy must prioritize the human skills that can integrate niche automations into reliable production.

Figure 1: Automation clusters in routine cognitive and physical tasks, while AI mostly augments non-routine work; new roles emerge at the boundary. This visual underlines the article’s core claim: today’s systems are narrow, not general.

The difference between finer task granularity and true generality is not rhetorical. New robotic datasets and “generalist robot policy” efforts explicitly acknowledge that models trained across many demonstrations still struggle to transfer without careful domain adaptation and human supervision. Meta-analyses and surveys in 2023–2025 document promising sample-efficiency gains and multi-task behaviors, but also persistent gaps in compositional reasoning and robustness when conditions deviate. In other words, computation has made individual actions more precise; it has not made work universally machine-doable. For education systems, this suggests two priorities: build students’ capacity to diagnose when automation is brittle, and teach them to re-scope tasks, write operating “guardrails,” and recover gracefully when models fail—skills that are inherently cross-domain and human-centered.

What the 2023–2025 Data Actually Show for Developing Economies

Cross-country evidence since 2023 reinforces the case for reframing. The IMF’s 2024 analysis pegs AI exposure at roughly 60% in advanced economies, 40% in emerging markets, and 26% in low-income countries, emphasizing that many developing economies are less immediately disrupted—but also less ready to benefit—because exposure concentrates in clerical and high-skill services. Complementing this, an ILO working paper on generative AI concludes that the dominant near-term effect is augmentation of tasks, not wholesale automation, and notes that clerical work—an essential source of women’s employment—sits among the most exposed categories. OECD work adds a long-running backdrop: across member countries, about 28% of jobs sit in occupations at high risk of automation, with exposure skewed by education and task mix. Together, these sources indicate substantial task-level changes, limited job-level substitution, and significant heterogeneity across sectors and skills.

Meanwhile, the hardware realities of automation remain geographically concentrated. According to the International Federation of Robotics, 541,000 industrial robots were installed in 2023, increasing the global stock to approximately 4.3 million. Roughly 70% of those new units were allocated to Asia, with about half going to China alone. China’s density has more than doubled in four years to around 470 robots per 10,000 manufacturing workers, powered increasingly by domestic suppliers. That concentration matters for developing countries: it means learning to work alongside automation is becoming a baseline capability in export-integrated value chains. At the same time, the frontier of robotic deployment remains far from many classrooms. Where factories scale, labor-saving automation can coincide with higher output and, paradoxically, stable or rising employment in adjacent roles—installation, maintenance, quality, and logistics—if training systems pivot in time.

Figure 2: Robot adoption per worker rises with baseline wages—autos and electronics lead; agriculture and textiles lag. The gradient supports the policy focus on readiness, TVET, and complementary infrastructure rather than a one-size-fits-all automation push.

Readiness gaps are not abstract. In 2024, about 5.5 billion people—68% of the world—were online, but only 27% of people in low-income countries used the internet, compared with 93% in high-income economies. Electricity access has improved, yet approximately 666 million people still lacked access to electricity in 2023, predominantly in Sub-Saharan Africa. Digital infrastructure experts estimate that achieving universal broadband by 2030 would require over $400 billion in investment. These constraints dampen both adoption and the realized productivity of AI systems in schools and workplaces. The IMF warns, accordingly, that lower readiness could widen the income gap even if exposure is lower. Education policy that assumes ubiquitous connectivity will miss the constraint that matters most in many districts: the socket and the signal.

Regional studies underscore the need for caution. For Latin America and the Caribbean, a 2024 assessment by the ILO and the World Bank estimated that 2–5% of jobs could be eliminated by AI in the near term, with 26–38% affected to some extent. Importantly, digital infrastructure limitations will also constrain the impact. In East Asia and the Pacific, a 2025 World Bank review finds that technology adoption has boosted employment as scale and productivity effects offset direct displacement in several sectors, even as benefits skew to skilled workers. The policy implication is not to deny displacement risks, but to recognize the pattern: automation trims specific tasks and roles while opening complementary work where training systems pivot in time, and it stalls where connectivity and basic infrastructure lag. The corollary for education policy is timing: training systems must adjust before the task mix shifts, not after.

Rewriting Education Policy for a Robot-Adjacent Labor Market

If the edge of the problem is narrow automation rather than imminent generality, curricula should be redesigned to teach “versatility in context.” That means building horizontal range—the capacity to handle exceptions and novel cases—and vertical task stacking—the ability to combine technical, organizational, and communication tasks around an automation. UNESCO’s 2024 AI competency frameworks for teachers and students point in this direction by centering critical use, ethical judgment, and basic model understanding rather than tool-specific recipes. Systems that embed these competencies early and reinforce them through upper-secondary and post-secondary tracks will create graduates who can scope problems properly, write precise instructions for tools, anticipate failure modes, and make trade-offs under constraints, all of which raise the productivity of narrow automation without pretending it will think for them.

The pathway runs through TVET as much as it does through universities. The demand signal from factories and service providers is already shifting toward “purple-collar” roles that blend operator, maintainer, and data-literate supervisor. Given that Asia accounted for 70% of new robot installations in 2023, countries integrated into those value chains require rapid, modular upskilling in safety, sensor calibration, basic control logic, data logging, and line reconfiguration. Education ministries should broker industry-education compacts that co-design micro-credentials that stack into diplomas, align with international standards where possible, and are delivered alongside work-based learning. Where public institutes lack equipment, shared training centers with pooled, open-calendar access can reduce capital costs and spread utilization. This is not a wager on humanoids; it is a bet that maintenance, integration, and exception-handling will be abundant—even as task automation grows.

A human-in-the-loop philosophy should extend into how schools themselves adopt AI. Start with administrative and low-stakes uses where augmentation is clearest, such as drafting routine correspondence, scheduling, document classification, and data cleaning, while retaining human review. A brief method note clarifies the stakes: suppose, conservatively, that 30% of clerical tasks in a district office are technically amenable to partial automation under current tools. If only one-third of households and staff have reliable connectivity and devices, the effective short-run exposure could be roughly 0.30 × 0.33 ≈ 0.10, or about 10% of the clerical workload—small but meaningful, and conditional on training. This is a back-of-the-envelope estimate, not a forecast, combining ILO task exposure patterns with ITU connectivity shares; it illustrates why piloting with measurement and user training matters more than sweeping mandates.
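
For readers who want the arithmetic spelled out, here is a minimal sketch of the same back-of-the-envelope estimate; the 30% and one-third shares are the illustrative assumptions above, not measured values.

```python
# A minimal sketch of the back-of-the-envelope exposure estimate above;
# the input shares are illustrative assumptions, not measurements.
def effective_exposure(task_share_automatable: float, readiness_share: float) -> float:
    """Share of workload realistically exposed in the short run."""
    return task_share_automatable * readiness_share

# ~30% of clerical tasks technically amenable x ~1/3 with reliable
# connectivity and devices -> roughly 10% of the clerical workload.
print(f"{effective_exposure(0.30, 1/3):.2f}")  # 0.10
```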

Equity cannot be an afterthought. Clerical roles are among the most exposed to generative AI and are disproportionately held by women; without targeted upskilling, AI could erode one of the few stable rungs in many labor markets. Education systems should prioritize transitions from routine clerical work into higher-judgment student-facing support, data stewardship, and school operations analytics—roles that pair domain knowledge with oversight of tools. At the same time, governments need to fund the prerequisites—reliable electricity and broadband—to make any of this real. World Bank analyses of digital pathways for education and connectivity finance make the investment case clear; without it, AI policies risk becoming paper plans. The strategic choice is not whether to “ban” or “mandate” AI in schools; it is to sequence investments so that human capability—teachers, technicians, supervisors—can compound the gains of narrow automation.

A final policy lever is to temper macro expectations with micro design. Acemoglu’s 2024 “simple macroeconomics of AI” reminds us that productivity and cost savings play through tasks, not mystical growth surges. Once policymakers accept that automation’s displacement and augmentation effects will be uneven and path-dependent, they can set realistic targets for sectoral training, vendor procurement clauses that require human override and audit logs, and iterative evaluation cycles. The payoff for education is credibility: when ministries publish task-level baselines and measure the share of workflows that become faster, more accurate, or more equitable because a tool is embedded with trained staff, public trust rises, and the path for expansion becomes evidence-based.

Forty percent of global jobs are exposed to AI, compared with 26% in low-income countries, but this is not a countdown clock to human obsolescence. It is a map of where careful task redesign and human-machine collaboration can deliver gains, and where readiness stands in the way. For developing countries, the strategic advantage is not to chase generalized autonomy; it is to double down on human versatility—training people to orchestrate brittle automations, to notice edge cases, to reframe problems on the fly, and to do all this in institutions where electricity and connectivity are reliable, and where learning pathways are short, stackable, and tied to real equipment. If education systems teach that form of judgment at scale, the next wave of automation will widen opportunity rather than narrow it. The call to action is straightforward: fund the socket and the signal; adopt competency frameworks that prize critical use over blind adoption; build TVET-industry compacts for robot-adjacent skills; and measure task-level gains relentlessly. The robots will keep getting better. Human versatility must move faster.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Acemoglu, D. (2024). The Simple Macroeconomics of AI. NBER Working Paper No. 32487.
Epoch AI. (2025). Most AI value will come from broad automation, not from R&D.
Forbes/CognitiveWorld. (2019). Automation Is Not Intelligence.
International Federation of Robotics (IFR). (2024). World Robotics 2024 – Industrial Robots: Press conference deck and highlights.
International Labour Organization (ILO). (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality. ILO Working Paper 96.
International Labour Organization (ILO). (2025). Generative AI and Jobs: A Refined Global Index of Occupational Exposure.
International Monetary Fund (IMF). (2024). Gen-AI: Artificial Intelligence and the Future of Work. Staff Discussion Note SDN/2024/001.
International Telecommunication Union (ITU). (2024). Measuring digital development: Facts and Figures 2024.
OECD. (2024). OECD Employment Outlook 2024.
Reuters. (2024). AI could eliminate up to 5% of jobs in Latin America, study finds.
UNESCO. (2024). AI competency frameworks for teachers and students (overview).
World Bank. (2025). Future Jobs: Robots, Artificial Intelligence, and Digital Technologies in East Asia and Pacific.
World Bank. (2025). Tracking SDG7: The Energy Progress Report 2025 (highlights).
World Bank. (2025). Digital Pathways for Education: Enabling Learning Impact.


Beyond the Owl: How "Subliminal Learning" Repeats a Classic Statistical Mistake in Educational AI


Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

‘Subliminal learning’ signals spurious shortcuts, not new pedagogy
Demand negative controls, cross-lineage validation, and robustness-first training
Procure only models passing worst-case and safety gates across schools


A single, stark statistic underscores the urgency of our situation. Between 2015 and 2022, the share of students whose principals reported teacher shortages rose from 29% to 46.7% across the OECD. This means that almost one in two students is now learning in systems struggling to staff classrooms. In this context, we cannot afford to be swayed by research fashions that overpromise and under-replicate. When headlines tout the mysterious inheritance of traits from 'teacher' AI models by 'student' AI models, even when the training data seem unrelated, we must ask whether the mystery is a genuine phenomenon or a methodological mirage. What education leaders need is robust AI that performs consistently across schools, not results that collapse outside a controlled lab environment. The policy question is not whether AI can do remarkable things; it is whether the claims can withstand the confounders that have misled social science for decades.

The current spate of "subliminal learning" stories follows an experiment in which a teacher model with some trait—say, a fondness for owls or, more worryingly, misaligned behavior—generates data that appear unrelated to that trait, such as strings of numbers. A student model trained on this data reportedly absorbs the trait anyway. The Scientific American summary presents the finding as an unexpected, even eerie, kind of transfer. But read as methodology rather than magic, the setup echoes a familiar pitfall: spurious association in high-dimensional spaces, amplified by a tight coupling between teacher and student. When the teacher and student share an underlying base model, their internal representations are already aligned; "learning" may be the student rediscovering the latent structure it was predisposed to find. That is not a revelation about education. It is a reminder about research design.

The Shortcut Trap in Distillation

What the headlines call subliminal can be reframed as shortcut learning under distillation, in which a student model is trained to reproduce a teacher model's outputs. Deep networks often seize on proxies that 'work' in the training distribution but fail under shifts; the literature is replete with examples of classifiers that latch onto textures or backgrounds rather than objects, or onto dataset-specific quirks that do not generalize. If a student is trained to mimic a teacher's outputs—even when those outputs are filtered or encoded—the student can inherit shortcuts encoded in the teacher's representational geometry. The Anthropic/Truthful AI team itself reports that the effect occurs when teacher and student share the same base model, a strong hint that we are watching representational leakage rather than semantically independent learning. In other words, the phenomenon may be a laboratory artifact of initialization and distillation, rather than a new principle of cognition.

Spurious regression has a long pedigree: Granger and Newbold showed half a century ago how high R² and plausible coefficients can arise from unrelated time series if you ignore underlying structure. Today's machine-learning equivalent is spurious correlation across groups and environments, which predictably crumbles once data drift out of distribution. A 2024 survey catalogs how often models succeed by exploiting superficial signals, and the community has responded with methods such as group distributionally robust optimization and invariant risk minimization to force models to rely on stable, causally relevant features. The upshot for education is plain. If a model's advantage depends on an idiosyncratic alignment between teacher and student networks, or on hidden artifacts of synthetic data generation, its performance will evaporate when the model is deployed across districts, devices, and demographics. That is not alignment; it is overfitting in disguise.
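
The Granger and Newbold point is easy to reproduce. The following is a minimal simulation written for illustration (it is not taken from the cited paper): regressing one independent random walk on another routinely yields an R² well above zero, even though the two series are unrelated by construction.

```python
# Illustrative sketch: spurious regression between two independent random walks.
import numpy as np

rng = np.random.default_rng(0)

def spurious_r2(n_steps: int = 500) -> float:
    """R-squared from regressing one independent random walk on another."""
    x = np.cumsum(rng.normal(size=n_steps))  # random walk 1
    y = np.cumsum(rng.normal(size=n_steps))  # random walk 2, independent of x
    # Ordinary least squares with an intercept: y ~ a + b * x
    X = np.column_stack([np.ones(n_steps), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Despite having no causal or statistical link, many pairs show sizable R-squared.
r2_values = [spurious_r2() for _ in range(200)]
print(f"median R^2 across 200 unrelated pairs: {np.median(r2_values):.2f}")
```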

The Education Stakes in 2025


Figure 1: Teacher shortages jumped from 29% to 46.7% of students affected across the OECD, reversing a brief improvement in 2018. In systems under strain, flimsy AI evidence isn’t harmless—it crowds out scarce capacity.

The stakes are not hypothetical. New usage data indicate that faculty already employ large models for core instructional work: in one analysis of 74,000 educator conversations, 57% concerned curriculum development, with research assistance and assessment following behind. This is not a future scenario; it is a live pipeline from model behavior to classroom materials. Inject fragile methods into that pipeline, and the system propagates them at scale. In an environment where teacher time is scarce and digital tools shape daily practice, we must be more—not less—demanding of evidence standards.


Figure 2: Curriculum design dominates faculty AI use, meaning any modeling artifact can propagate straight into lesson materials. Governance should target where the real volume is—not edge-case demos.

Meanwhile, the sector is increasingly turning to synthetic data, both by choice and necessity. Gartner's prediction that more than 60 percent of AI training data would be synthetic by 2024 is becoming a reality, with estimates suggesting that up to one-fifth of training data is already synthetic. Independent analyses forecast that publicly available, high-quality human text may be effectively exhausted for training within a few years—by around 2028—pushing developers toward distillation and model-generated corpora. In this world, teacher–student pipelines are not an edge case; they are the default. Without careful guardrails, we risk unknowingly amplifying shortcuts and transmitting misalignment through datasets that look innocuous to humans but encode the teacher's quirks. This is precisely the failure mode the subliminal-learning experiments dramatize, and it is a risk we cannot afford to ignore.

Equity and governance concerns sharpen the point. OECD's latest monitoring places teacher capacity and system resilience at the forefront of policy priorities; UNESCO's guidance on generative AI urges human-centred, context-aware use with strong safeguards. A research culture that normalizes spurious associations as "surprising" discoveries runs directly against those priorities. For schools already navigating resource shortages, fragile AI methods are not a curiosity—they are an operational risk.

A Methods Standard to Prevent Spurious AI in Schools

If we treat the subliminal-learning story as a cautionary tale rather than a breakthrough, a practical policy agenda follows. First, preregistration with negative controls should become table stakes for educational AI research. When a study claims trait transfer through "unrelated" data, researchers must include pre-specified placebo features—like the proverbial owl preference—and report whether the pipeline also "discovers" them under random relabeling. Psychology's replication crisis taught us how easily flexible analysis can spin significance from noise; requiring negative controls protects education from becoming the next field to relearn that lesson at a high cost. Journals and funders should not accept claims about hidden structure without proof that the pipeline does not also discover structures that do not exist.
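
To show what such a negative control looks like in practice, here is a minimal, self-contained sketch under stated assumptions: a toy nearest-centroid pipeline stands in for whatever training pipeline is under review, and it is rerun on randomly relabeled data so the claimed effect can be compared against the resulting placebo distribution. The pipeline, feature dimensions, and names are illustrative, not drawn from any cited study.

```python
# A minimal sketch of a negative-control (placebo) check; the toy classifier
# and all names are illustrative stand-ins for a real training pipeline.
import numpy as np

rng = np.random.default_rng(42)

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test) -> float:
    """Toy pipeline: nearest-centroid classifier, held-out accuracy."""
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    preds = dists.argmin(axis=1)
    return float((preds == y_test).mean())

# Features with no real relationship to the labels: any "signal" the pipeline
# reports should be indistinguishable from the placebo distribution below.
X = rng.normal(size=(400, 50))
y = rng.integers(0, 2, size=400)
X_train, X_test = X[:300], X[300:]
y_train, y_test = y[:300], y[300:]

claimed_acc = nearest_centroid_accuracy(X_train, y_train, X_test, y_test)

# Placebo: rerun the identical pipeline on randomly relabeled training data.
placebo_accs = [
    nearest_centroid_accuracy(X_train, rng.permutation(y_train), X_test, y_test)
    for _ in range(100)
]

print(f"claimed accuracy: {claimed_acc:.2f}")
print(f"placebo accuracy (mean of 100 relabelings): {np.mean(placebo_accs):.2f}")
# If the claimed accuracy does not clearly exceed the placebo distribution,
# the "effect" cannot be distinguished from an artifact of the pipeline.
```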

Second, environmental diversification should be mandatory for any model intended for classroom use. That includes training and evaluation across multiple school systems, devices, languages, and content standards—and critically, across model lineages. If the effect depends on teacher–student architectural kinship, as the Anthropic/Truthful AI work indicates, then demonstrations must show that the claimed benefits persist when the student is trained on outputs from a different family of models. Otherwise, we are validating an effect that rides on shared initialization, not educational relevance. Regulatory sandboxing can help here: ministries or states can host pooled, privacy-preserving evaluation environments that make it cheap to run the same protocol across districts and model families before procurement.

Third, robustness-first objectives—such as group distributionally robust optimization and invariant risk minimization—should be normalized in ed-AI training regimes. These methods explicitly penalize performance that arises from environment-specific quirks, encouraging models to focus on features that remain stable across contexts. They are not silver bullets; even their proponents note limitations. But unlike hype-driven discovery, they encode into the loss function what policy actually values: performance that survives heterogeneity across schools. Procurement guidelines can require vendors to report group-wise worst-case accuracy and to document the distribution shifts they tested, not just average scores.
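
As one concrete form of such reporting, the sketch below computes per-group accuracy and the worst-group figure from predictions, labels, and a group identifier (for example, school system or device type). It is an illustration of the reporting idea, not a requirement taken from any cited guideline; the group labels and helper name are hypothetical.

```python
# A minimal sketch of group-wise worst-case reporting; field names and the
# district_A / district_B groups are illustrative.
import numpy as np

def group_accuracies(y_true, y_pred, groups):
    """Return {group: accuracy} plus the worst-group accuracy."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[str(g)] = float((y_pred[mask] == y_true[mask]).mean())
    return per_group, min(per_group.values())

# Toy usage: average accuracy looks fine (0.75), but one group lags badly.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 0, 0])
groups = np.array(["district_A"] * 4 + ["district_B"] * 4)

per_group, worst = group_accuracies(y_true, y_pred, groups)
print(per_group)                      # {'district_A': 1.0, 'district_B': 0.5}
print(f"worst-group accuracy: {worst:.2f}")
```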

Fourth, lineage disclosure and compatibility constraints should be included in contracts. If subliminal transfer manifests most strongly when the student shares the teacher's base model, buyers deserve to know both lineages. Districts could, for example, prefer cross-lineage distillation for high-stakes tasks to reduce the risk of hidden trait transmission. Where cross-lineage training is infeasible, vendors should present independent audits demonstrating that model behavior remains stable when the teacher is replaced with a different family. This is not bureaucratic overhead; it is the modern analog of requiring assay independence in medical diagnostics.

Fifth, we should distinguish between safety claims and performance claims in public messaging. The same experiment that transfers an innocuous "owl preference" can also transfer misalignment, which manifests as dangerous instructions—a result widely reported in both the technical and popular press. Education systems should treat those two outcomes as a single governance problem: the risk of trait propagation through model-generated data. It follows that red-team evaluations for safety must run in parallel with achievement-oriented benchmarks, with release gates that can halt deployment if safety degrades under distribution shifts or teacher-swap tests.

Anticipating critiques. One response is that the phenomenon reveals a real, hidden structure: if a student can infer a teacher's trait from numbers, perhaps the trait is genuinely encoded in distributional signatures that human reviewers cannot see. The problem is not the possibility but the proof. Without negative controls, cross-lineage validation, and out-of-distribution tests, we cannot distinguish between a "hidden causal signal" and a "repeatable artifact." Another objection is practical: if students learn better with AI-assisted materials, why dwell on method minutiae? Because the shortcut trap is precisely that: it delivers early gains that wash out when the distribution changes, which it always does in education, across schools, cohorts, and curricula. The best evidence we have—from studies on shortcut learning to surveys of spurious correlations—indicates that models trained on unstable signals tend to falter when the context shifts. That is a poor bargain for systems already stretched to the limit.

From Artifact to Action

A brief method note: Several figures above are policy-relevant precisely because they change the baseline. Teacher shortages approaching one in two students alter the cost of false positives in ed-tech. Faculty usage data showing curriculum development as the number-one AI use means any modeling artifact can propagate directly into lesson content. Projections of a data drought explain why distillation and synthetic pipelines are not optional. These are not scare statistics; they are context variables that should have been central to how we interpret "subliminal learning" from the start.

Where does this leave policy? Schools and ministries should move quickly, but not by chasing eerie effects. Instead, codify a methods standard for educational AI: preregistration with placebo features; cross-environment and cross-lineage training and evaluation; robustness-first objectives; lineage disclosure; and joint safety-and-achievement release gates. Frame procurement around worst-case group performance, not average benchmarks. Tie vendor payments to replication across independent sites. Align all of this with UNESCO's human-centred guidance and the OECD's focus on system capacity. That is how we turn an intriguing lab result into a sturdier, fairer AI infrastructure for classrooms.

In a world where 46.7% of students attend schools reporting teacher shortages, the price of fads is measured in lost learning opportunities. "Subliminal learning" may be a vivid demonstration of how strongly coupled networks echo one another, but it does not license a policy pivot toward fishing for hidden signals. The burden of proof lies with those who claim that seemingly unrelated data carry reliable pedagogical value; the default assumption, borne out by years of research on shortcuts and spuriousness, is that artifacts masquerade as insights. The practical path for education is neither cynicism nor hype. It is governance that treats every surprising correlation as a stress test waiting to be failed—until it passes the controls, crosses lineages, and survives the messy variety of real classrooms. Only then should we scale.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Anthropic Fellows Program & Truthful AI. (2025). Subliminal learning: Language models transmit behavioral traits via hidden signals in data. arXiv:2507.14805 (v1).
Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant Risk Minimization. arXiv:1907.02893.
Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut learning in deep neural networks. Nature Machine Intelligence, perspective (preprint on arXiv:2004.07780).
Granger, C. W. J., & Newbold, P. (1974). Spurious regressions in econometrics. Journal of Econometrics, 2(2), 111–120.
Hasson, E. R. (2025, August 29). Student AIs pick up unexpected traits from teachers through subliminal learning. Scientific American.
OECD. (2024). Education Policy Outlook 2024. OECD Publishing.
OECD. (2024). Education at a Glance 2024. OECD Publishing.
Sagawa, S., Koh, P. W., Hashimoto, T., & Liang, P. (2020). Distributionally robust neural networks for group shifts. ICLR (preprint arXiv:1911.08731).
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366.
TechMonitor. (2023, August 2). Most AI training data could be synthetic by next year—Gartner.
UNESCO. (2023/2025 update). Guidance for generative AI in education and research. UNESCO.
