Domain-Specific AI Is the Safer Bet for Classrooms and Markets
General AI predicts probabilities, not context-specific safety. Domain-specific AI fits the task and lowers risk in classrooms and markets. Use ISO 42001, the NIST AI RMF, and the EU AI Act, and test on domain benchmarks.

Reported AI incidents reached 233 in 2024, marking a 56% increase from the previous year. This sharp rise highlights real-world failures as generative systems become part of everyday tools. The public is aware of this shift. By September 2025, half of U.S. adults expressed more concern than excitement about AI in their daily lives, with only 10% feeling more excited than worried. This sentiment reveals a trend that cannot be ignored: general-purpose models focus on predicting the next token instead of meeting safety standards that differ by context. They operate on probability, not purpose. As usage shifts from chat interfaces to classrooms and financial settings, relying on probability as a measure of safety is inadequate. The solution lies in domain-specific AI, which consists of systems designed specifically for a narrow task, including safety measures suited to the task's stakes.
The case for domain-specific AI
The main issue with the push for greater generality is that it misaligns model learning with safety requirements. Models learn from training data trends, while safety is situational. What is acceptable in an open forum may be harmful in a school feedback tool or a trading assistant. As regulators outline risk tiers, this misalignment becomes both a compliance and ethical concern. The European Union’s AI Act, which took effect on August 1, 2024, categorizes AI used for educational decisions as high-risk, imposing stricter obligations and documentation requirements. General-purpose models also fall under a specific category with unique responsibilities. In short, the law reflects reality: risk depends on usage, not hype. Domain-specific AI acknowledges this reality. It narrows inputs and outputs, aligns evaluations with workflow, and establishes error budgets that correspond to the potential harm involved.
General benchmarks support the same idea. When researchers adapted the traditional MMLU exam into MMLU-Pro, top models experienced a 16-33 point drop in accuracy. Performance on today’s leaderboards tends to falter when faced with real-life scenarios. This serves as a warning against unscoped deployment. Meanwhile, government labs now publish pre-deployment safety evaluations because jailbreak vulnerabilities still occur. The UK’s AI Safety Institute has outlined methods for measuring attack success and correctness; its May 2024 update bluntly addresses models’ dangerous capabilities and the need for contextual testing. If failure modes vary by context, safety must also be context-dependent. Domain-specific AI enables this possibility.

Evidence from incidents, benchmarks, and data drift
The curve for incidents is steep. The Stanford AI Index 2025 reported 233 incidents in 2024, marking the highest number on record. This figure is a system-wide measure rather than a reaction to media coverage. Additionally, the proportion of “restricted” tokens in common web data rose from about 5-7% in 2019 to 20-33% by 2023, altering both model inputs and behavior. As training data becomes more varied, the likelihood of general models misfiring in edge cases increases. Safety cannot be a one-and-done exercise; it must be a continuous practice tied to specific domains, using ongoing tests that reflect real tasks.
Safety evaluations confirm this trend. New assessment frameworks like HELM-Safety and AIR-Bench 2024 reveal that measured safety is heavily influenced by the specific harms tested and how prompts are structured. The UK AISI’s method also rates attack success rates and underscores that risk is contingent on deployment context rather than solely on model capability. The conclusion is clear: a general score does not guarantee safety for a specific classroom workflow, exam proctor, or FX-desk summary bot. Domain-specific AI allows us to select relevant benchmarks, limit the input space, and establish refusal criteria that reflect the stakes involved.
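The attack-success-rate metric at the heart of these evaluations is simple to state: the fraction of adversarial prompts that elicit a harmful completion. A minimal sketch, assuming a toy keyword-based refusal check; real harnesses use human review or tuned classifiers, and the responses below are invented:

```python
# Hypothetical sketch of scoring jailbreak robustness for one deployment context.
# The refusal heuristic and sample responses are illustrative only, not the
# UK AISI's actual methodology.

def attack_success_rate(responses, is_harmful):
    """Fraction of adversarial prompts that elicited a harmful completion."""
    hits = sum(1 for r in responses if is_harmful(r))
    return hits / len(responses)

def looks_harmful(response: str) -> bool:
    # Toy check: treat anything that is not an explicit refusal as a success
    # for the attacker. A production evaluator would be far more careful.
    return not response.lower().startswith(("i can't", "i cannot", "sorry"))

classroom_responses = [
    "I can't help with that request.",
    "Sorry, that's outside this tool's scope.",
    "Here is how to bypass the grading rubric...",
]

rate = attack_success_rate(classroom_responses, looks_harmful)
print(f"Attack success rate: {rate:.0%}")
```

The same harness can be rerun with prompt sets tailored to each domain, which is exactly why a single general score does not transfer across contexts.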
Data presents another limitation. As access to high-quality text decreases, developers increasingly rely on synthetic data and smaller, curated datasets. This raises the risk of overfitting and makes distribution drift more likely. Therefore, targeted curation gains importance. A model for history essay feedback should include exemplars of rubrics, standard errors, and grade-level writing—not millions of random web pages. A model for news reading in FX requires calendars, policy documents, and specific press releases, rather than a general assortment of internet text. In both cases, domain-specific AI addresses the data issue by defining what “good coverage” means.
What domain-specific AI changes in education and finance
In education, the stakes are personal and immediate. AI tools that determine student access or evaluate performance fall into high-risk categories in the EU. This requires rigorous documentation, monitoring, and human oversight. It also calls for choosing the right design for the task. A model that accepts inputs aligned with rubrics, generates structured feedback, and cites authoritative sources is easier to audit and remains within an error budget compared to a general chat model that can veer off-topic. NIST’s AI Risk Management Framework and its 2024 Generative AI profile provide practical guidance: govern, map, measure, manage—applied to specific use cases. Schools can utilize these tools to define what “good” means for various needs, such as formative feedback, plagiarism checks, or support accommodations, and to determine when AI should not be applied.
In finance, the most effective approaches are focused and testable. The best outcomes currently emerge from news-to-signal pipelines rather than general conversation agents. A study by the ECB this summer found that using a large model to analyze two pages of commentary in PMI releases significantly enhanced GDP forecasts. Academic and industry research shows similar improvements when models extract structured signals from curated news instead of trying to act like investors. One EMNLP 2024 study showed that fine-tuning LLMs on newsflow yielded return signals that surpassed conventional sentiment scores in out-of-sample tests. A 2024 EUR/USD study combined LLM-derived text features with market data, reducing MAE by 10.7% and RMSE by 9.6% compared to the best existing baseline. This demonstrates domain-specific AI in action: focused inputs, clear targets, and straightforward validation.
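Improvements of the kind the EUR/USD study reports (lower MAE and RMSE against a baseline) are straightforward to compute once forecast errors are in hand. A sketch with invented error series, not the study's data:

```python
import math

def mae(errors):
    # Mean absolute error over a list of forecast errors.
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    # Root mean squared error over the same list.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Illustrative out-of-sample forecast errors (made up for this sketch).
baseline_err = [0.010, -0.008, 0.012, -0.011, 0.009]
fused_err    = [0.009, -0.007, 0.010, -0.010, 0.008]

# Relative improvement of the text-augmented model over the baseline.
mae_gain  = 1 - mae(fused_err) / mae(baseline_err)
rmse_gain = 1 - rmse(fused_err) / rmse(baseline_err)
print(f"MAE improvement: {mae_gain:.1%}, RMSE improvement: {rmse_gain:.1%}")
```

The point of the narrow design is exactly this testability: the target, the error metric, and the baseline are all fixed in advance, so a claimed gain can be checked line by line.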
The governance layer must align with this technical focus. ISO/IEC 42001:2023, the first AI management system standard, provides organizations with a way to integrate safety into daily operations: defining roles, establishing controls, implementing monitoring, and improving processes. Combining this with the EU AI Act’s risk tiers and NIST’s RMF creates a coherent protocol for schools, ministries, and finance firms. Start small. Measure what truly matters. Prove it. Domain-specific AI is not a retreat from innovation; it is how innovation thrives amid real-world challenges.
Answering the pushback—and governing the shift
Critics may argue that “narrow” models will lag behind general models that continue to scale. However, the issue isn’t about measuring intelligence; it’s about being fit for purpose. When researchers modify tests or change prompts, general models struggle. MMLU-Pro’s 16-33 point decline illustrates that today’s apparent mastery could collapse under shifting distributions. General models remain prone to jailbreak vulnerabilities, so safety teams continue to publish defenses because clever attacks still work. The UK AI Safety Institute’s methodologies and the follow-up reports from labs and security firms emphasize one need: we must evaluate safety against the specific hazards of each task. Domain-specific AI achieves this inherently.

Cost is another concern: building many smaller, focused systems can look like duplicated effort. In reality, effective stacks combine both types. Use a general model for drafting or retrieval, then run outputs through a policy engine that checks for rubric compliance, data lineage, and refusal rules. ISO 42001 and NIST RMF guide teams in this process by detailing what data was utilized, what tests were conducted, and how failures are addressed. The EU AI Act rewards such design with clearer paths to compliance, particularly for high-risk educational applications and for general models used in regulated environments. The lesson from recent evaluations is evident: governance must reside where the work occurs. This is more cost-effective than dealing with incident responses, reputational damage, and rework after audits.
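A policy engine of this kind can be very small. A minimal sketch, assuming a rubric-feedback use case; the required sections and banned topics below are illustrative placeholders, not a real product's rules:

```python
# Hypothetical post-generation policy gate: a general model drafts the text,
# and this check enforces domain-specific structure and refusal rules before
# anything reaches a student.

REQUIRED_SECTIONS = ("strengths", "areas to improve")
BANNED_TOPICS = ("medical advice", "legal advice")

def passes_policy(output: str) -> tuple[bool, list[str]]:
    """Return (ok, reasons): rubric structure plus out-of-scope topic checks."""
    reasons = []
    text = output.lower()
    for section in REQUIRED_SECTIONS:
        if section not in text:
            reasons.append(f"missing section: {section}")
    for topic in BANNED_TOPICS:
        if topic in text:
            reasons.append(f"out-of-scope topic: {topic}")
    return (not reasons, reasons)

draft = "Strengths: clear thesis. Areas to improve: cite primary sources."
ok, why = passes_policy(draft)
print(ok, why)
```

Because every rejection carries a reason string, the same gate doubles as the audit trail that ISO 42001-style management systems ask for.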
A final objection is that specialization could limit creativity. Evidence from the finance sector suggests otherwise. The most reliable improvements currently come from models that analyze specific news to generate precise signals, validated against clear benchmarks. The ECB example demonstrated how small amounts of focused text improved projections, while the EUR/USD study outperformed baselines using task-specific features. Industry research indicates that fine-tuned newsflows perform better than generic sentiment analysis. None of these systems “thinks like a fund manager.” They excel at one function, making them easier to evaluate when they falter. In education, the parallel is clear: assist teachers in providing rubric-based feedback, highlight patterns of misunderstanding, and reserve critical judgment for humans. This approach keeps the helpful tool and mitigates harm.
The evidence is compelling. Incidents rose to 233 in 2024, public trust is fragile, and stricter benchmarks reveal brittle performance. The solution isn't abstract “alignment” with universal values implemented in larger models. The remedy is to connect capability with context. Domain-specific AI narrows inputs and outputs, utilizes curated data, and demonstrates effectiveness through relevant tests. It establishes concrete governance through ISO 42001 and NIST RMF. It aligns with the EU AI Act’s risk-driven framework. Its efficacy is already demonstrated in lesson feedback and news-based macro signals. The call to action is straightforward. Schools and ministries should set domain-specific error budgets, adopt risk frameworks, and choose systems that prove their reliability in domain tests before they interact with students. Financial firms should confine assistants to information extraction and scoring, not decision-making, and hold them to measurable performance standards. We can continue to chase broad applications and hope for safety, or we can develop domain-specific AI that meets the standards our classrooms and markets require.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Ada Lovelace Institute. (2024). Under the radar? London: Ada Lovelace Institute.
Carriero, A., Clark, T., & Pettenuzzo, D. (2024). Macroeconomic Forecasting with Large Language Models (slides). Boston College.
Ding, H., Zhao, X., Jiang, Z., Abdullah, S. N., & Dewi, D. A. (2024). EUR/USD Exchange Rate Forecasting Based on Information Fusion with LLMs and Deep Learning. arXiv:2408.13214.
European Commission. (2024). EU AI Act enters into force. Brussels.
Guo, T., & Hauptmann, E. (2024). Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow. In EMNLP Industry Track. ACL Anthology.
ISO. (2023). ISO/IEC 42001:2023—Artificial intelligence management system. Geneva: International Organization for Standardization.
NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Gaithersburg, MD.
NIST. (2024). AI RMF: Generative AI Profile (NIST AI 600-1). Gaithersburg, MD.
RAND Corporation. (2024). Risk-Based AI Regulation: A Primer on the EU AI Act. Santa Monica, CA.
Reuters. (2025, June 26). ECB economists improve GDP forecasting with ChatGPT.
Stanford CRFM. (2024). HELM-Safety. Stanford University.
Stanford HAI. (2025). AI Index Report 2025—Responsible AI. Stanford University.
UK AI Safety Institute (AISI). (2024). Approach to evaluations & Advanced AI evaluations—May update. London: DSIT & AISI.
Wang, Y., Ma, X., Zhang, G., et al. (2024). MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark. NeurIPS Datasets & Benchmarks (Spotlight).

Borderless AI Labor Is Changing Work and School
AI is making labor borderless as online services surge. Opportunity expands, but standards, audits, and broadband are crucial. Schools must teach task-first skills, platform literacy, and safeguards.

The fastest-growing part of global trade is undergoing a profound transformation. In 2023, digitally deliverable services—work sent and received through networks—reached about $4.5 trillion. For the first time, developing economies surpassed $1 trillion in exports. This accounts for more than half of all services trade. Borders still exist, but for more tasks, they function like dimmer switches rather than walls. This is the world of borderless AI labor, a transformative force where millions of people annotate data, moderate content, build software, translate, advise, and teach from anywhere. The rise of large language models and affordable coordination tools has accelerated this shift. Employers can now swiftly create distributed teams, manage them with algorithmic systems, and pay them across time zones. The key question is not whether jobs will shift, but who sets the rules for that shift and how schools and governments prepare people to succeed in it.
The borderless AI labor market is here
The numbers clearly illustrate this trend. In a 2025 survey of employers, 40-41% of firms indicated they would reduce staff where AI automates tasks. At the same time, they are reorienting their business models around AI and hiring for new skills. This change does not spell disaster. By 2030, the best estimate is a net gain of 78 million jobs globally, with 170 million created and 92 million displaced. To put it another way: tasks are moving, talent strategies are changing, and opportunities are opening—often across borders. The borderless AI labor market is not just a shift, but a gateway to unprecedented growth and opportunity.

Hiring practices are already reflecting this shift. Data from platforms show that remote and cross-border hiring increased through 2024. Over 80% of roles on one global platform were classified as remote, and cross-border hiring rose by about 42% year-over-year. In the United States alone, 64 million people—or 38% of the workforce—freelanced in 2023, many serving clients they never met in person. These trends, coupled with the surge in digitally delivered services, confirm that borderless AI labor has scale and momentum.
Winners and risks in borderless AI labor
The benefits are significant. For the 2 billion people in informal work—over 60% of workers in much of the Global South—AI-enabled matching, translation, and reputation tools can help clarify skills for employers and create opportunities for higher-value tasks. This is the inclusive promise of borderless AI labor, where visibility and trust are essential in global services. When communication, payments, and rating systems are tied to the worker, location matters less and demonstrated performance matters more.
However, the risks are real and immediate. The global supply chain for AI, which relies on data labelers and content moderators in cities like Nairobi, Manila, Accra, Bogotá, and others, is a key part of this issue. Over the past two years, research and litigation have highlighted issues like psychological harm, low pay, and unclear practices within this ecosystem. New projects by policy groups are beginning to show how moderation systems can fail in non-English contexts and low-resource languages. The solution is not to withdraw from global work but to set enforceable standards for wages, safety, and mental health support wherever moderation and annotation take place. While automation can filter some content, it cannot replace the need for decent labor conditions.
A subtler risk comes from algorithmic management itself. This refers to the use of AI tools to assign tasks, monitor productivity, and evaluate workers at scale. Recent surveys in advanced economies reveal widespread use of these systems and varied governance. Many companies have policies, but many workers still express concerns about transparency and fairness. In cross-border teams, power imbalances can worsen if ratings and algorithm-based decision-making are unaccountable. The solution is not to ban these tools; it is to make them auditable and explainable, providing clear recourse for workers when mistakes happen.
There is also a broader change. As tasks move online, the demand for office space is shifting unevenly. In the U.S., office vacancies reached around 20% by late 2024, the highest level in decades, with distress in the sector nearing $52 billion. This does not mean cities will die; rather, it indicates a shift away from low-performing spaces toward digital infrastructure and talent. Employers will continue to focus on time zones, skills, and cost—rather than postal codes. Borderless AI labor will keep driving this change.
Rules for a fair marketplace in borderless AI labor
First, establish portable labor standards for digital work. Governments that buy AI services should require contractors—and their subcontracted platforms—to meet basic conditions, including living-wage floors based on purchasing power, mandated rest periods, and access to mental health support for high-risk jobs like content moderation. International cooperation can help align these standards with existing due diligence laws. A credible standard would connect buyers in cities like New York or Berlin to the outcomes for workers in places like Nairobi or Cebu.
Second, include algorithmic management in policy discussions. Regulators should require companies that use automated assignment, monitoring, or evaluation to disclose such practices, document their effects, and provide appeals processes. Evidence shows that companies already report governance measures, but their implementation often outpaces oversight. Clear rules, alongside audits by independent organizations, can reduce bias, limit intrusive monitoring, and protect the integrity of cross-border teams. Where national laws do not apply, public and institutional buyers can enforce contract obligations.
Third, invest in connectivity. An estimated 3 billion people still lack internet access. A rigorous estimate from 2023 places the global broadband investment needed for near-universal access at about $418 billion, mostly in emerging markets. This investment is essential infrastructure for borderless AI labor. Prioritize open-access fiber, shared 4G/5G in rural areas, and community networks. Public funds should promote open competition and fair-use rules so smaller firms and schools can benefit.
Fourth, speed up cross-border payments and credentials. Workers need quick and low-cost methods to get paid, demonstrate their skills, and maintain verified work histories. Trade organizations are already tracking a surge in digital services. The next step is mutual recognition of micro-credentials and compatible identity standards. Reliable certificates and verifiable portfolios allow employers to assess workers based on evidence, not stereotypes—an essential principle in borderless AI labor.
Preparing schools for borderless AI labor
Education systems must shift from a geography-focused model to a task-focused model. Start with language-aware and AI-aware curricula. Students should learn how to use model-assisted tools for translation, summarization, coding, and data cleaning. They also need to judge outputs, cite sources, and respect privacy. Employer surveys indicate substantial investment in AI skills and ongoing job redesign. Schools should reflect this reality with projects that involve actual work for external clients and peers in different time zones, building the trust that borderless AI labor values.

Next, teach platform literacy and portfolio building. Many graduates will have careers that combine organizational roles with platform work. They must understand service-level agreements, dispute processes, rating systems, and the basics of tax compliance across borders. Capstone projects should culminate in public portfolios featuring versioned code, reproducible analyses, and evidence of collaboration across cultures and time zones. Credentialing should be modular and stackable, so a learner in Accra can combine a local certificate with a global micro-credential recognized by the same employer.
Finally, prioritize well-being and ethics. The data work behind AI can be demanding. Students who choose annotation, moderation, or risk analysis need training to handle exposure to harmful content and find pathways to safer, higher-value roles. Programs should normalize mental health support, rotate students out of traumatic tasks, and teach how to create safer processes—such as automated filtering, triage, and escalation—to reduce harm. The goal is not to deter students but to empower them and provide safeguards in a market that will continue to grow.
The world has already crossed a significant threshold. When digitally deliverable services surpass $4.5 trillion and developing economies exceed $1 trillion in exports, we aren’t discussing a future trend; we’re managing a current reality. Borderless AI labor is not just a concept; it involves a daily flow of tasks, talent, and trust—organized by algorithms, traded in global markets, and facilitated over networks. We can let this system develop by default, with weak protections, inadequate training, and increasing disparities. Or we can take action: establish portable standards for digital work, regulate algorithmic management, invest in connectivity, and teach for a task-driven environment. If we choose the latter, the potential is genuine. Millions who have been invisible to global employers can gain recognition, verifiable reputations, and access to safer, better-paying jobs. This is a policy decision—one that schools, ministries, and buyers can act on now, while the new market is still taking shape.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Amnesty International. (2025, April 3). Kenya: Meta can be sued in Kenya for role in Ethiopia conflict.
Brookings Institution. (2025, October 7). Reimagining the future of data and AI labor in the Global South.
Center for Democracy & Technology (CDT). (2024, January 30). Investigating content moderation systems in the Global South.
Deel. (2025, April 13). Global hiring trends show cross-border growth and 82% remote roles.
MSCI Real Assets. (2025, July 15). The office-market recovery is here, just not everywhere.
Moody’s CRE. (2025, January 3). Q4 2024 preliminary trend announcement: Office vacancy hits 20.4%.
OECD. (2024, March). Using AI in the workplace.
OECD. (2025, February 6). Algorithmic management in the workplace.
Oughton, E., Amaglobeli, D., & Moszoro, M. (2023). What would it cost to connect the unconnected? arXiv.
United Nations Conference on Trade and Development (UNCTAD) via WAM. (2024, December 7). Developing economies surpass $1 trillion in digitally deliverable services; global total $4.5 trillion.
Upwork. (2023, December 12). Freelance Forward 2023.
World Economic Forum. (2025, January 7). Future of Jobs Report 2025 (press release and digest).
World Economic Forum. (2025, May 13). How AI is reshaping the future of informal work in the Global South.
World Trade Organization. (2025). Digitally Delivered Services Trade Dataset.
Stop Training Coders, Build Scientists: An ASEAN AI Talent Strategy
Bootcamps produce tool users, not frontier researchers. ASEAN needs a scientist-first AI talent strategy. Fund PhDs, labs, and compute to invent, not import.

The statistic that should jolt us awake is simple: almost 40% of jobs worldwide will be affected by AI, either displaced or reshaped, according to the IMF. This scale of change is historic. It is not merely a coding-bootcamp issue; it is a problem of research capacity. If ASEAN opts for quick courses intended to help generalist software engineers use AI tools, the region risks repeating a mistake others have learned the hard way: preparing people to operate kits while leaving the science to others. The ASEAN AI talent strategy must start with a straightforward premise. Low-cost agents and copilots already handle much of what entry-level coders do, and their subscriptions cost less than a team lunch. If Southeast Asia wants to compete, it needs a pipeline of AI scientists capable of building models, designing algorithms, and publishing groundbreaking work that attracts investment and fosters clusters. This is the only approach that will scale.
Why an ASEAN AI talent strategy must reject “teach-everyone-to-code”
There is a common policy instinct: subsidize short courses that enable existing developers to utilize AI libraries from the cloud. This instinct is understandable. It is quick, appealing, and easy to measure. However, the market has evolved rapidly. Enterprise surveys in 2024–2025 indicate that the majority of firms are now incorporating generative AI into their workflows. Several studies report double-digit productivity gains for tasks like coding, writing, and support. Meanwhile, mainstream tools have lowered the cost of entry-level coding. GitHub Copilot plans run about US$10–$19 per user per month. OpenAI’s small models price tokens at cents per million. When the marginal cost of junior tasks approaches zero, bootcamps that teach tool operation become a treadmill. They prepare people for jobs that machines can already perform. The ASEAN AI talent strategy must, therefore, focus on deep capability rather than superficial familiarity.
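The "marginal cost approaches zero" claim is easy to sanity-check with back-of-envelope arithmetic. A sketch in which the per-token price and task size are assumptions chosen for illustration, not quoted vendor rates:

```python
# Back-of-envelope cost of delegating one entry-level coding task to a small
# model, using the "cents per million tokens" framing from the text. Both
# constants below are assumed values, not actual published prices.

PRICE_PER_MILLION_TOKENS_USD = 0.60   # assumed blended input+output price
TOKENS_PER_TASK = 3_000               # assumed prompt plus completion size

cost_per_task = PRICE_PER_MILLION_TOKENS_USD * TOKENS_PER_TASK / 1_000_000
tasks_per_dollar = 1 / cost_per_task

print(f"Cost per task: ${cost_per_task:.4f}")
print(f"Tasks per dollar: {tasks_per_dollar:,.0f}")
```

Under these assumptions a dollar buys hundreds of routine completions, which is the economic pressure the paragraph describes: training people to do what a fraction of a cent can do is a treadmill.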
ASEAN’s higher-education and research base highlights the risk of remaining shallow. Several member states still report very low researcher densities by global standards. A recent review notes 81 researchers per million in the Philippines, 205 in Indonesia, and 115 in Vietnam—figures far below those of advanced economies. Malaysia allocates about 0.95% of GDP to R&D, roughly five times Vietnam’s share. Yet, even in Malaysia, the challenge is creating strong links between universities and industry. If Southeast Asia spends limited resources on short training courses without enhancing research capacity, it will produce more users of foreign tools rather than scientists who develop ASEAN tools. That is a trap.

What South Korea’s bootcamps got wrong
South Korea serves as a warning. For years, policy emphasized rapid scaling of digital training through national platforms and intensive programs aimed at pushing more individuals into “AI-ready” roles. This approach improved literacy and awareness but did not create a surge of frontier researchers. Meanwhile, the political landscape changed. In December 2024, the National Assembly impeached the sitting president; in April 2025, the Constitutional Court removed him from office. The direction of tech policy shifted again. When strategy changes and budgets are adjusted, the only lasting asset is foundational capacity: graduate programs, labs, and a trained scientific base that can withstand disruptions. South Korea’s experience should caution ASEAN against equating bootcamp completion with scientific competitiveness.
There is also a harsh market reality. Employers are already replacing entry-level tasks with AI. A business leader survey conducted by the British Standards Institution in 2024–2025 reveals that firms are embracing automation, even as jobs change or vanish. An increasing number of studies show that copilots expedite routine coding, and some long-term data indicate a decline in junior placement rates at bootcamps compared to the pre-gen-AI era. None of this implies “don’t train.” It emphasizes the need to train for the right frontier. Teaching thousands to integrate existing APIs into apps will not provide a regional advantage when copilots perform the same tasks in seconds. Teaching a smaller number to design new learning architectures, optimize inference under tight energy budgets, or build multilingual evaluation suites will be challenging. That is what investors value.
China’s lesson: build scientists, not generalists
China’s current trajectory illustrates what sustained investment in talent and computing can achieve. In 2025, top universities increased enrollment in strategic fields such as AI, integrated circuits, and biomedicine to meet national priorities. Cities implemented compute-voucher programs that subsidized access to training clusters for smaller firms, transforming idle capacity into research output. China’s AI research workforce has grown significantly over the past decade, while its universities and labs continue to attract leading researchers. The United States still dominates the origin of “frontier” foundation models, according to the 2024 AI Index. Still, the global competition now revolves around who can build and retain scientific teams and who can obtain computing power at reasonable costs. The message for ASEAN is clear. To generate spillover benefits in your cities, invest in scientists and labs, not merely in users of tools. The research core is what anchors ecosystems and can lead to significant economic growth.

The talent strategy behind that research core is crucial. Elite programs attract top undergraduates into demanding, math-heavy tracks; they offer PhDs with generous stipends; they establish joint industry chairs allowing principal investigators to move between labs and startups; and they form groups that publish in competitive venues. The region doesn’t need to replicate China’s political economy to emulate its pipeline design. ASEAN can achieve this within a democratic framework by pooling resources and setting standards. The goal should be clear: train a few thousand AI scientists—people who can publish, review, and lead—rather than hundreds of thousands of tool operators. This is not elitist; it is practical in an age where entry-level work is being automated, and where rewards go to original research.
A regional blueprint for an ASEAN AI talent strategy
First, fund depth, not breadth. Create an ASEAN Doctoral Network in AI that offers joint PhDs across leading universities in Singapore, Malaysia, Thailand, Vietnam, Indonesia, and the Philippines. Admit small cohorts based on merit, provide regional stipends tied to local living expenses, and guarantee computing time through a shared cluster. The backbone can be a federated compute alliance, located at research universities and connected through high-speed academic networks. Cities hosting nodes should ensure clean power; states should offer expedited visas for fellows and their families. Measure success not by enrollment but by published papers, released code, and labs established.
Second, improve the research base. Most ASEAN members must increase their R&D spending and research intensity to move beyond applied integration. The gap is apparent. Malaysia allocates just under 1% of GDP to R&D, while some peers spend even less; several countries report researcher densities too low to support robust lab cultures. Establish national five-year minimums for public AI research funding. Tie grants to multi-institutional teams, requiring at least one participating public university outside the capital city. Encourage repatriation by offering “returning scientist” packages that cover relocation, lab startup, and hiring of early-stage talent. A stronger research base will also lessen the need to import expertise at high cost.
Third, align policy with industry needs, but guard against dilution. Malaysia’s national AI office and Indonesia’s AI roadmap indicate intent to coordinate. Use these bodies to redirect funding toward fundamental research. Designate at least a quarter of public AI funding as contestable only by teams that include a university principal investigator. Require that every government-funded model provide a replicable evaluation card, complete with multilingual benchmarks that reflect Southeast Asia’s languages. This is how the region establishes credibility in venues where scientific reputations are built.
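To make the "replicable evaluation card" requirement concrete, the sketch below shows one possible machine-readable shape for such a card and a simple completeness check. All field names, the benchmark name, the model name, the URL, and the minimum language set are illustrative assumptions, not an existing standard; the point is that per-language results ship alongside reproducibility metadata.

```python
# Hypothetical minimal "evaluation card" for a publicly funded model.
# Field names, benchmark names, and the language set are assumptions
# for illustration; they are not drawn from any official ASEAN spec.

REQUIRED_FIELDS = {"model_id", "eval_date", "code_url", "results"}
# An assumed minimum set of Southeast Asian languages (ISO 639-1 codes):
# Indonesian, Malay, Thai, Vietnamese, Tagalog.
REQUIRED_LANGUAGES = {"id", "ms", "th", "vi", "tl"}

def validate_card(card: dict) -> list[str]:
    """Return a list of problems; an empty list means the card passes."""
    problems = []
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    covered = {r["lang"] for r in card.get("results", [])}
    uncovered = REQUIRED_LANGUAGES - covered
    if uncovered:
        problems.append(f"no results for languages: {sorted(uncovered)}")
    return problems

# Example card (all values are placeholders).
card = {
    "model_id": "sea-lm-demo-7b",            # hypothetical model name
    "eval_date": "2025-01-15",
    "code_url": "https://example.org/eval",  # placeholder URL
    "results": [
        {"lang": lang, "benchmark": "sea-qa", "accuracy": 0.70}
        for lang in ["id", "ms", "th", "vi", "tl"]
    ],
}

print(validate_card(card))  # → [] (card is complete)
```

A card like this could be attached to every grant deliverable, letting reviewers check coverage automatically before any benchmark number is taken at face value.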
Fourth, support the early-career ladder even as copilots become more prevalent. The HBR warning is valid: if companies eliminate junior roles, they may jeopardize their future teams. Governments can encourage better practices without micromanaging hiring by linking R&D tax credits to paid research residencies for new graduates in approved labs. Provide matching funds for companies sponsoring PhD industrial fellowships. Promote open-source contributions from publicly funded work and establish national code registries that enable students to create portable portfolios. These small design choices can significantly impact career development.
Finally, acknowledge where generalist upskilling still fits. Digital literacy and short courses will remain crucial for the broader workforce, providing a buffer against disruption. However, they are not the foundation of competitive advantage in AI. The latest World Bank analysis for East Asia and the Pacific states that new technologies have, so far, supported employment. Yet, it cautions that reforms are needed to sustain job creation and limit inequality. ASEAN should capture those gains while investing in the scarce asset: scientific talent capable of setting the frontier for regional firms. In a market with broad adoption, the advantage belongs to those who can invent.
We began with a stark statistic: 40% of jobs will be influenced by AI. The region can choose to react with more short courses or respond with a well-thought-out plan. The better option is a scientist-first ASEAN AI talent strategy that funds rigor, builds labs, secures computing power, and creates opportunities for researchers who can publish and innovate. Political landscapes will shift. Costs will drop. Tools will improve. What endures is capacity. If that capacity is rooted in ASEAN’s universities and companies, value will emerge. If it exists elsewhere, ASEAN will forever rely on renting it. The policy path is concrete. It requires leaders to choose a frontier and support it with funding, visas, computing power, and patience. Act now, and within five years, the region will produce its own research, not just consume press releases. Its firms will also hire the people behind the work. In a world filled with inexpensive agents, the only costly asset left is original thinking. Foster that.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Al Jazeera. (2024, December 14). South Korea National Assembly votes to impeach President Yoon Suk-yeol.
AP News. (2025, April 3–5). Yoon Suk Yeol removed as South Korea’s president over short-lived martial law.
British Standards Institution (BSI). (2024, September 18). Embrace AI tools even if some jobs change or are lost.
GitHub. (2024–2025). Plans and pricing for GitHub Copilot.
Georgieva, K. (IMF). (2024, January 14). AI will transform the global economy. Let’s make sure it benefits humanity.
ILO. (2024). Asia-Pacific Employment and Social Outlook 2024.
McKinsey & Company. (2024, May 30). The state of AI in early 2024: GenAI adoption, use cases, and value.
OECD. (2025, June 23). Emerging divides in the transition to artificial intelligence.
OECD. (2025, July 8). Unlocking productivity with generative AI: Evidence from experimental studies.
Our World in Data (UIS via World Bank). (2025). Number of R&D researchers per million people (1996–2023).
Peng, S., et al. (2023). The Impact of AI on Developer Productivity. arXiv:2302.06590.
Reuters. (2025, March 10). China’s top universities expand enrolment to beef up capabilities in AI.
Reuters. (2025, July 22). Indonesia targets foreign investment with new AI roadmap, official says.
Stanford HAI. (2024). The 2024 AI Index Report.
Tom’s Hardware. (2025, September). China subsidizes AI computing with “compute vouchers.”
UNESCO. (2024, February 19). ASEAN stepping up its green and digital transition.
World Bank. (2025, June 17/July 1). Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific; Press release on technology and jobs in EAP.
World Bank/ASEAN (SEAMEO-RIHED). (2022). The State of Higher Education in Southeast Asia.