
Stop Training Coders, Build Scientists: An ASEAN AI Talent Strategy


Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

Bootcamps produce tool users, not frontier researchers
ASEAN needs a scientist-first AI talent strategy
Fund PhDs, labs, and compute to invent, not import

The statistic that should jolt us awake is simple: almost 40% of jobs worldwide will be affected by AI, either displaced or reshaped, according to the IMF. This scale of change is historic. It is not merely a coding-bootcamp issue; it is a problem of research capacity. If ASEAN opts for quick courses intended to help generalist software engineers use AI tools, the region risks repeating a mistake others have learned the hard way: preparing people to operate kits while leaving the science to others. The ASEAN AI talent strategy must start with a straightforward premise. Low-cost agents and copilots already handle much of what entry-level coders do, and their subscriptions cost less than a team lunch. If Southeast Asia wants to compete, it needs a pipeline of AI scientists capable of building models, designing algorithms, and publishing groundbreaking work that attracts investment and fosters clusters. This is the only approach that will scale.

Why an ASEAN AI talent strategy must reject “teach-everyone-to-code”

There is a common policy instinct: subsidize short courses that enable existing developers to use AI libraries from the cloud. This instinct is understandable. It is quick, appealing, and easy to measure. However, the market has evolved rapidly. Enterprise surveys in 2024–2025 indicate that the majority of firms are now incorporating generative AI into their workflows. Several studies report double-digit productivity gains for tasks like coding, writing, and support. Meanwhile, mainstream tools have lowered the cost of entry-level coding. GitHub Copilot plans run about US$10–$19 per user per month. OpenAI’s small models price usage at a few cents per million tokens. When the marginal cost of junior tasks approaches zero, bootcamps that teach tool operation become a treadmill. They prepare people for jobs that machines can already perform. The ASEAN AI talent strategy must, therefore, focus on deep capability rather than superficial familiarity.
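To make the price gap concrete, here is a back-of-the-envelope sketch. It uses the Copilot subscription range quoted above; the bootcamp tuition figure is an illustrative assumption, not a quoted price.

```python
# Back-of-the-envelope comparison of six months of copilot access versus one
# bootcamp seat. The subscription range is the US$10-19/user/month cited above;
# the US$10,000 tuition is a hypothetical placeholder for illustration only.
COPILOT_LOW, COPILOT_HIGH = 10, 19      # USD per user per month
BOOTCAMP_TUITION = 10_000               # USD, assumed for illustration
MONTHS = 6

low, high = COPILOT_LOW * MONTHS, COPILOT_HIGH * MONTHS   # 60, 114

print(f"Six months of a coding copilot: US${low}-{high}")
print(f"One bootcamp seat (assumed):    US${BOOTCAMP_TUITION:,}")
print(f"Cost ratio: roughly {BOOTCAMP_TUITION // high}x to {BOOTCAMP_TUITION // low}x")
```

Even at the high end of the subscription range, six months of tool access costs barely one percent of a single assumed tuition bill, which is the point the figure below makes.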

ASEAN’s higher-education and research base highlights the risk of remaining shallow. Several member states still report very low researcher densities by global standards. A recent review notes 81 researchers per million in the Philippines, 205 in Indonesia, and 115 in Vietnam—figures far below those of advanced economies. Malaysia allocates about 0.95% of GDP to R&D, roughly five times Vietnam’s share. Yet even in Malaysia, the challenge is creating strong links between universities and industry. If Southeast Asia spends its limited resources on short training courses without building research capacity, it will produce more users of foreign tools rather than scientists who develop ASEAN’s own. That is a trap.

Figure 1: Six months of top AI coding assistants cost under US$250—orders of magnitude below a single bootcamp. Upskilling for tool operation can’t beat automation on price.

What South Korea’s bootcamps got wrong

South Korea serves as a warning. For years, policy emphasized rapid scaling of digital training through national platforms and intensive programs aimed at pushing more individuals into “AI-ready” roles. This approach improved literacy and awareness but did not create a surge of frontier researchers. Meanwhile, the political landscape changed. In December 2024, the National Assembly impeached the sitting president; in April 2025, the Constitutional Court removed him from office. The direction of tech policy shifted again. When strategy changes and budgets are adjusted, the only lasting asset is foundational capacity: graduate programs, labs, and a trained scientific base that can withstand disruptions. South Korea’s experience should caution ASEAN against equating bootcamp completion with scientific competitiveness.

There is also a harsh market reality. Employers are already replacing entry-level tasks with AI. A business leader survey conducted by the British Standards Institution in 2024–2025 reveals that firms are embracing automation, even as jobs change or vanish. An increasing number of studies show that copilots expedite routine coding, and some long-term data indicate a decline in junior placement rates at bootcamps compared to the pre-gen-AI era. None of this implies “don’t train.” It emphasizes the need to train for the right frontier. Teaching thousands to integrate existing APIs into apps will not provide a regional advantage when copilots perform the same tasks in seconds. Teaching a smaller number to design new learning architectures, optimize inference under tight energy budgets, or build multilingual evaluation suites is harder, but that is the work investors value.

China’s lesson: build scientists, not generalists

China’s current trajectory illustrates what sustained investment in talent and computing can achieve. In 2025, top universities increased enrollment in strategic fields such as AI, integrated circuits, and biomedicine to meet national priorities. Cities implemented compute-voucher programs that subsidized access to training clusters for smaller firms, transforming idle capacity into research output. China’s AI research workforce has grown significantly over the past decade, while its universities and labs continue to attract leading researchers. The United States still dominates the origin of “frontier” foundation models, according to the 2024 AI Index. Still, the global competition now revolves around who can build and retain scientific teams and who can obtain computing power at reasonable costs. The message for ASEAN is clear. To generate spillover benefits in your cities, invest in scientists and labs, not merely in users of tools. The research core is what anchors ecosystems and can lead to significant economic growth.

Figure 2: Frontier models are concentrated in a few scientific hubs. ASEAN won’t close this gap with bootcamps; it needs labs, PhDs, and compute.

The talent strategy behind that research core is crucial. Elite programs attract top undergraduates into demanding, math-heavy tracks; they offer PhDs with generous stipends; they establish joint industry chairs allowing principal investigators to move between labs and startups; and they form groups that publish in competitive venues. The region doesn’t need to replicate China’s political economy to emulate its pipeline design. ASEAN can achieve this within a democratic framework by pooling resources and setting standards. The goal should be clear: train a few thousand AI scientists—people who can publish, review, and lead—rather than hundreds of thousands of tool operators. This is not elitist; it is practical in an age where entry-level work is being automated, and where rewards go to original research.

A regional blueprint for an ASEAN AI talent strategy

First, fund depth, not breadth. Create an ASEAN Doctoral Network in AI that offers joint PhDs across leading universities in Singapore, Malaysia, Thailand, Vietnam, Indonesia, and the Philippines. Admit small cohorts based on merit, provide regional stipends tied to local living expenses, and guarantee computing time through a shared cluster. The backbone can be a federated compute alliance, located at research universities and connected through high-speed academic networks. Cities hosting nodes should ensure clean power; states should offer expedited visas for fellows and their families. Policymakers should measure success not by enrollment but by published papers, released code, and established labs.

Second, improve the research base. Most ASEAN members must increase their R&D spending and research intensity to move beyond applied integration. The gap is apparent. Malaysia allocates just under 1% of GDP for R&D, while some peers spend even less; several countries report researcher densities too low to support robust lab cultures. Establish national five-year minimums for public AI research funding. Tie grants to multi-institutional teams, ensuring at least one public university is involved outside the capital city. Encourage repatriation by offering packages for “returning scientists” that cover relocation, lab startup, and hiring of early-stage talent. A stronger research base will also lessen the need to import expertise at high costs.

Third, align policy with industry needs, but guard against dilution. Malaysia’s national AI office and Indonesia’s AI roadmap indicate intent to coordinate. Use these organizations to redirect funding toward fundamental research. Designate at least a quarter of public AI funding as contestable only with a university principal investigator on the grant. Require that every government-funded model provide a replicable evaluation card, complete with multilingual benchmarks that reflect Southeast Asia’s languages. This is how the region establishes credibility in venues where scientific reputations are built.

Fourth, support the early-career ladder even as copilots become more prevalent. The HBR warning is valid: if companies eliminate junior roles, they may jeopardize their future teams. Governments can encourage better practices without micromanaging hiring by linking R&D tax credits to paid research residencies for new graduates in approved labs. Provide matching funds for companies sponsoring PhD industrial fellowships. Promote open-source contributions from publicly funded work and establish national code registries that enable students to create portable portfolios. These small design choices can significantly impact career development.

Finally, acknowledge where generalist upskilling still fits. Digital literacy and short courses will remain crucial for the broader workforce, providing a buffer against disruption. However, they are not the foundation of competitive advantage in AI. The latest World Bank analysis for East Asia and the Pacific states that new technologies have, so far, supported employment. Yet, it cautions that reforms are needed to sustain job creation and limit inequality. ASEAN should benefit from gains while investing in the scarce asset: scientific talent capable of setting the frontier for regional firms. In a market with broad adoption, the advantage belongs to those who can invent.

We began with a stark statistic: 40% of jobs will be influenced by AI. The region can choose to react with more short courses or respond with a well-thought-out plan. The better option is a scientist-first ASEAN AI talent strategy that funds rigor, builds labs, secures computing power, and creates opportunities for researchers who can publish and innovate. Political landscapes will shift. Costs will drop. Tools will improve. What endures is capacity. If that capacity is rooted in ASEAN’s universities and companies, value will emerge. If it exists elsewhere, ASEAN will forever rely on renting it. The policy path is concrete. It requires leaders to choose a frontier and support it with funding, visas, computing power, and patience. Act now, and within five years, the region will produce its own research, not just consume press releases. Its firms will also hire the people behind the work. In a world filled with inexpensive agents, the only costly asset left is original thinking. Foster that.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Al Jazeera. (2024, December 14). South Korea National Assembly votes to impeach President Yoon Suk-yeol.
AP News. (2025, April 3–5). Yoon Suk Yeol removed as South Korea’s president over short-lived martial law.
British Standards Institution (BSI). (2024, September 18). Embrace AI tools even if some jobs change or are lost.
GitHub. (2024–2025). Plans and pricing for GitHub Copilot.
IMF—Georgieva, K. (2024, January 14). AI will transform the global economy. Let’s make sure it benefits humanity.
ILO. (2024). Asia-Pacific Employment and Social Outlook 2024.
McKinsey & Company. (2024, May 30). The state of AI in early 2024: GenAI adoption, use cases, and value.
OECD. (2025, June 23). Emerging divides in the transition to artificial intelligence.
OECD. (2025, July 8). Unlocking productivity with generative AI: Evidence from experimental studies.
Our World in Data (UIS via World Bank). (2025). Number of R&D researchers per million people (1996–2023).
Peng, S., et al. (2023). The Impact of AI on Developer Productivity. arXiv:2302.06590.
Reuters. (2025, March 10). China’s top universities expand enrolment to beef up capabilities in AI.
Reuters. (2025, July 22). Indonesia targets foreign investment with new AI roadmap, official says.
Stanford HAI. (2024). The 2024 AI Index Report.
Tom’s Hardware. (2025, September). China subsidizes AI computing with “compute vouchers.”
UNESCO. (2024, February 19). ASEAN stepping up its green and digital transition.
World Bank. (2025, June 17/July 1). Future Jobs: Robots, Artificial Intelligence, and Digital Platforms in East Asia and Pacific; Press release on technology and jobs in EAP.
World Bank/ASEAN (SEAMEO-RIHED). (2022). The State of Higher Education in Southeast Asia.


From Smartphone Bans to AI Policy in Schools: A Playbook for Safer, Smarter Classrooms


Catherine Maguire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.

Smartphone bans offer a blueprint for AI policy in schools
Use age-tiered access, strict privacy, and teacher oversight
Evaluate results publicly to protect attention, equity, and integrity

Let's begin with a significant figure: forty percent. By the close of 2024, 79 education systems, representing approximately 40% of the global education landscape, had enacted laws or policies curbing the use of smartphones in schools. This global trend shows how societies experiment with a technology, observe its impact, and then establish strict rules for children. The same pattern now needs to play out, faster, for large language models. The parallels are undeniable. Both technologies are ubiquitous, designed to captivate users, and able to divert students from learning at a pace that schools struggle to match. The question is not whether we need an AI policy in schools, but whether we can develop it with the speed and clarity that the smartphone experience has taught us to expect.

What smartphone bans teach us about AI policy in schools

The success of smartphone bans over the past two years serves as a reassuring case study for policy. UNESCO tracked a swift shift from scattered rules to national prohibitions or restrictions. By the end of 2024, seventy-nine systems had implemented these policies, up from sixty the previous year. In England, national guidance released in February 2024 supports headteachers who ban the use of phones during the school day. The Netherlands implemented a classroom ban with limited exceptions for learning and accessibility, achieving high compliance quickly. Finland took action in 2025 to limit device use during school hours and empowered teachers to confiscate devices that were disruptive to the learning environment. These actions were not just symbolic; they established a straightforward norm: unless a device clearly supports learning or health, it stays out of the school day.

Figure 1: Smartphone rules scaled fast across systems. AI policy in schools can move just as quickly when guidance is simple and enforceable.

The evidence on distraction backs this conclusion. In PISA 2022, about two-thirds of students across the OECD reported being distracted by their own devices or by others' devices during math lessons, and those who reported distractions scored lower. In U.S. classrooms, 72% of high school teachers view phone distraction as a significant issue, and 97% of 11- to 17-year-olds with smartphones admit to using them during school hours. One year after Australia's nationwide restrictions, a New South Wales survey of nearly 1,000 principals found that 87% noticed a decrease in distractions and 81% reported improved learning outcomes. None of these figures proves causation on its own. Still, combined, they indicate a clear signal: less phone access leads to calmer classrooms and more focused learning.

There is also a warning. Some systems chose a different approach. Estonia has invested in managed device use and AI literacy, rather than imposing bans, believing the goal is to improve teaching practices, not just to restrict tools. UNESCO warns that the evidence base still lags behind product cycles; solid causal studies are rare, and technology often evolves faster than evaluations can keep up. Good policy learns from this. It should set clear, simple boundaries while leaving space for controlled, educational use, and it should be transparent enough that staff, students, and parents know what the rules are. That balance is the model for AI policy in schools.

Designing AI policy in schools: age, access, and accountability

The second lesson is about design. The smartphone rules that worked were straightforward but not absolute. They allowed exceptions for teaching and for students with health or accessibility needs. AI policy in schools should follow this pattern, with a stronger focus on privacy. The risks associated with AI go beyond distraction. They also include data collection, artificial relationships, and mistakes that can impact grades and records. Adoption of AI is already widespread. In 2024, 70% of U.S. teens reported using generative AI, with approximately 40% using it for schoolwork. Teacher opinions are mixed: in a 2024 Pew survey, a quarter of U.S. K-12 teachers stated that AI tools do more harm than good, while about a third viewed the benefits and drawbacks as evenly balanced. These findings suggest the need for clear guidelines, not blanket bans.

A practical blueprint is straightforward. For students under 13, the use of school-managed tools that do not retain chat histories and block conversational 'companions' should be the norm. From ages 13 to 15, use should be permitted only through school accounts with audit logs, age verification, and content filters; open consumer bots should not be allowed on school networks. From the age of 16 onward, students can use approved tools for specific tasks, subject to teacher oversight and clear attribution rules. Assessment should align with this approach. In-class writing and essential tasks should be 'AI-off' by default unless the teacher specifies a limited use case and documents it. Homework can include 'AI-on' tasks, but these must be cited with prompts and outputs. The aim is not to trap students, but to maintain high integrity, steady attention, and visible learning.
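A minimal sketch of how a district might encode these tiers in a machine-checkable form follows; the tier boundaries mirror the blueprint above, while the structure, names, and helper function are hypothetical rather than an existing school-platform schema.

```python
# Minimal sketch of the age-tiered blueprint described above. Tier boundaries
# follow the text; the structure and names are hypothetical illustrations,
# not an existing district or vendor schema.
TIER_RULES = {
    "under-13": [
        "school-managed tools only",
        "no chat-history retention",
        "conversational 'companion' modes blocked",
    ],
    "13-15": [
        "school accounts only, with audit logs",
        "age verification and content filters required",
        "open consumer bots blocked on school networks",
    ],
    "16-plus": [
        "approved tools for specified tasks",
        "teacher oversight and clear attribution rules",
    ],
}

def tier_for_age(age: int) -> str:
    """Map a student's age to the access tier defined above."""
    if age < 13:
        return "under-13"
    return "13-15" if age < 16 else "16-plus"

# Example: the rules that would apply to a 14-year-old's school account.
print(tier_for_age(14), TIER_RULES[tier_for_age(14)])
```

Writing the tiers down this explicitly is less about software than about forcing a district to state its defaults before procurement conversations begin.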

Procurement makes the plan enforceable. Contracts should require vendors to turn off data retention by default for minors, conduct age checks that do not collect sensitive personal information, provide administrative controls that block "companion" chat modes and jailbreak plug-ins, and share basic model cards that explain data practices and safety testing procedures. Districts should prefer tools that generate teacher-readable logs across classes. These expectations align with UNESCO's global guidance for human-centered AI in education and with evolving national guidance that emphasizes the importance of teacher judgment over automation.

Evidence we have—and what we still need—on AI policy in schools

We should be honest about the evidence. For phones, the connection between distraction and lower performance is strong across systems; however, the link between bans and test scores remains under investigation. Some notable improvements, such as those in New South Wales, come from principal surveys rather than randomized trials. With AI, the knowledge gap is wider. Early data indicate increased usage by teachers and students, along with a rapid expansion of training. In fall 2024, the percentage of U.S. districts reporting teacher training on AI rose to 48%, up from 23% the previous year. At the same time, teen use is spreading beyond homework. In 2025 research, 72% of teens reported using AI "companions," and over half used them at least a few times a month. This trend introduces a new risk for schools: AI tools that mimic friendships or therapy. AI policy in schools should draw a clear line here, and ongoing evaluation is crucial for its success.

Figure 2: Capacity is catching up but not there yet—teacher AI training doubled in a year, still below half of districts.

Method notes are essential. PISA 2022 surveyed roughly 690,000 15-year-olds across 81 systems; the 59–65% distraction figures represent OECD averages, not universal classroom rates. The Common Sense figures are U.S. surveys with national samples collected in 2024, with follow-ups in 2025 on AI trust and companion use. RAND statistics come from weighted panels of U.S. districts. The U.K. AI documents provide policy guidance, not evaluations, and the Dutch and Finnish measures are national rules that are just a year or so into implementation. This evidence should be interpreted carefully; it is valuable and credible, but still in the process of development.

This leads to a practical rule: every district AI deployment should include an evaluation plan. Set clear outcomes in advance. Monitor workload relief for teachers, changes in plagiarism referrals, and progress for low-income students or those with disabilities. Share the results on a public schedule, using clear and concise language. Smartphones taught us that policies work best when the public sees consistent, local evidence of their benefits. AI policy in schools will gain trust in the same way. If a tool saves teachers an hour a week without increasing integrity incidents, keep it. If it distracts students or floods classrooms with off-task prompts, turn it off and explain why.

A principled path forward for AI policy in schools

What does a principle-based AI policy in schools look like in practice? Start with transparency. Every school should publish a brief AI use statement for staff, students, and parents. This statement should list approved tools, clarify what data they collect and keep, and identify who is responsible. Move to accountability. Schools should maintain audit logs for AI accounts and conduct regular spot checks to ensure the integrity of assessments. Include human oversight in the design. Teachers should decide when AI is used and remain accountable for feedback and grading. Promote equity by design. Provide alternatives for students with limited access at home. Ensure tools are compatible with assistive technology. Teach AI literacy as part of media literacy, so that students can critically evaluate AI-generated outputs, rather than simply consuming them.

The policy should extend beyond the classroom. Procurement should establish privacy and safety standards in line with child-rights laws. England's official guidance clarifies what teachers can do and where professional judgment is necessary. UNESCO promotes human-centered AI, emphasizing the importance of strong data protection, capacity building, and transparent governance. Schools should choose tools that turn off data retention for minors, offer meaningful age verification, and enable administrators to block romantic or therapeutic "companion" modes, which many teens now report trying. This last restriction should be non-negotiable for primary and lower-secondary settings. The lesson from phones is clear: if a feature is designed to capture attention or influence emotions, it should not be present during the school day unless a teacher explicitly approves it for a limited purpose.

The number that opened this piece—40% of systems restricting phones—reflects a social choice. We experimented with a tool in schools and found that without a clear set of rules, it disrupted the day. This lesson is relevant now. Generative AI is emerging faster than smartphones did and will infiltrate every subject and routine. A confusing message will result in inconsistent rules and new inequalities. A simple, firm AI policy in schools can counteract this. It can safeguard attention, minimize integrity risks, and still enable students to learn how to use AI effectively. The path is clear: establish strict rules for age and access, ensure strong logs, prioritize transparent procurement, and include built-in evaluations. If policymakers adopt this plan now, the next statistic we discuss won't be about bans. It will reflect how many more hours of genuine learning we can achieve—quietly, week after week, in classrooms that feel focused again.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Australian Government Department of Education – Ministers. (2025, February 15). School behaviour improving after mobile phone ban and vaping reforms.
Common Sense Media. (2024, September 17). The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School.
Common Sense Media. (2025, February 6). Common Sense makes the case for phone-free classrooms.
Department for Education (England). (2024, February 19). Mobile phones in schools: Guidance.
Department for Education (England). (2024, August 28). Generative AI in education: User research and technical report.
Department for Education (England). (2025, August 12). Generative artificial intelligence (AI) in education.
Eurydice. (2025, June 26). Netherlands: A ban on mobile phones in the classroom.
Guardian. (2025, April 30). Finland restricts use of mobile phones during school day.
Guardian. (2025, May 26). Estonia eschews phone bans in schools and takes leap into AI.
OECD. (2024). Students, digital devices and success.
OECD. (2025). Managing screen time: How to protect and equip students to navigate digital environments?
OECD. (2023). PISA 2022 Results (Volume I): The State of Learning and Equity in Education.
RAND Corporation. (2025, April 8). More Districts Are Training Teachers on Artificial Intelligence.
UNESCO. (2023/2025). Guidance for generative AI in education and research.
UNESCO Global Education Monitoring Report team. (2023; updated 2025, January 24). To ban or not to ban? Smartphones in school only when they clearly support learning.


Throughput Over Tenure: How LLMs Are Rewriting “Experience” in Education and Hiring


David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.

AI is recomposing jobs, not erasing them
Throughput with judgment beats years of experience
Schools and employers must teach, verify, and hire for AI-literate workflows

Between 60% and 70% of the tasks people perform at work today can be automated using current tools. This is not a prediction for 2040; it reflects what is already feasible, based on one of the most extensive reviews of work activities to date. Despite this, the job numbers have not collapsed. Recent macro studies show no significant change in unemployment or hiring patterns since late 2022, even in fields most impacted by AI. The tension is evident: tasks are evolving rapidly, while overall job counts are changing more slowly. This gap highlights what matters now. The market is seeking individuals who can deliver results efficiently and effectively. It's not just about having ten years of experience; it's about being able to deliver in ten hours what used to take ten days, and to do it consistently, with sound judgment, and with the right tools. This new measure of expertise will determine how schools and employers are evaluated.

From “replacement” to “recomposition”

The key question has shifted from whether AI will replace jobs to where demand is headed and how teams are being restructured. Recent data from the global online freelance market illustrates this trend. In areas most affected by generative AI, the number of contracts fell by approximately 2%, and earnings decreased by roughly 5% after the late-2022 wave of new models. Meanwhile, searches for AI-related skills and job postings surged, and new, more valuable contract types emerged outside core technical fields. In short, while some routine roles are shrinking, adjacent work requiring AI skills is growing. The total volume of work is not the whole story; the mix of jobs is what really counts.

A broader perspective shows similar pressures for change. The OECD’s 2023 report estimated that 27% of jobs in member countries are in fields at high risk from automation across various technologies, with AI as a significant factor. The World Economic Forum's 2023 survey expects that 42% of business tasks will be automated by 2027 and anticipates major disruptions in essential skills. Yet, an ILO global study finds that the primary short-term impact of generative AI is enhancement rather than full automation, with only a small portion of jobs falling into the highest exposure category. This resolves the apparent contradiction. Tasks change quickly, but occupations adapt more slowly. Teams are reformed around what the tools excel at. The net result is a shift in the skills required for specific roles, rather than a straightforward replacement of jobs.

The education sector is directly affected by this evolution: universities, colleges, and training providers all feel the shift. If educational programs continue to emphasize time spent in class and tenure alone, they will fail to meet learners' needs. Training should focus on throughput—producing repeatable, verifiable outputs with human insight and support from models. This shift requires clearer standards for tool use, demonstrations of applied judgment, and faster methods to verify skills. Educational policies must also acknowledge who may be left behind as task requirements evolve. Current data reveal disparities based on gender and prior skill levels, so policy must address these issues directly.

Figure 1: Most jobs face partial automation, not disappearance — education and scientific roles show the highest augmentation potential, underscoring why AI literacy matters more than job tenure.

When tools learn, “experience” changes

Research indicates that AI support improves output and narrows gaps, especially for less-experienced workers. In a large firm's customer support, using a generative AI assistant increased the number of issues resolved per hour by about 14%, with novice agents seeing the most significant benefits. Quality did not decline, and complaints decreased. This isn't laboratory data; it's based on over five thousand agents. The range narrowed because the assistant shared and expanded tacit knowledge. This is what “experience” looks like when tools are utilized effectively.

Figure 2: AI assistance raises productivity most for novices — shrinking the experience gap and redefining how skill growth is measured.

Software provides a similar picture with straightforward results. In a controlled task, developers using an AI coding assistant completed their work 55% faster compared to those without it. They also reported greater satisfaction and less mental strain. The critical takeaway is not just the increased speed, but what that speed allows: quicker feedback for beginners, more iterations before deadlines, and more time for review and testing. With focused practice and the right tools, six months of training can now generate the output that used to take multiple years of routine experience. While this doesn't eliminate the need for mastery, it shifts the focus of mastery to problem identification, data selection, security, integration, and dealing with challenging cases.

Looking more broadly, estimates suggest that current tools might automate 60% to 70% of the tasks occupying employees’ time. However, the real impact hinges on how processes are redesigned, not just on the performance of the tools. This explains why overall labor statistics change slowly, even as tasks become more automatable. Companies that restructure workflows around verification, human oversight, and data management will harness these benefits. In contrast, those merely adding tools to outdated processes will not see gains. Education programs that follow the first approach—training students to design and audit workflows using these models—will produce graduates who deliver value in weeks rather than years.

Degrees, portfolios, and proofs of judgment

If throughput is the new measure of expertise, we need clear evidence of judgment. Employers cannot gauge judgment based solely on tenure; they need to see it reflected in the work produced. This situation has three implications for education. First, assessments should change from “write it from scratch” to “produce, verify, and explain.” Students should be required to use approved tools to draft their work and then show their prompts, checks, and corrections—preferably against a brief aligned with the roles they want to pursue. This approach is not lenient; it reflects how work actually gets done and helps identify misuse more easily. UNESCO’s guidance for generative AI in education supports this direction: emphasizing human-centered design, ensuring transparency in tool use, and establishing explicit norms for attribution.

Second, credentials should verify proficiency with the actual tools being used. The World Economic Forum reports that 44% of workers' core skills will be disrupted by 2027, highlighting the increasing importance of data reasoning and AI literacy. However, OECD reviews reveal that most training programs still prioritize specialized tracks, neglecting broad AI literacy. Institutions can close this gap by offering micro-credentials within degree programs and developing short, stackable modules for working adults. These modules should be grounded in real evidence: one live brief, one reproducible workflow, and one reflective piece on model limitations and biases. The key message to employers is not merely that a student “used AI,” but that they can reason about it, evaluate it, and meet deadlines using it.

Third, portfolios should replace vague claims about experience. Online labor markets illustrate this need. After the 2022 wave of models, demand shifted toward freelancers who could specify how and where they used AI in their workflows and how they verified their results. In 2023, job postings seeking generative AI skills surged, even in non-technical fields such as design, marketing, and translation, with more of those contracts carrying higher value. Students can begin to understand this signaling early: each item in their portfolio should describe what the tool did, what the person did, how quality was ensured, and what measurable gains were achieved. This language speaks to modern teams.

Additionally, a brief method note should be included in the curriculum. When students report a gain (e.g., “time cut by 40%”), they should explain how they measured it, such as the number of drafts, tokens processed, issues resolved per hour, or review time saved. This clarity benefits hiring managers and makes it easier to replicate in internships. It also cultivates the crucial habit of treating model outputs as hypotheses to be verified, not as facts to be uncritically accepted. That represents the essence of applied judgment.
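As an illustration of what such a method note could standardize, the short sketch below computes the kind of before/after figures mentioned in the text; the function names and sample numbers are hypothetical, not a prescribed reporting standard.

```python
# Hypothetical sketch of the throughput measures a method note could report.
# Function names and sample figures are illustrative, not a prescribed standard.

def time_saved_pct(baseline_hours: float, assisted_hours: float) -> float:
    """Percentage reduction in time to complete the same brief."""
    return 100 * (baseline_hours - assisted_hours) / baseline_hours

def issues_per_hour(issues_resolved: int, hours_worked: float) -> float:
    """Simple throughput rate, e.g. tickets or review items cleared per hour."""
    return issues_resolved / hours_worked

# Example: a brief that took 10 hours unassisted and 6 hours with a copilot,
# and a reviewer clearing 42 items across a 7-hour shift.
print(f"time cut by {time_saved_pct(10, 6):.0f}%")               # time cut by 40%
print(f"{issues_per_hour(42, 7):.1f} issues resolved per hour")  # 6.0 issues resolved per hour
```

The point is not the arithmetic but the habit: a claimed gain always arrives with the baseline, the assisted figure, and the unit of throughput behind it.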

Guardrails for a lean, LLM-equipped labor market

More efficient teams with tool-empowered workers can boost productivity. However, this transition has certain risks that education and policy must address. Exposure to new technologies is not uniform. ILO estimates indicate that a small percentage of jobs are in the highest exposure tier; yet, women are more often found in clerical and administrative jobs, where the risk is more pronounced, particularly in high-income countries. This situation creates a dual responsibility: develop targeted reskilling pathways for those positions and reform hiring processes to value adjacent strengths. If an assistant can handle scheduling and drafting emails, the human role should focus on service recovery, exception handling, and team coordination. Programs should explicitly train and certify those skills.

The freelance market also serves as a cautionary tale. Research indicates varied impacts on different categories of work; some fields lose routine jobs, while others see an increase in higher-value opportunities linked to AI workflows. Additionally, the layers associated with data labeling and micro-tasks that support AI systems are known for low pay and scant protections. Education can play a role by teaching students how to price, scope, and contract for AI-related work, while also underscoring the ethical considerations of data work. Policy can assist by establishing minimum standards for transparency and pay on platforms that provide training data, and by tying public procurement to fair work evaluations for AI vendors. This approach prevents gains from accumulating in the model supply chain while shifting risks to unseen workers.

Hiring practices need to evolve in response to these changes. Job advertisements that use “x years” of experience as a blunt filter should instead ask candidates to demonstrate throughput and judgment: a timed work sample with a specific brief, allowed tools, a fixed dataset, and an error budget. This adjustment is not a gimmick; it provides a better indication of performance within AI-enhanced workflows. For fairness, the brief and data should be shared in advance, along with clear rules regarding allowed tools. Candidates should submit a log of their prompts and checks. Education providers can replicate this format in capstone projects, sharing the outcomes with employer partners. Over time, this consistency will ease hiring processes for all parties involved.

Finally, understanding processes is just as important as knowing how to use tools. Students and staff should learn to break tasks into stages that the model can assist with (such as drafting, summarizing, and searching for patterns), stages that must remain human (like framing, ethical reviews, and acceptance), and stages that are hybrid (like verification and monitoring). They should also acquire a fundamental toolset, including how to retrieve trusted sources, maintain a library of prompts with version control, and utilize evaluation tools for standard tasks. None of this requires cutting-edge research; it requires diligence, proper documentation, and the routine practice of measurement.
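A rough sketch of that decomposition follows, with the stage assignments taken from the examples in the paragraph above; the labels, structure, and lookup helper are hypothetical illustrations rather than a standard taxonomy.

```python
# Rough sketch of the task decomposition described above. Stage assignments
# come from the examples in the text; labels and structure are hypothetical.
WORKFLOW_STAGES = {
    "model-assisted": ["drafting", "summarizing", "pattern search"],
    "human-only": ["framing", "ethical review", "acceptance"],
    "hybrid": ["verification", "monitoring"],
}

def stage_for(task: str) -> str:
    """Return the stage category for a task, or flag it for an explicit decision."""
    for stage, tasks in WORKFLOW_STAGES.items():
        if task in tasks:
            return stage
    return "unassigned: decide explicitly before delegating to a model"

print(stage_for("verification"))    # hybrid
print(stage_for("ethical review"))  # human-only
```

Keeping the "unassigned" fallback explicit mirrors the discipline the paragraph calls for: no task is handed to a model by default.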

Measure what matters—and teach it

The striking statistic remains: up to 60%-70% of tasks are automatable with today’s tools. However, the real lesson lies in how work is evolving. Tasks are shifting faster than job titles. Teams are being restructured to emphasize verification and handling exceptions. Experience, as a measure of value, now relies less on years worked and more on the ability to produce quality results promptly. Education must consciously respond to this change. Programs should allow students to use the tools, require them to show their verification methods, and certify what they can achieve under time constraints. Employers should seek evidence of capability, rather than merely years of experience. Policymakers should support transitions, particularly in areas with high exposure, and raise standards in the data supply chain. By taking these steps, we can turn a noisy period of change into steady, cumulative benefits, developing graduates who can achieve in a day what previously took a week, and who can explain why their work is trustworthy. This alignment of human talents with the realities of model-aided production hinges on knowing what to measure and teaching accordingly.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brynjolfsson, E., Li, D., & Raymond, L. (2025). Generative AI at Work. Quarterly Journal of Economics, 140(2), 889–931.
Brynjolfsson, E., Li, D., & Raymond, L. (2023). Generative AI at Work (NBER Working Paper No. 31161). National Bureau of Economic Research.
GitHub. (2022, Sept. 7). Quantifying GitHub Copilot’s impact on developer productivity and happiness.
International Labour Organization. (2023). Generative AI and Jobs: A global analysis of potential effects on job quantity and quality (Working Paper 96).
International Labour Organization. (2025). Generative AI and Jobs: A Refined Global Index of Occupational Exposure (Working Paper 140).
McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier.
OECD. (2023). OECD Employment Outlook 2023. Paris: OECD Publishing.
OECD. (2024). Readying Adult Learners for Innovation: Reskilling and Upskilling in Higher Education. Paris: OECD Publishing.
OECD. (2025). Bridging the AI Skills Gap. Paris: OECD Publishing.
Oxford Internet Institute. (2025, Jan. 29). The Winners and Losers of Generative AI in the Freelance Job Market.
Reuters. (2025, May 20). AI poses a bigger threat to women’s work than men’s, says ILO report.
UNESCO. (2023). Guidance for Generative AI in Education and Research. Paris: UNESCO.
Upwork. (2023, Aug. 22). Top 10 Generative AI-related searches and hires on Upwork.
Upwork Research Institute. (2024, Dec. 11). Redesigning Work Through AI.
Upwork Research Institute. (2024, Feb. 13). How Generative AI Adds Value to the Future of Work.
World Economic Forum. (2023). The Future of Jobs Report 2023. Geneva: WEF.
Yale Budget Lab & Brookings Institution (coverage). (2025, Oct.). US jobs market yet to be seriously disrupted by AI. The Guardian.
