
The Global AI Divide and the Imperative for Education Policy Reform

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Advanced economies push AI policy because productivity gains are visible and immediate
Poorer countries lag as low returns and weak capacity dampen urgency
Education policy can still slow the widening AI divide

Since the emergence of accessible large language models, a clear pattern has developed in the economic dynamics of artificial intelligence. A small group of developed nations now controls the critical resources, expertise, and financial rewards associated with AI, while much of the world remains excluded. This disparity has significant consequences. Businesses in leading economies are integrating AI into their operations and products, fundamentally altering value creation. This trend is projected to accelerate productivity gains, increase wages for AI-proficient workers, and drive further market consolidation. Without targeted public policy and educational reform, these effects will likely intensify. The education sector faces a pivotal choice: whether to treat AI as a specialized technical field limited to research and development, or to redesign curricula, training, and institutional incentives so that broader regions can convert AI access into concrete economic benefits. The resolution of this issue will determine whether the global AI divide becomes a primary driver of inequality in the next decade.

Global AI Divide: The Concentration of Access, Talent, and Capital

A defining feature of AI is its uneven geographic distribution. The development of models, computing infrastructure, and investment capital is concentrated in a small number of countries and cities. This concentration is significant because AI is a general-purpose technology that transforms business structures, task execution, and the valuation of skills. Countries with established technology sectors benefit from efficient pathways that move innovations from research to market. Their universities produce skilled graduates, startups secure funding, and established firms implement AI systems to enhance productivity. In contrast, many lower-income countries encounter substantial barriers, including limited research facilities, insufficient cloud infrastructure, and minimal venture capital. In regions lacking AI capacity, the technology does not redistribute tasks but instead exacerbates the divide between firms that can adopt AI and those that cannot.

Statistical evidence demonstrates that institutions in a select group of countries have developed the leading AI models and attracted the majority of private investment in recent years. This disparity perpetuates a self-reinforcing cycle: success draws additional capital, talent gravitates toward these hubs, and local firms benefit from early AI adoption. Conversely, economies with limited capital and restricted AI research and development experience a negative feedback loop. Lacking local models, computing resources, or skilled labor, they depend on external tools that are often unsuited to their languages, regulatory environments, or economic contexts. This reliance leads to lower immediate returns and persistent challenges in tailoring AI solutions to local development goals.

Figure 1: AI policy leadership clusters where younger populations and higher incomes make productivity gains more visible and politically valuable.

Education Systems: The Decisive Factor in Narrowing or Widening the Gap

Education is essential to any viable strategy for bridging the global AI divide. It serves as the primary means by which economies transform technological capabilities into increased productivity. However, different approaches to education produce different results. Traditional methods treat AI as a specialized field best left to computer science departments. This perspective overlooks a crucial point. AI is transforming routine tasks in healthcare, agriculture, logistics, and public administration. Its impact is greatest when it is combined with knowledge of these specific areas, rather than when it exists in isolation. To produce economic benefits, education and training initiatives must simultaneously broaden basic digital skills across the workforce, cultivate practical AI knowledge within key industries, and establish pathways for technical experts to convert prototypes into operational systems. Without this comprehensive approach, investments in equipment or short courses will have minimal impact.

There are practical limitations to consider. Many lower-income regions still lack consistent, affordable internet connectivity. While about 90% of people in high-income countries have internet access, this figure is less than a third in the poorest countries. This disparity is a basic reality that education policy must address. Online laboratories or cloud-based curricula cannot be scaled effectively if a large portion of the population lacks reliable access. Furthermore, even where internet access is available, there is often a lack of local support systems, such as instructors with current practical experience, industry partnerships, and accessible datasets. This deficiency makes it more challenging for educational institutions and training centers to offer practical modules that lead to measurable improvements in productivity. Therefore, policymakers should avoid treating digital skills as a universal solution. Instead, they should create detailed, sector-specific programs linked to measurable outcomes.

Practical Policy: Aligning Curriculum with Economic Returns, Not Just Popular Trends

The key policy consideration is whether an intervention will increase the anticipated return on investment for implementing AI in local businesses and public services. If the answer is affirmative, the investment is worthwhile. If not, it will likely waste valuable resources. This principle directs us toward targeted, results-oriented education policies. First, identify industries where AI can produce immediate gains in productivity. For many developing economies, this includes agriculture, logistics, health diagnostics, and the administrative tasks of small and medium-sized enterprises. Second, develop short, modular credentials in collaboration with employers and industry experts, focusing on changes to business processes, data-collection standards, and the implementation of small-scale projects. Third, invest in individuals who can serve as translators. These hybrid practitioners possess knowledge of both the specific industry, such as crop management, and the practical aspects of deploying AI models. These investments offer greater returns than general computer science degrees when resources are limited.

Figure 2: Even with similar exposure to AI, differences in governance and political alignment shape whether policy ambition translates into durable action.

Procurement is another critical yet frequently overlooked factor. Governments, as major purchasers of goods and services, can drive demand for systems that are interoperable, locally adaptable, and auditable. Such demand supports the growth of domestic capabilities. Educational institutions can align curricula with these procurement requirements, ensuring graduates are equipped to meet government standards. This alignment creates direct pathways from training to implementation, reducing the likelihood that skilled individuals migrate to foreign firms or utilize tools unsuited to local needs. This strategy mitigates risk for students and harmonizes the objectives of educational institutions, businesses, and government agencies.

Addressing Common Concerns

Concern 1: Investment is excessively concentrated, rendering it impossible for lower-income countries to catch up. In reality, although capital and computing resources have become highly concentrated, several smaller economies have demonstrated significant progress in the past five years through coordinated public-private initiatives. Targeted investments in data infrastructure, cloud computing, and sector-specific training have enabled these countries to support local firms in developing customized solutions, rather than relying solely on foreign technologies. The objective is not immediate parity, but the cultivation of essential capabilities that yield long-term benefits.

Concern 2: AI will only benefit those with advanced education, increasing inequality. This is a valid concern, but it is not inevitable. Training that focuses on practical tasks, such as using AI-powered decision support for nurses or agricultural specialists, can increase the productivity and wages of mid-skilled workers. The best approach combines brief vocational modules with employer commitments, rather than assuming that only university-level programs are beneficial. The focus should be on those with moderate skills, not just the top performers. When training is linked to measurable productivity improvements in companies or public services, the benefits are more widely distributed.

Concern 3: Local curricula will be ineffective if global AI models are proprietary and controlled by multinational corporations. This is true, but local capability does not require complete ownership of AI models. It requires the ability to adapt and integrate tools, assess their outputs, and govern their responsible use. Public-private partnerships, cloud computing credits, and licensing arrangements can provide access while local institutions develop the skills and governance structures needed to manage and customize systems. The immediate policy priority is to transform access into local value creation. Procurement practices, standards, and training programs are the tools to achieve this.

A Targeted Call to Action

The global AI divide is not inevitable; it is shaped by choices regarding investment, education, and technology acquisition. Education policy can serve as a catalyst, transforming AI from a force for concentration into an instrument for inclusive development. Achieving this requires targeted reforms: prioritizing sectors with demonstrable gains, developing stackable credentials aligned with employer needs, integrating training with infrastructure support, and funding roles that convert prototypes into practical solutions. These measures can mitigate the risk of the AI divide becoming a permanent source of inequality. Inaction will only allow the gap to widen. Immediate, decisive action is required.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

International Telecommunication Union. (2024). Facts and figures 2024: Internet use. ITU.
Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). The 2025 AI Index Report. Stanford HAI.
Visual Capitalist. (2025). Visualizing global AI investment by country. Visual Capitalist.
World Economic Forum. (2023). The ‘AI divide’ between the Global North and the Global South. WEF.
Zürich Innovation / Dealroom. (2024). AI Europe report 2024.


Teach the Tool: Why AI literacy in schools must replace bans

By David O'Neill

David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.
Schools are banning AI while workplaces are adopting it, creating a growing skills gap
AI literacy must be taught through teachers and curriculum, not enforced through restrictions on students
The real policy failure is institutional resistance to change, not student misuse of technology

By the close of 2024, a clear trend had emerged: roughly 40% of education systems worldwide had enacted rules that limit or completely ban the use of phones in schools. Meanwhile, about a quarter of U.S. teens reported using an AI chatbot to help with school assignments, and more than half said they used AI-based search engines or chat apps. The disconnect is hard to deny: young people are using AI outside of school while schools ban the devices that carry it. In the workplace, AI is becoming increasingly widespread. If school systems treat AI and phones the way they once treated personal computers and video games, they risk graduating students who cannot make effective use of these technologies in most workplaces. Instead of bans alone, there should be investment in teaching about AI in schools, so that students and teachers alike can tell when AI is helpful and when it is producing wrong information, and employers gain workers who can check, verify, and improve what AI produces rather than be misled by it.

Why teaching AI skills in school is a must

The schools' actions have a rationale. Bans respond to real concerns about distraction, unfairness, and privacy; UNESCO has found that 79 education systems across the globe have adopted rules on phone use, reflecting widespread worry about safety. At the same time, young people's use of AI for schoolwork grew steadily between 2023 and 2025: in the United States, about 26% of teens reported using an AI chatbot for school by early 2025. Outside school, young people are experimenting with AI-powered search, image and video tools, and chat assistants, and that out-of-school exposure shapes what they expect and what they are good at. Meanwhile, surveys show that companies plan to deploy AI models at scale and use them for daily tasks. Students who have never been taught to double-check what AI gives them will be less able to detect its mistakes or get productive work done.

Figure 1: Perceived AI risks differ sharply across stakeholders, helping explain why restrictive policies persist despite growing workplace demand for AI literacy.

Timing is an important consideration. The biggest danger from AI in education is not students finding new ways to cheat; it is students absorbing wrong information. When students copy a machine-generated answer that is wrong, they internalize false facts and develop the habit of trusting unverified claims. Studies have shown that AI language models commonly fabricate references and provide incorrect information. The encouraging news is that this can be addressed: classes can teach verification, training students to consult several sources, interrogate claims, and cross-check outputs against other resources. These skills can all be taught, and companies need them too. Treating phones and chatbots as simply bad is not a sufficient plan. Avoiding the difficult work of resourcing teachers and designing courses means students will never learn to get the most out of AI or to spot its problems.

To teach AI skills in school, teachers must take charge, not get left behind

The real constraint is teacher readiness, not student interest. Research shows that teacher use of AI rose between 2023 and 2025: many teachers now use AI to help draft lesson plans, adapt classwork, and generate class material. Training for teachers, however, has been uneven. Where teachers use safe AI systems managed by the school, they report saving time and being better prepared. Where they do not, fear takes over and bans follow, and the work of navigating AI is pushed away from school leaders and onto students.

If leaders want to prepare students for their careers, they must support the adults as well: invest in in-service time, provide vetted training that builds verification habits, and deploy secure tools that keep student data safe while teachers practice. Teachers also need to learn how to design prompts that require students to think deeply rather than letting a model write the report, to structure assignments so that process matters as much as product, and to interpret AI-detection results with caution. Studies have indicated that AI-detection tools are not reliably accurate. Punishing students without teacher judgment will produce wrongful accusations, broken trust, and evasive behavior that makes policing harder without making learning better.

Teachers who understand what AI cannot do have the upper hand. They can show students how to pose the right questions, check a machine's claims against valid sources, test different approaches, and document their methods in a log. Businesses value exactly these thinking skills, so schedules should build in time to practice, reflect on, and revise assignments. Leaders must do their part as well. Principals and supervisors should develop practical policies, with clearly stated norms, that make AI useful in the classroom, rather than imposing bans that push the issue outside the school. This is not permissiveness; it is teaching AI skills in a practical manner.

Teaching AI skills in schools means improving safety, course design, and how students are evaluated

If schools decide that AI skills matter, how they evaluate students needs to change. Classic essays invite AI misuse. Instead, tasks should emphasize process and proof: annotated drafts, oral defenses, lab records, code walkthroughs, and research papers with space to document sources and the steps taken to reach a conclusion. Rubrics should credit source review, verification steps, and explanations of why an AI recommendation was accepted or rejected. Where exams must be held under controlled conditions, classwork should include AI-assisted exercises so students learn to distinguish fluent content from solid information.

Any shift of this kind must confront fairness directly. Not every student has the same access to devices or AI at home. Privacy-respecting platforms provided by schools can give every student access, but districts have to fund them. Teacher training should include inclusive practices that ensure AI supports, rather than replaces, educator judgment for students who need extra assistance. Policy must also anticipate and address misuse. AI-detection tools can flag material that may have been used improperly, but they are not always correct. A better mix is to have students disclose AI use on assignments, to design work so students must explain their reasoning, and to hold brief check-ins and informal follow-ups. That combination deters shortcuts and builds learning environments that teach verification and judgment.

Figure 2: Educational design determines whether students resist, passively rely on AI, or develop the agency needed to use it productively.

Finally, governance matters. Districts should adopt short-term rules that allow controlled classroom use, require teacher training up front, mandate school-grade privacy guarantees from vendors, and prefer tools with administrative controls. If bans seem right in the current political climate, leaders can pair them with timelines and pilot programs: pause briefly to collect data, then fund teacher development and larger classroom pilots. That is a plan that protects students while preparing them for the future.

The point is that while restrictions lower risk immediately, they do not build the skills workplaces need. Instead of banning alone, schools must teach and verify; they need to shift from policing devices to cultivating judgment. That requires investing in teachers' skill development, designing assessments that weigh process rather than only end results, and securing district-level agreements that guarantee fair access to vetted tools. If students are taught to test statements, check references, and repair AI's mistakes, we accomplish more than ending cheating: we produce graduates who can contribute to society and workers who improve the technology they use. Without change, the workforce risks misusing AI or failing to work effectively with it. Schools must act now: treat AI as a core subject to teach, not merely something to keep out, and ensure graduates can confidently verify and use the tools common in today's workplaces.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Chelli, M., et al. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews. Journal of Medical Internet Research.
Elkhatat, A. M., et al. (2023). Peer-reviewed evaluation of AI content detection tools (representative review consulted, with related literature, for detection reliability and accuracy).
Gallup. (2025). Three in 10 teachers use AI weekly, saving weeks per year. Gallup Education Research Brief.
McKinsey & Company. (2024). The state of AI in early 2024: Generative AI adoption and value. McKinsey Global Survey Report.
Pew Research Center. (2025). About a quarter of U.S. teens have used ChatGPT for schoolwork; usage doubled since 2023. Pew Research Center Short Read.
UNESCO Global Education Monitoring (GEM) team. (2025). To ban or not to ban? Monitoring countries’ regulations on smartphone use in school. UNESCO GEM Reports.



Survival of the Fluent: Why the AI Fluency Gap Will Reorder Work and Schools

By Keith Lee

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

The AI fluency gap is becoming the new digital divide, reshaping who advances and who falls behind at work
Only a small group of fluent users capture most of AI’s productivity gains, concentrating power and opportunity
Education systems and policy must act now to make AI fluency a shared public skill, not a private advantage

The most vital skill in the coming years won't be coding or a specific degree, but the almost second-nature ease with which one can apply artificial intelligence. According to Sci-Tech Today, in 2025, 78 percent of global companies were using AI in their daily work, and over 70 percent had incorporated generative AI into at least one business function, indicating that the technology is already transforming teams and industries even though universal adoption is not yet the norm. This gap matters because AI skills compound quickly. Those who use AI regularly improve faster, tackle bigger problems, and are offered better jobs. Those who don't will fall behind, and not just a little. This creates a divide, not between those who have computers and those who don't, but between those who are skilled with AI and those who aren't. As history suggests, the most successful people won't be those who start using AI first, but those who learn to think through its use. Schools, employers, and leaders must plan to help those who are struggling rather than expecting everyone to catch up on their own. The question isn't whether AI will change work, but whether we can guide that change to benefit many, not just a few.

The AI skill gap repeats the problems of the PC era

The arrival of personal computers in the late 20th century changed job opportunities. People who quickly adopted the new technology gained a significant advantage; those who resisted or learned too late found themselves in less important roles or out of work. This wasn't unavoidable; it was the result of how quickly computer skills became necessary for everyday work. We're seeing the same thing now with AI, but much faster. While using a PC required knowing a keyboard and learning new habits, using generative AI requires framing problems in ways that let the model return useful, novel answers, then editing, reviewing, and integrating those answers into the organization's work. Skill isn't one ability but a collection: writing good prompts, judging the results, designing systems, and knowing when AI helps and when it misleads.

AI is spreading unevenly, benefiting a small group. Studies from 2024 and 2025 show interest in AI at many companies, but few workers use it regularly; in some regions, only about 16% of workers used generative AI regularly by late 2025. Many workplaces have a few highly skilled people, while most use AI sparingly or are unsure how to use it. When skill is concentrated, it affects who leads projects, who writes code or reports, and who gets promoted, shaping both influence and income.

Figure 1: The share of regular PC users rose steadily over a decade, while regular generative AI use remains concentrated and grows unevenly, signaling a faster and more selective diffusion curve.

AI skill is not a simple yes-or-no. Some workers use AI for simple tasks like summarizing or scheduling. Others have integrated AI into entire workflows, such as writing in-depth analyses, generating product ideas, or automating reports. The main difference is the extent and automaticity of use. Basic use makes things easier; deep use changes how work is done. A skilled analyst can oversee AI models that do the work of three or four less-skilled colleagues while improving creative output. This multiplying effect is why skill translates into a major advantage.

How unequal adoption creates winners and losers in the job market

This happens through two related channels. First, AI increases a skilled worker's daily output. Second, once AI handles routine tasks, the jobs left for people require more judgment, design, and the ability to combine information from different areas, abilities that are easier to practice when one regularly uses AI as a partner. Surveys from large consulting firms and research groups in 2024 and 2025 show a clear gap between how widely organizations have adopted AI and how regularly employees use it. Firms report that AI is widely deployed, but only a few employees say it accounts for a large part of their daily work. In other words, AI is present, but its power lies in the hands of those skilled in using it.

A 2025 study estimated that about 1 in 6 people worldwide used generative AI tools regularly. In the European Union, around 30% of workers were using AI at work by late 2025, with variation across industries and job roles. Surveys show many use AI for basic tasks like summaries or simple writing, while a few use it for more than a quarter of their daily work. Thus, the truly skilled group is much smaller than overall adoption numbers suggest.

This concentration causes problems in management. Early in the PC era, many mid-career managers were simply less able than their younger staff to coordinate work using software, which caused workplace tension. The same thing appears to be happening again. Managers who aren't skilled with AI must either give way to younger, AI-expert employees or add inefficient layers of checking, and some firms already report friction between tech-savvy teams and slower approvers that is delaying the gains AI promises. Crucially, this isn't just a skills or training issue but also a matter of bargaining power. Skill confers negotiating leverage: who sets agendas, who is given important tasks, and who gets promoted. Without action, the skill gap will harden into an inequality that reflects and widens existing social divides.

Figure 2: Although most workers report some exposure to AI tools, only a small minority integrate AI into a significant share of daily tasks, revealing where productivity and influence concentrate.

What must educators, employers, and leaders do to close the AI skill gap?

Leaders and educators should treat AI skills like basic reading and writing skills. It's not just something nice to have; it's a core skill for everyone. This requires three related steps. First, schools must shift their programs from occasional coding courses to teaching how to frame problems, evaluate models, and make ethical judgments across every subject. Second, employers must create ways to reward skills with AI, such as short certifications, task switching, and group projects that connect beginners with skilled mentors. Third, government policy must help people in poorer communities access low-cost tools and mentorship, so that adoption doesn't depend solely on where one works or lives.

Each of these steps will face resistance. Some will say that changing the curriculum too quickly will weaken basic knowledge, or that schools can't keep up with changing tools. Others will worry about increased surveillance if firms tie pay raises to narrow AI-use metrics. These are valid concerns. The correct response is sensible design: teach lasting thinking skills (problem-solving, source-checking, and causal reasoning) while providing brief training on current tools. Employers should avoid strict usage quotas and instead measure results, such as accuracy, speed, and quality improvements, which reflect serious tool use without encouraging mindless reliance. Public help should focus on mentorship and learning-by-doing, not just on handing out hardware. Evidence from workplace projects suggests that guided, project-based learning yields larger, longer-lasting gains than simple online courses.

The steps for schools are clear and testable. Start with teacher capability: fund regional hub-teacher programs that give teachers ongoing practice time with AI models and support for designing curricula together. Then add short AI units to existing subjects. For example, history students can learn to check AI claims, biology students can test model-generated experimental designs, and literature students can edit model drafts. For employers, create internal programs that rotate high-potential staff through AI-skill roles, connecting them with product and process teams. For leaders, fund regional simulation labs where small firms and adult learners can access toolkits and mentorship. These actions aren't costly compared to the economic risk of a divided job market; they are investments in resilience.

Budget and political questions about spending public money on what looks like company skill-building deserve a direct answer. When a new general-purpose technology rewrites how work is done, the public systems that support mobility must adapt. There is historical evidence for this: public investment in job training during earlier industrial changes reduced disruption and kept new industries open to everyone. The alternative is skill monopolies controlled by private companies, in which a few employers hold the keys to skill, and with them, outsized hiring, pay, and leadership power. Public action protects choice and competition.

Changing the focus: policy for compounding capability, not one-off training

The policy conversation must shift from one-time mass training to building systems that prevent compounding inequality. Short training courses help, but they are not enough if skill accrues through networks, project selection, and promotion. We need policies that intentionally level the playing field, because current arrangements compound advantage. This means changing hiring practices, promoting on proven project results rather than background, funding mid-career moves to avoid skill lock-in, and encouraging open collections of prompts, rating rubrics, and curriculum modules so that good practices spread rather than staying within institutions.

Workplace rules matter here as much as training. Collective bargaining and professional certifications can set minimum expectations for access and for fair assessment of AI-assisted work. Firms that tie compensation and advancement to AI-enabled metrics should publish clear rubrics and provide paths to improve. This is a governance challenge where rules can help: transparency requirements for buying AI tools and equal funding for public-sector training create a floor below which skills can't be monopolized by those already privileged.

We must also be realistic about speed. Some market watchers believe that the AI-PC transition will slow after an initial business rush tied to hardware upgrades. That slowdown is real, and hardware cycles matter, but it doesn't diminish the need to build AI skills. Even if device adoption slows, cloud-based AI and hybrid workflows will keep these skills in demand, helping those who hold them gain influence and higher pay. The window to act is short, because once career paths and promotion practices shift toward AI skills, reversing course will be difficult and controversial.

In short, skill is not just about access to tools. It's about where the work goes, who puts it together, and who gets noticed in organizations. Policies designed only to buy devices or pay for general courses will not be enough. The right approach connects accessible tools with mentorship, project-based learning, and rules that prevent a small group from locking in too much advantage.

We end as we began. The new digital divide is the AI skill gap. One in six users today doesn't mean fairness later. It likely means small groups of highly skilled people who shape promotions, projects, and economic opportunities. The lesson from the PC era was not that technology replaces everyone uniformly, but that advantage accrues when early users turn innovation into an everyday benefit. We can let that turn into deep inequality, or we can create a different path. A path with programs that teach lasting judgment alongside prompts, workplaces that reward results instead of clicks, and public funding that buys mentorship and access, not just hardware. The risks are distributional and democratic. Skill must be treated as public infrastructure.

Act now. Fund teacher-hub projects, require workplace transparency for AI metrics, and create regional simulation labs for adult learning. If we treat skill as public infrastructure rather than a private benefit, we can guide the coming changes toward shared growth rather than narrow capture. The alternative is a job market where the skilled live in a different economy, one that is faster, richer, and less accountable, while everyone else struggles.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

EY. (2025). Work Reimagined Survey 2025: Companies missing up to 40% of AI productivity gains due to talent strategy gaps. EY Global.
Futurum Group. (2025). Don’t expect an acceleration in the rate of AI PC adoption in 2026. Futurum Group press release, Dec 10, 2025.
McKinsey & Company. (2024). The state of AI in early 2024: Gen AI adoption and business impact. McKinsey Global Survey.
Microsoft AI Economy Institute. (2026). Global AI Adoption in 2025 — A Widening Digital Divide (AI Diffusion Report 2025). Microsoft Research / AI Economy Institute.
European Commission, Joint Research Centre. (2025). Impact of digitalisation: 30% of EU workers use AI. JRC news release, Oct 21, 2025.

Keith Lee
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

Industrial Echoes: Why an AI-driven divergence will reshape who prospers — and what educators must do

Ethan McGowan

AI is triggering a new global divergence, much like the industrial revolutions before it
Countries that control AI systems and skills will gain lasting economic and institutional power
Education and policy now decide who leads and who is left behind

A new divergence, spurred by advances in artificial intelligence, is emerging, and its gains may not be shared equally across the globe. History offers parallels. In the 18th century, a select few countries transformed their economies from agriculture to industry, gaining a head start that lasted for decades. Later, in the late 20th century, a smaller group of countries mastered computer and mobile technologies, again pulling ahead of the rest. A similar pattern now seems to be forming around AI. The concentration of advanced AI models, computing power, investment, and skilled professionals in specific systems and regions is creating a new form of advantage. Early adopters do not just get ahead faster; they also reduce the cost of creating value with each subsequent improvement. Lowering the cost of tasks once performed by numerous workers enables a limited number of countries and companies to increase output with minimal labor costs. This has direct implications for educators and government officials: they must adjust educational programs, institutional priorities, and national plans now, before this unequal advantage becomes irreversible, leading to long-term stagnation for many.

Why AI concentrates advantage faster than previous revolutions

This AI-driven divergence echoes historical trends. A few entities adopt ground-breaking technologies, build systems around them, and reap significant rewards, while others fall behind. We saw this with mechanized textile production, coal and iron industries, and, more recently, with semiconductors and the internet. Compute capabilities, data resources, and engineering expertise are at the center of the current advantage. Private investment, model growth, and cloud computing resources are primarily concentrated in a handful of countries and firms. For instance, in 2024–2025, private AI investment in the U.S. exceeded that of other countries. Model releases and advanced computing power were also concentrated among a small number of companies and platforms. These concentrations create feedback loops: greater investment leads to better models, which attract more users and generate more data and expertise, which, in turn, widens the gap. While not inevitable, this trend is self-reinforcing without appropriate policy interventions.

Looking at real-world usage, adoption of generative AI grew substantially through 2024–2025, but it is far from universal. According to a study, only a minority of people use generative AI tools regularly, with higher adoption rates in more affluent countries. This matters because active adoption builds valuable skills. Regular and skilled users develop strategies, learn suitable tooling, and create workflows that increase productivity. Where adoption is limited, tools may sit unused or be used poorly, producing low-value outcomes and failing to deliver lasting productivity gains. In essence, access combined with knowledge leads to advantage, whereas access without knowledge does not. Note that utilization figures come from company, institutional, and national surveys; discrepancies have been addressed by using conservative averages.

Figure 1: The world is unequally prepared for AI, leading to uneven gains and disruptions across income groups.

The collapse of marginal labor cost and global market power

A key trait of advanced AI systems is their ability to perform many knowledge-intensive tasks at a lower cost than human labor. This is not hyperbole. AI models can reproduce text, translate languages, generate code drafts, and handle complex queries repeatedly without requiring much human input. When companies in leading countries deploy these systems at scale, the cost of services can fall sharply. The economic result parallels the rise of factories: producers using machine-driven processes could undercut their competitors and gain control of the global market. The difference today is the speed and reach of these technologies. Entire white-collar tasks, such as research, initial drafts of legal documents, and standardized medical recommendations, can be automated. According to Axios, most lower-wage workers are concerned that artificial intelligence could threaten their job security and limit economic mobility, suggesting that advances in AI may reduce demand for certain middle-skill jobs and influence wages worldwide. The businesses that scale early gain first-mover advantages.

Figure 2: As AI scales, labor costs collapse, locking in advantage for early adopters.

When the cost of replicating services approaches zero, leading companies do not simply grow; they secure their market control. Historical parallels abound: 19th-century textile production was dominated by producers who automated their operations, and late 20th-century chip-making centers set industry standards. The modern-day parallel is the AI stack, the combination of models, computing power, data management practices, and user interface design, which very few countries can fully control. This has geopolitical implications: countries that ground their industrial activity and public services on independent infrastructure preserve their policy-making freedom and capture the financial benefits. Countries that lack such infrastructure may become dependent on foreign platforms for basic services, exporting raw data or simple services instead of developing high-margin products. This centralization is visible in the concentration of computing resources and model releases, and in warnings from multilateral organizations about rising risk if access gaps persist.

Education and policy as the last line against AI-driven inequality

With the advantage going to those who combine infrastructure with skilled usage, education becomes critical ground. The goal is not simply to teach students to use specific AI tools. Instead, learning should be redesigned to blend tool knowledge with critical judgment, data-handling skills, ethical reasoning, and system design. Educational institutions should produce graduates who can identify where AI adds value, verify the output of AI models, and incorporate AI into collaborative workflows with people. This requires curricular change. First, applied prompt engineering, model review, and essential data statistics must be added to standard courses so that students can test and validate AI outputs. Second, data management and data privacy principles should be taught across all subjects, so that institutions can form partnerships that protect local value when foreign models are used. Third, vocational and mid-career training programs should be expanded with modular certifications tied to local industry needs. These adjustments are crucial for national stability in an economy shaped by AI.

Teaching should incorporate collaborative projects involving both humans and AI. For example, students can work together to design and evaluate AI systems for practical tasks, then reflect on the systems' defects and biases. Schools should fund labs that provide access to AI models, safe computing environments, and ethical oversight. Government officials can support this by funding regional computing clusters and public AI models, lowering entry costs for smaller institutions. Without these actions, classroom training will not be enough, and graduates will lack the practical skills to be capable users rather than producers of low-value outputs. The biggest risk is not unemployment; it is a future of underemployment, in which people are stuck in low-value roles because they never mastered the tools.

Some may say that technology alone doesn't determine a country's long-term financial prosperity; institutions, geography, and political will also matter. This is valid: technology amplifies both strengths and weaknesses, but it cannot create them. Others may counter that model decentralization and edge computing will quickly democratize power, softening any significant split. This is possible, and some research already shows how it could work. Yet even decentralized deployment requires investment in chips, engineering, and maintenance resources that are scarce outside leading economies. Another possible argument is that late adopters can catch up by skipping certain steps. This is also possible, but it requires careful government funding and diplomatic action to ensure fair access to computing power, talent, and public data infrastructure. Otherwise, late adoption could look like dependency rather than catching up.

Effective policy actions include public spending on computing power and connectivity infrastructure, technology transfers that build local capability, education reforms tied to internships, and protections that prevent value capture by foreign platforms without local profit-sharing. Organizations are already sounding the alarm: multilateral reports urge investment in people and digital independence to prevent inequality between states. The technical and political approaches will vary by country, but the goal is the same: to turn AI from an extractive force into a capability owned at home. Otherwise, economies risk splitting into an AI-supported core and a low-margin periphery.

In conclusion, an AI-caused divergence will be a drawn-out process involving infrastructure, capital, and human-capital building over many years. The historical parallels are strong because the dynamics are alike: those who master the new technology early lock in lasting advantages. For educators, the best response is adaptation: remodel educational programs, invest in shared computing access, and design partnerships that keep most of the value at home. If countries act now to turn access into ability, AI can become a global opportunity rather than another industrial enclosure. Nations that delay risk relegating their regions to spectators, supplying raw inputs to systems managed elsewhere. The opportunity to shape whether AI becomes a force for shared prosperity or a source of global inequality is here now, and educational systems must act.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Anthropic / Microsoft / industry diffusion reports. (2025). AI diffusion and compute concentration. Industry white paper.
Brookings Institution. (2026). The Next Great Divergence: How AI could split the world. Brookings Essays.
LSE Public Policy Group. (2025). Will AI create a new Great Divergence? LSE Articles.
McKinsey & Company. (2025). The State of AI: Global Survey 2025. McKinsey Insights.
Microsoft AI Economy Institute. (2026). Global AI Adoption in 2025. Corporate report.
Reuters. (2025). AI could increase divide between rich and poor states, UN report warns. Reuters Technology.
Stanford Institute for Human-Centered Artificial Intelligence (HAI). (2025). AI Index Report 2025. Stanford HAI.
United Nations Development Programme (UNDP). (2025). The Next Great Divergence: Why AI May Widen Inequality Between Countries. UNDP Policy Report.
World Trade Organization / Financial Times reporting. (2025). AI risks widening global wealth gap. Financial Times analysis.
