Powering AI data centers: Why the electron gap will reshape the US–China contest

By Keith Lee

Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.

China’s AI edge is increasingly driven by faster, cheaper access to power and land
U.S. grid constraints are slowing large-scale AI deployment and raising costs
Energy infrastructure, not code alone, will shape AI leadership

The crucial factor in today's artificial intelligence competition isn't just processing speed or venture capital investment; it's energy capacity, measured in gigawatts. In 2024, China's new power capacity grew by an estimated 429 gigawatts, while the United States added about 51 gigawatts. This difference reflects two distinct approaches: one views consistent, large-scale electricity as a vital public resource, while the other relies on fragmented markets and lengthy approval processes. If electricity supply limits processing power, the country capable of constructing, supplying, and operating large-scale artificial intelligence data centers will have a considerable advantage in speed, costs, and strategic resilience. This isn't just about technology; it affects where research centers develop, which businesses can innovate quickly, and which governments must create the regulations for future infrastructures. Energy capacity for artificial intelligence data centers is a geopolitical strategy more than an engineering issue.

China's Structural Energy Advantage

China's advantage isn't just in the sheer amount of capacity added. It's their combined access to land, centralized planning, and a political system that accelerates project completion. Extensive areas in China’s interior offer flat land close to energy transmission routes. Local governments can synchronize power grid upgrades, set up renewable energy sources, and allocate industrial land. While large companies in the U.S. face various public hearings and interconnection delays that can take years, Chinese projects progress in months. Because of this, their industrial sector can quickly handle large data centers. Energy capacity for artificial intelligence data centers involves providing continuous, affordable energy to co-located computing and cooling systems at the speed modern artificial intelligence demands, rather than installing solar panels or gas turbines.

Specifically, data centers are using more energy than ever. The International Energy Agency predicts that, in a basic scenario, electricity use in data centers will grow roughly 15% annually between 2024 and 2030, doubling to almost 945 terawatt-hours by 2030. This trend favors locations that can quickly add significant capacity and have fewer regulatory or land-use restrictions. China's rapid growth in both conventional and renewable energy sources enables the location and operation of facilities that consume a lot of energy, often using local generation and direct grid connections that are easier to protect than in many U.S. areas. While many Chinese regions face power grid issues and industrial prioritization, the overall picture is clear: China is building the basic infrastructure that makes large-scale artificial intelligence deployments simpler.
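A quick compound-growth check shows the two IEA figures are consistent. The 2024 baseline of roughly 415 TWh is an assumption on my part (it is the base-year figure the IEA pairs with its 945 TWh projection), not a number stated above; a minimal sketch:

```python
# Back-of-envelope check of the IEA trajectory cited above.
# ASSUMPTION: a 2024 base of ~415 TWh, the base-year figure the IEA
# publishes alongside its 945 TWh projection; not stated in the text.
BASE_TWH_2024 = 415   # assumed global data-center consumption, 2024
GROWTH_RATE = 0.15    # ~15% annual growth (IEA base case)
YEARS = 2030 - 2024

projected_2030 = BASE_TWH_2024 * (1 + GROWTH_RATE) ** YEARS
print(f"Projected 2030 consumption: {projected_2030:.0f} TWh")                # ~960 TWh
print(f"Multiple of the 2024 level: {projected_2030 / BASE_TWH_2024:.2f}x")   # ~2.3x
```

At 15% annual growth, consumption a little more than doubles over six years, which matches the "doubling to almost 945 terawatt-hours" framing.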

Figure 1: China’s ability to add large amounts of new power capacity each year creates a structural advantage in hosting energy-intensive AI data centers at scale.

The U.S. Energy Shortfall for Artificial Intelligence

In the United States, the factors that encourage innovation also create issues for the infrastructure needed to support it. Land-use policies, varying levels of required permissions, and an outdated interstate grid result in longer wait times between investment decisions and energy delivery. The U.S. Energy Information Administration (EIA) estimates that U.S. electricity consumption will reach new highs through 2027, partly due to larger data center requirements, but the growth in energy generation and grid connections isn't keeping pace with commitments from large tech and artificial intelligence companies. These businesses are responding by creating long-term supply agreements and supporting local energy projects. However, these solutions are costly and require levels of coordination that the U.S. system isn't prepared to provide at the scale artificial intelligence now demands.

These issues create measurable economic effects. The IEA and other experts estimate that U.S. data centers consumed about 183 terawatt-hours in 2024, a figure that accounts for a significant share of nationwide electricity consumption and is increasing pressure on local electricity prices in areas with new facilities. While a Chinese data center operator can negotiate a local route for a dedicated transmission line, an American operator often deals with multiple utilities, grid connection delays, and market-driven price changes that can significantly raise the actual energy costs. This difference is important because it separates a workable model of continuous, low-cost computing from one that must constantly guard against congestion, price increases, and the risk of outages. Because of this, energy capacity for artificial intelligence data centers reflects financial reality as much as engineering reality.
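To put the 183 TWh figure in context, a rough share calculation helps. The denominator here is an assumption, not a number from the text: total U.S. electricity consumption was on the order of 4,000 TWh in 2024.

```python
# Rough share calculation for the 183 TWh figure cited above.
# ASSUMPTION: total U.S. electricity consumption of ~4,000 TWh in 2024;
# the text gives only the data-center figure.
US_DATA_CENTER_TWH = 183
US_TOTAL_TWH_ASSUMED = 4000

share = US_DATA_CENTER_TWH / US_TOTAL_TWH_ASSUMED
print(f"Data centers as a share of U.S. consumption: {share:.1%}")  # ~4.6%
```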

Strategic Consequences for Education, Research, and Policy

If the presence of dependable, affordable, and secure energy determines where large-scale training and inference clusters are located, then educators and research labs must prepare for a changed environment. For educators, the implication is clear: curricula must treat hardware, energy economics, and infrastructure policy as essential skills for future artificial intelligence experts. Learning about algorithms in theory is no longer enough. Students should learn about energy purchasing models, grid connection procedures, and basic energy system stability. For leaders, deciding where to locate compute-intensive programs requires a comprehensive risk assessment, including energy contracts, latency considerations, and the track record of siting in areas with challenging power grids. Higher education institutions that have treated cloud credits as the primary capital expense will now face organizational questions about whether to own, or partner with, dedicated compute clusters tied to energy resources.

Lawmakers face tougher choices. They can choose to speed up approvals and expand capacity with public funding, or they can attempt to limit electricity use for high-usage purposes through pricing and allocation regulations. The first option aligns with a mission-focused industrial policy that treats energy capacity for artificial intelligence data centers as a strategic investment in technology independence. The second option protects market rules but risks shifting computing to areas with cheaper, more readily available electricity. In reality, this suggests that the future of advanced artificial intelligence research may depend on whether governments agree on the need for rapid, sometimes centralized, infrastructure decisions. If the U.S. continues to prefer individual, competitive grids with no plans for fast upgrades, it will lose affordability and the chance to gather professionals where the computing takes place. Recent reporting from Reuters and others on China’s narrowing technology gap focuses on how these infrastructure choices complement other key investments.

Addressing Concerns and Considering Counterarguments

One possible point of disagreement is technical: gains in hardware and software will reduce the increase in electricity consumption. This is partly true. Improvements in model sparsity, chip performance, and cooling can reduce kilowatt-hours per operation. But past data show that performance increases often induce demand that exceeds any savings. The IEA’s models already account for substantial efficiency gains and still expect rapid overall growth in data center electricity consumption. Efficiency buys time; it does not eliminate the need for scale. Another issue involves emissions and climate policy. Critics argue that depending on quick capacity growth risks greater use of fossil fuels. Here, the relevant point is policy design. Between 2024 and 2025, China expanded both its renewable and thermal plants; the outcome for emissions depends on dispatch rules, curtailment, and fuel mix. For the U.S., the policy choice isn't between growth and green results; it's between regulated, coordinated capacity increases that can be low-carbon from the start and unplanned, costly workarounds that prioritize short-term speed at the cost of additional lifecycle emissions.

There is also a governance concern: speeding up energy capacity expansion through a centralized, government-led approach could increase monitoring and geopolitical power. That point is serious and real. Infrastructure decisions have political effects. Turning infrastructure over to less effective governance models or to market situations that support only the biggest private players isn't a neutral step. The trade-offs in both the U.S. and China involve control, transparency, and who bears the societal costs. For democratic countries, the policy answer should include speed with protection, such as quick permitting paired with transparency, grid investments alongside community benefit agreements, and energy purchasing that puts low-carbon sources first. This balances affordable capacity inside a rules-based system.

Steps for Organizations and Lawmakers

Universities and research labs should audit their computing usage now. The audit should cover kWh per experiment, purchasing methods, and backup plans for supply failures. Leaders should treat off-site computing providers as energy partners and negotiate terms that hedge against local price volatility. On the policy side, three steps are useful. Create conditional fast-track interconnection routes for research and strategic computing, with environmental and community safeguards. Support regional energy centers that pair renewable energy, storage, and flexible demand to serve educational-industrial clusters. Require transparency in large companies’ power deals so that public organizations understand the costs and societal trade-offs they bear. These actions support growth while protecting public interests.

Operational details matter. Fast-track routes don't mean going around environmental reviews. They mean sequencing review steps, standardizing the grounds for approval or denial, and providing predictable timelines. Energy centers must combine storage with flexible load: research computing can be scheduled to take advantage of low-price windows, if contracts and software allow. The goal isn't to freeze markets but to create predictable, rule-based channels where computing can grow while the grid remains balanced. If the U.S. chooses to remain slow, its organizations will continue to incur high costs. If it chooses to invest in coordinated infrastructure, it can combine market strength with public purpose.
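To illustrate how deferrable research computing can chase those low-price windows, here is a minimal sketch. The hourly prices, the job size, and the `cheapest_hours` helper are all hypothetical; a real scheduler would also respect deadlines, ramp limits, and contract terms.

```python
# Minimal sketch of price-aware batch scheduling, assuming hourly
# day-ahead prices ($/MWh) are available and the job is fully deferrable.
# Prices and job size below are illustrative, not real market data.

hourly_prices = [62, 58, 41, 33, 29, 31, 44, 70, 95, 88, 76, 64,
                 59, 55, 52, 57, 73, 101, 112, 97, 81, 69, 60, 54]

def cheapest_hours(prices, hours_needed):
    """Return the indices of the lowest-price hours for a deferrable job."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

# A training job needing 6 hours today runs in the cheapest window.
window = cheapest_hours(hourly_prices, hours_needed=6)
avg_price = sum(hourly_prices[h] for h in window) / len(window)
print(f"Schedule hours {window}; average price ${avg_price:.0f}/MWh")
```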

The artificial intelligence competition depends on more than chips and code. It will come down to who can consistently deliver large amounts of affordable, low-emission electricity as quickly and predictably as modern models demand. The 429 GW versus 51 GW comparison resonates because it captures a larger strategic difference: one country treats electricity as an instrument of national technological power, while the other largely treats it as a market commodity. For educators, leaders, and lawmakers, the policy question is clear. Will we create the rules and infrastructure to support large-scale research and training under clear governance? Or will we allow professionals and computing to migrate to places where energy is affordable and rules are simpler? The better choice is to act quickly while protecting democratic values; doing so preserves both competitiveness and civic oversight. Meeting the energy needs of artificial intelligence data centers isn't only an engineering project. It is about the political structure of our technological future, and it needs to be taught, planned for, and managed with urgency and care.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Clemente, Jude. 2025. “China vs. U.S.: AI Supremacy Requires Reliable Electricity.” Forbes.
Energy Information Administration (EIA). 2026. Short-Term Energy Outlook. U.S. Department of Energy.
International Energy Agency (IEA). 2024–2025. “Energy and AI: Energy demand from AI” and “Global data centre electricity consumption” reports. IEA, Paris.
Office of the U.S. Secretary of Energy. 2025. DOE Final EO Report: Evaluating the Reliability and Security of the United States Electric Grid (July 7, 2025).
OpenAI (reported). 2025. Public briefing summarized in industry reporting on power capacity additions (2024).
Reuters. 2026. “China is closing in on US technology lead despite constraints, AI researchers say.” Reuters.
Stanford Review. 2025. “How China’s Energy Supremacy Threatens U.S. AI Dominance.”
Ember. 2025. “China Energy Transition Review 2025.”
Pew Research Center. 2025. “What we know about energy use at US data centers amid the AI boom.”


Robot tax is no longer a joke: why revenue design must catch up with automation

By Ethan McGowan

Ethan McGowan is a Professor of Financial Technology and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

AI-driven automation is shrinking both labor and consumption tax bases
A robot tax is becoming a practical fiscal tool, not a provocation
Welfare systems may also need less funding as labor is partially emancipated

In 2024, the average density of robots in factories worldwide was approximately 162 per 10,000 employees; in certain countries, the figure is now in the hundreds. This isn't a future possibility; it reflects the present state of production. As automation increases capital's share of output while reducing the need for routine labor, two related tax bases become unstable: income taxed at source and the spending that earned wages generate. The arithmetic is straightforward, but the political implications are complex: payroll and personal income tax receipts shrink, and VAT bases shrink with them as incomes decline. The phrase "robot tax" was once considered a provocation. It should now be read as a warning: governments must decide either to disregard the shrinking tax base or to redesign revenue rules so that public services and social security survive the automation wave.

Why a Robot Tax Is a Serious Consideration

The key idea is quite clear. Modern automation increases the capital used in production and, assuming all other factors remain constant, lowers the share of national income received as wages. When salaries decline, consumer spending tends to fall as well. A sizable share of public revenue in most developed countries comes from taxes on work (payroll taxes and personal income taxes) and consumption taxes like VAT. Because automation reduces both earnings from work and spending, the combined decrease of these bases creates the possibility of long-term deficits if tax systems are not adjusted.

Recent data show this trend. In OECD countries, social contributions and personal income taxes combined accounted for roughly half of total tax revenue in the most recent year with consolidated data, while VAT and related consumption taxes made up about one-fifth. At the same time, robot installations have surged: global robot density has roughly doubled in recent years across major manufacturing centers, with China and other economies growing especially rapidly. Separately, working-age populations and employment rates vary, but surveys reveal widespread concern among workers about how AI will change job duties and displace jobs. Taken as a whole, the math is concerning. A rising capital share combined with weaker wage growth could erode two of the three primary revenue sources for many governments: wages and consumption. The third source, taxes on businesses and capital, might seem like a solution, but it comes with political and practical problems. Businesses can shift profits, lobby against policy, or reprice investment. Heavily taxing capital could discourage productive investment if not done carefully.

Figure 1: Rising automation intensity coincides with a steady decline in labor-linked tax revenues, weakening the fiscal foundations of welfare states.

A robot tax, then, is more than rhetoric. It stands for a range of policies, such as levies on the use of automation, higher taxes on capital, broader taxes on economic rents, or new user fees, all meant to stabilize revenue in an economy where machines do a larger share of the work. The discussion is not only about collecting revenue from robots. It concerns how value should be claimed publicly when private machines generate more of it. The political stakes are high: governments must either adapt to guarantee funding for public services and social insurance, or allow welfare programs to be underfunded and inequality to grow.

Where Public Revenue Will and Won't Come from in an AI Economy

Estimating precisely where tax revenues will fall short requires thorough, transparent modeling. Yet several reliable patterns are already visible: (1) across numerous sectors, labor's share of earnings is under pressure; (2) consumption is still the biggest single driver of demand in most countries; and (3) robots and AI are most heavily used in sectors that formerly offered numerous middle-income jobs. These three realities turn the dual exposure of VAT and payroll taxes into a real danger.

To make this concrete, consider a conservative calculation. In a developed economy, suppose employee compensation decreases by 5% relative to GDP over a ten-year compression period. If households' marginal propensity to consume from labor income is 0.6, and final consumption accounts for around 55% of GDP (aggregates consistent with World Bank and OECD figures for many economies), then a 5% decline in labor income could lead to a 1.65 percentage point decrease in consumption as a proportion of GDP in a steady state (0.05 × 0.6 × 0.55 ≈ 0.0165). With VAT receipts normally proportional to consumption, a comparable drop in VAT revenue is possible without a counterbalancing policy. This is a simplified model for illustrative purposes; it assumes no offsetting fiscal transfers, no corresponding wage increases in other fields, and no stabilizing policy response. The intent is not to provide precise numbers but to indicate scale: even small drops in labor income produce noticeable declines in consumption tax bases.
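For readers who want the arithmetic laid bare, a minimal sketch reproduces the illustrative calculation above. The three inputs come from the text, and the model's simplifications (no offsets, no transfers, steady state only) carry over.

```python
# The conservative calculation above, made explicit. All three inputs
# come from the text; the model is deliberately simplistic.
LABOR_INCOME_DROP = 0.05   # compensation falls 5% relative to GDP
MPC_LABOR = 0.6            # marginal propensity to consume from labor income
CONSUMPTION_SHARE = 0.55   # household final consumption as a share of GDP

drop_in_consumption_share = LABOR_INCOME_DROP * MPC_LABOR * CONSUMPTION_SHARE
print(f"Consumption falls by ~{drop_in_consumption_share:.2%} of GDP")  # ~1.65 pp

# With VAT receipts roughly proportional to consumption, a VAT base
# about 1.65 percentage points of GDP smaller follows mechanically.
```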

Other revenue options may look good on paper but come with restrictions. Taxes on capital gains and corporate income can capture a part of the new value produced by machines, particularly when automation increases company earnings, yet corporate tax revenue is already volatile and prone to avoidance. Higher taxes on wealth or increased top rates on capital income may help, but they often face political opposition and are administratively costly. New taxes tied directly to automation, such as fees on robot installations, surcharges on AI-accelerated depreciable capital, or a levy on commercial AI deployments, can be designed to track the degree of automation. Yet any tax on productive capital risks dampening adoption and growth unless it is precisely targeted and phased in.

Lastly, outcomes differ greatly across countries. Countries with fast automation and weak social security will face tighter budget constraints sooner. Nations with strong public services but slower automation expansion may be protected for longer. The result will be differing budgetary pressures rather than a single, global disaster, which policymakers should consider when creating cross-border tax cooperation.

Designing Realistic Robot Tax Policies That Protect Expansion and Fairness

Policy needs to accomplish three objectives at once: guarantee steady revenue, preserve incentives for productive investment, and maintain equitable distribution outcomes. This is a difficult set of goals, but it narrows the options to the practical ones. To start, broaden the base on which capital taxes are applied in ways that are easy to administer and hard to exploit. This entails stronger rules against profit shifting, better taxation of algorithmic and digital rents, and minimum rates enforced in practice. Tax rules shouldn't subsidize displacement through overly generous write-offs or depreciation schedules when automation substitutes for labor. A small fee on accelerated capital allowances for labor-substituting automation, for instance, could raise revenue without deterring investment in the complementary capital that widens job opportunities.

Figure 2: As automation weakens labor and consumption tax bases, public revenue shifts toward capital- and automation-linked sources.

Second, consider usage-based taxes that mimic how society already taxes other movable capital. For example, automobiles are taxed through insurance, gasoline taxes, and registration fees, which reflect the social costs. Similar ideas can be applied to robots and autonomous systems. A per-unit registration charge for industrial robots, a safety and liability charge for robots working in public spaces, or a transaction fee for commercial AI services could generate stable income while promoting safety and shared standards. The fees are not a tax on innovation in and of themselves; they are a funding tool that accounts for social costs and finances safety nets, training, and monitoring.

Third, combine revenue changes with spending redesign. If automation reduces certain welfare costs (such as long-term health burdens from repetitive work), governments can redirect the freed-up funds toward transitional wage supports, retraining initiatives, and lifelong learning, the public goods that complement private automation. In some situations, governments may, in the long term, require less headline spending on transfer programs, but the most plausible near-term outcome is a greater need for active labor-market policy and retraining. Policymakers ought therefore to tie new revenue streams to investments in community digital infrastructure, job-transition services, and education. That connection helps sustain support for new taxes while making automation more inclusive.

Fourth, coordinate tax policy across national borders. AI services and digital goods cross national lines. If countries adopt widely differing rules, profits will shift and a race to the bottom will ensue. Standardized reporting of automation intensity, multilateral agreements on how to treat algorithmically produced rents, and coordinated minimum tax floors will reduce arbitrage and stabilize revenue.

Lastly, invest in early warning systems and real-time data. Tax administration needs a stronger sensor network to identify structural changes: matched employer-employee data, VAT receipts by sector, and automation-adoption indexes. With better data, governments can set tax rates that rise when labor markets deteriorate and fall when overall employment improves. This active approach reduces blunt distortions and focuses policy where it is most needed.
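As a sketch of what such a responsive instrument might look like, consider an automation levy whose rate is indexed to employment. The functional form, bounds, and parameter values below are illustrative assumptions, not a proposal from this article.

```python
# Hedged sketch of the "active approach" described above: a levy whose
# rate moves with labor-market conditions. All parameters are
# illustrative assumptions, not policy recommendations.

def automation_levy_rate(employment_rate, base_rate=0.02,
                         target_employment=0.75, sensitivity=0.5):
    """Levy rises as employment falls below target, falls as it recovers."""
    gap = target_employment - employment_rate   # positive when labor market worsens
    rate = base_rate + sensitivity * gap
    return min(max(rate, 0.0), 0.10)            # clamp within a 0-10% band

print(automation_levy_rate(0.78))  # strong labor market -> 0.005 (0.5%)
print(automation_levy_rate(0.70))  # weak labor market   -> 0.045 (4.5%)
```

The design choice worth noting is the clamp: bounding the rate keeps the stabilizer predictable for investors even when the indicator swings.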

For educators and administrators, the implications are immediate. Universities and schools shouldn't promise static degrees tied to the jobs of the past. Instead, curricula ought to be linked directly to durable civic skillsets: digital stewardship, adaptability, and the human skills that complement AI. Education ministries must budget for continuous-learning pathways, create micro-credentials that stack into recognized degrees, and collaborate with industry to track which tasks automation is displacing. Administrators will need new measures of institutional success, including stable job transitions and earnings, not just graduate placement in the first six months.

Anticipating criticisms strengthens the design. Some will claim a robot tax will suppress growth; others will argue automation boosts productivity enough to widen the tax base on its own. That is the familiar trade-off debate. But loopholes and poorly designed tax systems already misprice automation today. A small, transparent usage fee or modest surcharge on labor-substituting automation would fund social buffers while preserving complementary investment. And productivity gains, in the short to medium term, are typically captured as rents, not wages. Without mechanisms to claim a portion of those earnings for public purposes, the profits will not translate into shared welfare or stable revenue.

A Useful Plan for the Years Ahead

We should admit three awkward facts. First, automation is shifting income toward capital, and public revenue depends on the labor income it erodes. Second, the standard tax tools, payroll taxes and VAT, are the ones most exposed to that shift. Third, unthinking responses risk either starving public services or stifling innovation. A sensible balance is a functional bundle: modernize capital taxation and anti-avoidance, introduce precisely targeted usage fees and safety charges for automation, and link new revenue to investments in transition supports and lifelong learning. The bundle treats the robot tax as a design challenge rather than a slogan.

For administrators and instructors, the message is clear: students need to be prepared for a job market where human work is defined by digital stewardship, social intelligence, and judgment. For policymakers, the job is fiscal engineering under democratic constraints: creating tools that fairly collect economic rents, stabilize revenue, and support the public services that make automation socially sustainable. If we treat the coming change as an opportunity to reset the contract between earnings, wealth, and public claims, we will avoid the worst outcomes. If we avoid it, we will be forced into harder choices later.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

International Federation of Robotics. (2024). World Robotics – Industrial Robots: Executive summary and robot density statistics.
Velasquez, A. (2023). Production Technology, Market Power, and the Decline of the Labor Share. IMF Working Paper. International Monetary Fund.
ILO. (2024). Global Wage Report 2024–25.
OECD. (2023). Revenue Statistics 2023.
OECD. (2023). Employment Outlook 2023.
OECD. (2024). The Impact of Artificial Intelligence on Productivity, Distribution and Growth.
Reuters. (2024). China overtakes Germany in industrial use of robots, says report.
The Guardian. (2024). AI may displace 3m jobs but long-term losses 'relatively modest', says Tony Blair's thinktank.
World Bank. (latest). Households and NPISHs final consumption expenditure (% of GDP) — World Development Indicators.


Fewer Than One in Five: Why the AI Hype Still Misses Most European Workplaces

By Ethan McGowan

AI adoption in Europe is still limited, with most firms using AI only as a supporting tool
The gap between AI hype and real workplace use reflects risk, skills gaps, and institutional limits
Policy and education must focus on practical capacity, not promises of rapid transformation

In 2025, fewer than one in five European companies report formally using artificial intelligence. This number paints a more realistic picture than the idea of a fast-paced move toward automation, and it raises a valid question for those who make policy and teach: if AI is truly changing work, why isn't it used in most workplaces? The answer isn't resistance or lack of awareness; it is structural. Across Europe, AI use is inconsistent, basic, and often just for show. Many companies use AI as a tool, not as a core system. This gap between perception and reality matters now because policies, funding, and training plans are starting to assume that AI is already everywhere. They're being designed for a future that isn't here yet. Until we recognize what AI use in Europe actually looks like, investments will continue to reward hype over real work, and training programs will prepare workers for tools they may never need.

AI use in Europe: what the numbers tell us

Recent surveys of businesses across the European Union show a common theme. About 20% of companies say they use AI in some form, mostly big companies and those in knowledge-based fields. Among small and medium-sized businesses, where most Europeans work, the numbers are much lower. In some countries, such as Italy, fewer than 1 in 10 companies report using AI. These aren't just unusual cases. They come from official statistics that define AI narrowly and ask companies to report when they intentionally use it, not just when it's part of a software update.

This is an important point. Many companies use digital tools with machine-learning components without calling them AI. At the same time, some companies say they've adopted AI after doing small tests that never became part of their daily work. These things balance each other out. The result isn't exact, but it gives us a good idea of the trend. AI use in Europe is growing, but it's starting from a low base and remains inconsistent across company sizes, industries, and countries.

Figure 1: Despite intense public attention, total generative AI adoption remains limited in Germany, Italy, and Spain, with most usage concentrated in experimental or limited applications rather than intensive deployment.

The popular story of quick spread comes from different sources. Consulting surveys and reports from sellers often claim that half or more of companies are using AI, especially generative tools. These surveys often focus on tech leaders, executives at large companies, or early adopters. They measure interest and experiments, not ongoing use. Both views are helpful, but they tell us different things. One tells us how many companies depend on AI as part of their usual work. The other tells us how many are curious or trying things out. Confusing these two can lead to bad policy decisions.

This reality has direct results for those who teach and run programs. Training systems are being changed to focus on general AI knowledge, as if every workplace needs it. But many companies need workers who can judge tools, handle data quality, and use small AI functions in their current work. Teaching advanced AI use without covering these basics could widen the skills gap with employers' needs. The policy challenge is not to assume everyone is using AI, but to help companies move from curiosity to skill.

Why AI is still a helper, not a main system

Across both Europe and the United States, surveys of workers show a similar story. About one in five adults say they use AI tools at work, and even those who do mostly see them as helpers, not replacements. They help write emails, summarize documents, or brainstorm ideas. They rarely control important decisions, schedules, or how things are made. This isn't by chance. It shows how companies handle risk.

Accuracy is still the first problem. AI systems, especially general-purpose tools, still make confident mistakes. For tasks where errors carry legal, financial, or safety consequences, companies prefer human judgment. Using AI to help write a draft is safe; letting it make a final decision is not. Until reliability improves or better ways to assign responsibility emerge, companies will keep AI in low-risk areas.

Figure 2: Generative AI adoption rises sharply with firm size and varies widely by sector, reinforcing that AI integration remains a capability issue rather than a universal technological shift.

Skills and the ability to make changes form the second problem. Using AI means having good data, changing processes, and regularly checking on things. Big companies can handle these costs, but most small companies can't. For them, using AI often means buying a ready-made tool that fits into their current work without changing it. This limits how much they can gain, but also how much disruption it causes.

The third problem is uncertainty. Companies face changing rules, unclear responsibilities, and rapidly changing seller options. Tying a key process to a specific AI system feels risky when standards are still changing. Using it a little at a time becomes the logical choice. This is why AI use in Europe seems basic from the outside. It's not because companies don't see the potential, but because they're being careful within their limitations. Policy should start with this understanding, not with frustration.

Critics often point to successful companies that have already changed their operations using AI. These examples exist and are important, but they're not typical. They have strong data systems, skilled workers, and leaders who are willing to change work from the ground up. The danger is creating policies for these exceptions instead of for the average company. When that happens, public money goes to experiments that don't expand.

The policy mistake: thinking use is unavoidable

Many current policies assume that AI use will speed up on its own, as long as rules don't get in the way. This assumption shapes training plans, funding for new ideas, and even job market predictions. But it's wrong. Technology spreading is rarely automatic. It depends on related investments, trust in institutions, and changes in how organizations work. AI is no different. Without help for using it, adoption stops at the test stage. Research shows that gains come not from the technology itself, but from how it changes tasks and decisions.

For education systems, this means a change in focus. Instead of making broad claims about AI-ready graduates, courses should focus on practical skills. Students and workers need to know how to check outputs, handle data, and change workflows to fit imperfect tools. These skills can be used across different platforms and industries, and they're what companies really need.

Government buying offers another way to make a difference. Governments buy a lot of digital systems, but often reward newness instead of use. Contracts should require clear plans for changing workflows, training staff, and measuring results. This would show sellers and companies that AI adoption is not about demos, but about lasting use. There's also a role for shared resources. Many small companies can't afford to hire data engineers or AI checkers. Regional support centers, industry-specific support, and neutral evaluation centers could make adoption more affordable. This isn't old-style industrial policy; it focuses on making adoption possible rather than trendy.

Skeptics say these steps slow the spread of new ideas. But they actually do the opposite. By reducing uncertainty and spreading knowledge, they help more companies move beyond basic use. The alternative is a two-speed economy: a few advanced users and a large group watching from the sidelines.

From hype to habit: what lasting adoption needs

The last change is about culture. AI adoption in Europe is often seen as a competition with other regions. This encourages speed over suitability. A better way to think about it is to form a habit. Companies adopt what they can keep up with. Habits form through repetition, feedback, and trust.

To build these habits, policy should reward consistent use rather than big announcements. Metrics should measure how deeply AI is used, not just if it's present. Education systems should confirm practical skills, not just knowledge of tools. Those who make rules should focus on clarity and fairness to reduce fear of unknown responsibilities.

Teachers, especially, are at a key point. If they teach AI as magic, graduates will expect workplaces that don't exist. If they teach it as a fallible system within social and organizational contexts, graduates will be better prepared for real work. This isn't lowering goals, but redefining what it means to be ready.

The opening number is worth repeating because it anchors the discussion: fewer than one in five European companies use AI today. This isn't a failure but a starting point, and it yields several key takeaways: current AI use is limited, policy and training should align with actual adoption, and real productivity comes from grounded, step-by-step integration. Policies built on this reality can make AI adoption more meaningful and productive, rather than driven by hype.

Creating policy for the Europe that is

The gap between AI headlines and what's really happening in workplaces isn't going to close on its own. It continues because policies, education, and investments have been built on assumptions rather than facts. AI adoption in Europe remains limited, inconsistent, and largely used as a helper. Pretending otherwise doesn't speed up change, but distorts it. A better way is to start with honesty. The key takeaways are: most companies are cautious with good reason, AI is mostly a workplace helper, and actual productivity depends on how technology is used, not just on exposure to it. Policies that reflect these truths will better support slow, meaningful progress: skills development, thoughtful workflow changes, and building institutional trust. The challenge ahead is not to force adoption, but to make it possible. This means matching education to real needs, government buying to real results, and rules to real risks. If Europe succeeds, it will be because it moved deliberately, not because it moved fastest. The future of work won't be announced; it will be built, one workflow at a time.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. (2025). How are Americans using AI? Evidence from a nationwide survey.
CEPR / VoxEU. (2026). Embracing AI in Europe: New evidence from harmonised central bank business surveys.
European Commission (Eurostat). (2025). Use of artificial intelligence in enterprises.
McKinsey & Company. (2024). The state of AI in early 2024: Gen AI adoption and value capture.
OECD. (2025). The adoption of artificial intelligence in firms.
Reuters / Istat. (2025). Italian firms lag in AI adoption.
