
[HAIP] When Soft Law Meets Strategic Competition: Can HAIP Survive Geopolitical Divergence?

Keith Lee
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence

Keith Lee is a Professor of AI/Finance at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). His work focuses on AI-driven finance, quantitative modeling, and data-centric approaches to economic and financial systems. He leads research and teaching initiatives that bridge machine learning, financial mathematics, and institutional decision-making.

He also serves as a Senior Research Fellow with the GIAI Council, advising on long-term research direction and global strategy, including SIAI’s academic and institutional initiatives across Europe, Asia, and the Middle East.


HAIP faces pressure as national AI governance models increasingly diverge
China’s state-led approach changes the incentives for voluntary global standards
Without economic rewards, HAIP risks losing durability in strategic competition

By late 2025, more than fifty countries had joined the Hiroshima AI Process Friends Group, but the OECD platform listed fewer than thirty public HAIP reports. Meanwhile, China accelerated its AI governance plan, connecting AI oversight to industrial policy, security reviews, and platform registration. This highlights a key tension: HAIP tries to build agreement through openness, while China’s approach centers on growth through state control. When openness and speed compete, openness often loses. This leads to a political-economy problem: countries that cooperate pay a price, while those with looser rules gain market and technological advantages. The main question is whether HAIP can stay relevant as strategic competition changes the incentives.

Divergence is no longer just an idea

At first, global AI governance focused on shared language and principles. By 2025, the differences between models became clear and practical. The EU chose binding, risk-based regulations. The US used a mix of voluntary standards, security measures, and sector-specific guidance. Japan promoted HAIP as a flexible bridge between systems. In contrast, China expanded algorithm registration, security checks, content controls, and platform rules, linking governance directly to its industrial and national goals instead of focusing on voluntary openness. These approaches are not just different in style; they are fundamentally different in how countries expect to benefit—whether through legitimacy and openness, or through operational advantages from state involvement.

A World Economic Forum analysis in late 2025 showed that national AI governance models now balance innovation, trust, and authority in different ways. According to the OECD, the Hiroshima AI Process (HAIP) is meant to help stakeholders work together to promote safe, secure, and trustworthy AI, but it does not require enforceable rules. This cooperative approach helps with diplomacy, but it can be less effective in the face of international competition. Collaboration helps with alignment, but it does not prevent countries from taking advantage or using regulatory gaps. As differences affect market access, procurement, and government support, voluntary openness is at a disadvantage because it relies on reputation in a world where speed, scale, and state support matter more.

This is why educators and leaders should pay attention to these differences. Governance is not just about safety; it also shapes how countries compete. Since governance affects innovation, it influences where companies invest, where talent goes, and how fast new models are used. Flexible frameworks need to compete on both legitimacy and economic value. If not, countries may only join in occasionally or just for appearances.

China’s model changes the game

China’s updated AI governance plan shifts from fragmented rules to a more complete, state-led system. A late 2025 East Asia Forum analysis called this a phased approach that values flexibility and alignment over early rules. AI governance in China is closely tied to platform management, security reviews, and pilot programs that accelerate adoption while maintaining central control. The system isn't necessarily easier; it's just structured differently. It prioritizes speed, as long as it stays within state limits.

From a game-theory perspective, this difference is key. HAIP asks companies to report openly, share their processes, and go through peer review, using reputation and transparency as incentives. In contrast, China’s system requires companies to meet state goals and pass security checks to get faster market access, resource support, and coordinated rollout. These are different choices: one rewards openness and reputation, the other rewards following state rules and fast growth. As a result, companies might choose openness in HAIP markets but focus on speed and state support in others.

Figure 1: Different governance systems trade transparency for deployment speed, reshaping competitive incentives.

The real risk is not only that China could outpace others in governance, but that global standards could break down. If companies find that HAIP reporting does not help them commercially, while other systems offer more growth and speed, they may see HAIP as a waste of effort. This could undermine the cooperation that flexible rules depend on. It’s similar to a prisoner's dilemma: if others stop cooperating, the best choice is to do the same, even though everyone would benefit more from working together for safety and trust.

The prisoner’s dilemma for G7 governance

The G7’s challenge is not to copy China’s model, but to ensure HAIP sets a clear limit rather than just a starting point. The cooperative approach must bring real benefits. If not, countries will move to systems that offer better rewards, like easier market access, procurement advantages, insurance benefits, or fewer overlapping rules. Without these incentives, HAIP may exist only for show while real governance occurs elsewhere. Early HAIP participation highlights a real concern: there is strong support in principle, but less action. The small number of public reports is not because HAIP is rejected, but because companies are weighing the costs and benefits. Companies share only when it helps them, which is normal in a competitive world.

Figure 2: As faster regimes gain advantage, voluntary HAIP participation becomes a competitive cost.

For leaders, this means HAIP cannot rely on reputation alone. It needs to offer real economic benefits: procurement rules, funding, and international agreements should be tied to verified reporting. Without these incentives, HAIP becomes optional, while competitors use governance as an industrial-policy tool. The G7 cannot succeed by appealing to norms alone; it must adjust the incentives.

What it would take to last

For HAIP to last, openness must be a core part of the system. This takes three steps. First, G7 procurement groups should accept each other’s HAIP reports to make things smoother. If a company has a strong HAIP report, buyers should act more quickly. Second, public funding and research should depend on the quality of governance. This makes reporting necessary for key partnerships. Third, there should be different levels of review: high-risk systems get expert checks, while low-risk systems have simpler reporting.

These steps aren't about punishing those who don't participate, but about making cooperation appealing. They increase the benefits of cooperating and reduce the temptation to go it alone. They also balance models that trade openness for speed. Importantly, they keep HAIP flexible while giving it strength through markets, not mandates.

For educators and leaders, this is important. It’s not enough to teach governance as just ethics; it must be taught as a strategy. Students should learn to assess incentive structures, not just principles. HAIP is a real example of how flexible law interacts with hard competition. Using it in this way will prepare future leaders to create systems that can survive in challenging environments.

HAIP was designed for a world where agreement through openness seemed possible. By 2025, governance had also become a tool for competition. China’s different model and the slow adoption of voluntary reporting pose a risk: cooperative frameworks weaken when cooperation is costly and acting alone is rewarded. This does not mean HAIP will fail, but it does need to change. Openness must bring real economic and institutional benefits. Reviews must be fair and reliable. Procurement and funding should reward those who take part. Without these changes, HAIP could become little more than a symbol in a divided system. With them, it can still serve as a useful bridge. The real choice is not between flexible and strict law, but between flexible law with incentives and flexible law without them. Only the first can survive strategic competition.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. 2026. “The HAIP Reporting Framework: Its value in global AI governance and recommendations for the future.” Brookings.
East Asia Forum. 2025. “China resets the path to comprehensive AI governance.” East Asia Forum, December 25, 2025.
OECD. 2025. “Launch of the Hiroshima AI Process Reporting Framework.” OECD.
World Economic Forum. 2025. “Diverging paths to AI governance: how the Hiroshima AI Process offers common ground.” World Economic Forum, December 10, 2025.
Ema, A., F. Kudo, Y. Iida, and T. Jitsuzumi. 2025. “Japan’s Hiroshima AI Process: A Third Way in Global AI Governance.” In Handbook on the Global Governance of AI (forthcoming).


[HAIP] Transparency Without Teeth: Making the Hiroshima AI Process a Practical Bridge


HAIP increases transparency but does not yet change behavior
Voluntary reporting without incentives becomes symbolic
Real impact requires linking HAIP to audits and procurement

By June 2025, the OECD reported that twenty organizations from various sectors and regions had voluntarily joined the HAIP reporting process, while many more countries joined the HAIP friends network. This highlights a gap between broad diplomatic support and the smaller number of organizations actually reporting. Political momentum and voluntary disclosure often outpace real-world adoption. Reporting is useful, but it is not a complete governance plan. For HAIP to reduce real harms, it needs three things: a small, machine-readable set of comparable fields; affordable, modular checks for higher-risk systems; and clear purchasing or accreditation rewards that change developers' incentives. Without these steps, HAIP risks becoming a ritual of openness that reassures the public but does not change developer behavior on a large scale.

Introducing HAIP and its promise

The Hiroshima AI Process started with a simple idea: rapid advances in AI called for a shared starting point and a way for organizations to show how they manage risk. The G7 created a code of conduct and guiding principles, and the OECD developed a reporting plan so groups could share their governance and testing practices. The approach is simple. Developers explain how they identify risks, what tests they run, who approves them, and how they handle incidents. These accounts are made public so peers and buyers can learn. This design makes sense for a divided global system because it uses openness and peer learning as the first step in governance.

The reporting portal also offers immediate value for teaching. For teachers and policy trainers, it provides real, up-to-date case documents for students to analyze. These reports turn abstract ideas like “governance” and “risk” into concrete evidence that can be read, checked, and compared. However, for narrative reporting to lead to learning and better practice, it must be usable. If reports are too long, inconsistent, or hard to understand, they will help researchers more than buyers. That is why HAIP should be used as a living teaching resource and a place to test new methods. Students can extract key fields, compare reported tests, and design follow-up audit plans. This approach turns HAIP from a collection of essays into a set of practical governance tools.

Figure 1: Political participation in HAIP is broad, but actual public reporting remains limited, highlighting the gap between endorsement and operational adoption.

Functionality and mechanics of HAIP reporting

HAIP is meant to show how institutions work, not just to rate one system. The OECD reporting plan covers governance, risk identification, information security, content authentication, safety research, and actions that support people and global interests. In 2024, the pilot phase included companies, labs, and compliance firms from ten countries. In 2025, the OECD opened a public portal for submissions. These steps focus on depth and method, not just quick comparisons. The form asks who did the red-teaming, the size of the tests, whether third-party evaluators were involved, and how incidents were handled. These are specific points that can be taught, checked, and improved.

Figure 2: Early HAIP engagement is driven mainly by researchers and public-interest actors, not deployers.

This design has three main effects. First, it creates a useful record of what organizations actually do, giving teachers and researchers real data to use and compare. Second, it supports learning that fits different situations, since a test that works in one place may not work in another. Third, it adds a cost barrier. Writing a public, institution-level report takes time for legal review, test documentation, and senior approval. The OECD and G7 noticed this during their pilot and mapping stages and have made tools and mapping a priority. Still, making it easier to take part depends on lowering these costs. The policy challenge is clear: keep the parts of the form that teach and reveal information, and add a small, standard set of machine-readable fields for buyers and researchers to compare at scale.

Limits: divergence, incentives, and checks

HAIP works in a world with many national rules. Europe uses strict, risk-based laws. The United States uses voluntary frameworks, market incentives, and security measures. China takes a directive, pilot-led approach focused on state goals and industry growth. HAIP’s strength is its ability to work across these different systems. Its weakness is that it is voluntary and not enforced. When countries have different priorities, like rapid growth or strategic independence, a voluntary report is unlikely to change behavior. In practice, agreeing on words does not mean agreeing on incentives.

The next point is practical. Voluntary reports only matter when there are economic rewards or penalties. If buyers, funders, or procurement teams see a verified HAIP report as a quick way to secure contracts, then developers have a reason to disclose. If not, many will choose not to take part. The early record of submissions, with few reports despite strong political support, shows the gap between public support and real market incentives. In short, transparency needs to be rewarded to make a difference.

Verification is the hardest part. Narrative reports can be vague. An organization might describe its tests in a way that sounds strong but lacks real evidence. The solution is a modular approach. For low-risk systems, self-attested reports with a small machine-readable core may be enough. For high-risk areas like healthcare, critical infrastructure, or justice, there should be accredited technical checks and a short audit statement. This tiered model keeps costs manageable and focuses the most thorough checks where they are needed. The policy tool here is not force, but targeted standards and public funding to help small teams take part without high costs. Results from pilots and procurement trials will show which approach works best.
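The tiered model described above can be sketched as a simple routing rule. This is a minimal illustration under assumed tier labels, domain names, and check names; none of these are part of the official OECD framework.

```python
# Minimal sketch of the tiered verification model described above.
# Tier labels, domain names, and check names are illustrative assumptions.
CHECKS = {
    "low": ["self-attested report", "machine-readable core fields"],
    "high": ["accredited technical check", "short audit statement"],
}

def required_checks(domain: str) -> list[str]:
    """Route a system to a verification tier by its deployment domain."""
    high_risk = {"healthcare", "critical infrastructure", "justice"}
    tier = "high" if domain in high_risk else "low"
    return CHECKS[tier]

print(required_checks("healthcare"))  # full accredited review
print(required_checks("education"))   # lighter self-attestation path
```

The point of the sketch is that the routing criterion (here a deployment domain, but it could be any risk score) is cheap to evaluate, while the expensive checks are reserved for the high tier.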

What teachers and institutions should do now

For teachers, HAIP is a hands-on lab. Assign an HAIP submission as the main document. Have students pull out key data, such as test dates, model types, red team details, whether third-party testing was conducted, and the incident-handling timeline. Then, ask them to write a short audit plan to check the claims. This exercise builds two important governance skills: turning words into checks you can verify, and designing simple evidence requests that protect trade secrets.
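The extraction exercise can be made concrete with a small schema. The field names below are hypothetical choices for the classroom exercise, not the official HAIP reporting form.

```python
from dataclasses import dataclass

# Hypothetical machine-readable core for an HAIP-style report.
# Field names are illustrative assumptions, not the official OECD schema.
@dataclass
class HAIPCoreRecord:
    organization: str
    model_family: str
    red_team_internal: bool
    red_team_third_party: bool
    last_test_date: str          # ISO date of the most recent evaluation
    incident_response_days: int  # reported time-to-triage for incidents

    def audit_questions(self) -> list[str]:
        """Turn reported claims into concrete evidence requests."""
        questions = [f"Provide test logs dated {self.last_test_date}."]
        if self.red_team_third_party:
            questions.append("Name the third-party evaluator and its scope.")
        else:
            questions.append("Explain why no third-party evaluation was run.")
        return questions

record = HAIPCoreRecord(
    organization="ExampleCo", model_family="general-purpose LLM",
    red_team_internal=True, red_team_third_party=False,
    last_test_date="2025-03-01", incident_response_days=14,
)
print(record.audit_questions())
```

Students fill the record from a real submission and note which fields the narrative leaves unanswerable; those gaps become the audit plan.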

For institutions like universities, hospitals, and research labs, HAIP should serve two purposes. First, build internal records that align with HAIP’s main areas to enable decision auditing. Second, link disclosure to buying and hiring. Ask outside vendors for HAIP-style reports on sensitive projects. Make governance maturity part of grant and vendor selection. Invest in a small legal and technical team to prepare solid, redacted reports. The goal is clear: use HAIP to improve institutional practices and make verified reporting a real advantage in procurement and partnerships.

From transparency to policy design: small, testable moves

What is a practical way forward? Start with a pilot in one sector, such as healthcare or education. The OECD suggests that any AI system used for decisions in this sector should need verified HAIP attestation for a year, with funding to help small providers create accurate reports. Track three results: whether verified reporting cuts down on repeated checks, catches real harms or near misses, and speeds up safe procurement. If the pilot works, expand using shared recognition and simple procurement rules. At the same time, fund open-source tools that turn internal logs into safe, redacted attachments and fill out the basic machine-readable fields. Together, these steps make reporting useful in practice. The approach is modest and experimental, matching the size of the challenge: not sweeping global law, but tested, repeatable local policy that builds trust and evidence.

The Hiroshima AI Process did what it set out to do diplomatically: carve out a shared space for principles and a voluntary channel for public reporting. According to Brookings, the HAIP Framework's early history highlights both its achievements in international diplomacy and the practical challenges it faces. Rather than replacing HAIP with stricter laws or discarding it, the focus should remain on building upon its foundation as an innovative global AI governance tool. The work is to pair it with small, practical policy tools: a short, machine-readable core, a tiered checks-and-balances regime for higher-risk systems, procurement incentives that reward verified reports, and funded tools to lower the cost of participation. For teachers, HAIP is already a classroom-ready corpus. For policymakers, HAIP is a scaffolding that can support tested, sectoral pilots. If governments and institutions take these steps now, reporting will stop being a ritual and start changing behavior.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. 2026. “The HAIP Reporting Framework: Its value in global AI governance and recommendations for the future.” Brookings.
East Asia Forum. 2025. “China resets the path to comprehensive AI governance.” East Asia Forum, December 25, 2025.
OECD. 2025. “Launch of the Hiroshima AI Process Reporting Framework.” OECD.
World Economic Forum. 2025. “Diverging paths to AI governance: how the Hiroshima AI Process offers common ground.” World Economic Forum, December 10, 2025.
Ema, A., F. Kudo, Y. Iida, and T. Jitsuzumi. 2025. “Japan’s Hiroshima AI Process: A Third Way in Global AI Governance.” In Handbook on the Global Governance of AI (forthcoming).


Invisible Value: Why AI Intangible Assets Make the Economy Look Smaller Than It Is


AI capital is being miscounted, shrinking the economy on paper
Hidden AI assets distort policy, funding, and skills planning
Fixing measurement is now a growth and governance priority

Currently, most official economic statistics treat the money companies spend on building AI models, gathering data, and integrating AI into their workflows as regular expenses rather than as investments that will create value later. This is a problem because these outlays build durable assets that keep generating value long after the money is spent.

Some studies suggest that AI could add trillions of dollars to the global economy each year. If policymakers and educators rely on economic data that does not accurately reflect these investments, they will misallocate resources for training, infrastructure, and education. This could lead to underfunded training programs, poorly planned public investments, and education systems that do not prepare people for jobs that require AI skills. Accurately measuring what we are building with AI will help us direct resources to the right places. If AI capital remains uncounted, the public sector will focus on what is small but visible rather than on the fundamental drivers of economic growth.

Why Traditional Accounting Misses AI's Value

Intangible assets are now essential to how companies create value. Companies are buying AI models and cloud computing time, paying engineers to develop prompt libraries and rating systems, and building the processes that let AI function smoothly within their businesses. Much of this spending is recorded as a current expense under standard accounting practices, so it appears to reduce current profits without increasing the company's value. This accounting treatment affects decisions about treasury forecasts, education budgets, and public investments. Because measurement conventions change slowly, an economy shifting toward scalable digital services that cost little to expand will look weaker on paper than it is.

This miscounting continues for three reasons. First, many AI-related expenses are diverse and change quickly, making them difficult to categorize neatly. Second, the true cost is hard to measure because the prices of software-like services do not keep pace with technological change, so improvements stay hidden in the economic data. Third, much of AI's output is not sold directly or is bundled with other products; it shows up as increased benefits to consumers or as advantages that spread across companies, rather than as clear payments. Together, these factors create an invisible-capital problem. Fixing it is not just a matter of accounting: it changes which skills schools focus on, how administrators allocate funds for labs and cloud resources, and how regulators design incentives for public infrastructure such as power and data centers.

Measuring the Invisible with Evidence and Estimates

Recent research and reports show that AI's value is visible enough to take action. Studies that include broader categories of intangible assets (such as research and development, software, brands, and marketing—things you can't physically touch but that provide value) in national accounts show significant improvements in measured capital and worker output. The Bureau of Economic Analysis (a U.S. government agency that measures economic statistics) and related research have found that including recognized intangible assets significantly changes how we understand economic growth, especially in service and technology-heavy industries. These updated measurements indicate that intangible assets, including digital investments, have become major factors in production in many advanced economies.

To show how big this is, we can use a simple estimation. If we take the middle estimate of AI's potential impact (about $3.5 trillion per year globally) and assign 30–40% of that to company capital (model building, data sets, organization, cloud infrastructure), we get a capital contribution of about $1.05–$1.4 trillion per year. Also, a separate industry prediction suggests that about $100–200 billion in AI infrastructure spending (data centers, networking, high-performance chips) has been added to global investment through 2024–25. A portion of this spending increases physical capital and should be counted as investment. When national accounts fail to properly record these flows by treating them as immediate expenses or by ignoring changes in quality, measured GDP can underestimate real production by billions or trillions of dollars in large economies. The exact number depends on reasonable assumptions, but the key point is that the amount missed is large enough to change policy decisions.
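The back-of-envelope estimate above can be reproduced directly; the $3.5 trillion impact figure and the 30–40% capital share are the illustrative assumptions stated in the text, not official statistics.

```python
# Reproduce the rough estimate of the annual AI capital contribution.
# Assumptions (from the text, illustrative): mid-range AI impact of
# $3.5 trillion per year, with 30-40% attributable to firm capital
# (model building, data sets, organization, cloud infrastructure).
AI_IMPACT_USD = 3.5e12

low = AI_IMPACT_USD * 0.30   # ~ $1.05 trillion per year
high = AI_IMPACT_USD * 0.40  # ~ $1.40 trillion per year

print(f"Capital contribution: ${low / 1e12:.2f}-{high / 1e12:.2f} trillion/year")
```

Varying the share assumption is the honest way to present such a figure: the conclusion that the missed amount is policy-relevant survives shares well below 30%.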

Figure 1: Most generative AI value is created through intangible assets that are largely invisible in official investment statistics, creating a widening gap between real economic capacity and what national accounts record.

How This Affects Education, Administration, and Policy

If capital is not being properly accounted for, education is where these issues must be addressed. Traditional education that focuses solely on specific software skills or outdated IT training categories will miss the most valuable skills that create value when AI is used as capital. These skills include data management, prompt engineering as a form of system design, evaluation skills, and the ability to change work processes using AI outputs. This means three things must change. First, schools must teach students to design AI as a company asset, rather than just using individual tools. Second, administrative budgets should treat some AI-related expenses as investments in the business. This includes lab access, curated datasets, and faculty time for building reusable educational resources, rather than treating these as one-time operating costs. Third, policymakers should change funding formulas and accreditation standards to reward creations that can be reused and have lasting worth, such as data sets, modular courses, and validated evaluation methods. These changes ensure that both public and private groups create assets that provide long-term services.

Administrators must also change how they purchase and budget for things. Typical line-item purchasing for software or consulting fails to indicate whether a purchase creates a lasting asset. Treating certain purchases as capital would change depreciation accounting, free up cash, and yield clearer measures of return on investment for training programs. It also affects how institutions negotiate with cloud providers and chip sellers. Longer-term contracts that promote portability and reuse can be seen as investments that support the mission of education. Finally, policymakers should invest in measurement projects. For example, an AI intensity index that combines provider data, procurement records, and survey data would give education leaders the information they need to prioritize investments effectively.

Addressing Concerns

Some may argue that counting AI expenses as capital could inflate balance sheets with things that disappear quickly. This is a fair warning. The solution is to have clear definitions and standards. Not every software purchase is capital. Standards must define AI capital as expenses that create redeployable service flows, such as model data, labeled datasets, internal evaluation tools, and systems that enable repeatable processes. National statistical agencies and standards organizations can make these distinctions clear. NIST and its partners can help create these standards. Once agreed upon, they allow auditors to distinguish between long-term assets and short-term consumption. The result is more accurate, not less.

Governance, Infrastructure, and Fairness

Measurement is important for governance. When national accounts undercount AI-related capital, policymakers underinvest in public resources that enable fair access, such as electricity upgrades near data hubs, shared public datasets with privacy protections, and training programs for underserved communities. Industry investment in large data centers and specialized chips can increase overall capacity, but without public frameworks and shared infrastructure, benefits concentrate in companies and regions that already have advantages. Several recent studies point to a noticeable, but uneven, improvement in GDP from data center and chip investment. These improvements are meaningful, but they increase regional inequality if left unaddressed.

Figure 2: Physical AI infrastructure spending is accelerating and increasingly visible to policymakers, while most AI intangible assets that drive long-term productivity remain largely unmeasured.

This suggests two policy priorities. First, create public-private funding mechanisms that support shared AI investments, such as curated datasets for public use, subsidized computing for universities and small companies, and regional improvements to electricity and networking that lower costs for newcomers. Second, change the way taxes are handled so that investments in reusable educational AI assets qualify for public support and favorable depreciation, encouraging the development of common resources rather than just supporting existing vendors. Both steps help make the benefits of AI more accessible while ensuring that national accounts better reflect social worth.

If the way we measure the economy is flawed, so will our policy choices. By treating many AI expenses as regular costs, official statistics make the economy seem smaller and less wealthy than it is. This affects everything from workforce planning to infrastructure investment. The solution is simple: use a strategy that combines a short-term AI intensity index with longer-term changes to national accounts that clearly identify AI's intangible assets. For educators and administrators, the message is clear: shift from teaching basic tool use to building lasting skills, and treat the creation of reusable data sets, models, and evaluation systems as valuable investments. Policymakers must fund public resources and set accounting rules that recognize long-term AI capital. When we fully account for what we build, our decisions will align with our ambitions.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Bureau of Economic Analysis. (2023). Marketing, Other Intangibles, and Output Growth in 61 Industries (Working Paper). U.S. Department of Commerce.
Brookings Institution. (2026). Counting AI: A blueprint to integrate AI investment and use data into US national statistics. Brookings Economic Studies.
Business Insider. (2025). AI's economic boost isn't showing up in GDP, and Goldman says that's a $115 billion blind spot. Business Insider.
McKinsey & Company. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey Global Institute.
OECD. (2024). Digital Economy Outlook 2024 (Volume 1): Embracing the Technology Frontier. Organisation for Economic Co-operation and Development.
Reuters. (2025). J.P. Morgan forecasts spending on data centers could boost US GDP by 20 basis points in 2025-26. Reuters.

Picture

Member for

1 year 3 months
Real name
Keith Lee
Bio
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence

Keith Lee is a Professor of AI/Finance at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). His work focuses on AI-driven finance, quantitative modeling, and data-centric approaches to economic and financial systems. He leads research and teaching initiatives that bridge machine learning, financial mathematics, and institutional decision-making.

He also serves as a Senior Research Fellow with the GIAI Council, advising on long-term research direction and global strategy, including SIAI’s academic and institutional initiatives across Europe, Asia, and the Middle East.

Protect the Floor, Save the Top: Rethinking the Firm-Level Minimum Wage

Protect the Floor, Save the Top: Rethinking the Firm-Level Minimum Wage

Picture

Member for

1 year 2 months
Real name
Ethan McGowan
Bio
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence

Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.

Modified

Minimum wages insure routine workers inside firms
Shocks tend to push adjustment onto high-skill jobs
Policy must pair the firm-level minimum wage with portable support for talent

The increase in South Korea's statutory minimum wage during the late 2010s and early 2020s provides an opportunity to examine how minimum wage laws affect businesses. According to the International Labour Organization, the minimum monthly wage in South Korea in 2017 was 1,352,230 KRW, rather than the previously stated figure. Instead of viewing the minimum wage as just another expense for companies, it can be seen as a form of internal insurance. It sets a standard for routine jobs, allowing for more flexibility in higher-skilled positions when the business faces challenges. This shift has implications for developing talent, fostering local innovation, and ensuring that government investments in training programs yield the expected benefits. This perspective shifts our thinking about minimum wage policy from a broad issue of income distribution to a matter of organizational structure, offering practical solutions to protect both the minimum wage and the ability to perform complex work within the country.

Minimum Wage as Internal Insurance

A minimum wage acts like an insurance agreement written into a company's payroll. When sales decline, managers decide where to cut costs. If the base pay for many routine positions is protected by law, adjustments are more likely to affect bonuses, working hours for mid-level employees, or pay cuts and layoffs for those in top-skilled positions. This situation creates an imbalance where lower-paid workers keep their wages but face a greater risk of job loss, while higher-paid workers experience wage reductions and job cuts. Studies have confirmed this pattern, showing that minimum wage laws reduce wage losses at the lower end but increase negative adjustments for higher-paid workers within the same business.

There are three direct consequences to this imbalance. First, companies with few highly skilled workers struggle to innovate, maintaining routine operations but struggling with risky projects. Second, while routine workers benefit short term, the company’s long-term product strategy declines as investment in skilled personnel drops. Third, advanced skill development through apprenticeships, internal training, and mentoring is hurt when top talent is used as a buffer. These effects reach beyond single businesses, influencing public training programs, university courses, and innovation plans, all reliant on returns from advanced skills.

Figure 1: Effects of removing the minimum wage across skill groups. Removing the legal floor improves expected welfare for higher-skill groups while reducing it for lower-skill groups, illustrating the minimum wage’s role as within-firm insurance rather than a uniform distortion.

Measuring Impact and Providing Support

Policy debates often gauge overall winners and losers, but this is too broad. For decisive action, measure how many workers at a company earn the minimum wage, and how sensitive higher wages are to company sales. Monitoring these metrics by industry and region would help benchmark risk. Businesses and policymakers could then see when skilled staff are being used as a buffer, and target support more effectively.

Another tool is to provide temporary wage support for displaced high-skilled workers. This support is not just a standard unemployment check but rather a wage-linked payment that helps cover part of a displaced specialist's lost income while they retrain or seek new employment. This support reduces the cost of retaining or rehiring skilled staff and minimizes skill loss during unemployment. Paired with employer-recognized certifications, it preserves the value of advanced skills within the economy. Public funding can be structured in stages, with shorter, full payments that decrease as the worker finds a new job, along with employer contributions tied to rehiring. The goal is to keep skilled workers active in the job market rather than unemployed. A report by Kim Kyeong-pil notes that South Korea’s unemployment benefits can exceed the net income of full-time minimum-wage workers, highlighting a flaw in the country's unemployment insurance structure.

A further step would require companies to report data on minimum-wage employment. A simple disclosure, such as the percentage of employees earning at or near the minimum wage and how their wages change in response to sales fluctuations, can be easily collected through payroll systems. This information helps to better target retraining programs and support for employers. It also influences management decisions, as boards and investors can see when a company is using top-level pay as a shock absorber and can push for better workforce strategies or invest in programs that maintain the company's ability to innovate.

Mobility, Retention, and Talent

The ease with which skilled workers can move across borders changes the situation. When domestic productivity declines, highly skilled workers may choose to move abroad rather than remain underemployed. While this can benefit individuals, it represents a loss for the country, as it loses skilled workers and the return on public training investments decreases. Data indicate high levels of worker mobility into developed countries in recent years, with permanent migration increasing significantly around 2021–2023. This is important for smaller economies that struggle to replace their skilled workforce. Therefore, policies must balance the ability to move with incentives to stay.

When mobility is limited, minimum wage laws can keep skilled workers in the country but push them into lower-quality jobs, slowing productivity and lowering the long-term value of education. This is concerning for markets with less mobility, as public investment in education becomes less effective and public support weakens. The best approach combines portable wage support, rapid retraining, and temporary public funding to support job transitions. For high-emigration countries, focus on retention through tax credits for rehiring, short public projects for displaced specialists, and international exchanges that maintain connections while offering mobility.

Figure 2: Minimum wage floors make low-skill jobs rigid, forcing firms to absorb shocks by cutting or downgrading high-skill roles instead.

Practical Steps

Educators should update certifications to be more transferable and prioritize short, industry-recognized options like micro-credentials and fast-track training. Career services must actively support workers in their job transitions. Universities should partner with industry for short fellowships that rapidly reintegrate workers into the workforce. Such programs should directly address how minimum wage increases shift economic pressures, ensuring that training supports mobility and reduces rehiring costs while maintaining the domestic skill base.

According to the International Labour Organization, administrators should create a national system to track minimum-wage employment and introduce short-term, industry-specific retraining grants, available only if employers invest in retraining their displaced workers. According to a report from Aju Business Daily, policymakers should consider launching wage-support programs targeted at specialized groups such as R&D staff or senior engineers and closely track their re-employment and retention rates against control groups. This approach could help identify which policies retain skilled workers most cost-effectively, particularly given the connection between low domestic pay and the rising overseas employment of these professionals.

It is essential to address minimum wage laws with a commitment to both social equity and economic resilience. Policymakers, industry leaders, and educators must unite to develop strategies that safeguard low-wage workers while sustaining innovation and advanced skills. Demand transparency: publish concrete data on policy impacts, and communicate how portable wage support can maintain the value of domestic training. Push for temporary employer incentives that prioritize rehiring skilled workers. Only by taking decisive, coordinated action can we ensure that minimum wage protections lead to sustainable growth and readiness for future challenges.

Examining minimum wage laws at the company level reveals a trade-off: while a minimum wage protects lower-income workers, it puts pressure on skilled workers. This is not an argument against minimum wages but rather a call to combine them with well-aimed tools that keep talent engaged, mobile within the country, and productive. Start by measuring the impact, creating portable wage support and rapid retraining programs, and funding short-term public projects that employ displaced specialists. These actions maintain the protective intent of minimum wage laws while preserving the benefits of training and innovation. Failing to act within companies will protect lower-wage workers while weakening the base of skilled labor, leading to slower growth and reduced opportunities. The solution lies in policy design; the cost of inaction is predictable and significant.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Adamopoulou, E., et al. (2024). Minimum wages and insurance within the firm. SSRN working paper.
CEPR / VoxEU. (2024). Minimum wages and insurance within the firm. VoxEU column summarizing firm-level evidence.
CountryEconomy. (2025). South Korea National Minimum Wage (USD equivalents). CountryEconomy data series.
International Migration Outlook 2023. OECD. (2023). International Migration Outlook 2023. OECD Publishing.
MacroTrends. (2023). South Korea inflation rate (CPI) 2022. MacroTrends economic data.

Picture

Member for

1 year 2 months
Real name
Ethan McGowan
Bio
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence

Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.