Technological Adaptability: Why Averages Can Hide the Real Crisis

By Keith Lee, Professor of AI and Data Science at the Gordon School of Business, Swiss Institute of Artificial Intelligence (SIAI), and Senior Research Fellow with the GIAI Council.

Technological adaptability differs sharply across individuals, making averages misleading
Social and economic factors determine who can realistically reskill
Policy must target adaptability gaps, not average exposure

When analyzing the U.S. workforce's preparedness for artificial intelligence, research sometimes presents a misleadingly optimistic view. For example, one study measured AI exposure alongside the capacity to adapt. The results suggested that many workers at risk from AI can adjust to new roles. This may create a false sense that most workers are well-equipped for this transition. Averages smooth out the extreme differences. The ability to learn, change careers, and use new technologies isn't consistent across the population. It is greatly influenced by factors such as intelligence, age, education, language, financial stability, family responsibilities, and prior experience with technology. When we consider individuals, the averages should serve as a warning. The claim that most can adapt masks the fact that millions of people may find adapting difficult or unrealistic.

Technological Adaptability: The Danger of Averages

The problem is urgent: relying on averages conceals the wide range of abilities. Policymakers who trust average adaptation capacity may dangerously misestimate how prepared most people are. Averages can be skewed upward by those quickly adopting AI, while millions fall dangerously behind. These include older workers, those with less education, and those in rural or low-wage jobs. When analyses look at job-level ability, the situation appears far more critical. Policymakers must confront these differences now, with urgency, not complacency.

Figure 1: Counties with similar exposure to AI show sharply different adaptive capacity, revealing that vulnerability clusters geographically rather than tracking national averages.

Consider the data on digital skills, a key element of adaptability. In the EU in 2023, only about 56% of adults (ages 16–74) had basic digital skills. This was 80% for those with higher education, but only 34% for those with little or no education. This 46-point gap is major. Similar patterns appear in OECD data. Digital divides correlate with age, income, and education. These same factors also predict who will struggle to change careers or learn to use AI tools in the workplace. Any average that doesn't account for these divides will overestimate how many people can adapt without help.
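
As a simple illustration of how such an average forms, consider the sketch below. The 80% and 34% rates come from the Eurostat figures above; the population shares and the middle group's rate are assumptions chosen only so the blend lands near the reported 56%.

```python
# How a reassuring population average can hide a wide subgroup gap.
# The 80% and 34% skill rates are the Eurostat figures cited above;
# the population shares and the middle group's rate are illustrative assumptions.
subgroups = {
    "higher education":       {"share": 0.35, "skills": 0.80},
    "medium education":       {"share": 0.35, "skills": 0.50},
    "little or no education": {"share": 0.30, "skills": 0.34},
}

average = sum(g["share"] * g["skills"] for g in subgroups.values())
print(f"population average: {average:.0%}")  # ~56%, close to the reported EU figure

for name, g in subgroups.items():
    diff = g["skills"] - average
    print(f"{name:>23}: {g['skills']:.0%} (vs. average: {diff:+.0%})")
```

The single headline number looks tolerable, while one group in the table sits more than twenty points below it.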

Labor market data support this point. Studies from the OECD and McKinsey show that automation and AI are impacting many industries and skill levels. However, the ability to retrain tends to be concentrated among those with higher education and those at larger companies with training programs. The OECD says that some jobs are changing in terms of required skills, not being eliminated completely. These new demands favor management, digital skills, and problem-solving. All of these skills are more common among those who already have access to education and training. Where few adults participate in lifelong learning, those most impacted by these technologies are the least likely to get retrained. In short, being affected by technology and adapting to it are not the same. The gap between the two is closely tied to existing inequalities.

Technological Adaptability and the Unequal Opportunity to Retrain

The pace of change demands immediate attention. Companies may adopt AI swiftly, but public training programs lag dangerously behind. Numerous reports warn of the massive, urgent need for workers to switch careers or gain new skills. Yet ability is consistently confused with access. When job losses happen quickly, only those with resources (savings, flexible schedules, childcare, and local training) can adapt in time. Others will be left behind, facing longer unemployment and falling earnings, with the losses concentrated among the most vulnerable. Policymakers must treat technological adaptability as a national emergency; delay will cost livelihoods.

Adult-learning data show the same mismatch. In Europe, adult participation in learning has been below 40% in recent years, and the rates differ greatly across countries and social groups. We cannot assume people will adapt to technology without targeted programs. Reports show that simply expanding general training will worsen inequality if we do not address who participates, because adults with more education extract more benefit from the same opportunities. This is another reason averages mislead: training and its benefits concentrate among those who are already better off, overstating the ability of the wider population.

Age and competing demands also shape the capacity to adapt. Older workers may find it harder to learn new digital skills and may have less time to retrain because of family or health responsibilities. Language matters too: people who do not speak the dominant language may learn more slowly when training and tools are offered primarily in that language. Those from wealthier backgrounds often hold advantages that help, including savings that serve as a financial safety net, local networks, and sometimes broader cultural exposure. All of these factors affect how easily a person can adapt, and an average cannot capture that distribution. The same pattern appears across multiple studies.

Figure 2: Many large occupations combine high AI exposure with low adaptive capacity, showing that risk is driven by variance within the workforce—not by average exposure alone.

Changing Education for Technological Adaptability

If adaptability varies from person to person, policies must be personalized as well. We need to shift from focusing on general numbers to diagnosing individual needs and creating specific plans.

Here are three things we should do:

First, assess each worker’s adaptability with practical tools that measure skills in technology, language, caregiving, and local job needs. Second, ensure free training and credentials, especially for those who score lowest in these assessments, and provide paid leave so they can participate. Third, combine training with direct financial aid and job placement support, helping workers complete training without losing income. Evidence supports that this approach leads to better outcomes.
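
A minimal sketch of the first step follows. The fields mirror the factors named above (technology skills, language, caregiving, local job needs), but the weights, thresholds, and example profiles are hypothetical and do not come from any validated instrument.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical screening record for the assessment step described above.
# Fields mirror the factors in the text (technology skills, language,
# caregiving, local job needs); weights, thresholds, and profiles are
# illustrative assumptions, not a validated instrument.
@dataclass
class WorkerProfile:
    digital_skills: float    # 0..1, from a practical skills assessment
    language_fit: float      # 0..1, comfort with the training language
    caregiving_load: float   # 0..1, higher means more constrained time
    local_job_match: float   # 0..1, overlap with local employer demand

def adaptability_score(w: WorkerProfile) -> float:
    return (0.35 * w.digital_skills
            + 0.20 * w.language_fit
            + 0.20 * (1.0 - w.caregiving_load)
            + 0.25 * w.local_job_match)

workers = [
    WorkerProfile(0.95, 0.95, 0.05, 0.95),
    WorkerProfile(0.30, 0.50, 0.80, 0.40),
    WorkerProfile(0.20, 0.40, 0.90, 0.30),
]

scores = [adaptability_score(w) for w in workers]
print(f"average score: {mean(scores):.2f}")                       # looks acceptable
needs_support = [i for i, s in enumerate(scores) if s < 0.5]
print(f"workers flagged for targeted support: {needs_support}")   # most of the group
```

The point of the design is the final line: policy targets the flagged individuals, not the average score.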

To make this work, we also need to rethink how we prepare students for the workforce. Schools should teach how to learn, evaluate information, and adapt to new tasks. Attitudes are important. Those who are worse off may have less motivation to learn on their own. This can be addressed with early support and mentoring. Education should encourage lifelong learning so that adapting to new technology becomes a normal part of life. Otherwise, retraining will be a struggle for those who have the least means to do so.

Some argue that employers should provide most of the training needed. That holds when employers value their workers for the long term and when companies are large enough to fund training. But many workers are in small firms, temporary jobs, and positions with high turnover, where employer-provided training is limited. Studies show that employer training is inconsistent, and smaller businesses and their workers are left behind. Public policy must provide a backstop through subsidies, training vouchers, results-based tax incentives, and training hubs in underserved areas. The goal is not to replace employer training, but to ensure everyone has the basics to adapt.

AI has not yet caused mass unemployment. Studies suggest it has not produced net job loss; its effects have been uneven, often changing tasks rather than eliminating positions. However, this is no reason to delay action. The absence of job losses now does not mean everyone has an equal opportunity to benefit from task changes. Workers who learn to work with AI will see higher wages, while those who cannot will see their wages stagnate. Waiting until displacement happens will be more costly and less effective. A careful equity focus is prudent both economically and socially.

Finally, local factors matter. The ability to adapt to technology depends on the local job market; if training programs target jobs that do not exist locally, completers will not get hired. Successful regional efforts connect training to employer needs, create apprenticeships, and invest in technology infrastructure where it is lacking. This means shifting funds from national campaigns to local training systems that bring employer groups, community colleges, and social services together around a common goal. These ecosystems can tailor language, schedules, and content to local needs, and evidence suggests they produce better job placement and retention than national programs that ignore regional factors.

Make Adaptability the Goal of Policy

Averages must not be allowed to mask the real and immediate stakes as technological adaptability diverges. There is no time to let them overshadow the urgent needs of those at the bottom. Policymakers and educators can no longer treat adaptation as a side issue; it is a public emergency. Act now: measure individual needs, provide financial support for the vulnerable, and tie training to income and job placement. Hand retraining power to local entities. Only by moving swiftly can we prevent mass displacement and spread the gains from automation and AI. Translating numbers into policy means recognizing and measuring real needs and acting now to correct today's dangerous inequalities. Adaptability can no longer wait.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

McKinsey Global Institute. (2024). A new future of work: The race to deploy AI and raise skills. McKinsey & Company.
OECD. (2023). Skills Outlook 2023. OECD Publishing.
OECD. (2024). Artificial intelligence and the changing demand for skills in the labour market. OECD Publishing.
OECD. (2024). Who will be the workers most affected by AI? OECD Publishing.
OECD. (2024). Digital Economy Outlook 2024. OECD Publishing.
Eurostat. (2024). Skills for the digital age. European Commission.
Brookings Institution. (2026). Measuring U.S. workers’ capacity to adapt to AI-driven job displacement.
World Bank. (2024). Digital Skills Development.
Yale University & Brookings Institution. (2025). Study on AI and labor market outcomes. Financial Times coverage.


[HAIP] When Soft Law Meets Strategic Competition: Can HAIP Survive Geopolitical Divergence?


HAIP faces pressure as national AI governance models increasingly diverge
China’s state-led approach changes the incentives for voluntary global standards
Without economic rewards, HAIP risks losing durability in strategic competition

By late 2025, more than fifty countries had joined the Hiroshima AI Process Friends Group, but the OECD platform listed fewer than thirty public HAIP reports. Meanwhile, China accelerated its AI governance plan, connecting AI oversight to industrial policy, security reviews, and platform registration. This highlights a key tension: HAIP tries to build agreement through openness, while China’s approach centers on growth through state control. When openness and speed compete, openness often loses. This leads to a political-economy problem: countries that cooperate pay a price, while those with looser rules gain market and technological advantages. The main question is whether HAIP can stay relevant as strategic competition changes the incentives.

Divergence is no longer just an idea

At first, global AI governance focused on shared language and principles. By 2025, the differences between models became clear and practical. The EU chose binding, risk-based regulations. The US used a mix of voluntary standards, security measures, and sector-specific guidance. Japan promoted HAIP as a flexible bridge between systems. In contrast, China expanded algorithm registration, security checks, content controls, and platform rules, linking governance directly to its industrial and national goals instead of focusing on voluntary openness. These approaches are not just different in style; they are fundamentally different in how countries expect to benefit—whether through legitimacy and openness, or through operational advantages from state involvement.

A World Economic Forum analysis in late 2025 showed that national AI governance models now balance innovation, trust, and authority in different ways. According to the OECD, the Hiroshima AI Process (HAIP) is meant to help stakeholders work together to promote safe, secure, and trustworthy AI, but it does not require enforceable rules. This cooperative approach helps with diplomacy, but it can be less effective in the face of international competition. Collaboration helps with alignment, but it does not prevent countries from taking advantage or using regulatory gaps. As differences affect market access, procurement, and government support, voluntary openness is at a disadvantage because it relies on reputation in a world where speed, scale, and state support matter more.

This is why educators and leaders should pay attention to these differences. Governance is not just about safety; it also shapes how countries compete. Since governance affects innovation, it influences where companies invest, where talent goes, and how fast new models are used. Flexible frameworks need to compete on both legitimacy and economic value. If not, countries may only join in occasionally or just for appearances.

China’s model changes the game

China’s updated AI governance plan shifts from fragmented rules to a more complete, state-led system. A late 2025 East Asia Forum analysis called this a phased approach that values flexibility and alignment over early rules. AI governance in China is closely tied to platform management, security reviews, and pilot programs that accelerate adoption while maintaining central control. The system isn't necessarily easier; it's just structured differently. It prioritizes speed, as long as it stays within state limits.

From a game-theory perspective, this difference is key. HAIP asks companies to report openly, share their processes, and go through peer review, using reputation and transparency as incentives. In contrast, China’s system requires companies to meet state goals and pass security checks to get faster market access, resource support, and coordinated rollout. These are different choices: one rewards openness and reputation, the other rewards following state rules and fast growth. As a result, companies might choose openness in HAIP markets but focus on speed and state support in others.

Figure 1: Different governance systems trade transparency for deployment speed, reshaping competitive incentives.

The real risk is not only that China could outpace others in governance, but that global standards could break down. If companies find that HAIP reporting does not help them commercially, while other systems offer more growth and speed, they may see HAIP as a waste of effort. This could undermine the cooperation that flexible rules depend on. It’s similar to a prisoner's dilemma: if others stop cooperating, the best choice is to do the same, even though everyone would benefit more from working together for safety and trust.
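
A minimal sketch of that dilemma, with payoffs that are purely illustrative (chosen only to reproduce the standard prisoner's-dilemma ordering, not measured from any real regime), shows why defection dominates even though mutual transparency is jointly better:

```python
# Illustrative 2x2 payoff table: two regimes choosing between transparent
# HAIP-style reporting ("cooperate") and speed-first deployment ("defect").
# Payoffs are hypothetical, chosen only to satisfy the standard
# prisoner's-dilemma ordering (temptation > reward > punishment > sucker).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # shared trust, interoperable markets
    ("cooperate", "defect"):    (0, 4),  # transparent side bears the cost alone
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),  # fragmented standards, lower trust
}

def best_response(opponent_action: str) -> str:
    """Action that maximizes the first player's payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda action: payoffs[(action, opponent_action)][0])

for other in ("cooperate", "defect"):
    print(f"if the other regime plays {other}, the best response is {best_response(other)}")
# Defection is the best response either way, even though mutual cooperation
# pays both sides more than mutual defection.
```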

The prisoner’s dilemma for G7 governance

The G7’s challenge is not to copy China’s model, but to ensure HAIP sets a clear limit rather than just a starting point. The cooperative approach must bring real benefits. If not, countries will move to systems that offer better rewards, like easier market access, procurement advantages, insurance benefits, or fewer overlapping rules. Without these incentives, HAIP may exist only for show while real governance occurs elsewhere. Early HAIP participation highlights a real concern: there is strong support in principle, but less action. The small number of public reports is not because HAIP is rejected, but because companies are weighing the costs and benefits. Companies share only when it helps them, which is normal in a competitive world.

Figure 2: As faster regimes gain advantage, voluntary HAIP participation becomes a competitive cost.

For leaders, this means HAIP cannot rely only on reputation. It needs to offer real economic benefits. Procurement rules, funding, and international agreements should be tied to verified reporting. Without these incentives, HAIP becomes optional, while competitors use governance as a tool for the industry. The G7 cannot succeed by appealing to norms alone; it must adjust the incentives.

What it would take to last

For HAIP to last, openness must be a core part of the system. This takes three steps. First, G7 procurement groups should accept each other’s HAIP reports to make things smoother. If a company has a strong HAIP report, buyers should act more quickly. Second, public funding and research should depend on the quality of governance. This makes reporting necessary for key partnerships. Third, there should be different levels of review: high-risk systems get expert checks, while low-risk systems have simpler reporting.

These steps aren't about punishing those who don't participate, but about making cooperation appealing. They increase the benefits of cooperating and reduce the temptation to go it alone. They also balance models that trade openness for speed. Importantly, they keep HAIP flexible while giving it strength through markets, not mandates.

For educators and leaders, this is important. It’s not enough to teach governance as just ethics; it must be taught as a strategy. Students should learn to assess incentive structures, not just principles. HAIP is a real example of how flexible law interacts with hard competition. Using it in this way will prepare future leaders to create systems that can survive in challenging environments.

HAIP was designed for a world where agreement through openness seemed possible. By 2025, governance had also become a tool for competition. China’s different model and the slow adoption of voluntary reporting pose a risk: cooperative frameworks weaken when cooperation is costly and acting alone is rewarded. This does not mean HAIP will fail, but it does need to change. Openness must bring real economic and institutional benefits. Reviews must be fair and reliable. Procurement and funding should reward those who take part. Without these changes, HAIP could become little more than a symbol in a divided system. With them, it can still serve as a useful bridge. The real choice is not between flexible and strict law, but between flexible law with incentives and flexible law without them. Only the first can survive strategic competition.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. 2026. “The HAIP Reporting Framework: Its value in global AI governance and recommendations for the future.” Brookings.
East Asia Forum. 2025. “China resets the path to comprehensive AI governance.” East Asia Forum, December 25, 2025.
OECD. 2025. “Launch of the Hiroshima AI Process Reporting Framework.” OECD.
World Economic Forum. 2025. “Diverging paths to AI governance: how the Hiroshima AI Process offers common ground.” World Economic Forum, December 10, 2025.
Ema, A.; Kudo, F.; Iida, Y.; Jitsuzumi, T. 2025. “Japan’s Hiroshima AI Process: A Third Way in Global AI Governance.” Handbook on the Global Governance of AI (forthcoming).


[HAIP] Transparency Without Teeth: Making the Hiroshima AI Process a Practical Bridge


HAIP increases transparency but does not yet change behavior
Voluntary reporting without incentives becomes symbolic
Real impact requires linking HAIP to audits and procurement

By June 2025, the OECD reported that twenty organizations from various sectors and regions had voluntarily joined the HAIP reporting process, while many more countries joined the HAIP friends network. This highlights a gap between broad diplomatic support and the smaller number of organizations actually reporting. Political momentum and voluntary disclosure often outpace real-world adoption. Reporting is useful, but it is not a complete governance plan. For HAIP to reduce real harms, it needs three things: a small, machine-readable set of comparable fields; affordable, modular checks for higher-risk systems; and clear purchasing or accreditation rewards that change developers' incentives. Without these steps, HAIP risks becoming a ritual of openness that reassures the public but does not change developer behavior on a large scale.

Introducing HAIP and its promise

The Hiroshima AI Process started with a simple idea: rapid advances in AI called for a shared starting point and a way for organizations to show how they manage risk. The G7 created a code of conduct and guiding principles, and the OECD developed a reporting framework so organizations could share their governance and testing practices. The approach is simple. Developers explain how they identify risks, what tests they run, who approves them, and how they handle incidents. These accounts are made public so peers and buyers can learn. This design makes sense for a divided global system because it uses openness and peer learning as the first step in governance.

The framework also offers immediate value for teaching. For teachers and policy trainers, the portal provides real, up-to-date case documents for students to analyze. These reports turn abstract ideas like “governance” and “risk” into concrete evidence that can be read, checked, and compared. However, for narrative reporting to lead to learning and better practice, it must be usable. If reports are too long, inconsistent, or hard to understand, they will help researchers more than they will help buyers. That is why HAIP should be used as a living teaching resource and a place to test new methods. Students can extract key fields, compare reported tests, and design follow-up audit plans. This approach turns HAIP from a collection of essays into practical governance tools.

Figure 1: Political participation in HAIP is broad, but actual public reporting remains limited, highlighting the gap between endorsement and operational adoption.

Functionality and mechanics of HAIP reporting

HAIP is meant to show how institutions work, not just to rate one system. The OECD reporting plan covers governance, risk identification, information security, content authentication, safety research, and actions that support people and global interests. In 2024, the pilot phase included companies, labs, and compliance firms from ten countries. In 2025, the OECD opened a public portal for submissions. These steps focus on depth and method, not just quick comparisons. The form asks who did the red-teaming, the size of the tests, if third-party evaluators were involved, and how incidents were handled. These are specific points that can be taught, checked, and improved.

Figure 2: Early HAIP engagement is driven mainly by researchers and public-interest actors, not deployers.

This design has three main effects. First, it creates a useful record of what organizations actually do, giving teachers and researchers real data to use and compare. Second, it supports learning that fits different situations, since a test that works in one place may not work in another. Third, it adds a cost barrier. Writing a public, institution-level report takes time for legal review, test documentation, and senior approval. The OECD and G7 noticed this during their pilot and mapping stages and have made tools and mapping a priority. Still, making it easier to take part depends on lowering these costs. The policy challenge is clear: keep the parts of the form that teach and reveal information, and add a small, standard set of machine-readable fields for buyers and researchers to compare at scale.
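
A minimal sketch of what such a machine-readable core could look like is given below. The field names are invented for illustration and are not the OECD's reporting schema; they simply mirror the items the questionnaire asks about in narrative form.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical machine-readable core to sit alongside a narrative HAIP report.
# Field names are invented for illustration and are not the OECD schema;
# they mirror the items the form asks about in prose.
@dataclass
class HAIPCoreRecord:
    organization: str
    system_name: str
    risk_tier: str                     # e.g. "low", "elevated", "high"
    red_team_conducted: bool
    red_team_provider: str             # internal team or a named third party
    evaluation_scope: str              # brief description of what was tested
    third_party_evaluation: bool
    incident_process_documented: bool
    full_report_url: str

record = HAIPCoreRecord(
    organization="Example Labs",       # hypothetical organization
    system_name="assistant-v2",
    risk_tier="elevated",
    red_team_conducted=True,
    red_team_provider="external security firm",
    evaluation_scope="pre-release adversarial testing of deployed use cases",
    third_party_evaluation=True,
    incident_process_documented=True,
    full_report_url="https://example.org/haip/assistant-v2",
)

# Buyers and researchers could filter many such records at scale without
# reading every narrative report end to end.
print(json.dumps(asdict(record), indent=2))
```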

Limits: divergence, incentives, and checks

HAIP works in a world with many national rules. Europe uses strict, risk-based laws. The United States uses voluntary frameworks, market incentives, and security measures. China takes a directive, pilot-led approach focused on state goals and industry growth. HAIP’s strength is its ability to work across these different systems. Its weakness is that it is voluntary and not enforced. When countries have different priorities, like rapid growth or strategic independence, a voluntary report is unlikely to change behavior. In practice, agreeing on words does not mean agreeing on incentives.

The next point is practical. Voluntary reports only matter when there are economic rewards or penalties. If buyers, funders, or procurement teams see a verified HAIP report as a quick way to secure contracts, then developers have a reason to disclose. If not, many will choose not to take part. The early record of submissions, with few reports despite strong political support, shows the gap between public support and real market incentives. In short, transparency needs to be rewarded to make a difference.

Verification is the hardest part. Narrative reports can be vague. An organization might describe its tests in a way that sounds strong but lacks real evidence. The solution is a modular approach. For low-risk systems, self-attested reports with a small machine-readable core may be enough. For high-risk areas like healthcare, critical infrastructure, or justice, there should be accredited technical checks and a short audit statement. This tiered model keeps costs manageable and focuses the most thorough checks where they are needed. The policy tool here is not force, but targeted standards and public funding to help small teams take part without high costs. Results from pilots and procurement trials will show which approach works best.
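
A minimal sketch of the tiering logic might look like the following; the sector list and the required checks are assumptions for illustration, not an adopted standard.

```python
# Illustrative tiering rule: map a system's deployment context to the level
# of verification required. The sector list and the required checks are
# assumptions, not an adopted standard.
HIGH_RISK_SECTORS = {"healthcare", "critical infrastructure", "justice"}

def required_checks(sector: str, affects_individual_decisions: bool) -> list[str]:
    if sector in HIGH_RISK_SECTORS or affects_individual_decisions:
        return [
            "machine-readable core fields",
            "accredited third-party technical evaluation",
            "short published audit statement",
        ]
    return [
        "machine-readable core fields",
        "self-attested narrative report",
    ]

print(required_checks("healthcare", affects_individual_decisions=True))
print(required_checks("retail recommendations", affects_individual_decisions=False))
```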

What teachers and institutions should do now

For teachers, HAIP is a hands-on lab. Assign an HAIP submission as the main document. Have students pull out key data, such as test dates, model types, red team details, whether third-party testing was conducted, and the incident-handling timeline. Then, ask them to write a short audit plan to check the claims. This exercise builds two important governance skills: turning words into checks you can verify, and designing simple evidence requests that protect trade secrets.
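
As one way to structure that exercise, the sketch below turns a handful of extracted claims into concrete evidence requests. The claims, field names, and thresholds are invented for illustration and are not drawn from any actual HAIP submission.

```python
# Illustrative classroom exercise: turn claims extracted from a narrative
# HAIP report into concrete, checkable evidence requests. The claims and
# thresholds below are invented for illustration.
extracted_claims = {
    "red_team_conducted": True,
    "red_team_provider": "external security firm",
    "third_party_evaluation": False,
    "incident_response_days": 30,
}

def audit_plan(claims: dict) -> list[str]:
    plan = []
    if claims.get("red_team_conducted"):
        provider = claims.get("red_team_provider", "the reporting organization")
        plan.append(f"Request a redacted summary of the red-team scope from {provider}.")
    if not claims.get("third_party_evaluation"):
        plan.append("Ask why no independent evaluation was used and whether one is planned.")
    if claims.get("incident_response_days", 0) > 14:
        plan.append("Request the incident log format and one anonymized example entry.")
    return plan

for step in audit_plan(extracted_claims):
    print("-", step)
```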

For institutions like universities, hospitals, and research labs, HAIP should serve two purposes. First, build internal records that align with HAIP’s main areas to enable decision auditing. Second, link disclosure to buying and hiring. Ask outside vendors for HAIP-style reports on sensitive projects. Make governance maturity part of grant and vendor selection. Invest in a small legal and technical team to prepare solid, redacted reports. The goal is clear: use HAIP to improve institutional practices and make verified reporting a real advantage in procurement and partnerships.

From transparency to policy design: small, testable moves

What is a practical way forward? Start with a pilot in one sector, such as healthcare or education. The OECD suggests that any AI system used for decisions in this sector should need verified HAIP attestation for a year, with funding to help small providers create accurate reports. Track three results: whether verified reporting cuts down on repeated checks, catches real harms or near misses, and speeds up safe procurement. If the pilot works, expand using shared recognition and simple procurement rules. At the same time, fund open-source tools that turn internal logs into safe, redacted attachments and fill out the basic machine-readable fields. Together, these steps make reporting useful in practice. The approach is modest and experimental, matching the size of the challenge: not sweeping global law, but tested, repeatable local policy that builds trust and evidence.

The Hiroshima AI Process did what it set out to do diplomatically: carve out a shared space for principles and a voluntary channel for public reporting. According to Brookings, the HAIP Framework's early history highlights both its achievements in international diplomacy and the practical challenges it faces. Rather than replacing HAIP with stricter laws or discarding it, the focus should remain on building upon its foundation as an innovative global AI governance tool. The work is to pair it with small, practical policy tools: a short, machine-readable core, a tiered verification regime for higher-risk systems, procurement incentives that reward verified reports, and funded tools to lower the cost of participation. For teachers, HAIP is already a classroom-ready corpus. For policymakers, HAIP is a scaffolding that can support tested, sectoral pilots. If governments and institutions take these steps now, reporting will stop being a ritual and start changing behavior.


The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.


References

Brookings Institution. 2026. “The HAIP Reporting Framework: Its value in global AI governance and recommendations for the future.” Brookings.
East Asia Forum. 2025. “China resets the path to comprehensive AI governance.” East Asia Forum, December 25, 2025.
OECD. 2025. “Launch of the Hiroshima AI Process Reporting Framework.” OECD.
World Economic Forum. 2025. “Diverging paths to AI governance: how the Hiroshima AI Process offers common ground.” World Economic Forum, December 10, 2025.
Ema, A.; Kudo, F.; Iida, Y.; Jitsuzumi, T. 2025. “Japan’s Hiroshima AI Process: A Third Way in Global AI Governance.” Handbook on the Global Governance of AI (forthcoming).
