This article is based on ideas originally published by VoxEU – Centre for Economic Policy Research (CEPR) and has been independently rewritten and extended by The Economy editorial team. While inspired by the original analysis, the content presented here reflects a broader interpretation and additional commentary. The views expressed do not necessarily represent those of VoxEU or CEPR.
How we scientifically model 'word of mouth' in rankings
While founding a university (SIAI), I encountered a surprising reality—university rankings, like any evaluative system, are shaped by more than just academic performance. Factors such as institutional branding, media visibility, and methodological choices play a role in shaping how institutions are perceived and ranked. This has led to ongoing debates about how rankings should be structured and whether certain metrics introduce unintended biases.
Although I still respect world-class newspapers that produce university rankings, I began to wonder: Can we create a ranking system that is free from predefined metrics and reflects real-time public perception? In the age of 'Big Data', where everyone’s opinions contribute to digital landscapes—just as Google’s PageRank algorithm does for web search—shouldn’t rankings be more dynamic, reflecting organic engagement and discussion rather than static formulas?
Google’s search ranking is determined by a webpage’s popularity and its informative potential, both shaped by user behavior. By the same logic, I envisioned a PageRank-like ranking system that extends beyond web search—one that could be applied to any domain, as long as the data is properly structured and the model is well-designed.
Of course, no single index can perfectly capture the true potential of an institution. However, unlike traditional rankings that rely on fixed methodologies and expert-driven criteria, my approach removes human discretion from the equation and mirrors the real-world mechanisms used by the most trusted system in the digital age—Google’s search engine.
At the end of the day, ranking is not static—it evolves continuously. Just as Google updates search results dynamically based on new data and shifting ranking logic, the rankings measured by GIAI follow the same principle. Because our system is based on real-time internet data rather than an annual retrospective dataset, it is inherently more up-to-date, even if it still has many imperfections.
The Challenge of Trustworthy Rankings
From a statistical perspective, ranking is a form of dimensionality reduction, similar to Factor Analysis. Researchers often believe that many observed variables are influenced by a smaller set of hidden variables, called 'factors'. Ranking is, in essence, an attempt to condense multidimensional information into a single index (or a few indices at most), making it conceptually similar to factor analysis.
In fact, this logic is central to SIAI’s education. I teach students that even Deep Neural Networks (DNNs) are multi-stage factor analysis models—each layer extracts hidden factors that were not visible at the earlier stage. The reason we need DNN-based complex factor analysis instead of simple statistical methods is that modern data structures, such as images or unstructured text, require deeper representations.
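To make the analogy concrete, here is a minimal sketch of ranking as single-factor dimensionality reduction, using scikit-learn's FactorAnalysis on a small, entirely made-up indicator matrix; the institutions and numbers are invented for illustration only.

```python
# A minimal sketch of "ranking as dimensionality reduction": several observed
# indicators are condensed into one hidden factor, and institutions are ranked
# by their factor score. All numbers and names below are made up.
import numpy as np
from sklearn.decomposition import FactorAnalysis

institutions = ["Inst A", "Inst B", "Inst C", "Inst D"]
# Toy columns: citations per faculty, employer score, media mentions
X = np.array([
    [90.0, 85.0, 70.0],
    [60.0, 55.0, 40.0],
    [75.0, 80.0, 65.0],
    [30.0, 35.0, 20.0],
])

fa = FactorAnalysis(n_components=1, random_state=0)
scores = fa.fit_transform(X).ravel()      # one latent score per institution

# The sign of a factor is arbitrary; orient it so that higher means stronger.
if fa.components_[0].sum() < 0:
    scores = -scores

for name, s in sorted(zip(institutions, scores), key=lambda t: -t[1]):
    print(f"{name}: {s:+.2f}")
```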
For ranking, however, I use network data instead of traditional column-based data, which demands a different type of factor analysis technique—one designed for network structures. This means that our ranking methodology is not merely a standard statistical reduction but a network-driven eigenvector centrality approach, akin to Google's search ranking algorithm.
Unfortunately, like all dimensionality reduction methods, this approach has limitations. Some information is inevitably lost in the process, which introduces potential bias into the ranking. Just as Google continuously updates its ranking algorithm to refine search results, we also need to iterate and improve our model to account for distortions that arise from the dimensional reduction process.
In this regard, we admit that our ranking is not a definitive replacement for existing ranking services, but a complementary lens that can surface signals the established systems may have missed.
Quick side tour: How does Google determine page ranking? - PageRank
When Google was founded, its revolutionary innovation was PageRank, an algorithm designed to rank web pages based on their importance, rather than just keyword matching. Instead of relying solely on how many times a word appeared on a webpage, PageRank measured how web pages were linked to each other—operating on the principle that important pages tend to be linked to by many other important pages.
PageRank works like a network-based ranking system:
Every webpage is treated as a node in a network.
A link from one page to another is considered a vote of confidence.
Pages that receive more inbound links from high-quality sources receive a higher score.
This approach allowed Google to rank pages based on their structural relevance, rather than just keyword stuffing—a major flaw in early search engines.
PageRank assigns a score to each webpage based on the probability that a random internet user clicking on links will land on that page. This is determined through an iterative formula:
$$PR(A) = (1-d) + d \sum_{B \in M(A)} \frac{PR(B)}{L(B)}$$
Where:
$PR(A)$ = PageRank of webpage A
$d$ = Damping factor (typically 0.85), the probability that a random surfer keeps following links rather than jumping to a random page; it also guarantees that the iteration converges
$PR(B)$ = PageRank of a webpage linking to A
$L(B)$ = Number of outbound links from webpage B
$M(A)$ = The set of webpages that link to A (there can be many such pages B; each contributes its $PR(B)/L(B)$ to the sum, weighted by $d$)
Each iteration refines the scores until a stable ranking emerges. This method effectively identifies pages that are central to the web’s information flow.
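To make the iteration concrete, here is a minimal Python sketch of the formula above run on a made-up three-page link graph; the graph, the `pagerank` helper, and the iteration count are illustrative assumptions, not Google's actual implementation.

```python
# A minimal sketch of the PageRank iteration above, using the same
# (1 - d) + d * sum(PR(B) / L(B)) form. The three-page link graph is a toy.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}                      # any starting scores work
    for _ in range(iterations):
        new_pr = {}
        for page in pages:
            # Sum PR(B) / L(B) over every page B that links to this page
            inbound = sum(pr[b] / len(links[b]) for b in pages if page in links[b])
            new_pr[page] = (1 - d) + d * inbound
        pr = new_pr                                   # each pass refines the scores
    return pr

# Toy web: A and C both point to B, B points back to A, nobody points to C
toy_links = {"A": ["B"], "B": ["A"], "C": ["B"]}
print(pagerank(toy_links))   # B ends up highest, C lowest
```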
While PageRank was the foundation of Google Search, it is no longer the sole ranking factor. Over time, Google evolved its algorithm to address manipulation and improve search quality. Some key developments include:
✅ Link Quality Over Quantity: Google penalized spammy, low-quality link-building tactics that artificially boosted PageRank.
✅ Personalization & Context Awareness: Google now considers user search history, location, and device to tailor results.
✅ RankBrain (2015): An AI-based ranking system that understands semantic relationships rather than just word matching.
✅ BERT & MUM (2019-Present): Advanced natural language models that improve the understanding of complex queries and intent.
Even though PageRank is no longer Google’s sole ranking mechanism, its core logic—using network-based ranking rather than static indicators—still drives how web relevance is determined.
So, in theory, Google’s PageRank algorithm and GIAI's ranking model share the same fundamental principle: Ranking entities based on their structural position in a network, rather than relying on arbitrary human-defined scores. The difference lies in:
Google ranks web pages, whereas I rank companies, universities, or movies based on word-of-mouth networks.
Google uses hyperlinks between pages, while I use word associations in discussions and media coverage.
Google refines its ranking with AI models like RankBrain; I adjust weights based on time sensitivity and data source credibility.
By following this well-established methodology, my ranking system is not arbitrary or subjective—it is grounded in the same kind of network-based analysis that transformed the internet into an organized and searchable knowledge system.
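As a rough illustration of that difference, the sketch below builds a tiny word-association network from invented discussion snippets and ranks entities by eigenvector centrality using networkx; it is a simplified stand-in for the approach, not GIAI's production model.

```python
# A simplified stand-in for word-of-mouth ranking: build a word-association
# network from a few invented discussion snippets and rank the entities by
# eigenvector centrality instead of raw mention counts.
import itertools
import networkx as nx

snippets = [
    "university_a praised for data science research",
    "university_a research cited alongside university_b",
    "university_b tuition complaints dominate the forum",
    "university_c data science program research praised",
]

G = nx.Graph()
for text in snippets:
    # Words appearing in the same snippet become linked; repeats add weight.
    for u, v in itertools.combinations(sorted(set(text.split())), 2):
        weight = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=weight + 1)

centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
for name in sorted(["university_a", "university_b", "university_c"],
                   key=centrality.get, reverse=True):
    print(name, round(centrality[name], 3))
```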
Potential risks in the 'word of mouth' based ranking model
1. The Impact of Sarcasm and Irony
One of the greatest challenges in natural language processing (NLP) is distinguishing between genuine praise and sarcastic remarks. While humans can easily identify irony in statements like:
"Oh wow, another groundbreaking innovation from this company…" (actually expressing frustration)
"This is the best movie ever. I totally didn't fall asleep halfway through." (obvious sarcasm)
A text-based ranking model, however, may interpret these statements as positive sentiment, artificially boosting an entity’s score.
Why It’s Hard to Fix:
Traditional sentiment analysis models rely on word-based classification, which struggles with sarcasm.
Even advanced contextual models (e.g., BERT, GPT) require massive amounts of labeled sarcastic text for accurate detection.
Sarcasm often lacks explicit markers, making it difficult to distinguish from genuine praise without deep contextual understanding.
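To see how easily word-level models are fooled, consider a deliberately naive scorer applied to the first sarcastic example above; the word lists here are tiny, invented stand-ins for a real sentiment lexicon.

```python
# A deliberately naive word-based sentiment scorer, illustrating the problem:
# it reads the ironic remark above as genuine praise. The word lists are tiny,
# made-up stand-ins for a real sentiment lexicon.
POSITIVE = {"wow", "groundbreaking", "innovation", "best", "great"}
NEGATIVE = {"worst", "boring", "scam", "broken", "failure"}

def naive_sentiment(text):
    words = text.lower().replace(",", " ").replace("…", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

remark = "Oh wow, another groundbreaking innovation from this company…"
print(naive_sentiment(remark))   # prints 3: scored as strong praise,
                                 # while a human reads it as frustration
```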
2. Bias in Data Sources
Text-based rankings are only as good as the data they rely on. The internet is filled with skewed sources, and the way an entity is discussed can vary wildly depending on where the data is collected from:
Twitter & Social Media: Driven by trends, memes, and emotional reactions, often amplifying sensational or controversial entities.
News Articles: More structured but still prone to corporate PR influence and selective coverage.
Online Forums (e.g., Reddit, community discussions): Stronger opinions, but highly demographic-dependent.
Why It’s Hard to Fix:
No single dataset represents a true public opinion.
3. Manipulation and Fake Engagement
Because the model scores whatever is being said online, coordinated campaigns and fake engagement can masquerade as genuine word of mouth.
Why It’s Hard to Fix:
Many manipulation attempts look organic, making them hard to detect algorithmically.
Tracking IP origins and user patterns is outside the scope of text-based ranking.
Real engagement can resemble manipulation, making it difficult to separate genuine sentiment shifts from artificial ones.
4. Time-Sensitivity Issues
Public sentiment isn’t static—entities gain and lose popularity rapidly. A ranking system based on static text analysis may fail to capture recent shifts, such as:
A scandal that rapidly deteriorates a company’s reputation.
A viral moment that temporarily inflates a movie’s ranking.
A forgotten entity that suddenly resurfaces.
Why It’s Hard to Fix:
Adjusting for recency bias can distort historical credibility.
Short-term spikes in attention don’t always indicate long-term influence.
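One common mitigation is to discount each mention by its age with an exponential half-life. The sketch below (invented half-life and mention counts, not our actual weighting scheme) also shows why recency weighting alone is not enough: a single fresh spike can still swamp a long, steady record.

```python
# A sketch of exponential time-decay weighting for mentions. The half-life and
# the mention counts are invented; the point is the trade-off, not the values.
import math

def decayed_score(mentions, half_life_days=30.0):
    """mentions: list of (days_ago, count) pairs for one entity."""
    lam = math.log(2) / half_life_days
    return sum(count * math.exp(-lam * days_ago) for days_ago, count in mentions)

steady_entity = [(90, 100), (60, 100), (30, 100), (0, 100)]   # consistently discussed
viral_entity  = [(90, 5), (60, 5), (30, 5), (0, 400)]         # one recent spike

print(round(decayed_score(steady_entity)))  # ~188: older mentions fade but still count
print(round(decayed_score(viral_entity)))   # ~404: the fresh spike dominates anyway
```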
Despite my best efforts, a purely text-based ranking will never be fully trustworthy—not because of a flaw in the methodology, but because language itself is unpredictable, biased, and easily manipulated. While I have built automatic weight-adjusting mechanisms to counteract some of these issues, certain biases remain:
✅ Strong at identifying entity prominence: If an entity is widely discussed, my model captures it well.
✅ Good at detecting discussion clusters: My eigenvector-based approach ensures ranking reflects influence rather than just raw frequency.
⚠️ Vulnerable to sarcasm and manipulation: Without deeper NLP work, sarcasm and fake engagement can skew results.
⚠️ Sensitive to source biases: Ranking outcomes depend on where the data comes from.
⚠️ Time-dependent accuracy issues: Spikes in discussion may create misleading rankings if not adjusted correctly.
Moving forward, one potential improvement is to overlay sentiment adjustments carefully, ensuring that rankings are influenced but not dictated by sentiment. However, I will remain cautious about over-engineering the model—sometimes, allowing raw data to speak for itself is more honest than trying to force it into artificial classifications.
At the end of the day, no ranking system is perfect. But by understanding its flaws, we can interpret the results more intelligently, rather than assuming any model has a monopoly on truth.
[MSc Research topic 2025-2026] Advancing AI-Driven Narrative Intelligence for public opinion
I have spent years in AI and data science, believing that structured models and quantitative analysis were the future. That perspective changed the moment I became a target of an orchestrated misinformation campaign—one that wasn’t random but designed to destroy my credibility, my institution’s reputation, and my work.
What I witnessed was beyond just social media hate—it was engineered narrative manipulation. The same keywords appeared repeatedly in different online communities, the same phrases were echoed by different sources, and an invisible conductor seemed to be controlling public sentiment. The attacks weren’t spontaneous; they were structured.
Then I asked myself: What if this isn’t just about me? What if this is how narratives are shaped globally—in politics, in business, and in the financial markets?
During my research, I collaborated with a team monitoring public narratives in real time, initially for defensive purposes. They needed a way to detect emerging misinformation, neutralize harmful narratives before they spread, and assess whether their own strategic messaging was effective. The results were game-changing: by tracking word relationships and monitoring sentiment shifts, they were able to counteract disinformation, reinforce positive messaging, and stay ahead of competitors.
That experience made one thing clear: narrative manipulation is a reality, and businesses, financial institutions, and governments need AI-driven intelligence to track, analyze, and respond to it.
Bridging Academia and Business: AI for Narrative Intelligence
At the Swiss Institute of Artificial Intelligence (SIAI), our MSc AI/Data Science program is committed to pioneering research that bridges theoretical AI concepts with real-world impact. Our latest research focus is on AI-driven word network analysis, a powerful framework for narrative intelligence, crisis detection, and reputation management.
An example of this kind of word network analysis is the image above, showing SIAI's logo surrounded by a network of AI/Data Science keywords; we built it by crawling SIAI's lecture notes. The research described below aims to find the best real-world uses of this simple mathematics.
Research Overview: Understanding and Controlling Narrative Influence
Traditional sentiment analysis and keyword tracking provide shallow insights, failing to capture the structural relationships behind word networks, narrative evolution, and hidden agenda orchestration. Our approach leverages AI, NLP, and Network Theory to:
Build narrative networks from large-scale text data (news articles, social media, online communities).
Detect clusters of related words and topics using graph-based centrality measures (e.g., Betweenness Centrality).
Identify coordinated messaging efforts and the key actors driving sentiment changes.
Predict how narratives will evolve over time using Machine Learning, Deep Learning, and Reinforcement Learning.
This methodology enables businesses, investors, and policymakers to analyze the power dynamics behind narratives, revealing not just what is being said, but who is controlling the conversation.
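As a concrete, simplified illustration of that machinery, the sketch below builds a small word network from invented headlines (with the placeholder entity `brand_x`) and uses betweenness centrality, one of the measures listed above, to surface the terms that bridge otherwise separate topic clusters.

```python
# A simplified illustration of narrative-network analysis: build a word graph
# from invented headlines and use betweenness centrality to surface the terms
# that bridge otherwise separate topic clusters. "brand_x" is a placeholder.
import itertools
import networkx as nx

headlines = [
    "brand_x chatbot accuracy questioned by users",
    "brand_x funding round praised by investors",
    "rival launch promoted by same influencer accounts",
    "influencer accounts question brand_x chatbot accuracy",
]

G = nx.Graph()
for line in headlines:
    for u, v in itertools.combinations(sorted(set(line.split())), 2):
        G.add_edge(u, v)

bridges = nx.betweenness_centrality(G)
for word, score in sorted(bridges.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{word:12s} {score:.3f}")   # high scores mark cluster-bridging words
```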
Practical Applications: The Business Case for Narrative Intelligence - Beyond sentiment analysis
This research is not just academic—it has direct, real-world implications. Just as financial institutions rely on algorithmic trading for predictive insights, companies will soon require AI-powered narrative intelligence to safeguard their brand and control public sentiment.
Potential applications include:
Corporate Risk Management: Identifying reputation threats and misinformation campaigns before they escalate.
Financial Markets & Hedge Funds: Tracking public narratives that influence stock prices and investment trends.
Crisis Management & PR Strategy: Evaluating the effectiveness of messaging strategies in real time.
Political & Geopolitical Analysis: Understanding how narratives shape public policy and voter behavior.
A Case study: The Business of Monitoring, Defending, and Attacking Narratives
As narrative intelligence matures, businesses will require a structured, AI-driven subscription service to monitor, counteract, and proactively manage their public perception. This research could evolve into:
A B2B subscription model for corporations to monitor brand sentiment.
A financial intelligence tool for hedge funds assessing market-moving narratives.
A cybersecurity and misinformation detection service for governments and media firms.
Let me give you a fictional but realistic example of how this tool could be used.
Chapter 1: A Brewing Crisis
It started with a single tweet—an anonymous account posted a claim that OrionTech, a rising AI startup, was exaggerating the capabilities of its flagship product, NeuraSync, an AI-driven customer service chatbot. Within hours, the tweet was shared by a prominent tech influencer, and by the next morning, it had made its way onto major tech news sites.
By lunchtime, OrionTech’s marketing team was in full panic mode. Stock prices dipped 4%, venture capital partners were sending urgent emails, and their biggest client was asking for clarification. The PR team scrambled to control the damage, drafting a corporate statement and instructing their social media team to respond—but they had no idea where the fire started or who was fanning the flames.
Then, they turned to their secret weapon: SIAI’s AI-powered narrative intelligence platform.
Chapter 2: Mapping the Attack
As soon as the PR team fed the trending keywords into the system, the word network analysis kicked in. The AI quickly mapped out how the negative narrative was spreading, identifying key word clusters and influential nodes in the network. The system flagged several crucial insights:
The Anonymous Tweet Wasn’t Random – The AI detected similar phrasing and keywords in older forum posts from months ago, revealing a pattern of coordinated messaging targeting OrionTech. This was not an organic complaint—it was a strategic attack.
A Competitor Was Involved – The AI identified a subtle but critical connection: many of the accounts amplifying the backlash had also promoted a new product launch from OrionTech’s biggest competitor two weeks prior. A deeper dive into the network graph showed that the same influencer boosting the anonymous tweet had previously collaborated with the competitor’s PR team.
The Narrative Was Not Yet Fully Cemented – The AI projected that while the sentiment was turning negative, the backlash was still containable—if countermeasures were deployed within 24 hours.
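The "similar phrasing" detection in this fictional scenario can be illustrated with a toy trigram-overlap check; the posts and the 0.6 threshold below are invented, and a real system would need to be far more robust.

```python
# A toy version of the phrasing-overlap check in the story: flag pairs of posts
# that share an unusually large fraction of word trigrams. The posts and the
# 0.6 threshold are invented for this fictional scenario.
from itertools import combinations

def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap(a, b):
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / max(1, min(len(ta), len(tb)))

posts = [
    "neurasync wildly overstates its real world accuracy numbers",
    "tests show neurasync wildly overstates its real world accuracy numbers",
    "loving the new neurasync update and the support team was helpful",
]

for (i, a), (j, b) in combinations(enumerate(posts), 2):
    if overlap(a, b) > 0.6:
        print(f"posts {i} and {j} look coordinated "
              f"({overlap(a, b):.0%} trigram overlap)")
```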
Chapter 3: Counterattack and Narrative Defense
With these insights, OrionTech’s PR team took a multi-layered response strategy:
✅ Neutralize the influencer – Instead of directly confronting the tech influencer who amplified the attack, OrionTech’s CEO invited them for a private demonstration of NeuraSync, offering full transparency. The influencer agreed to an exclusive behind-the-scenes look—leading to a follow-up post praising OrionTech’s technology, shifting the conversation.
✅ Redirect the public narrative – Instead of merely defending against accusations, OrionTech launched a proactive campaign highlighting real customer success stories with NeuraSync. The AI platform recommended specific key phrases and hashtags that would be most effective in steering public perception back in OrionTech’s favor.
✅ Expose the coordinated attack – Without directly accusing their competitor, OrionTech’s PR team leaked data-backed insights to industry journalists, showing how misinformation campaigns were becoming a growing problem in the tech sector. The story wasn’t about OrionTech anymore—it became a broader conversation about ethics in corporate PR warfare, shifting scrutiny away from them and onto industry-wide practices.
Chapter 4: Victory in the Narrative War
Within 48 hours, OrionTech’s AI-driven response had turned the tide:
Stock prices rebounded by 6% after positive media coverage.
The influencer’s correction post reached 1.2 million views, overshadowing the initial attack.
The anonymous tweet stopped gaining traction, and discussions moved on to new topics.
Venture capital partners re-engaged, reassured by OrionTech’s proactive handling of the crisis.
OrionTech’s executive team had learned a valuable lesson: in today’s world, public perception isn’t just shaped—it’s engineered. Companies that fail to monitor, defend, and shape their narratives will be at the mercy of unseen forces.
But those who harness AI-powered narrative intelligence? They don’t just survive the storm—they control the winds.
Join the Research: MSc AI/Data Science at SIAI
To further develop this study, we seek MSc AI/Data Science students and research collaborators with expertise in:
✅ Natural Language Processing (NLP) for large-scale text data analysis.
✅ Network Theory & Graph Models to model word relationships dynamically.
✅ Machine Learning, Deep Learning, and Reinforcement Learning for predictive analysis and automation.
✅ Game Theory (optional, future expansion) for modeling strategic interactions within narrative control.
Students and researchers participating in this initiative will gain hands-on experience with cutting-edge AI methodologies, real-world applications of graph-based NLP models, and exposure to industry-relevant case studies on narrative intelligence and influence tracking.
If you are interested in joining this research initiative as an MSc student, research collaborator, or industry partner, we welcome applications and inquiries. This is a unique opportunity to contribute to next-generation AI applications in business, finance, and global information ecosystems.
If interested, feel free to ask questions in comments through GIAI Square.
Keith Lee is a Professor of AI and Data Science at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI), where he leads research and teaching on AI-driven finance and data science. He is also a Senior Research Fellow with the GIAI Council, advising on the institute’s global research and financial strategy, including initiatives in Asia and the Middle East.
Why SIAI failed 80% of Asian students: A Cultural, Not Genetic, Explanation
80% of Korean students failed at SIAI not due to lack of intelligence but due to deep-rooted cultural conditioning that discourages independent thought and risk-taking
The Confucian, exam-based education system promotes rote memorization over problem-solving, making students struggle in an environment that requires deep, abstract thinking
Korea’s broader economic and corporate structure reinforces a ‘safe thinking’ mindset, making it unlikely that Western-style innovation will thrive here without significant systemic change
Before going into details, let me emphasize that I am well aware this article is an unfiltered critique. It comes out of our team's painful four years of experience in Korea running a pilot program for the MBA AI/BigData and MSc Data Science (PreMSc in AI/Data Science) under our research lead, Keith Lee, a Korean national whose academic background is in Mathematical Finance, alongside industry experience in investment banking and data science. Together with the two earlier articles below, our analysis leads us to conclude that most East Asian countries, except China, are not our target market. China is discussed separately at the end of this article.
SIAI was never designed to be an easy program. It is built around problem-first learning, where students must struggle through difficult challenges before being given answers. The idea is that true expertise comes not from memorization but from direct engagement with problems. However, Korean students have failed at a disproportionately high rate, often not because of a lack of intelligence but because they simply could not adapt to this mode of learning.
The failure of Korean students at SIAI is not an isolated incident. It mirrors Korea’s broader struggles in fostering high-risk, innovation-driven industries like AI startups. The same traits that lead to failure at SIAI—risk aversion, hierarchical thinking, and an over-reliance on structured answers—are the same factors that limit Korea’s ability to compete in global high-tech industries.
This raises an important question: If intelligence is not the issue, why do Korean students fail at SIAI at such high rates? The answer lies in deeply ingrained cultural conditioning, reinforced by Korea’s education system and work culture.
The Education System Teaches Memorization, Not Thinking
Korea’s education system is one of the most intense in the world, yet it produces students who struggle with independent problem-solving. Why?
The National Exam Mentality – Success in Korea is defined by performance on standardized exams like the CSAT (Suneung). These tests reward students who can memorize massive amounts of information and reproduce it under time pressure.
Lack of Open-Ended Problem-Solving – Korean students are rarely taught how to deal with ill-defined problems where multiple solutions exist. They are conditioned to expect one correct answer.
Fear of Making Mistakes – The Korean school system does not encourage risk-taking. Making a mistake is seen as a failure, not a learning opportunity. As a result, Korean students are reluctant to explore ideas that might not lead to immediate success.
At SIAI, students are deliberately given incomplete information and forced to struggle through uncertainty—something the Korean education system has trained them to avoid at all costs. The result?
Mental shutdown, frustration, and high dropout rates.
Students Have a Passive Learning Mentality
A key observation from SIAI’s Korean students is their tendency to:
✅ Wait for direct explanations instead of exploring solutions themselves
✅ Copy existing solutions rather than develop their own
✅ Give up when confronted with open-ended questions
This passive learning mentality is not their fault—it’s a survival strategy that works in Korea’s academic and corporate environments.
In schools, students are rewarded for following the teacher’s guidance exactly, rather than questioning the material.
In companies, employees are expected to obey superiors rather than challenge ideas or propose new solutions.
In social interactions, independent thinking can be seen as arrogance or defiance rather than intelligence.
At SIAI, these habits become a liability. When students are told to figure out a problem before receiving a solution, many experience anxiety and paralysis, as this goes against everything they have been trained to do.
Culture of Risk Aversion Prevents Deep Thinking
Deep, abstract thinking requires a willingness to take intellectual risks—to explore different possibilities, challenge assumptions, and tolerate uncertainty. However, Korea’s society is structured around minimizing risk, not embracing it.
Corporate & Social Hierarchy – Questioning authority or challenging ideas is discouraged. Instead of debating ideas critically, Koreans are expected to align with the dominant view.
High-Stakes Consequences for Failure – In Korea, failing an exam or business venture can have lifelong social and economic consequences, making risk-taking too dangerous for most people.
Short-Term Thinking – Success is measured by immediate results, whether it’s exam scores, company profits, or startup funding. Long-term strategic thinking and foundational research are undervalued.
This cultural mindset clashes directly with the Western-style, research-driven, exploratory approach that SIAI promotes. Students who have spent their whole lives avoiding intellectual risk struggle to suddenly embrace it.
Korea’s Confucian-influenced hierarchy impacts how students approach learning and problem-solving:
Respect for authority over logic – Many students hesitate to challenge assumptions, even when they recognize flaws in a solution.
Preference for pre-existing formulas – Instead of inventing new methods, students tend to rely on what has already been written or accepted.
Fear of standing out – Independent thinkers often get labeled as "weird" or "difficult," discouraging students from expressing unique perspectives.
At SIAI, students must develop their own methodologies to solve complex problems. Korean students, conditioned to seek pre-approved frameworks, often struggle with this level of intellectual freedom.
Even if a Korean student somehow overcomes these limitations, their society does not reward them for it.
Korean corporations hire based on university ranking, not problem-solving skills.
AI startups struggle because investors prefer “safe” business models over high-risk innovation.
Government-funded AI projects focus on applications, not fundamental research.
As a result, even Koreans who succeed at Western-style deep thinking often find themselves with no place in Korea’s economy. This reinforces the idea that memorization and safe thinking are the only viable survival strategies.
Korea Is Not Built for Western-Style Innovation
Korea’s failure at producing high-level AI researchers and independent thinkers is not due to a lack of intelligence but rather a fundamental mismatch between its cultural/economic system and the traits required for deep, abstract thinking.
SIAI’s teaching model aligns with Western academic traditions of independent problem-solving, but Korea’s students are conditioned to avoid risk, challenge, and deep exploration.
Korean society does not reward the type of intelligence that SIAI promotes. Even students who do well at SIAI may find that Korea has no place for them.
As a result, Korea is not just failing to produce AI experts—it is failing to cultivate the kind of innovative minds that could drive long-term global competitiveness.
In the end, SIAI was never going to succeed in Korea, because Korea is not built for this kind of education. Raising independent, abstract thinkers here requires enormous effort, but the country itself does not value or support such minds.
For Korea to truly change, it would need to:
Replace its rote-learning, exam-based education system with research-based learning.
Encourage intellectual risk-taking and debate at all levels of society.
Redefine success beyond standardized test scores and corporate hierarchy.
But given the country’s historical patterns, such change is unlikely to happen anytime soon. That is why SIAI has shifted focus to the global market, where its philosophy is more likely to be understood and valued.
For Koreans who wish to truly think independently and engage in deep research, the best path may not be to change Korea—but to leave it altogether.
Why I think Korea, once a tech leader, will soon be China's tech colony
As mentioned at the beginning, I am fully aware that this is an unfiltered critique, but it reflects exactly what I have observed over the years together with Keith Lee. He has seen firsthand how these structural barriers prevent not just our students at SIAI, but the entire country, from evolving into a true deep-tech and innovation powerhouse.
This is not about attacking Korea just for the sake of criticism—it’s about identifying why certain types of high-level intellectual pursuits simply don’t thrive there. We tried to break the cycle with SIAI, but the overwhelming response confirmed that Korea isn’t ready, and perhaps never will be. The country is optimized for exam-driven intelligence, corporate hierarchy, and predictable business models—not for nurturing independent, abstract thinkers.
We're not alone in this realization. Many of Korea's most brilliant minds have either left the country or had to work under constraints that killed their full potential. That is why even Korea's so-called "AI industry" consists largely of AWS and OpenAI API integrations rather than real algorithmic breakthroughs.
However, China, despite a similar East Asian background, tells a distinctly different story. (Before going any further, let me emphasize that we are not pro-China; we are simply laying the facts and analyses we have found on the table.)
Despite the shared cultural background, China is making massive strides in AI, semiconductors, and electric vehicles, while Korea seems stuck in safe, incremental improvements. We initially thought the Confucian-structured social system was one of the fundamental cultural causes of Korea's struggles in tech innovation, but we have had to revise that theory.
Here’s why China is breaking ahead:
1. Massive Long-Term State Investment in Deep Tech
The Chinese government is willing to pour billions into AI, quantum computing, and electric vehicles, even with no immediate return.
Korea, on the other hand, only funds projects that have predictable, short-term success—which is why most Korean AI companies just build applications using OpenAI’s APIs rather than actual models.
2. Tolerating Experimentation & Failure
Chinese tech firms (like Tencent, Baidu, and Huawei) allow moonshot projects to fail because they have strong state backing and long investment cycles.
Korea’s corporate culture punishes failure harshly, which forces companies to play it safe rather than push technological boundaries.
3. Government-Backed Industrial Policy vs. Market-Driven Hesitation
China strategically subsidizes key industries (like batteries, EVs, and AI models) to ensure global dominance.
Korea’s companies, despite having world-class battery tech, have to compete without meaningful government protection.
4. Better Retention of Top Talent
Many of China’s best AI and deep tech researchers return home from the U.S., thanks to both government incentives and nationalism - Chinese universities' research papers are phenomenal these days.
Korean researchers, on the other hand, often prefer to stay abroad because they know Korea’s rigid corporate culture won’t let them do meaningful work.
Keith is the best example of point #4. After years of fruitless effort, he has turned his back on his own country entirely and now leads our research team here at GIAI. We are glad to have his full attention on GIAI's research and SIAI's European operation, but what a loss for his home country.
Among the many tech sectors, we admit that Korea still holds a marginal technological lead over China in EV batteries. None of us are EV battery experts, but tracking what researchers publish in academic (and not-so-academic) journals, we are almost certain that Korea's battery dominance (LG Energy Solution, Samsung SDI, SK On) is also under threat from China, and it is highly likely that China could soon overtake both Korea and Tesla in EV battery tech.
China’s advantages:
✅ Cheaper production due to massive economies of scale
✅ Aggressive government subsidies that lower manufacturing costs
✅ Faster innovation cycles due to high tolerance for risk
If Korea’s battery makers don’t shift to long-term, high-risk research, they will lose their lead within 5-10 years. And knowing Korea’s business culture, they will likely just play defense rather than take bold steps forward, which will only delay, not prevent, China’s takeover.
My final thought: Korea Is Losing, But It’s a Choice
The key difference between Korea and China is that China is willing to take long-term risks, while Korea isn’t. China sacrifices short-term efficiency for long-term dominance, whereas Korea only funds safe bets with immediate ROI. If Korea wants to stay competitive, it must change how it approaches innovation:
Fund actual AI research, not just API-based applications.
Encourage experimental, high-risk tech startups instead of just supporting chaebol-driven projects.
Give top researchers a reason to stay in Korea instead of moving abroad.
But given Korea’s deeply ingrained corporate and academic structure, I don’t think this change will happen. Instead, Korea will likely continue doing incremental improvements while China overtakes in every major deep-tech sector.
As for other East Asian countries, we see that Japan, Mongolia, Vietnam, and the rest of Southeast Asia are still at an infant stage in AI/Data Science. SIAI's hard training may not have a chance to blossom there, just as we have already seen in Korea, albeit for a different reason. If we go to Asia, it will mainly be South Asia and the Middle East.
Why SIAI failed 80% of Asian students: A Cognitive, Not Mathematical, Problem
Students failed not for lack of math knowledge, but because they struggled to apply that knowledge in real-world scenarios.
Those accustomed to structured learning struggle more with open-ended, problem-first approaches than peers trained in Western-style systems.
The recurring patterns were superficial engagement, reliance on structured guidance, avoidance of ambiguity, and resistance to open-ended problem-solving.
The failure was one of abstraction (encoding) and application (decoding).
Since 2021, the Swiss Institute of Artificial Intelligence (SIAI) has refined its approach to teaching AI and data science (DS), learning valuable lessons from our early cohorts of students. One of the most significant insights we have gained is that students do not struggle due to a lack of mathematical knowledge. Instead, they find it difficult to engage with knowledge in a way that allows them to apply it effectively in real-world scenarios.
Many of these difficulties arise from differences in learning styles. Students from highly structured educational backgrounds, particularly those accustomed to traditional Asian learning methods, often face challenges adapting to our problem-first, exploratory approach. Western-style education, which emphasizes independent problem-solving and conceptual reasoning, has proven to be a significant shift for many of our students. While this transition can be difficult, we believe it is essential for real-world success.
Beyond Math: The Real Challenge
SIAI’s experience over the past few years confirms that success in AI and DS is not just about understanding formulas or solving equations but about knowing how to use this knowledge in practice. Students from various backgrounds have joined our programs, and we have found that their struggles are not necessarily correlated with their university’s prestige. Instead, the greatest challenge for many students has been moving from structured, well-defined problem-solving toward the type of open-ended, real-world thinking required in AI and DS.
Key Observations:
Students struggle not with math, but with application. Many know the formulas but cannot use them in uncertain, real-world contexts.
Textbook knowledge is an abstraction. Students must learn to reverse the abstraction process when applying theories in practice.
Those accustomed to structured, test-based learning struggle the most. They are used to predefined solutions rather than exploratory problem-solving.
Our teaching philosophy is rooted in the belief that textbook knowledge alone is insufficient. Many students fail not because they do not understand theoretical concepts but because they cannot translate those concepts into real-world applications. This is where a significant cognitive gap exists. Textbooks present an abstracted version of reality, simplifying complex problems into models, theories, and equations. However, when students need to apply this knowledge in practice, they must learn how to reverse the abstraction process, translating theoretical models back into the messy, uncertain, and highly variable problems of the real world.
For many students, this transition is difficult because they have been trained to focus on structured problem sets with clear solutions rather than dealing with ambiguous, real-world challenges. Understanding AI and DS is not just about encoding knowledge—it requires decoding reality itself.
In short, a majority of Asian students failed to grasp the concept of encoding and decoding.
Asian vs. Western Learning Approaches
Asian educational systems are well-known for their strong emphasis on procedural mastery, structured problem-solving, and rigorous test-based evaluation. These methods produce students who are highly skilled at following established processes and excelling in standardized assessments. However, while this approach works well for structured learning, it does not always prepare students for fields like AI and DS, which require flexible, adaptive thinking.
Key Differences Between Asian and Western Approaches:
Asian education emphasizes structure and memorization. Students excel at following predefined formulas but struggle with ambiguity.
Western education emphasizes conceptual reasoning and exploration. Students are encouraged to justify their reasoning and navigate uncertainty.
AI and DS require the Western approach. Success in AI depends on solving ill-defined problems and working with incomplete data.
Western education, on the other hand, emphasizes conceptual reasoning, exploratory problem-solving, and open-ended discussions. Students are encouraged to test different approaches, justify their reasoning, and work through uncertainty. Studies, such as a 2019 paper in Cognition and Instruction, have shown that while Western students may not always outperform their Asian counterparts in computational efficiency, they tend to excel in applying knowledge in real-world settings.
At SIAI, we have deliberately adopted a Western-style, problem-first teaching approach because we believe it is the most effective way to prepare students for the realities of AI and DS. Success in this field requires more than technical knowledge—it requires the ability to navigate complexity, adapt to new challenges, and derive solutions without predefined steps.
Key Challenges Faced by Students
From our experience, students who struggle the most at SIAI tend to face the following challenges:
Superficial Engagement with Learning Materials – Some students read only the surface-level content and assume they have understood it. When asked to explain concepts in their own words or apply them in a different context, they realize they lack a deep understanding.
Difficulty in Independent Research – Many students expect direct answers rather than seeking out information themselves. This reliance on structured guidance prevents them from developing the self-learning skills necessary for AI and DS careers.
Avoidance of Struggle and Ambiguity – In AI and DS, many problems do not have clear-cut solutions. Some students become frustrated when they cannot immediately find the “right” answer, leading them to disengage rather than persist through trial and error.
Lack of Open-Ended Thinking – AI and DS require working with incomplete information and making educated decisions based on limited data. Some students resist this uncertainty, preferring problems where a single correct answer exists.
Why We Focus on Western-Style Education
Over the past four years, we have refined our approach at SIAI to focus on what truly matters: bridging the gap between theory and real-world problem-solving. While some students initially struggle with this transition, those who push through emerge as independent thinkers capable of tackling complex AI and DS challenges.
Our Core Teaching Principles:
Textbook knowledge is not enough. Students must learn how to apply theory to real-world, uncertain environments.
AI and DS require adaptive thinking. Rigid, structured learning does not translate well to real-world challenges.
Western-style education fosters independence. Our program forces students to solve problems autonomously, just as they will need to do in the workforce.
Our message to students is clear: success in AI and DS is not about memorizing more formulas or perfecting structured exercises. It is about developing the ability to think, adapt, and problem-solve in the face of uncertainty. Those who embrace this challenge will thrive. Those who remain dependent on structured, execution-based learning will find it difficult to transition into real-world applications.
At SIAI, we do not fail students. We provide the environment and challenges necessary for growth. It is up to students to make the transition from structured learners to adaptive problem-solvers. Those who succeed will find that this transformation is not only valuable for AI and DS but for any complex field where innovation and independent thinking are required.
What SIAI Takes Forward
From this painful experience over the past four years, we have shifted our admission focus from academic credentials to encoding/decoding flexibility. Our earlier assumption, that strong performance in earlier schooling is a persuasive indicator of academic and business success at SIAI and beyond, has been disproven by 100+ students from Asia.
Although we do believe Western schools run higher education in a significantly different direction, it has become clear to us that favoring a specific background may limit our potential to grow our network and to foster more creative thinking.
Putting this understanding together, the admission process will going forward focus mainly on whether students can overcome hurdles one by one. More skillful, versatile, and flexible students will have less trouble clearing those hurdles, and these are the traits we believe are key to academic success at SIAI as well as to future success in the field. In the end, all students will benefit from our alumni network.
David O’Neill is a Professor of Finance and Data Analytics at the Gordon School of Business, SIAI. A Swiss-based researcher, his work explores the intersection of quantitative finance, AI, and educational innovation, particularly in designing executive-level curricula for AI-driven investment strategy. In addition to teaching, he manages the operational and financial oversight of SIAI’s education programs in Europe, contributing to the institute’s broader initiatives in hedge fund research and emerging market financial systems.