Make Time Teachable: Scaling Learning with AI-Integrated Courses
By Catherine McGuire
Professor of AI/Tech, Gordon School of Business, Swiss Institute of Artificial Intelligence
Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI-integrated courses can handle routine questions and free teachers for higher-value work
Well-designed course bots cut response time without hurting learning quality
The real policy issue is how to govern AI, not whether to use it
Imagine if a big chunk of the questions students ask in an online course could be answered by the course materials themselves, with a little help from a smart computer program. It turns out, this is totally doable! This means teachers could spend less time repeating the same answers and more time on what really matters. So, instead of banning these AI tools, we should figure out how to use them to improve courses, reduce teachers' workloads, and ensure students get the most out of their learning. The goal is to create courses where AI acts like a helpful assistant, providing students with quick, reliable support while freeing up teachers to focus on coaching, grading, and ensuring everyone has a fair chance to succeed. The big question for leaders is whether to embrace these helpful tools or stick to old ways that might not be as good or cost-effective.
Rethinking Automation with AI Courses
People often argue about whether AI is a threat or a miracle cure for education. But the truth is somewhere in between. Let's forget about whether AI should teach and instead focus on what parts of teaching can be automated and how. The best tasks to automate are those that involve many simple questions, such as explaining instructions, giving examples, or checking whether students followed the rules for formatting their work. An AI system can handle these tasks by sticking to the course materials and following clear guidelines. This means teachers get fewer interruptions, students get faster help, and we can see where students are struggling and need more personal attention.
Figure 1: Synthesis of online course Q&A studies; systematic reviews of chatbots in education; conservative estimate from hybrid tutoring trials
Three things are coming together to make this possible: First, AI can now provide clear, helpful answers when asked the right questions. Second, course websites and tools enable us to connect course materials more effectively. Third, studies show that AI tutors can improve learning when used carefully. Instead of banning AI, we should focus on establishing rules and standards to ensure it's used safely and effectively. This way, we can reduce routine work for teachers and let them focus on what makes the biggest difference for students.
What Works and What to Watch Out For:
Studies are showing that AI tutors and chatbots can help students learn and save time. But it depends on how they're designed. One study in 2025 found that students learned more and were more engaged when using an AI tutor that adhered to specific teaching methods and used course materials to guide its responses.
Figure 2: Careful system design and ongoing curation sharply increase the instructional time reclaimed by AI-integrated courses.
This means that if a course or program keeps track of common student questions and has clear answers ready, it can use AI to handle many requests accurately. Tests of systems that combine tutoring data with AI show they perform better when the AI follows a script and when teachers review tricky cases. Experts estimate that a course bot can handle about 60–90% of routine questions in its first year. This is based on the fact that many online course questions are repeated and that AI tutors can give accurate answers when used properly. Keep in mind that the more the bot is improved and updated, the better it will perform.
However, we need to be careful. AI can sometimes simplify or misrepresent scientific or technical information if not properly controlled. This can be dangerous in subjects like science and health. To prevent this, we need to ensure the AI uses only course-approved materials and that any complex or important questions are referred to a teacher. AI can save time, but it needs to be managed rather than used as a replacement for teachers.
Creating AI Courses That Students Can Trust:
To make AI courses work well, start by carefully planning the course. Identify the learning goals, solutions to problems, common mistakes, grading guidelines, and when to involve a teacher. Then create a collection of approved materials for the AI to use, and write instructions to guide it in providing helpful, accurate answers. Whenever possible, the AI should show students exactly where in the course materials it found the answer. This makes the AI more trustworthy and reduces the likelihood that it makes things up.
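The design above amounts to a small retrieval-and-escalation loop, and it can be sketched in a few dozen lines. The example below is a deliberately simplified illustration, not a production system: the retrieval is plain keyword overlap, and generate_answer() is a stub standing in for whatever model a course actually uses. What it shows is the structure the paragraph describes: a closed set of approved materials, a citation returned with every answer, and an explicit escalation path.

```python
# A minimal sketch of the pattern described above: answer only from approved
# course materials, attach the citation, and escalate anything that cannot be
# grounded. All names are illustrative; generate_answer() is a stub standing in
# for whatever language model the course actually uses.

APPROVED_MATERIALS = [
    {"section": "Syllabus 2.1", "text": "Assignments are due Friday at 17:00. Late work loses 10 percent per day."},
    {"section": "Formatting Guide 1.3", "text": "Submit reports as PDF with 11 point font and numbered headings."},
]

ESCALATE_KEYWORDS = {"grade appeal", "extension", "accommodation", "plagiarism"}

def generate_answer(question: str, context: str) -> str:
    # Placeholder for the real model call; the essential constraint is that the
    # model only ever sees the retrieved, course-approved passage.
    return f"According to the course materials: {context}"

def overlap_score(question: str, passage: str) -> float:
    """Crude relevance score: fraction of question words that appear in the passage."""
    q_words = set(question.lower().split())
    return len(q_words & set(passage.lower().split())) / max(len(q_words), 1)

def answer_question(question: str) -> dict:
    # Route sensitive topics straight to a human instructor.
    if any(keyword in question.lower() for keyword in ESCALATE_KEYWORDS):
        return {"status": "escalated", "reason": "sensitive topic"}

    # Retrieve the best-matching approved passage; refuse to answer without one.
    best = max(APPROVED_MATERIALS, key=lambda m: overlap_score(question, m["text"]))
    if overlap_score(question, best["text"]) < 0.2:
        return {"status": "escalated", "reason": "no grounded source found"}

    # The citation travels with the reply so students can check the source.
    return {"status": "answered",
            "answer": generate_answer(question, best["text"]),
            "source": best["section"]}

print(answer_question("When are assignments due?"))
```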
It's also important to keep collecting data and improving the AI over time. Track every interaction the AI has with students and label whether the issue was resolved, passed on to a teacher, or corrected. Use this information to improve the AI's answers, update the instructions, and adjust when to involve a teacher. Studies show that chatbots are most helpful when they're treated as tools that can be improved over time. For quality control, give teachers the ability to see how the AI is working and correct it if needed. Before using the AI in a course, test it with real student questions to make sure it's accurate and helpful. According to a study on Tutor CoPilot, students whose tutors used the AI tool were 4 percentage points more likely to master topics, with even greater improvements seen among students working with lower-rated tutors.
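The review loop described above can likewise be made concrete. The sketch below assumes hypothetical field names and outcome labels ("resolved", "escalated", "corrected"); the point is simply that every exchange is logged with an outcome so instructors can track resolution rates and spot the materials that keep producing corrections.

```python
# Sketch of the ongoing-curation loop: log every bot interaction with an outcome
# label, then summarize where escalation and correction are concentrated.
# Field names and labels are illustrative, not a standard schema.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Interaction:
    question: str
    cited_section: Optional[str]
    outcome: str  # "resolved", "escalated", or "corrected"

def review_summary(log: list) -> dict:
    total = len(log) or 1
    outcomes = Counter(item.outcome for item in log)
    # Sections that keep producing corrections are candidates for rewriting,
    # either in the source material or in the bot's instructions.
    problem_sections = Counter(
        item.cited_section for item in log
        if item.outcome == "corrected" and item.cited_section
    )
    return {
        "resolution_rate": outcomes["resolved"] / total,
        "escalation_rate": outcomes["escalated"] / total,
        "sections_needing_review": problem_sections.most_common(3),
    }

log = [
    Interaction("When are assignments due?", "Syllabus 2.1", "resolved"),
    Interaction("Can I get an extension?", None, "escalated"),
    Interaction("What font size is required?", "Formatting Guide 1.3", "corrected"),
]
print(review_summary(log))
```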
Fairness and Policy:
It is important to consider who benefits from AI in education and who may not have equal access to it. Automating routine tasks should give teachers more time to help students who are struggling or have special needs. But if AI is only used in certain courses or programs, it could create even more inequality. To prevent this, policies should connect the use of AI with fairness goals. Schools should report how much time teachers save, how quickly students receive responses, and how often questions are referred to teachers, broken down by student groups.
There also needs to be clear rules and oversight. Governments and organizations should require that AI systems use approved materials, have a clear process for involving teachers, align with learning goals, and protect student data. The U.S. Department of Education says that AI in education should be inspectable, allow for human intervention, and be fair. Providing funding for pilot programs, shared resources, and teacher training will make it easier for all schools to use AI effectively. Without this support, AI will remain a luxury for some rather than a tool for everyone.
The debate over AI in classrooms shouldn't be about banning it or blindly accepting it. Instead, we should focus on designing AI courses that are reliable, transparent, and know when to ask for human help. By preparing carefully, collecting data, and providing ongoing support, schools can free up teachers to focus on the complex tasks that only humans can do well. Leaders should fund shared tools and require transparency. Teachers should demand AI systems that are inspectable and ensure AI helps them teach, rather than replacing them. Students deserve quick, accurate help, and teachers deserve time to teach. We can achieve both by using AI wisely to improve education.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
ACM. (2025). Combining tutoring system data with language model capabilities. ACM Proceedings.
Davard, N. F. (2025). AI chatbots in education: Challenges and opportunities. Information (MDPI).
Kestin, G. (2025). AI tutoring outperforms in-class active learning. Scientific Reports (Nature).
LiveScience. (2025). Analysis: LLMs oversimplify scientific studies.
Systematic review. (2025). Chatbots in education: Objectives and outcomes. Computers & Education (ScienceDirect).
U.S. Department of Education. (2023). Artificial Intelligence and the Future of Teaching and Learning (policy guidance).
The Age of the Reality Notary: How Schools and Institutions Must Learn to Certify Truth
By Ethan McGowan
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence
Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Digital truth can no longer be judged by human sight or sound alone
Institutions must certify reality, not just detect fakes after harm occurs
Education systems now play a central role in rebuilding trust in evidence
In 2024, there was a deepfake attempt about every five minutes. This shows a significant shift in how we see public truth. Almost everyone is worried about being fooled by fake audio or video. These two things together are a warning: fake sights and sounds have quickly become a significant problem. Schools, leaders, and lawmakers need to do more than teach people how to spot fakes. They need to rebuild the ways we trust images, recordings, and documents. We need to create a system of reality keepers—professionals and organizations that can certify trustworthy digital proof. If we don't, we risk losing control over public truth to those who can easily weaponize fake media.
Reality Keepers: Changing the Role of Verification in Education and Organizations
Usually, we think of verification as something experts do after something has been created. An expert examines the details of a film, metadata, or an object, then makes a decision. However, this approach is becoming outdated. Now, fake media is generated by computer programs trained on large datasets of images and audio. These programs can replicate lighting, facial expressions, and even unique speaking styles. Verification is no longer just about finding a single pixel-level error. It's about proving where an image or recording came from and when it was taken. We need to start verifying information earlier, closer to where images and recordings first appear. Schools and organizations need to teach and use methods to verify content sources when they are created and shared, rather than trying to identify fakes later. New verification tools are improving, but they still have issues. Detectors work well on techniques they have already seen, but they struggle with new generation methods and edits. This means we can't rely only on detection. Organizations need verification processes in place to catch what detectors miss.
This is important because it changes who is responsible for the truth. Before, newspapers, courts, and labs were the ones who decided what was real, even though they weren't perfect. Now, schools, online platforms, and service providers need to help certify content as real enough for specific uses. The idea of a reality keeper includes being a technician, an auditor, and a teacher. A reality keeper might create a digital record that includes information about where something came from, when it was created, which device was used, and any changes made. This record would stay with the media. For everyday tasks such as online notarizations, rental agreements, and school assignments, this record could be the difference between trust and fraud. This focus on origin is important because, while detectors are improving, fake media methods are improving even faster. We need to focus on protecting the truth from the outset, not after the damage is done.
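To make the idea concrete, here is a minimal sketch of such a record, assuming a self-contained hash-and-sign approach rather than any particular standard (industry content-credential efforts such as C2PA are the closest real-world analogue). The field names, the shared key, and the helper functions are illustrative only.

```python
# Illustrative provenance record: hash the media, capture context, and sign the
# manifest so later tampering can be detected. This is a simplified stand-in for
# real content-credential standards, not an implementation of any of them.
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-an-institutionally-managed-secret"

def create_provenance_record(media_bytes: bytes, device_id: str, creator: str) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "captured_at": int(time.time()),  # a trusted timestamp service in practice
        "device_id": device_id,
        "creator": creator,
        "edit_history": [],               # appended to whenever the media is modified
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_provenance_record(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the signature is intact and the media still matches its recorded hash."""
    unsigned = {key: value for key, value in manifest.items() if key != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(manifest.get("signature", ""), expected)
            and hashlib.sha256(media_bytes).hexdigest() == manifest["content_sha256"])

video = b"raw bytes of a recording"
record = create_provenance_record(video, device_id="cam-12", creator="notary-042")
print(verify_provenance_record(video, record))          # True
print(verify_provenance_record(video + b"x", record))   # False: content changed
```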
Figure 1: Deepfake activity and digital forgery have moved from isolated incidents to industrial-scale operations in just three years.
The Evidence: The Problem, the Cost, and the Limits of Human Judgment
The numbers show why we need to act quickly. There has been a massive increase in fake content and fraud. Companies that track identity fraud report that deepfake attempts are occurring constantly. Surveys say that people are apprehensive about being tricked. Experts warn that fraud will increase significantly because of generative AI tools unless we take action; losses could grow by billions of dollars across finance, real estate, and business between now and the middle of the decade. These problems are not imaginary. Title insurers, escrow agents, and banks have already reported cases in which fake voices or videos were used to steal funds. Finance is the early warning system here: when fraud becomes too costly, the response is swift, with transactions frozen, rules tightened, and lawsuits filed.
Figure 2: Fear of deepfakes is widespread, but human ability to spot them remains weak and unreliable.
How well people can spot fakes tells another part of the story. Studies show that people are poor at detecting high-quality forgeries. We can't rely on human accuracy anymore. Computer programs can perform well at detecting forgeries, but only in controlled settings. In real-world settings, where attackers continually adapt their methods, detectors perform less effectively, particularly as fake media techniques evolve. This means that organizations that rely on people to detect forgeries will miss more forgeries. Those who rely only on detectors will face problems with false positives and policies that don't work against new attacks. We need a mix of human and computer verification, along with clear steps for verifying information and holding individuals responsible if they don't follow those steps. This is where the role of the reality keeper comes in.
What Schools, Leaders, and Lawmakers Must Do Now to Build Reality Keeper Systems
First, schools need to do more than teach a single lesson on media knowledge. They need to build skills that accumulate over time, from kindergarten through college and professional training. Younger students should learn simple habits, like noting who took a picture, when, and why. Older students should learn to read metadata, understand how information is stored, and practice tracking sources. Professionals such as journalists, notaries, and title agents should verify information, maintain audit logs, and require proof of origin from others. These skills are essential for everyone. We can teach them in practical ways: pairing classroom activities with foundational tools, running simulations in which students create and identify fakes, and offering short professional certificates that require ongoing training. The goal is not to mint legions of image-forensics experts, but to ensure that organizations have simple, repeatable ways to make it harder for fakes to slip through.
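As one concrete classroom exercise along these lines, the short sketch below reads whatever EXIF metadata a photo carries, using the Pillow imaging library; the file name is a placeholder, and many re-encoded or platform-processed images will carry no tags at all, which is itself a useful point of discussion.

```python
# Classroom-scale version of the "read the metadata" habit: list the EXIF tags a
# photo carries (camera model, capture time, editing software). Requires Pillow.
# A stripped or re-encoded image may return an empty dict, which is itself a signal.
from PIL import ExifTags, Image

def describe_photo_metadata(path: str) -> dict:
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

# Example (hypothetical file): tags worth discussing include "Model", "DateTime", "Software".
# print(describe_photo_metadata("field_trip.jpg"))
```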
Second, we should require proof of origin for essential transactions. This proof would include information about the device used to create the content, a secure timestamp, a record of any changes made, and, when needed, a verified identity. For matters such as remote closings, loan approvals, and election materials, organizations should not accept media that doesn't meet a basic standard of proof. The point is not to make small things complicated, but to create easy, automated checks that don't inconvenience legitimate users while making fraud harder and more expensive to commit.
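A hypothetical acceptance gate along these lines could look like the sketch below, which assumes the manifest format from the earlier signing example and treats the thresholds, field names, and transaction categories as placeholders a real policy would have to define.

```python
# Sketch of an automated acceptance gate for high-stakes transactions: media must
# arrive with a provenance manifest (as in the earlier signing sketch) that meets
# a minimum standard before a human ever looks at it. The transaction categories,
# required fields, and age limits are illustrative placeholders.
import time

POLICY = {
    "remote_closing":    {"required": {"content_sha256", "device_id", "verified_identity"}, "max_age_days": 30},
    "loan_approval":     {"required": {"content_sha256", "device_id"}, "max_age_days": 90},
    "course_assignment": {"required": {"content_sha256"}, "max_age_days": 365},
}

def meets_minimum_proof(manifest: dict, transaction_type: str):
    rule = POLICY.get(transaction_type)
    if rule is None:
        return False, f"no policy defined for {transaction_type}"
    missing = rule["required"] - set(manifest)
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    age_days = (time.time() - manifest.get("captured_at", 0)) / 86400
    if age_days > rule["max_age_days"]:
        return False, "capture timestamp too old for this transaction"
    return True, "accepted"

# A closing video with no verified identity is rejected automatically, not argued about.
manifest = {"content_sha256": "abc123", "device_id": "cam-12", "captured_at": time.time()}
print(meets_minimum_proof(manifest, "remote_closing"))  # (False, "missing fields: ['verified_identity']")
```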
Third, we need to create official reality keeper roles and give organizations a reason to use them. Reality keepers would be licensed professionals who work in organizations such as schools, banks, and courts, or who are provided by trusted third parties. Their license would require them to comply with standards for verification, audit practices, and legal responsibilities. Getting accredited wouldn't require advanced technical skills. Still, it would require following standard procedures: capturing proof of origin, tracking the source of information, maintaining basic security knowledge, and handling disagreements. It's essential to give organizations a reason to use reality keepers. According to EasyLearn ING, professionals can strengthen defenses against digital fraud, including deepfakes, by participating in short, practical courses focused on verification skills. Insurance policies and regulations could reward those who consistently use these reality-keeper techniques. Lawmakers should see these programs as essential and fund pilot programs in schools and high-risk areas.
Addressing the Main Concerns
Some might worry that this will be expensive and complicated. But the cost of doing nothing is rising faster. The cost of adding proof of origin to a video or document is small compared to the value of transactions in finance and real estate. Studies show that the extra work is manageable and can be offset by lower fraud losses and insurance savings. Others might say that attackers will adapt, and that verification will never be perfect. That's true, but we don't need perfection. We just need to make it harder for fraud to be worth it. Even a single additional verification step for wire transfers can prevent much of the crime. Some may be concerned about surveillance and privacy. But we can design the proof of origin to include only the necessary information and to use privacy-protecting methods. We need rules and oversight to minimize the use of data. The alternative—allowing fake content to spread unchecked—creates much bigger problems for privacy and democracy by weakening our trust in facts.
A deepfake every five minutes is not just a statistic; it's a call to action. If we let the ways we trust images and recordings weaken, the future will be one where truth is controlled by whoever can create the best fakes. The answer is to teach about origin, require proof of origin for essential exchanges, and create accredited reality keepers who are legally and operationally responsible. Schools should teach the skills, leaders should establish standards, and lawmakers should require basic verification and fund training programs. This won't solve everything, but it will rebuild trust where it has been lost. According to a recent article, a recommended first step is launching phased pilot programs in selected educational settings that use synthetic personas, along with systems to verify the origin of essential materials and dedicated training for roles focused on maintaining digital authenticity. If these initiatives are not implemented, there is a risk that others may create deceptive content and shape public opinion. That would be a loss we can't afford.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Alliant National Title Insurance Co. (2026). Deepfake dangers: How AI trickery is targeting real estate transactions. Blog post, January 20, 2026.
Deloitte. (2024). Generative AI and fraud: Projected economic impacts. Deloitte Advisory Report.
Entrust. (2024). Identity Fraud Report 2025: Deepfake attempts and digital document forgeries. November 19, 2024.
Group-IB. (2025). The anatomy of a deepfake voice phishing attack. Group-IB Blog.
Jumio. (2024). Global Identity Survey 2024 (consumer fear of deepfakes). Press release, May 14, 2024.
Stewart Title. (2025). Deepfake fraud in real estate: What every buyer, seller, and agent needs to know. Stewart Insights, October 30, 2025.
Tasnim, N., et al. (2025). AI-generated image detection: An empirical study and benchmarking. arXiv preprint.
The Death of the Entry-Level Role and the University Mandate
By David O'Neill
Professor of AI/Policy, Gordon School of Business, Swiss Institute of Artificial Intelligence
David O’Neill is a Professor of AI/Policy at the Gordon School of Business, SIAI, based in Switzerland. His work explores the intersection of AI, quantitative finance, and policy-oriented educational design, with particular attention to executive-level and institutional learning frameworks.
In addition to his academic role, he oversees the operational and financial administration of SIAI’s education programs in Europe, contributing to governance, compliance, and the integration of AI methodologies into policy and investment-oriented curricula.
AI is permanently erasing the entry-level roles that once trained new graduates
Public reinvestment funds will fail to rescue these jobs from corporate efficiency measures
Universities must urgently adopt high-intensity training models to prevent a workforce crisis
By 2025, the traditional starting point for young tech professionals looks to be disappearing. A report from the Korea Labor Institute found that while earlier forms of AI were linked to slower growth in full-time, permanent jobs in sectors such as manufacturing between 2018 and 2023, there is not yet evidence that generative AI, such as large language models, caused a fundamental drop in entry-level tech jobs in Korea over that period. Yet these systems can now handle the basic tasks, such as fixing code errors, writing drafts, and cleaning up data, that used to be how new graduates learned the ropes.
Some people suggest creating a special fund to help people get new skills and find different types of apprenticeships. But this idea doesn't account for how companies actually operate, or for the fact that these kinds of training funds haven't worked well in the past. What we're seeing is a real problem: the path from college to a job is breaking down. Schools need to take swift, decisive action to close this gap. If they don't, less reputable bootcamps that make big promises but don't deliver will likely take their place and charge high fees.
Why an AI Retraining Fund Won't Work
The idea of a fund to save junior positions sounds good at first. But in today's political and economic situation, it’s not very realistic. There have been past attempts by governments or groups to tax companies for using automation to pay for worker retraining. These funds rarely work as intended. Right now, companies are trying to cut costs by reducing staff and using AI to improve efficiency. It doesn't make sense to expect them to voluntarily give money to a fund that supports the very entry-level jobs they're trying to replace with AI. Even if a fund like this existed, it would probably take years for the money to be distributed because of all the red tape involved. By the time a training program is approved, the AI tools used in that field will already have changed significantly.
There's also a basic problem of motivation. Companies focus on their results every three months, but people need years to develop their skills. If a company can switch from paying a junior employee $70,000 a year to paying $20 a month (roughly $240 a year) for an AI tool, a small subsidy isn't going to change its mind. This creates a gap in the workforce: we have experienced professionals who can manage AI, and we have students who understand the theory, but there's no longer a clear path for new graduates to gain experience and advance their careers. Depending on donations or taxes to fix this problem ignores the fact that the job market has already moved on. Schools need to step up and accept responsibility, since they're the ones who promise to prepare students for their careers, and they're not keeping up with how quickly the market is changing.
Figure 1: While senior roles remain stable, entry-level positions have decoupled from the broader market, confirming that automation is displacing "learning roles" rather than expert ones.
The Problem with Low-Quality Coding Bootcamps
When schools don't equip people with the skills they need for jobs, other options arise. This was clear during the rise of coding bootcamps from 2015 to 2023. Many private schools popped up promising to turn anyone into a software engineer in just a few months. But the results haven't been that great. In places like South Korea, there are now too many graduates from these coding schools who know the basics of coding but don't really understand computer science or AI.
These programs usually focus on how to use a tool rather than on why it works. When AI is doing more and more, knowing just the how isn't very useful, because the AI already knows it. If you only teach someone to write simple code, you're teaching them to compete with a machine that can do it better, faster, and for free.
The most concerning aspect of this trend is that quality is declining. A good AI training program is hard, and many people won't pass. According to Algocademy, coding bootcamps often lack uniform curricula, which leads to inconsistent course quality and gaps in essential knowledge: students can enroll easily, but there is no guarantee they will graduate job-ready. A 2024 Gartner survey found that marketing teams lead in AI adoption but vary widely in their approaches, a pattern mirrored in how AI bootcamps themselves are marketed. And according to an OECD report on AI and the labor market in Korea, while many programs offer training in areas like prompt engineering or model tuning, they often provide only a basic understanding of the technology. The result is a growing pool of certified individuals who still lack the advanced skills required for the entry-level technical roles that have not yet been automated.
Why Universities Have to Step Up
Schools can't just teach theory and leave the practical training to employers anymore. Employers have made it clear they're unwilling to pay to train new graduates. This means schools need to provide the equivalent of a residency program. They need to include hands-on, challenging projects that are similar to the work experienced professionals do. This isn't simply about adding a few more labs; it's about changing the way schools are funded and taught. Universities need to stop operating in silos and start acting as incubators that help students develop advanced skills. If a student graduates today with only a conceptual understanding of algorithms, they probably won't get a job. They need to demonstrate that they can use AI systems to produce professional-quality work from the outset.
This change is difficult because it's expensive. The Swiss Institute of Artificial Intelligence (SIAI) provides a clear case study of the need for a structural pivot. Faced with the economic reality that a rigorous, high-quality Master’s track in AI operates effectively as a "loss leader," the institute was forced to innovate its underlying business model. The solution was to establish an "Executive AI MBA," aimed at high-level leaders who need to understand AI strategy, to cross-subsidize the intensive, capital-heavy training of research students. This cross-subsidy allows the institute to maintain a completion rate that reflects the true difficulty of the subject matter, preserving elite standards without succumbing to the insolvency that currently threatens traditional academic departments. Most universities aren't willing to make this kind of change yet, but they'll have to if they want to stay relevant.
Figure 2: To produce "Day 1" competent graduates, the cost of GPU compute and expert mentorship exceeds tuition; this deficit must be subsidized by high-margin executive programs.
The Reality of Layoffs and Professional Survival
Some people might say that universities shouldn't be trade schools and should focus on teaching students how to think. That sounds good, but it ignores the reality of student debt and the shortage of jobs. If learning how to think doesn't lead to a job, the university system as we know it will collapse as students choose cheaper, even if lower-quality, options. We can't stop companies from cutting jobs to save money. Efficiency is the primary driver of today's economy. So, the only way to protect the future of the workforce is to ensure that entry-level graduates can perform at the level of someone with several years of experience. This requires a level of rigorous education that most schools aren't prepared to provide.
The message is clear: education leaders need to stop waiting for a government-funded retraining program that's unlikely to materialize. They need to stop competing with low-quality bootcamps by lowering their own standards. Instead, they need to change how they're funded to support challenging, job-ready training. This means accepting the possibility of higher failure rates and the cost of expensive, practical resources. We need to connect schools and industry within the university. If we don't, we'll leave a whole generation of students stuck in a situation where they're too qualified for manual labor but not qualified enough to compete with AI systems taking over their jobs. The time for small changes is over; universities need to become the gatekeepers of professional success.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Bureau of Labor Statistics. (2024). Employment Projections: 2023–2033 Summary. U.S. Department of Labor.
Gartner. (2024). The Impact of Generative AI on the Tech Talent Market. Gartner Research.
International Labour Organization. (2023). Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality. ILO Publishing.
OECD. (2024). OECD Employment Outlook 2024: The AI Revolution in the Workplace. OECD Publishing.
Stanford Institute for Human-Centered AI. (2024). Artificial Intelligence Index Report 2024. Stanford University.
World Economic Forum. (2023). The Future of Jobs Report 2023. WEF.