Make Time Teachable: Scaling Learning with AI-Integrated Courses
Catherine McGuire
Professor of AI/Tech, Gordon School of Business, Swiss Institute of Artificial Intelligence
Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI-integrated courses can handle routine questions and free teachers for higher-value work
Well-designed course bots cut response time without hurting learning quality
The real policy issue is how to govern AI, not whether to use it
Imagine if a big chunk of the questions students ask in an online course could be answered by the course materials themselves, with a little help from a smart computer program. It turns out, this is totally doable! This means teachers could spend less time repeating the same answers and more time on what really matters. So, instead of banning these AI tools, we should figure out how to use them to improve courses, reduce teachers' workloads, and ensure students get the most out of their learning. The goal is to create courses where AI acts like a helpful assistant, providing students with quick, reliable support while freeing up teachers to focus on coaching, grading, and ensuring everyone has a fair chance to succeed. The big question for leaders is whether to embrace these helpful tools or stick to old ways that might not be as good or cost-effective.
Rethinking Automation with AI Courses
People often argue about whether AI is a threat or a miracle cure for education. But the truth is somewhere in between. Let's forget about whether AI should teach and instead focus on what parts of teaching can be automated and how. The best tasks to automate are those that involve many simple questions, such as explaining instructions, giving examples, or checking whether students followed the rules for formatting their work. An AI system can handle these tasks by sticking to the course materials and following clear guidelines. This means teachers get fewer interruptions, students get faster help, and we can see where students are struggling and need more personal attention.
Figure 1: Synthesis of online course Q&A studies; systematic reviews of chatbots in education; conservative estimate from hybrid tutoring trials
Three things are coming together to make this possible: First, AI can now provide clear, helpful answers when asked the right questions. Second, course websites and tools enable us to connect course materials more effectively. Third, studies show that AI tutors can improve learning when used carefully. Instead of banning AI, we should focus on establishing rules and standards to ensure it's used safely and effectively. This way, we can reduce routine work for teachers and let them focus on what makes the biggest difference for students.
What Works and What to Watch Out For
Studies are showing that AI tutors and chatbots can help students learn and save time. But it depends on how they're designed. One study in 2025 found that students learned more and were more engaged when using an AI tutor that adhered to specific teaching methods and used course materials to guide its responses.
Figure 2: Careful system design and ongoing curation sharply increase the instructional time reclaimed by AI-integrated courses.
This means that if a course or program keeps track of common student questions and has clear answers ready, it can use AI to handle many requests accurately. Tests of systems that combine tutoring data with AI show they perform better when the AI follows a script and when teachers review tricky cases. Experts estimate that a course bot can handle about 60–90% of routine questions in its first year. This is based on the fact that many online course questions are repeated and that AI tutors can give accurate answers when used properly. Keep in mind that the more the bot is improved and updated, the better it will perform.
However, we need to be careful. AI can sometimes simplify or misrepresent scientific or technical information if not properly controlled. This can be dangerous in subjects like science and health. To prevent this, we need to ensure the AI uses only course-approved materials and that any complex or important questions are referred to a teacher. AI can save time, but it needs to be managed rather than used as a replacement for teachers.
Creating AI Courses That Students Can Trust
To make AI courses work well, start by carefully planning the course. Identify the learning goals, solutions to problems, common mistakes, grading guidelines, and when to involve a teacher. Then create a collection of approved materials for the AI to use, and write instructions to guide it in providing helpful, accurate answers. Whenever possible, the AI should show students exactly where in the course materials it found the answer. This makes the AI more trustworthy and reduces the likelihood that it makes things up.
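To make the grounding-and-escalation idea concrete, here is a minimal, dependency-free sketch of a course bot that answers only from an approved-materials corpus, cites where each answer came from, and hands anything it cannot ground to a teacher. The corpus, the keyword-overlap scoring, and the threshold are illustrative assumptions, not the design used in any of the studies cited here; a production system would retrieve with embeddings and generate answers with a language model.

```python
# Minimal sketch of a course bot that answers only from approved materials
# and escalates when it cannot ground an answer. The corpus, scoring method,
# and threshold below are illustrative assumptions, not a production design.

from dataclasses import dataclass

@dataclass
class Passage:
    source: str   # where in the course materials this text lives
    text: str

APPROVED_MATERIALS = [
    Passage("Syllabus §2.1", "Assignments are due Fridays at 17:00 local time."),
    Passage("Style guide §4", "Submit reports as PDF, 12-point font, one-inch margins."),
    Passage("Rubric §1", "Late work loses 10% per day unless an extension is approved."),
]

def score(question: str, passage: Passage) -> float:
    """Crude keyword-overlap score; a real system would use embeddings."""
    q_words = set(question.lower().split())
    p_words = set(passage.text.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)

def answer(question: str, threshold: float = 0.25) -> dict:
    """Return a grounded answer with its citation, or escalate to a teacher."""
    best = max(APPROVED_MATERIALS, key=lambda p: score(question, p))
    if score(question, best) < threshold:
        return {"status": "escalate", "reason": "no grounded answer found"}
    return {"status": "answered", "answer": best.text, "citation": best.source}

if __name__ == "__main__":
    print(answer("When are assignments due?"))
    print(answer("Can I get an accommodation for my exam?"))  # should escalate
```

The important design choice is the fallback: when nothing in the approved materials scores above the threshold, the bot refuses to improvise and routes the question to a teacher instead.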
It's also important to keep collecting data and improving the AI over time. Track every interaction the AI has with students and label whether the issue was resolved, passed on to a teacher, or corrected. Use this information to improve the AI's answers, update the instructions, and adjust when to involve a teacher. Studies show that chatbots are most helpful when they're treated as tools that can be improved over time. For quality control, give teachers the ability to see how the AI is working and correct it if needed. Before using the AI in a course, test it with real student questions to make sure it's accurate and helpful. According to a study on Tutor CoPilot, students whose tutors used the AI tool were 4 percentage points more likely to master topics, with even greater improvements seen among students working with lower-rated tutors.
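A hedged sketch of what that interaction log might look like in practice: the field names, outcome labels, and example records below are invented for illustration, but the loop is the one described above, namely label every exchange, measure how often the bot resolved it, and surface the topics that keep getting escalated or corrected so the approved materials can be updated.

```python
# Illustrative sketch of an interaction log and a simple review loop. The
# field names and example records are assumptions for demonstration only.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Interaction:
    question: str
    topic: str
    outcome: str  # "resolved", "escalated", or "corrected"

log = [
    Interaction("When is HW2 due?", "deadlines", "resolved"),
    Interaction("Why was my proof marked wrong?", "grading", "escalated"),
    Interaction("What citation format do we use?", "formatting", "corrected"),
    Interaction("Is the final cumulative?", "exams", "resolved"),
]

outcomes = Counter(i.outcome for i in log)
total = len(log)
print(f"resolved without help: {outcomes['resolved'] / total:.0%}")
print(f"escalated to a teacher: {outcomes['escalated'] / total:.0%}")

# Topics that are escalated or corrected most often are the first candidates
# for new approved materials or revised bot instructions.
needs_review = Counter(i.topic for i in log if i.outcome != "resolved")
print("topics to review:", needs_review.most_common(3))
```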
Fairness and Policy
It is important to consider who benefits from AI in education and who may not have equal access to it. Automating routine tasks should give teachers more time to help students who are struggling or have special needs. But if AI is only used in certain courses or programs, it could create even more inequality. To prevent this, policies should connect the use of AI with fairness goals. Schools should report how much time teachers save, how quickly students receive responses, and how often questions are referred to teachers, broken down by student groups.
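As a small illustration of that reporting requirement, the sketch below aggregates response time and escalation rate by student group. The groups and records are hypothetical; a real report would use the institution's own categories and privacy safeguards.

```python
# Hypothetical per-group reporting sketch: the groups, records, and numbers
# below are invented for illustration, not real student data.

from statistics import mean

records = [
    {"group": "evening section", "response_minutes": 3, "escalated": False},
    {"group": "evening section", "response_minutes": 40, "escalated": True},
    {"group": "day section", "response_minutes": 2, "escalated": False},
    {"group": "day section", "response_minutes": 4, "escalated": False},
]

for group in sorted({r["group"] for r in records}):
    subset = [r for r in records if r["group"] == group]
    avg_response = mean(r["response_minutes"] for r in subset)
    escalation_rate = sum(r["escalated"] for r in subset) / len(subset)
    print(f"{group}: avg response {avg_response:.1f} min, escalation rate {escalation_rate:.0%}")
```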
There also needs to be clear rules and oversight. Governments and organizations should require that AI systems use approved materials, have a clear process for involving teachers, align with learning goals, and protect student data. The U.S. Department of Education says that AI in education should be inspectable, allow for human intervention, and be fair. Providing funding for pilot programs, shared resources, and teacher training will make it easier for all schools to use AI effectively. Without this support, AI will remain a luxury for some rather than a tool for everyone.
The debate over AI in classrooms shouldn't be about banning it or blindly accepting it. Instead, we should focus on designing AI courses that are reliable, transparent, and know when to ask for human help. By preparing carefully, collecting data, and providing ongoing support, schools can free up teachers to focus on the complex tasks that only humans can do well. Leaders should fund shared tools and require transparency. Teachers should demand AI systems that are inspectable and ensure AI helps them teach, rather than replacing them. Students deserve quick, accurate help, and teachers deserve time to teach. We can achieve both by using AI wisely to improve education.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
ACM. (2025). Combining tutoring system data with language model capabilities. ACM Proceedings.
Davard, N. F. (2025). AI Chatbots in Education: Challenges and Opportunities. Information (MDPI).
Kestin, G. (2025). AI tutoring outperforms in-class active learning. Nature (Scientific Reports).
Research on LLM summarization errors. (2025). Analysis: LLMs oversimplify scientific studies. LiveScience.
Systematic review. (2025). Chatbots in education: objectives and outcomes. Computers & Education (ScienceDirect).
U.S. Department of Education. (2023). Artificial Intelligence and the Future of Teaching and Learning (policy guidance).
The Age of the Reality Notary: How Schools and Institutions Must Learn to Certify Truth
Ethan McGowan
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence
Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
Digital truth can no longer be judged by human sight or sound alone
Institutions must certify reality, not just detect fakes after harm occurs
Education systems now play a central role in rebuilding trust in evidence
In 2024, a deepfake attempt was detected roughly every five minutes, and surveys show that almost everyone worries about being fooled by fake audio or video. Taken together, these two facts are a warning: fake sights and sounds have quickly become a significant problem, and they are changing how public truth is established. Schools, leaders, and lawmakers need to do more than teach people how to spot fakes. They need to rebuild the ways we trust images, recordings, and documents. We need to create a system of reality keepers: professionals and organizations that can certify trustworthy digital proof. If we don't, we risk losing control over public truth to those who can easily weaponize fake media.
Reality Keepers: Changing the Role of Verification in Education and Organizations
Usually, we think of verification as something experts do after something has been created. An expert examines the details of a film, metadata, or an object, then makes a decision. However, this approach is becoming outdated. Fake media is now generated by programs trained on large datasets of images and audio, and these programs can replicate lighting, facial expressions, and even unique speaking styles. Verification is no longer just about finding a single-pixel error; it is about proving the source of an image or recording and when it was taken. We need to start verifying information earlier, closer to where images and recordings first appear. Schools and organizations need to teach and use methods that verify content sources when material is created and shared, rather than trying to identify fakes later. New verification tools are improving, but they still have issues: detectors work well on methods they have already seen, but struggle with new techniques and alterations. This means we cannot rely on detection alone; organizations also need provenance processes that establish where content came from in the first place.
This is important because it changes who is responsible for the truth. Before, newspapers, courts, and labs were the ones who decided what was real, even though they weren't perfect. Now, schools, online platforms, and service providers need to help certify content as real enough for specific uses. The idea of a reality keeper includes being a technician, an auditor, and a teacher. A reality keeper might create a digital record that includes information about where something came from, when it was created, which device was used, and any changes made. This record would stay with the media. For everyday tasks such as online notarizations, rental agreements, and school assignments, this record could be the difference between trust and fraud. This focus on origin is important because, while detectors are improving, fake media methods are improving even faster. We need to focus on protecting the truth from the outset, not after the damage is done.
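A minimal sketch of the kind of record a reality keeper might attach at capture time appears below. The fields, the shared signing key, and the HMAC-based signature are assumptions made for illustration; real provenance standards such as C2PA content credentials use certificate-based signatures and standardized manifests. The idea is the same: bind a hash of the content to its origin details so later tampering is detectable.

```python
# Minimal sketch of a provenance record attached at capture time. The record
# fields, the shared secret, and the sign/verify flow are illustrative
# assumptions, not a description of any existing standard.

import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-held-by-the-capturing-device"  # assumption for this sketch

def make_provenance_record(media_bytes: bytes, device_id: str, edits: list[str]) -> dict:
    """Build a signed record of what was captured, by which device, and when."""
    record = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device_id": device_id,
        "captured_at": int(time.time()),
        "edit_history": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify(media_bytes: bytes, record: dict) -> bool:
    """Check that the record is unaltered and still matches the file."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    untampered = hmac.compare_digest(record.get("signature", ""), expected)
    matches_file = record.get("content_sha256") == hashlib.sha256(media_bytes).hexdigest()
    return untampered and matches_file

if __name__ == "__main__":
    clip = b"...raw video bytes..."
    rec = make_provenance_record(clip, device_id="camera-0042", edits=["crop 16:9"])
    print(verify(clip, rec))              # True: the file matches its record
    print(verify(clip + b"tamper", rec))  # False: the content no longer matches
```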
Figure 1: Deepfake activity and digital forgery have moved from isolated incidents to industrial-scale operations in just three years.
The Evidence: The Problem, the Cost, and the Limits of Human Judgment
The numbers show why we need to act quickly. There has been a massive increase in fake content and fraud. Companies that track identity fraud report that deepfake attempts are occurring frequently, and surveys say that people are apprehensive about being tricked. Experts warn that fraud will increase significantly because of these new tools unless we take action; across finance, real estate, and business, losses could rise by billions of dollars between now and the middle of the decade. These problems are not imaginary. Title insurers, escrow agents, and banks have already reported cases in which fake voices or videos were used to steal funds. Finance is the early warning here: when fraud becomes too costly, the response is swift. Transactions are frozen, rules are tightened, and lawsuits are filed.
Figure 2: Fear of deepfakes is widespread, but human ability to spot them remains weak and unreliable.
How well people can spot fakes tells another part of the story. Studies show that people are poor at detecting high-quality forgeries. We can't rely on human accuracy anymore. Computer programs can perform well at detecting forgeries, but only in controlled settings. In real-world settings, where attackers continually adapt their methods, detectors perform less effectively, particularly as fake media techniques evolve. This means that organizations that rely on people to detect forgeries will miss more forgeries. Those who rely only on detectors will face problems with false positives and policies that don't work against new attacks. We need a mix of human and computer verification, along with clear steps for verifying information and holding individuals responsible if they don't follow those steps. This is where the role of the reality keeper comes in.
What Schools, Leaders, and Lawmakers Must Do Now to Build Reality Keeper Systems
First, schools need to do more than teach a single lesson on media knowledge. They need to create skills that build over time, from kindergarten through college and professional training. Younger students should learn simple habits, like noting who took a picture, when, and why. Older students should learn to read metadata, understand how information is stored, and practice tracking sources. Professionals such as journalists, notaries, and title agents should verify information, maintain audit logs, and require proof of origin from others. These skills are essential for everyone. We can teach these skills in practical ways, such as pairing classroom activities with foundational tools, running simulations in which students create and identify fakes, and offering short professional certificates that require ongoing training. The goal is not to create many image verification experts, but to ensure that organizations have simple, repeatable ways to make it harder to create fakes.
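One way to make the metadata habit concrete in a classroom is the small exercise sketched below. It assumes the Pillow library and a JPEG that still carries EXIF data (many platforms strip these fields on upload), and the file name is hypothetical; the lesson is as much about absent fields as present ones.

```python
# Classroom-style sketch: inspect the metadata a photo carries about its own
# origin. Assumes the Pillow library (pip install pillow) and a JPEG that
# still has EXIF data; many messaging apps strip these fields on upload.

from PIL import Image, ExifTags

def describe_origin(path: str) -> dict:
    img = Image.open(path)
    exif = img.getexif()
    readable = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    # Fields students should learn to look for; absence is also a finding.
    return {key: readable.get(key) for key in ("Make", "Model", "DateTime", "Software")}

if __name__ == "__main__":
    print(describe_origin("classroom_sample.jpg"))  # hypothetical file name
```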
Second, we should require proof of origin for essential transactions. This proof would include information about the device used to create the content, a secure timestamp, a record of any changes made, and, when needed, a verified identity. For matters such as remote closings, loan approvals, and election materials, organizations should not accept media that fails to meet a basic standard of proof. The point is not to make small things complicated, but to create simple, automated checks that do not inconvenience legitimate users while making fraud harder and more expensive to commit.
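A sketch of what such an automated check could look like follows; the required fields mirror the paragraph above, and the exact policy (which fields are mandatory, and when a verified identity is required) is an illustrative assumption rather than an existing standard.

```python
# Sketch of an automated acceptance check for high-stakes transactions. The
# required fields mirror the text above; the exact policy is an assumption.

REQUIRED_FIELDS = ("content_sha256", "device_id", "captured_at", "edit_history", "signature")

def meets_basic_standard(record: dict, require_identity: bool = False) -> tuple[bool, list[str]]:
    """Return (accepted, list of reasons for rejection)."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS if field not in record]
    if require_identity and "verified_identity" not in record:
        problems.append("missing verified_identity")
    return (not problems, problems)

# Example: a remote closing demands a verified identity; a routine upload does not.
incomplete = {"content_sha256": "abc123", "device_id": "cam-7"}
print(meets_basic_standard(incomplete, require_identity=True))
```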
Third, we need to create official reality keeper roles and give organizations a reason to use them. Reality keepers would be licensed professionals who work in organizations such as schools, banks, and courts, or who are provided by trusted third parties. Their license would require them to comply with standards for verification, audit practices, and legal responsibilities. Getting accredited wouldn't require advanced technical skills. Still, it would require following standard procedures: capturing proof of origin, tracking the source of information, maintaining basic security knowledge, and handling disagreements. It's essential to give organizations a reason to use reality keepers. According to EasyLearn ING, professionals can strengthen defenses against digital fraud, including deepfakes, by participating in short, practical courses focused on verification skills. Insurance policies and regulations could reward those who consistently use these reality-keeper techniques. Lawmakers should see these programs as essential and fund pilot programs in schools and high-risk areas.
Addressing the Main Concerns
Some might worry that this will be expensive and complicated. But the cost of doing nothing is rising faster. The cost of adding proof of origin to a video or document is small compared to the value of transactions in finance and real estate. Studies show that the extra work is manageable and can be offset by lower fraud losses and insurance savings. Others might say that attackers will adapt, and that verification will never be perfect. That's true, but we don't need perfection. We just need to make it harder for fraud to be worth it. Even a single additional verification step for wire transfers can prevent much of the crime. Some may be concerned about surveillance and privacy. But we can design the proof of origin to include only the necessary information and to use privacy-protecting methods. We need rules and oversight to minimize the use of data. The alternative—allowing fake content to spread unchecked—creates much bigger problems for privacy and democracy by weakening our trust in facts.
A deepfake every five minutes is not just a statistic; it's a call to action. If we let the ways we trust images and recordings weaken, the future will be one where truth is controlled by whoever can create the best fakes. The answer is to teach about origin, require proof of origin for essential exchanges, and create accredited reality keepers who are legally and operationally responsible. Schools should teach the skills, leaders should establish standards, and lawmakers should require basic verification and fund training programs. This won't solve everything, but it will rebuild trust where it has been lost. According to a recent article, a recommended first step is launching phased pilot programs in selected educational settings that use synthetic personas, along with systems to verify the origin of essential materials and dedicated training for roles focused on maintaining digital authenticity. If these initiatives are not implemented, there is a risk that others may create deceptive content and shape public opinion. That would be a loss we can't afford.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Deloitte. (2024). Generative AI and Fraud: Projected economic impacts. Deloitte Advisory Report.
Entrust. (2024). Identity Fraud Report 2025: Deepfake attempts and digital document forgeries. Entrust, November 19, 2024.
Jumio. (2024). Global Identity Survey 2024 (consumer fear of deepfakes). Jumio Press Release, May 14, 2024.
Alliant National Title Insurance Co. (2026). Deepfake Dangers: How AI Trickery Is Targeting Real Estate Transactions. Blog post, January 20, 2026.
Tasnim, N., et al. (2025). AI-Generated Image Detection: An Empirical Study and Benchmarking. arXiv preprint.
Group-IB. (2025). The Anatomy of a Deepfake Voice Phishing Attack. Group-IB Blog.
Stewart Title. (2025). Deepfake Fraud in Real Estate: What Every Buyer, Seller, and Agent Needs to Know. Stewart Insights, October 30, 2025.