Light for Sale: Reframing the Data Center Community Impact as a Power and Trust Problem
Keith Lee
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence
Keith Lee is a Professor of AI/Finance at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). His work focuses on AI-driven finance, quantitative modeling, and data-centric approaches to economic and financial systems. He leads research and teaching initiatives that bridge machine learning, financial mathematics, and institutional decision-making.
He also serves as a Senior Research Fellow with the GIAI Council, advising on long-term research direction and global strategy, including SIAI’s academic and institutional initiatives across Europe, Asia, and the Middle East.
AI data centers are straining local power systems
Donations cannot replace enforceable community agreements
Real benefits require binding commitments to the grid
In 2023, U.S. data centers consumed about 4.4% of the country's electricity, and most forecasts expect that share to rise sharply over the next five years. Now, 4.4% might not sound like much, but it's enough to make any mayor, school head, or utility company think hard about energy bills and possible power shortages in the winter. Here's the thing: when companies build these huge AI data centers, they're not merely setting up servers. They're changing the local energy landscape, the town's negotiating position, and public trust in the system. So, those donations to schools, promises of jobs, and fancy community events? They're nice, but they don't replace solid, legally binding agreements that address energy and the community's needs directly. If we keep treating these donations as the only measure of data center community impact, we're letting private deals obscure risks everyone shares. This is about changing the way we talk about the issue. Community benefits matter, but only if they're tied to clear, verifiable commitments on energy, jobs, and governance. That's how we protect regular people from being left in the dark.
Why Charity Isn't Enough
We all know the usual routine: a big company promises money for schools, says it will train local people, and maybe builds a nice public space. These things do help. Schools get supplies, some people get jobs, and some groups get more money. But these are often one-time gestures that don't solve the main problem: the huge amount of electricity needed to run AI and the decisions about how to supply that energy reliably. In the U.S., data centers are using more and more electricity; this is no longer a mere possibility. Government reports and international energy agencies all show that this demand is climbing fast. When we treat community benefits as decoration, rather than as legally tied to energy outcomes, we're making a deal where the community takes on the risks while the companies get good publicity. That gap between what looks good and what's really required is the main thing we need to fix.
What happens because of this? Places with many AI data centers experience higher energy demand, less available energy, and higher prices when there are issues with energy delivery or fuel. Energy companies and studies warn of real risks: winter peaks, cold weather, and fuel shortages can disrupt the system when these large new energy users come online. The usual response – thank-you ceremonies and awards – doesn't do anything to make more electricity, improve energy delivery, or create reserve energy options. A real data center community impact plan connects those gifts to long-term investments that change the energy supply: promises of energy capacity, on-site energy generation with strict rules about pollution and reliability, and legal support for energy grid upgrades. These are very different from just handing out donations.
Figure 1: Data centers are moving from a marginal load to a system-level stressor within a single decade.
Looking at the Money Behind the Gifts
To create enforceable deals, we must understand the costs and what matters in negotiations. Building an AI data center can cost hundreds of millions, sometimes over half a billion, for a typical-sized facility. Equipment, especially high-powered computer processors and energy infrastructure, is a high cost up front.
Figure 2: Community donations remain financially marginal compared to hardware and power investments.
That's why companies talk about local benefits: jobs and donations are cheap compared to long-term investments in energy. But these gestures can't replace what the community needs to keep the lights on. When a data center needs new energy sources or major upgrades to the energy grid, the community has to wait years for approvals and construction before it sees any benefit. And the community's trust is easily broken if the only visible benefits are temporary and cannot be enforced.
Smart community benefit agreements can change this situation. Instead of accepting a one-time payment for approval, local governments should require commitments contingent on specific conditions: starting operations only when the energy grid is ready, contributing to energy delivery or generation, and clear job-training programs with defined employment and wage targets.
This changes who the community is in the data center community impact, from people who watch the PR to people who sign contracts with the right to check on things. It's important to remember that many companies don't have unlimited money. Hardware costs and financial cycles limit their cash. That means getting creative with money – using public and private funds, long-term energy contracts, or even local bonds – can assist in bridging the gap while keeping the risks public and the benefits enforceable.
Governance, Fairness, and the Energy Grid
Saying that community benefits should be part of a contract isn't simply a technical point. It's about fairness. Voluntary donations often go to visible places – schools, parks, sports fields – while less obvious problems land elsewhere: small businesses paying higher rates, renters living near power stations, or communities dealing with construction disruption.
Binding agreements can specify how benefits are shared, require community monitoring, and fund investments that reduce local risks, such as helping people make their homes more energy-efficient or providing targeted assistance with energy bills. The more that deals link company actions to scheduled grid improvements, the less likely it is that a whole area will have to cut back on energy use to keep the system working. This puts the community at the center of the discussion, not on the sidelines.
To make this happen, companies, energy providers, and local leaders need to share responsibility for planning over longer horizons than companies usually work with. Managing data center community impact means creating new systems: regional groups that include community representatives, monitoring systems that can be enforced, and rules that stop operations if energy reliability goals aren't met. These aren't unusual things. They're similar to environmental or labor agreements used in other areas. Using them here would turn donations into tools that lock in investments in energy capacity and reliability. In short, charity becomes powerful when it's combined with enforceable engineering.
Addressing the Criticisms: Profit, Speed, and Competition
Two common complaints will come up. First, that this will slow investment and cost jobs. Second, that tougher deals will send projects to other countries or to friendlier states during the AI boom. Both deserve honest answers. On the first: Solar Power World reports that nearly 2,600 gigawatts of new power generation and energy storage are already waiting for grid interconnection across the United States, so the real bottleneck is the grid itself, not developer appetite. Setting permit requirements in stages that align with business schedules, and verifying that grid upgrades stay on track, protects jobs and communities without stalling projects. It ensures that the benefits occur at the same time as the impacts.
On the second point, the notion that any U.S. restriction will cause us to lose ground to competitors is exaggerated. The world needs secure, well-regulated data centers, and that favors places that combine reliability with clear rules. Investors want predictability, and communities want certainty. A report from Data Center Watch notes that in the past three months alone, 20 data center projects worth $98 billion faced delays or blockage due to local opposition. This highlights the importance of reducing political risk through careful planning and addressing community concerns to improve the chances of long-term project success.
There's also a technical argument that companies often use: we'll solve this with on-site gas generation or batteries. Many companies are indeed using on-site fuel or hybrid setups to meet immediate needs. This can reduce short-term stress on the grid, but it can also increase pollution if natural gas is used without strict limits. Recent reports indicate that more operators are turning to natural gas for short-term reliability, raising environmental concerns that communities must weigh. Good community benefit agreements include conditions about pollution and efficiency, and a clear plan for adding renewable energy and storage as the energy grid improves. The goal isn't to ban all temporary solutions, but to make them conditional, transparent, and connected to progress toward cleaner energy.
A Policy for Light and Trust
If 4.4% seems abstract, imagine a winter evening when the city asks people to use less electricity because a few new data centers are drawing a lot of power. That's the risk we take when we stop the conversation at charitable giving. A real data center community impact policy treats benefits as binding agreements that protect public resources and share the gains fairly. Cities and counties should require operations to come online in stages based on verified grid upgrades, demand financing plans that support long-term energy capacity, and insist on governance systems that give residents the right to check on things and real ways to fix problems.
To be clear: donations, training programs, and community projects are good things. But they only become fair when they're part of enforceable agreements that precede, not follow, company operations. We can welcome new ideas without losing power. The choice isn't between jobs and reliability. It's between careful, enforceable partnerships and the slow loss of citizen trust when charity tries to fill the gaps that infrastructure investment should cover. Let's create rules that make the benefits of AI real and lasting, not just a show.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Department of Energy. (2024). DOE Releases New Report Evaluating Increase in Electricity Demand from Data Centers. U.S. Department of Energy.
International Energy Agency. (2024). Energy and AI: Energy Demand from AI. IEA.
Lancaster City. (2025). Lancaster AI Hub: Community Benefits Agreement (Draft). City of Lancaster (PA).
National Association for the Advancement of Colored People (NAACP). (2026). Community Benefits Agreement Template. NAACP.
Reuters. (2026, January 28–29). Forecast record electricity demand to test largest US power grid; US faces growing risks of power outages due to rising winter demand, changing fuel mix. Reuters.
Cleanview / Axios. (2026, February). The AI boom is making natural gas great again (analysis of planned on-site power equipment).
Cushman & Wakefield. (2025). Data Center Development Cost Guide 2025. Cushman & Wakefield.
Industry market reports on GPU and data center equipment costs (2024–2025), including market surveys and pricing guides.
LLM-powered tutoring and the Discreet Reordering of Teaching
Ethan McGowan
Professor of AI/Finance, Gordon School of Business, Swiss Institute of Artificial Intelligence
Ethan McGowan is a Professor of AI/Finance and Legal Analytics at the Gordon School of Business, SIAI. Originally from the United Kingdom, he works at the frontier of AI applications in financial regulation and institutional strategy, advising on governance and legal frameworks for next-generation investment vehicles. McGowan plays a key role in SIAI’s expansion into global finance hubs, including oversight of the institute’s initiatives in the Middle East and its emerging hedge fund operations.
LLM-powered tutoring is already automating routine teaching at scale
The core challenge is redesigning education labor and governance around AI
Without reinvestment in human expertise, automation will widen inequality
Picture classrooms where AI tutors are helping students learn. These aren't just high-tech gadgets; they're changing how teaching works. An AI tutor can answer dozens of student questions in the time it takes a teacher to grade a single paper. Since 2023, studies have shown that students using AI tutors improve their skills in practice and analytical reasoning. With AI handling quick answers and explanations, schools can free teachers for more critical tasks, such as understanding each student's needs and creating better lessons. The big question isn't whether AI can teach, but how schools will use experienced teachers once AI takes over some of the routine work. If we don't plan carefully, AI could make education even more unequal and lessen the demand for skilled teachers who can create engaging lessons. AI-powered tutoring is already changing who does the teaching.
Rethinking Teaching
We need to rethink teaching. Instead of seeing it as a single job, we should view it as a team effort. AI tutors can handle simple, repetitive tasks, such as giving examples, answering basic questions, and providing quick feedback. Human teachers can then focus on designing the curriculum, determining each student's needs, and supporting students with social and emotional challenges. Many large educational programs already divide tasks among different roles. So, instead of asking whether AI will replace teachers, we should ask which tasks are better suited to machines and which require human involvement.
Two important considerations are teacher pay and the rate at which AI is being adopted. Teachers in many developed countries don't earn as much as other professionals with similar levels of education. A Forbes report notes that, while AI tools have been increasingly adopted in schools between 2024 and 2025, many teachers feel unprepared: 76% of teachers in the UK and 69% in the US report receiving little to no formal AI training from their schools. This means that the choices we make now will shape how students learn for years to come.
Figure 1: LLM-powered tutoring shifts human effort away from repetition toward design and intervention, changing teaching from delivery to system stewardship.
Changing Incentives
Thinking about teaching in this new way changes how everyone in a school system acts. If administrators can use AI to provide basic instruction, they will face pressure to reduce the number of human teachers. This is more likely to happen where teachers are paid poorly and oversight is thin. We've seen schools fall back on pre-made lesson plans when money is tight. AI is just a faster and cheaper way to do the same thing.
But if we see teaching as a team effort, we can see where human teachers are most valuable: creating lessons and assisting students. Expert teachers should focus on designing the AI tutoring system, choosing the best materials, and helping students who are struggling. These tasks require more skill and have a bigger impact than simply following a script. Teachers who do these tasks should be paid more and get more support. If we don't reward these roles, AI will create a system in which some teachers are highly skilled, and others are not, widening the gap between rich and poor schools.
In places where school districts already decide on the curriculum, it will be easier to use AI because everyone can share and check the same materials. In the classroom, teachers can spend more time mentoring students, giving feedback, and supporting their emotional needs. These are things that machines can't do well. For those in charge of education, this means changing how we license and evaluate teachers to recognize the value of lesson design and mentoring. Sharing knowledge and materials lowers the cost of maintaining high-quality education across all schools. It also enables hiring smaller, highly skilled teams to manage the AI tutoring system.
What the Evidence Shows
The evidence from 2023 to 2025 shows both the benefits and drawbacks of AI tutoring. Studies have found that AI tutors can improve practice, understanding, and time spent on learning when used with an appropriate curriculum. A 2025 study in Nature found that an AI tutor outperformed traditional active learning in the classroom, particularly for practice-based tasks. Reviews of intelligent tutoring systems show mixed results overall, but they are more positive when the AI aligns well with the curriculum, provides clear feedback, and offers support. This is why elementary and high schools have adopted AI tutoring more quickly than colleges: their curricula are often more structured, making it easier to evaluate the AI's performance.
Figure 2: Curriculum grounding and task structure reduce hallucination risk by more than half across most educational tasks.
There are two essential things to keep in mind. First, AI tutoring can work well if schools invest in ensuring it aligns with the curriculum. This means creating shared materials, practice questions, and regular checks to ensure the AI helps students meet learning goals. Second, the best results happen when AI tutors work with human teachers. Humans set the goals, track progress, and step in when needed. This raises a question: if this mixed approach is most effective, how should we pay and promote teachers who contribute to it? If we don't address this, the blended approach could become a justification for cost cuts. AI would take over routine work, leaving the remaining human teachers underpaid and unsupported. We need to invest in both technology and people, not treat them as substitutes for each other.
The benefits of AI tutoring vary by subject and grade level. Subjects such as mathematics and tasks involving repetition show greater improvement than subjects such as writing or history. This doesn't mean AI can't help with writing or historical thinking. Still, it requires clear guidelines, scoring rubrics, and human review to ensure the feedback is valuable rather than merely superficial. We should focus on using AI in areas where it aligns more easily with the curriculum, while also testing it in other areas under careful human oversight.
The Risks of AI Tutors
One of the most significant risks of AI tutors is that they can make mistakes. AI can provide answers that appear correct but are actually incorrect. Research from 2020 to 2025 has examined these errors, their causes, and ways to address them. But there's no single solution that eliminates the problem. The risk of errors depends on the situation. AI is less likely to make mistakes in organized tasks with reliable sources than in open-ended research questions. However, even a small number of confident errors can damage trust, lead to incorrect learning, and compromise data analysis. Schools need to plan for errors as a regular part of AI use.
To reduce errors, we need to use several safeguards:
Grounding: Make sure the AI uses only approved, up-to-date sources.
Human escalation: Route uncertain or high-stakes questions to trained staff.
Transparent audit trails: Keep records of every answer, linking it to the curriculum and source.
Changing the way we prompt the AI and using strategies to improve its information retrieval can reduce errors. But this requires ongoing work: maintaining prompt libraries, checking information sources, and updating knowledge bases. This means we need fewer routine instructors and more people who can design curricula, create prompts, and monitor the AI's performance. A small, skilled team can keep the AI tutoring system accurate for many tasks, as long as they have the authority and funding to fix problems when they arise.
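To make these safeguards concrete, here is a minimal sketch in Python of how grounding, escalation, and audit logging might be wired together. It is illustrative only: retrieve_passages and ask_model are hypothetical stand-ins for whatever retrieval service and model API a district actually uses, and the confidence threshold is an assumed number a team would tune against audited error rates.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical stand-ins for a district's actual retrieval and model services.
def retrieve_passages(question, knowledge_base):
    """Return approved course passages relevant to the question (grounding)."""
    words = set(question.lower().split())
    scored = [(len(words & set(p["text"].lower().split())), p) for p in knowledge_base]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for score, p in scored[:3] if score > 0]

def ask_model(question, passages):
    """Placeholder for an LLM call constrained to the passages.
    Returns (answer_text, confidence between 0 and 1)."""
    if not passages:
        return "I could not find this in the approved materials.", 0.0
    return f"Based on {passages[0]['source']}: {passages[0]['text']}", 0.8

@dataclass
class AuditRecord:
    timestamp: float
    question: str
    sources: list
    answer: str
    confidence: float
    routed_to_human: bool

def answer_with_safeguards(question, knowledge_base, audit_log, threshold=0.7):
    passages = retrieve_passages(question, knowledge_base)   # grounding
    answer, confidence = ask_model(question, passages)
    routed = confidence < threshold                          # human escalation
    record = AuditRecord(time.time(), question,
                         [p["source"] for p in passages],
                         answer, confidence, routed)
    audit_log.append(asdict(record))                         # transparent audit trail
    if routed:
        return "This question has been sent to an instructor."
    return answer

# Minimal usage example with a toy knowledge base.
kb = [{"source": "Unit 3, p. 12",
       "text": "Photosynthesis converts light energy into chemical energy."}]
log = []
print(answer_with_safeguards("What does photosynthesis convert?", kb, log))
print(json.dumps(log, indent=2))
```

The point of the structure is that every answer either carries its sources or is handed to a person, and either way it leaves a record that can be audited later.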
For example, if an AI tutor invents a formula or makes a false historical claim, it can lead a student down the wrong path, which is hard to correct. Once trust is lost in a classroom, it's hard to regain, as parents and administrators expect accountability. That's why it's essential to have error transparency, clear ways to correct mistakes, and a system for sending uncertain outputs to humans. These requirements change how schools should buy AI systems. They should choose modular, auditable systems with shared knowledge repositories rather than systems that are difficult to understand and trace.
The Future of Teaching
The economics of school budgets explain why AI adoption will speed up. According to a memo from Commissioner Anastasios Kamoutsas, some teachers have not received pay increases because of what he described as "unnecessary and prolonged contract negotiations" by unions. Where pay is already contested, automation looks like a way to maintain basic instruction while reducing costs. But this has consequences. According to Keith Lee, while AI can help teachers save time on tasks like creating rubrics or outlining lessons, these productivity gains will only benefit students and teachers if schools change their processes so that the regained time leads to better feedback, stronger curricula, and fairer outcomes instead of being diverted elsewhere. Linking automation to deliberate reinvestment is essential to make sure these benefits serve the public good.
A practical plan rests on three things. First, set standards for how AI should be used. These standards should include clear learning targets, permissible error rates, human escalation procedures, and public reporting of outcomes. Second, fund regional centers that can provide vetted knowledge bases and compliance services. This will help smaller districts access high-quality AI systems without taking on too much risk. These centers can also provide shared evaluation and benchmarks for comparing results across districts. Third, change teachers' career paths. Create and appropriately compensate roles for curriculum design, system management, and student support. There should be clear paths for teachers to move from the classroom into roles such as curriculum engineer or intervention specialist, with corresponding pay increases. Policymakers should test reinvestment clauses linked to AI pilots, so that savings are used to support people and programs.
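As a hedged illustration of the first step, a usage standard could be written down in a machine-checkable form so that audits are mechanical rather than rhetorical. The Python sketch below is hypothetical; the field names and thresholds are assumptions, not any existing regulation.

```python
from dataclasses import dataclass

@dataclass
class AIUsageStandard:
    """Hypothetical district standard for an AI tutoring deployment."""
    learning_targets: list             # e.g., state standards the tool must map to
    max_error_rate: float              # permissible share of audited answers found wrong
    escalation_contact_hours: float    # max hours before a routed question reaches a human
    public_reporting_period_days: int  # how often outcome reports are published

    def compliant(self, observed_error_rate: float, mean_escalation_hours: float) -> bool:
        return (observed_error_rate <= self.max_error_rate
                and mean_escalation_hours <= self.escalation_contact_hours)

# Example thresholds; the numbers are illustrative, not recommendations.
standard = AIUsageStandard(
    learning_targets=["Algebra I: linear equations"],
    max_error_rate=0.02,
    escalation_contact_hours=24.0,
    public_reporting_period_days=90,
)
print(standard.compliant(observed_error_rate=0.015, mean_escalation_hours=12.0))  # True
```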
There are lessons from the past. When standardized testing and pre-packaged curricula became common, teaching became more scripted and less creative. Automation could make this worse unless we link productivity gains to reinvestment in professional roles. To prevent this, districts should report how they allocate savings from automation and monitor the impact on disadvantaged students. This kind of conditional funding, along with transparency, will be controversial, but it's necessary if automation is to expand opportunity rather than limit it.
Remember, AI tutoring is already automating a lot of routine instructional work. This isn't necessarily a bad thing. It only becomes bad if schools use it to cut investments in human expertise and social support. If we instead design a system where AI handles routine tasks, expert humans handle design, and savings are reinvested in higher-skill roles and support, we can expand access and advance learning. The choice is ours to make.
Fund the teams that manage and oversee the AI tutoring systems, require open audits of error rates, and link automation gains to teacher career upgrades. Policymakers should set clear timelines for pilots, require open reporting of error rates, reinvest savings into staff development, and support regional hubs that pool expertise. Education leaders must require both technical audits and human-centered data that captures mentorship, trust, and access. Together, these measures will determine whether AI tutoring serves as a tool to expand opportunities or deepen existing divides.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Microsoft. (2025). AI in Education: A Microsoft Special Report.
Kestin, G., Miller, K., Klales, A., Milbourne, T., & Ponti, G. (2025). AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting. Scientific Reports (Nature).
OECD. (2023). What do OECD data on teachers' salaries tell us? (Education Indicators in Focus).
OECD. (2025). AI adoption in the education system. (Report).
OpenAI. (2025). Why language models hallucinate. Research blog.
Springer. (2025). The rise of hallucination in large language models: systematic review and mitigation strategies.
U.S. Department of Education, Office of Educational Technology. (2023). Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations.
Make Time Teachable: Scaling Learning with AI-Integrated Courses
Catherine McGuire
Professor of AI/Tech, Gordon School of Business, Swiss Institute of Artificial Intelligence
Catherine McGuire is a Professor of Computer Science and AI Systems at the Gordon School of Business, part of the Swiss Institute of Artificial Intelligence (SIAI). She specializes in machine learning infrastructure and applied data engineering, with a focus on bridging research and large-scale deployment of AI tools in financial and policy contexts. Based in the United States (with summer/winter in Berlin and Zurich), she co-leads SIAI’s technical operations, overseeing the institute’s IT architecture and supporting its research-to-production pipeline for AI-driven finance.
AI-integrated courses can handle routine questions and free teachers for higher-value work
Well-designed course bots cut response time without hurting learning quality
The real policy issue is how to govern AI, not whether to use it
Imagine if a big chunk of the questions students ask in an online course could be answered by the course materials themselves, with a little help from a smart computer program. It turns out, this is totally doable! This means teachers could spend less time repeating the same answers and more time on what really matters. So, instead of banning these AI tools, we should figure out how to use them to improve courses, reduce teachers' workloads, and ensure students get the most out of their learning. The goal is to create courses where AI acts like a helpful assistant, providing students with quick, reliable support while freeing up teachers to focus on coaching, grading, and ensuring everyone has a fair chance to succeed. The big question for leaders is whether to embrace these helpful tools or stick to old ways that might not be as good or cost-effective.
Rethinking Automation with AI Courses
People often argue about whether AI is a threat or a miracle cure for education. But the truth is somewhere in between. Let's forget about whether AI should teach and instead focus on what parts of teaching can be automated and how. The best tasks to automate are those that involve many simple questions, such as explaining instructions, giving examples, or checking whether students followed the rules for formatting their work. An AI system can handle these tasks by sticking to the course materials and following clear guidelines. This means teachers get fewer interruptions, students get faster help, and we can see where students are struggling and need more personal attention.
Figure 1: Synthesis of online course Q&A studies; systematic reviews of chatbots in education; conservative estimate from hybrid tutoring trials
Three things are coming together to make this possible: First, AI can now provide clear, helpful answers when asked the right questions. Second, course websites and tools enable us to connect course materials more effectively. Third, studies show that AI tutors can improve learning when used carefully. Instead of banning AI, we should focus on establishing rules and standards to ensure it's used safely and effectively. This way, we can reduce routine work for teachers and let them focus on what makes the biggest difference for students.
What Works and What to Watch Out For
Studies are showing that AI tutors and chatbots can help students learn and save time. But it depends on how they're designed. One study in 2025 found that students learned more and were more engaged when using an AI tutor that adhered to specific teaching methods and used course materials to guide its responses.
Figure 2: Careful system design and ongoing curation sharply increase the instructional time reclaimed by AI-integrated courses.
This means that if a course or program keeps track of common student questions and has clear answers ready, it can use AI to handle many requests accurately. Tests of systems that combine tutoring data with AI show they perform better when the AI follows a script and when teachers review tricky cases. Experts estimate that a course bot can handle about 60–90% of routine questions in its first year. This is based on the fact that many online course questions are repeated and that AI tutors can give accurate answers when used properly. Keep in mind that the more the bot is improved and updated, the better it will perform.
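As a rough illustration of what that 60–90% range could mean in instructor hours, the short calculation below assumes 1,000 routine questions per term and five minutes of staff time per question; both of those numbers are invented for the example.

```python
# Illustrative arithmetic only; question volume and minutes-per-question are assumptions.
questions_per_term = 1000
minutes_per_question = 5

for deflection_rate in (0.60, 0.90):   # range cited for a first-year course bot
    handled = questions_per_term * deflection_rate
    hours_reclaimed = handled * minutes_per_question / 60
    print(f"{deflection_rate:.0%} deflection -> {hours_reclaimed:.0f} instructor hours reclaimed per term")
```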
However, we need to be careful. AI can sometimes simplify or misrepresent scientific or technical information if not properly controlled. This can be dangerous in subjects like science and health. To prevent this, we need to ensure the AI uses only course-approved materials and that any complex or important questions are referred to a teacher. AI can save time, but it needs to be managed rather than used as a replacement for teachers.
Creating AI Courses That Students Can Trust
To make AI courses work well, start by carefully planning the course. Identify the learning goals, solutions to problems, common mistakes, grading guidelines, and when to involve a teacher. Then create a collection of approved materials for the AI to use, and write instructions to guide it in providing helpful, accurate answers. Whenever possible, the AI should show students exactly where in the course materials it found the answer. This makes the AI more trustworthy and reduces the likelihood that it makes things up.
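A minimal sketch of that grounding step, assuming a small list of approved passages and a plain prompt-construction function (the material entries, field names, and wording are all illustrative, not a specific product's API):

```python
# A minimal sketch of building a grounded prompt from approved course materials.
APPROVED_MATERIALS = [
    {"id": "syllabus-3.2", "location": "Week 3, Reading 2",
     "text": "Late submissions lose 10% per day, up to three days."},
    {"id": "rubric-essay", "location": "Essay Rubric, Criterion 1",
     "text": "A thesis statement must appear in the first paragraph."},
]

def build_grounded_prompt(question: str, materials: list) -> str:
    """Assemble an instruction that confines the bot to approved passages
    and requires it to cite the exact location of whatever it uses."""
    context = "\n".join(f"[{m['id']} | {m['location']}] {m['text']}" for m in materials)
    return (
        "Answer using ONLY the passages below. "
        "Cite the bracketed id and location of every passage you rely on. "
        "If the answer is not in the passages, say so and suggest asking the instructor.\n\n"
        f"Passages:\n{context}\n\nStudent question: {question}"
    )

print(build_grounded_prompt("How much is the late penalty?", APPROVED_MATERIALS))
```

Requiring the bracketed id and location in every answer is what lets a student, or an auditor, trace a response back to the course materials.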
It's also important to keep collecting data and improving the AI over time. Track every interaction the AI has with students and label whether the issue was resolved, passed on to a teacher, or corrected. Use this information to improve the AI's answers, update the instructions, and adjust when to involve a teacher. Studies show that chatbots are most helpful when they're treated as tools that can be improved over time. For quality control, give teachers the ability to see how the AI is working and correct it if needed. Before using the AI in a course, test it with real student questions to make sure it's accurate and helpful. According to a study on Tutor CoPilot, students whose tutors used the AI tool were 4 percentage points more likely to master topics, with even greater improvements seen among students working with lower-rated tutors.
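One lightweight way to keep that improvement loop honest is to append every exchange to a labeled log and count the outcomes periodically. The sketch below assumes a simple CSV file and the three outcome labels from the text; it is a sketch, not a recommended schema.

```python
import csv
from collections import Counter
from datetime import datetime, timezone

LABELS = {"resolved", "escalated", "corrected"}   # outcome labels from the text

def log_interaction(path, question, answer, label):
    """Append one labeled exchange to a CSV file for later review."""
    assert label in LABELS, f"unknown label: {label}"
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), question, answer, label])

def summarize(path):
    """Count outcomes so instructors can see where the bot needs curation."""
    with open(path) as f:
        counts = Counter(row[3] for row in csv.reader(f) if row)
    return dict(counts)

# Example usage
log_interaction("bot_log.csv", "When is the midterm?", "Week 8 (Syllabus, p. 2).", "resolved")
log_interaction("bot_log.csv", "Can I use a different dataset?", "Sent to instructor.", "escalated")
print(summarize("bot_log.csv"))
```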
Fairness and Policy
It is important to consider who benefits from AI in education and who may not have equal access to it. Automating routine tasks should give teachers more time to help students who are struggling or have special needs. But if AI is only used in certain courses or programs, it could create even more inequality. To prevent this, policies should connect the use of AI with fairness goals. Schools should report how much time teachers save, how quickly students receive responses, and how often questions are referred to teachers, broken down by student groups.
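Reporting those figures by student group only requires that the same interaction records carry a group field. The sketch below uses invented records and a hypothetical grouping just to show the shape of such a report.

```python
from collections import defaultdict
from statistics import mean

# Each record is assumed to carry: student group, response time in minutes,
# and whether the question was referred to a teacher. All values are illustrative.
records = [
    {"group": "A", "response_minutes": 2.0, "referred": False},
    {"group": "A", "response_minutes": 3.5, "referred": True},
    {"group": "B", "response_minutes": 9.0, "referred": True},
]

def equity_report(rows):
    """Mean response time and referral rate, broken down by student group."""
    by_group = defaultdict(list)
    for r in rows:
        by_group[r["group"]].append(r)
    return {
        g: {
            "mean_response_minutes": round(mean(r["response_minutes"] for r in rs), 1),
            "referral_rate": round(sum(r["referred"] for r in rs) / len(rs), 2),
        }
        for g, rs in by_group.items()
    }

print(equity_report(records))
```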
There also needs to be clear rules and oversight. Governments and organizations should require that AI systems use approved materials, have a clear process for involving teachers, align with learning goals, and protect student data. The U.S. Department of Education says that AI in education should be inspectable, allow for human intervention, and be fair. Providing funding for pilot programs, shared resources, and teacher training will make it easier for all schools to use AI effectively. Without this support, AI will remain a luxury for some rather than a tool for everyone.
The debate over AI in classrooms shouldn't be about banning it or blindly accepting it. Instead, we should focus on designing AI courses that are reliable, transparent, and know when to ask for human help. By preparing carefully, collecting data, and providing ongoing support, schools can free up teachers to focus on the complex tasks that only humans can do well. Leaders should fund shared tools and require transparency. Teachers should demand AI systems that are inspectable and ensure AI helps them teach, rather than replacing them. Students deserve quick, accurate help, and teachers deserve time to teach. We can achieve both by using AI wisely to improve education.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
ACM. (2025). Combining tutoring system data with language model capabilities. ACM Proceedings.
Davard, N. F. (2025). AI Chatbots in Education: Challenges and Opportunities. Information (MDPI).
Kestin, G. (2025). AI tutoring outperforms in-class active learning. Scientific Reports (Nature).
LiveScience. (2025). Analysis: LLMs oversimplify scientific studies (research on LLM summarization errors).
Systematic review. (2025). Chatbots in education: objectives and outcomes. Computers & Education (ScienceDirect).
U.S. Department of Education. (2023). Artificial Intelligence and the Future of Teaching and Learning (policy guidance).