Teach the Tool: Why AI literacy in schools must replace bans
Schools are banning AI while workplaces are adopting it, creating a growing skills gap.
AI literacy must be taught through teachers and curriculum, not enforced through restrictions on students.
The real policy failure is institutional resistance to change, not student misuse of technology.

By the close of 2024, a clear trend had emerged: roughly 40% of education systems worldwide had enacted regulations that limit or entirely ban the use of phones in schools. Meanwhile, about a quarter of U.S. teens reported using an AI chatbot to help with school assignments, and more than half said they were using AI-based search engines or chat apps. The disconnect is hard to deny: young people are adopting AI outside of school while schools ban the devices that carry it. In the workplace, AI is becoming increasingly widespread. If school systems treat AI and phones the way they once treated personal computers and video games, they risk graduating students who cannot use these technologies effectively in most workplaces. Instead of defaulting to bans, systems should invest in teaching about AI in schools. Students and teachers alike would then be able to tell when AI is helpful and when it is producing wrong information, and employers would gain workers who can check, verify, and improve what AI produces rather than be misled by it.
Why teaching AI skills in school is a must
The schools' actions have a rationale: bans respond to real concerns about distraction, inequity, and privacy. UNESCO has found that 79 educational systems across the globe have adopted rules on phone use, a sign of how widespread those safety concerns are. At the same time, young people's use of AI for school grew sharply between 2023 and 2025; in the United States, about 26% of teens reported using an AI chatbot for schoolwork in early 2025, roughly double the share in 2023. Outside of school, young people are experimenting with AI-powered search engines, image and video tools, and chat assistants, and that out-of-school exposure shapes both what they expect and what they are good at. Meanwhile, surveys of businesses, such as McKinsey's, show that companies plan to deploy AI models at scale and use them for daily tasks. Students who have never been taught to double-check what AI gives them will be less able to detect its mistakes or to put it to productive use.

The timing matters. The biggest danger AI poses in education is not students finding new ways to cheat; it is students absorbing wrong information. When a student copies a machine-made answer that is incorrect, they pick up false facts and the habit of trusting unverified output. AI language models are known to make up information: studies have shown that these tools can fabricate references and state falsehoods with confidence. The good news is that this is fixable. Classes can teach verification: consulting multiple sources, interrogating claims, and cross-checking answers against other resources. These skills can all be taught, and employers need them too. Treating phones and chatbots as simply bad is not a plan. Avoiding the harder work of resourcing teachers and designing curriculum means students never learn to get the most out of AI or to spot its failures.
To teach AI skills in school, teachers must take charge rather than be left behind
The real constraint is teacher readiness, not student interest. Research shows that teacher use of AI grew between 2023 and 2025; Gallup finds that about three in ten teachers now use AI weekly to help plan lessons, adapt classwork, and generate class materials. Training, however, has been uneven. Teachers who work with safe, school-managed AI systems report saving time and feeling better prepared. Where that support is absent, teachers grow fearful and reach for bans, and the burden shifts from school leaders onto students.
If leaders want to prepare students for their careers, they must support the adults as well. That means investing in in-service time, providing vetted training that builds the habit of double-checking AI output, and deploying secure tools that keep student data safe while people practice. Teachers also need to learn how to write prompts that force students to think deeply rather than let a computer write the report, how to design assignments so that the process counts as much as the final product, and how to interpret AI-detection results with appropriate caution. It is important to remember that AI detection is not reliably accurate: studies have shown that detection tools frequently misclassify text. Punishing students on a detector's verdict alone, without a teacher's judgment, will produce wrongful accusations, break trust, and push behavior toward evasion rather than learning.
Teachers who understand what AI cannot do have the upper hand. They can show students how to frame the right questions, check a machine's claims against valid sources, test alternative approaches, and document their methods in a log. Businesses value exactly these thinking skills. Schedules should set aside time for students to practice, reflect, and revise their work, because that is how judgment grows. Leaders must do their part as well: principals and supervisors should develop practical policies that make AI useful in the classroom, with clearly stated norms, rather than imposing bans that merely push the issue outside the school. This is not a call to let everything go; it is a call to teach AI skills in a practical manner.
Teaching AI skills in schools means improving safety, course design, and how students are evaluated
If schools decide that AI skills matter, how they evaluate students needs to change. Classic take-home essays invite AI misuse. Instead, tasks should focus on process and proof: annotated drafts, oral defenses, lab records, code walkthroughs, and research papers that document sources and the steps taken to reach a conclusion. Grading rubrics should reward source review, verification steps, and explanations of why an AI suggestion was accepted or rejected. Where exams must be held under controlled conditions, classwork should still include AI-assisted exercises so students learn to tell fluent-sounding text from solid information.
Any shift like this must confront fairness directly. Not every student has the same devices or AI access at home. School-provided, privacy-respecting platforms can give every student equal access, but districts have to fund them. Teacher training should cover inclusive practices that ensure AI supports, rather than replaces, the teacher's judgment for students who need extra assistance. Policy must also anticipate misuse and plan remedies. AI detection tools can flag material that may have been misused, but they are not always correct. A better mix is to have students disclose AI use on assignments, to design work that requires students to explain their reasoning, and to hold brief check-ins and informal follow-ups. That combination reduces shortcut-taking and builds learning environments that model verification and cultivate judgment.

Finally, governance matters immensely. Districts should adopt short-term rules that permit controlled classroom use, require teacher training before rollout, mandate privacy guarantees from vendors, and prefer tools that offer administrative controls. If bans seem unavoidable in the current political climate, leaders can pair them with timelines and pilot programs: pause briefly to collect data, then fund teacher development and larger classroom pilots. That is a plan that protects students and prepares them for the future.
The point is that while restrictions reduce immediate risk, they do not build the skills the workplace demands. Instead of only banning, schools must teach and verify. They need to shift from policing devices to cultivating judgment. Getting there requires investing in teachers' professional development, designing assessments that weigh process rather than just end results, and securing district-level agreements that guarantee fair access to vetted tools. If students are taught to test statements, check references, and repair AI's mistakes, we accomplish more than ending cheating: we produce people who contribute positively to society and workers who improve the technology they use. Without change, the workforce risks misusing AI or failing to work with it effectively. Let's invest in adult capability and classroom design so that AI skills are taught in schools. Schools must act now: treat AI as a core subject to teach, not merely something to keep out, and ensure graduates can confidently verify and use the tools that are now common in the workplace.
The views expressed in this article are those of the author(s) and do not necessarily reflect the official position of the Swiss Institute of Artificial Intelligence (SIAI) or its affiliates.
References
Chelli, M., et al. (2024). Hallucination rates and reference accuracy of ChatGPT and Bard for systematic reviews. Journal of Medical Internet Research.
Gallup (2025). Three in 10 teachers use AI weekly, saving weeks per year. Gallup Education Research Brief.
McKinsey & Company (2024). The state of AI in early 2024: generative AI adoption and value. McKinsey Global Survey Report.
Pew Research Center (2025). About a quarter of U.S. teens have used ChatGPT for schoolwork; usage doubled since 2023. Pew Research Center Short Read.
UNESCO Global Education Monitoring (GEM) team (2025). To ban or not to ban? Monitoring countries’ regulations on smartphone use in school. UNESCO GEM Reports.
Elkhatat, A. M., et al. (2023). Representative peer-reviewed evaluation of AI content detection tools, consulted alongside Turnitin and mixed-method reviews for the detection reliability and accuracy discussions above.