The ostrich strategy towards ChatGPT is failing

Banning ChatGPT creates a false sense of security, and plagiarism detection does not work. Instead, Barend Last (educational consultant) and Erdinç Saçan (teacher at Fontys) suggest providing teachers with training and time to experiment.

This article was previously published in Dutch on

Generative AI tools based on language models, such as ChatGPT, are causing disruptive innovation in various sectors, including education. However, the reactions to this technology are highly varied: tech evangelists see opportunities, while skeptics warn (rightly) about ethical dilemmas. Unfortunately, there are also many technophobes and uninformed individuals who are currently adopting an ostrich strategy, which is cause for concern.

ChatGPT is difficult to detect or ban

When it comes to generative AI tools, fear dominates the education sector. This fear is manifested in an obsession with control and prevention, as well as distrust, hesitation, and a lack of knowledge. Two observations make this painfully clear.

Firstly, the fraud narrative places the blame on students. After all, they can easily have school assignments such as essays and papers written by ChatGPT. In response, schools and educational institutions ban the use of ChatGPT and install plagiarism detection software. This is naive and hypocritical: the shady side of education has existed for years, with commercial services writing complete theses or preparing students for their final exams. Moreover, plagiarism detection and digital watermarks are often unreliable. OpenAI's own detector, for example, flags almost ten percent of human-written texts as AI-generated. Even worse, the American Constitution and Shakespeare's Macbeth have been identified as texts written by AI.

Banning ChatGPT creates a false sense of security. A hotspot or VPN - a simple digital workaround - is quickly set up, and since Italy banned ChatGPT, numerous alternatives have emerged, such as PizzaGPT. In short, banning the tool is like bailing out water with a leaky bucket while ignoring the potential of generative AI. Not to mention that we should ask why students would want to cheat in the first place. As long as we maintain a system that incentivizes cheating, the problem lies with that system, not with the students themselves.

Allow teachers to gain knowledge about ChatGPT

Secondly, teachers are hardly given the opportunity to discover what generative AI has to offer. They have been struggling with time constraints for years, partly due to high teaching loads and heavy administrative burdens. Sometimes they are not even allowed to experiment, because a rigid, predetermined exam program insists on written assignments. This lack of experimentation stalls progress across the education field as a whole. And in this case, familiarity breeds appreciation.

Of course, generative AI brings with it numerous risks. However, we must identify and mitigate these risks as soon as possible. Generative AI is not intelligent; it is an unreliable calculator for words. It can hallucinate, babble nonsense, and spit out falsehoods - but it can also automate teacher tasks, freeing up more time for personal interaction, support dyslexic students in reading, assist low-literacy individuals in writing, and act as a sounding board for introverted students.

ChatGPT can be the incentive to acquire more knowledge, develop better writing skills, and give media literacy and digital literacy the place they deserve: as required basic skills, just as important as language and math. To achieve this, the locus of control must shift to the teacher, and that can only happen if teachers know the tool thoroughly and understand its limitations.

Parallel to the introduction of cars

The introduction of the car offers a historical parallel. The car, too, brought disruptive innovation to the market, from horse-drawn carriages to roaring engines. Although car use claims thousands of lives each year and burdens the environment, we have normalized its impact to some extent. We accept that the technology has a downside. Regulations have been created, continuous work is done to promote safe driving behavior, and the market keeps evolving.

In short, every technology has a downside, including generative AI, and we must come to understand that downside as soon as possible. As long as fear dominates and teachers are not supported, we are insufficiently protecting our students and failing to teach them what they really need. That should not happen.

Distrust towards ChatGPT is a bad quick fix

Therefore, a broad societal dialogue on the role of AI in education is essential, with a focus on our public values. Schools and educational institutions should not ban generative AI; instead, they should encourage its use. Let everyone - teacher, student, and learner - keep track of and account for how they use AI tools and where they run into problems. Experiment. Let generative AI work for you and discover how it can reduce workload and improve learning processes, but also examine its limitations and risks. And keep asking yourself: is this really making us better off? There is no need to reinvent the wheel: webinars, lectures, whitepapers, infographics, and practical tips are widely available online.

Without training and time for teachers to learn the pros and cons of ChatGPT, the fraud narrative will persist. In a workday dominated by testing programs, protocols, meetings, care consultations, lesson preparation, grading, classroom observations, break supervision, and party committees, expressing distrust towards ChatGPT and those who use it is the easy quick fix. Fear continues to reign, with all its consequences.

Co-Author: Barend Last (educational consultant)



This article has 1 comment

Comment from Leo Frehe

Good article. I have tested ChatGPT myself; the result can be read at… . It concerns a lesson plan, worked out with ChatGPT, for 7 lectures about chatbots. My conclusion: impressive and perfectly usable for course development. Of course, there is no fact-checking or source citation, so checking and taking responsibility yourself is a requirement. If the model has been trained for your purpose, it is very usable; if your purpose falls outside that, it is not applicable. Good luck, Leo