Look beyond the jubilant optimists or the doomsday preachers of AI

The rapid developments in AI have created a public debate that hovers between blind adoration and apocalyptic alarmism. The average citizen, however, has little idea what to make of it. What lies between these extremes, and why is that the most crucial conversation to have?

It's hard to keep up with the avalanche of AI tools being released. Before the summer, the Future Tools platform counted only 1,800 tools; a month and a half later there are 2,200. Some say the hype is over, but in reality it is still growing: a bubble that may eventually burst, yet many people do not realize its impact and scope.

To put this development into perspective, it makes sense to look at the accessibility and adoption of new AI tools. By no means are all of them immediately usable; many are still in beta (meaning they are still in the testing phase and may not yet have all their features or work flawlessly) or require technical know-how. Others are simply not for everyone: explaining a tool like MidJourney to someone unfamiliar with Discord is quite a challenge.

On one side, the popular response is driven by fear of missing out (the well-known FOMO): "I don't know what it is or what it can do, but I must stay on top of what's happening!" This fear is also what drives us to social media or sites like nu.nl to check whether the world might not be coming to an end. Take the growth of the recently released tool Heygen, which allows you to dub yourself into multiple languages almost perfectly: the tool had a queue of some 70,000 people at launch. FOMO in full effect.

The antithesis of FOMO is JOMO, or joy of missing out. Yes, it's new, but it will pass. A fine reaction to most things. But sometimes it can be dangerous, especially when it comes to developments in AI and their social effects. It's been over ten months since ChatGPT was launched, and we still meet people who barely know about it.

We worry about the large group between these extremes. It's not our intention to suggest that everyone must use these new AI tools, but we think it's crucial that everyone is aware of developments in AI, can critically evaluate them, and can place them in the context of their own situation. This is what's lacking. There is a growing divide between those who can evaluate and effectively use these tools and those who cannot. We are concerned about that.

In the public discourse about AI, there are now two clearly visible camps: the uncritical AI enthusiasts (or as Rens van der Vorst calls them, the "AI-lovers") and those who warn of great dangers, even to our survival (such as Yuval Noah Harari, author of Sapiens). But between these ends of the spectrum, there is a significant middle ground that also deserves attention. People need to know what AI is and is not. What the opportunities are, but also the real dangers. And we're not talking about robots taking over the world. That is nonsense. It's one of many myths to be busted.

The media play a critical role here; they should report responsibly and accurately on AI, without sensationalism. AI is now often portrayed as either life-threatening or the solution to all our problems. Both framings contribute to an uncritical attitude, mostly awe, which the producers of AI tools are no doubt very happy about, but which doesn't help the silent majority who need nuance to understand the real properties of such tools.

This means that the language and images surrounding AI must change. In the media, science fiction colors the imagery and the public debate. There is little discussion about the actual properties of today's AI, while nuanced and accurate information about this is essential. People in education and in the workplace should not be busy wondering whether they will be replaced by robots; they should learn to understand the impact of these technologies on life and work. Aren't we failing at that?

In the land of the blind, the one-eyed man is king. The jubilant optimists and doomsday preachers got there early and are trying to push their narratives as much as possible. Both are useful, but they are the extremes of the curve. It is high time for the rest. We hope that everyone familiarizes themselves a bit with AI and gains a basic understanding of its possibilities and fundamental limitations. That they weigh these critically and in the context of their own situation. And no, we don't have shares in any AI company.

Barend Last, Erdinç Saçan & Emile van Bergen

Barend Last is an educational consultant, lecturer and author of the book 'Chatting with Napoleon' on working responsibly with generative AI in education - www.barendlast.com

Erdinç Saçan is a lecturer & researcher at Fontys ICT - www.erdincsacan.nl

Emile van Bergen is a senior software developer working as a computer vision specialist.
