Misinformation. Lies. And artificial intelligence
In March 2023, it was widely reported that a Belgian man had died by suicide on the advice of an AI chatbot in an app called Chai.
His widow provided police with chat logs in which the AI fuelled the man’s existing anxiety surrounding climate change and encouraged him to take his own life.
With the rapidly evolving power of AI, and instant access to the information and misinformation it can provide, there is growing pressure, including from the Australian Federal Government, for regulation.
Associate Professor Carolyn Semmler believes that as a society we can use AI and other technologies to help solve broader societal problems, but only once we have a firm understanding of how they work, their limitations, and how we respond to them.
“The same problems keep arising, including over-reliance on technical systems, and a lack of understanding among the engineers who build these systems about how humans make decisions,” says Carolyn.
With a professional background spanning defence, law and cognitive psychology, Carolyn has seen the effects of adopting technology too early, including people being wrongfully convicted and social media harming people’s mental health.
The case of the man in Belgium may be a warning sign of the consequences of leaving AI unregulated. Yet with the rise of AI chatbots across popular social media apps, our access to this technology is steadily increasing.
What are the consequences, for example, of a young person seeking advice from an AI chatbot instead of a healthcare professional?
“People are not seeing that these chatbots are just models that have been trained on the entire internet, which in itself contains content that is misleading, false and harmful,” says Carolyn.
The speed with which misinformation can be generated through chatbots is unprecedented: AI is often articulate, dangerously confident and highly persuasive.
Given the enormous volume of information and the diversity of viewpoints fed into the internet daily, AI itself can’t be considered a trusted source when its training data lacks consensus or proper expertise.
“There are myriad psychological studies about mental health published over the last 20 years, and as an expert you spend years learning how to assess the evidence for the claims made in those studies,” Carolyn says. “I know what a good study is. I know what the scientific method is. I know how statistics work.
“I can look at a study and know whether I should believe the conclusions. The average person using ChatGPT has none of my training or experience, and so they’re reliant on the program’s confidence in assessing the accuracy of that information.”
While it is clear that AI can feed dangerous misinformation to individuals, its use in the geopolitical landscape presents an even greater threat.
With thousands of speeches, images and videos of politicians, religious figures and other public personalities available online, AI has an abundance of data from which to generate content.
Its use so far has often been for comedic effect. Take, for example, the deepfake images of the Pope donning a lavish new puffer jacket that fooled many social media users; those of Donald Trump facing a dramatic arrest upon his indictment; and those of former US Presidents with luscious mullets. When it comes to fake recordings, videos and images of these prominent figures, it is increasingly difficult to discern fact from fiction.
In late May this year, the Federal Government expressed the need for regulations surrounding the use of AI software.
“The upside (of AI) is massive,” says Industry and Science Minister Ed Husic. “Whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud. But as I have been saying for many years, there needs to be appropriate safeguards to ensure the ethical use of AI.”
Our politicians aren’t the only ones concerned. The so-called ‘Godfather of AI’, Geoffrey Hinton, quit his position at Google earlier this year to warn the masses of the dangers of AI’s ability to generate fake images and text, proclaiming that “the time to regulate AI is now”.
AI could soon regularly fuel the agendas and propaganda created by governments and “bad actors” around the world through mass misinformation campaigns. With the current conflict in Ukraine, and the alleged manipulation of elections, it could be argued that this is already happening.
Carolyn’s collaborator Keith agrees that the issue of misinformation, and what to do about it, has become more complex and more pressing with the advent of sophisticated AI technologies. Keith’s main project, Monitoring and Guarding Public Information Environment, or ‘MAGPIE’, focuses on how best to protect the public information environment to ensure that reliable information spreads widely and quickly whilst unreliable information does not.
“The spread of misinformation and undue influence being exerted by hostile actors is an issue as old as time,” says Keith.
“But while propaganda isn’t a new thing, the construction methods, the industrialisation, the rate and scale of automation and dissemination, that’s new, and that’s something we need to prepare for.”
AI brings the opportunity to craft propaganda like never before, with wars potentially being waged through campaigns built on misinformation.
“Take a claim like ‘Ukraine should cede territory to Russia in order to cease conflict’,” Keith says. “If I give ChatGPT that claim, and I prompt it to ‘think about all the arguments that feed into that’, it can generate an argument like ‘it should because Russia has a historic claim to territories in Crimea and the Donbas’.
“So then I can take that argument and ask it to ‘give me some reasons why that is’, and it can elaborate.”
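As a rough illustration of the two-step pattern Keith describes, the sketch below prompts a chat model to surface the arguments behind a claim, then asks it to elaborate on what it produced. This is not MAGPIE or any tool from the research; the model name, prompts and helper function are assumptions for illustration, using the OpenAI Python SDK.

```python
# Illustrative sketch only: the "generate arguments, then elaborate" prompting
# pattern described in the article. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-turn prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

claim = "Ukraine should cede territory to Russia in order to cease conflict"

# Step 1: prompt the model to think about the arguments that feed into the claim.
arguments = ask(f"Think about all the arguments that feed into this claim: {claim}")

# Step 2: feed a generated argument back in and ask the model to elaborate.
elaboration = ask(f"Give me some reasons why this argument holds: {arguments}")

print(arguments)
print(elaboration)
```

The point of the sketch is how little effort the loop takes: each pass is a single API call, so an argument and its supporting reasons can be produced in seconds and the cycle repeated indefinitely.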
Keith is hoping a situational awareness tool he is developing with Dr Rachel Stephens, Associate Professor Carolyn Semmler and Professor Lewis Mitchell will help to map, detect, and defend against hostile influence campaigns being generated and orchestrated using AI.
“The beauty of AI is that it can generate arguments and reasons for things before anyone has even brought them up,” he says. “This gives you a planning and ‘what-if’ capability that lets you say, ‘look, they haven’t started using that argument here, but if they do, look what happens’.”
As AI continues to improve, the situational awareness tool, combined with advances in the software’s writing capabilities, could become a rapid-fire, self-evaluating writing machine that helps analysts understand the influence campaigns being waged by malignant actors. Identifying these campaigns will help to protect democratic processes and ensure that populations are not misled as they participate in public debate and decision making.
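To make the ‘map and detect’ idea concrete, here is a minimal sketch, not MAGPIE itself, of one way anticipated arguments could be matched against incoming posts: compare each post to a bank of pre-generated arguments using TF-IDF cosine similarity via scikit-learn. The argument bank, posts and threshold are all invented for illustration.

```python
# Minimal sketch (not MAGPIE): flag incoming posts that resemble a bank of
# pre-generated arguments for a claim, using TF-IDF cosine similarity.
# Assumes scikit-learn (pip install scikit-learn); all data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Arguments generated ahead of time (e.g. by an LLM), before anyone has used them.
argument_bank = [
    "Russia has a historic claim to territories in Crimea and the Donbas",
    "ceding territory is the fastest path to ending the bloodshed",
]

# A stream of incoming social media posts to monitor.
posts = [
    "Giving up land is the fastest path to ending the bloodshed, surely",
    "Lovely weather in Adelaide today",
]

SIMILARITY_THRESHOLD = 0.3  # hypothetical cut-off; would need tuning in practice

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(argument_bank + posts)
bank_vecs = matrix[: len(argument_bank)]
post_vecs = matrix[len(argument_bank):]

# Score each post against every anticipated argument and flag close matches.
scores = cosine_similarity(post_vecs, bank_vecs)
for post, row in zip(posts, scores):
    if row.max() >= SIMILARITY_THRESHOLD:
        print(f"FLAG ({row.max():.2f}): {post}")
```

A production system would presumably use semantic embeddings rather than raw TF-IDF, but the pipeline shape, generate candidate arguments first, then watch for them appearing in the wild, is the same ‘what-if’ capability Keith describes.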
“We just don’t know where the ability to automate influence will go. But there’s a strong reason for us to investigate it. Now is the time that we should be getting experience with these tools for these purposes, as we’re pretty sure someone else is doing the same,” says Keith.
“AI shows immense promise to help us overcome major challenges, but without regulation, the quality of information it provides could be harmful,” says Carolyn.
“It’s up to us to determine the best way forward.”
Story by Isaac Freeman, Communications Assistant for the University of Adelaide, and Photographic Editor for Lumen.