Reaction author(s)

Carlos Carrasco-Farré

Lecturer at Toulouse Business School (France), member of the editorial team at PLoS ONE (Social Sciences), and holder of a PhD in Management Sciences (ESADE Business School)

We have all, to a greater or lesser extent, experimented with ChatGPT or other large language models. So it may not surprise us to learn that they write very well; what may surprise us is that they also know how to tailor their arguments to the person in front of them in order to persuade. And beyond the fact that they can persuade at all, what is even more disturbing is that they can do it better than a real person. This finding is especially relevant in a world where AI assistants are being integrated into messaging, social media and customer service platforms. The research by Salvi et al. confirms with solid data a growing concern: that these technologies can be used to manipulate, misinform or polarise on a large scale.

Although the study was conducted with US participants, the personalisation and persuasion mechanisms it tests can be extrapolated to contexts such as Spain, where there is also a strong digital presence, growing exposure to AI-generated content, and increasing social and political polarisation. This is problematic because, as the article discusses, unlike humans, who need time and effort to adapt their arguments to different audiences, GPT-4 can adapt its message instantly and at scale, giving it a disproportionate advantage in settings such as political campaigns, personalised marketing or social media conversations. This capacity for automated microtargeting opens up new possibilities for influencing public opinion, but it also heightens the risk of covert manipulation. The authors of the study therefore recommend that platforms and regulators take steps to identify, monitor and, where necessary, limit the use of language models in sensitive persuasive contexts. Just as targeted advertising was regulated in its day, perhaps the time has come to consider measures to control algorithmic persuasion.
