Carlos Carrasco-Farré
Lecturer at Toulouse Business School (France), member of the editorial team at PLoS ONE (Social Sciences) and Doctor of Management Sciences (ESADE Business School)
I find this an interesting and necessary paper: it shows that AI can be right and still be wrong. Correcting false information is fine; the problem arises when the task is to recognise the speaker's belief and the model sidesteps it with a premature fact-check. If I say, “I believe that X”, I first want the system to register my state of mind and only then, if appropriate, to verify the fact. This confusion between attributing beliefs and verifying facts is not a technicality: it sits at the heart of critical interactions in medical consultations, in court, or in politics. In other words, the AI gets the data right but fails the person.
What is interesting (and worrying) is how easily this social myopia is triggered: the belief only has to be expressed in the first person for many models to get it wrong. This forces us to rethink the guidelines for use in sensitive contexts: first recognise the state of mind, then correct. It is a design alert for responsible AI. My reading is that this work does not demonise the models; it reminds us that if we want safe and useful AI, we must teach it to listen before we teach it to educate. And that means redesigning prompts, metrics, and deployments around one simple rule: first empathy, then evidence.