Researchers organise the first scientific conference with AI systems as authors and reviewers
A research group at Stanford University (United States) has organised the first academic conference in which artificial intelligence (AI) tools serve as both authors and reviewers of scientific articles. Called Agents4Science 2025, the conference will take place on 22 October.
Raffaele Ciriello
Senior Lecturer in Business Information Systems at the University of Sydney (Australia)
The idea of a research conference where both the authors and the reviewers are artificial intelligence systems is, at best, an amusing curiosity and, at worst, an unfunny parody of what science is meant to be. If the authors and reviewers are AI, then perhaps the conference attendees should be AI too, because no human should mistake this for scholarship.
Science is not a factory that converts data into conclusions. It is a collective human enterprise grounded in interpretation, judgment, and critique. Treating research as a mechanistic pipeline where hypotheses, experiments, and papers can be autonomously generated and evaluated by machines reduces science to empiricism on steroids. It presumes that the process of inquiry is irrelevant so long as the outputs appear statistically valid. But genuine scholarship is less about p-values than it is about conversation, controversy, and embodied knowing.
Equating AI agents with human scientists is a profound category error. Large language models do not think, discover, or know in any meaningful sense. They produce plausible sequences of words based on patterns in past data. Granting them authorship or reviewer status anthropomorphises what are essentially stochastic text-prediction machines. It confuses the illusion of reason with reason itself.
There is, of course, a legitimate discussion to be had about how AI tools can assist scientists in analysing data, visualising results, or improving reproducibility. But a conference built fully on AI-generated research reviewed by AI reviewers embodies a dangerous kind of technocratic self-parody. It reflects an ideology of techno-utilitarianism, in which efficiency and automation are celebrated even when they strip away the very human elements that make science legitimate.
So, to me, 'Agents4Science' is less a glimpse of the future than a satire of the present. A prime example of Poe’s law, where parody and extremism become indistinguishable. It reminds us that while AI can extend our capabilities, it cannot replace the intellectual labour through which knowledge becomes meaningful. Without humans, there is no science, just energy-intensive computation.
David Powers
Researcher in Computer and Cognitive Science, overseeing a wide range of projects in artificial intelligence, robotics and assistive technology
Agents4Science 2025 is an interesting experiment: a whole conference restricted to AI-written papers, reviewed by AIs.
Many authors now routinely use AI to write or rewrite their papers, including finding missed references. Conversely, conferences and publishers are now exploring how AI can be used to referee papers - and to establish which work is genuine and which is AI-hallucinated. I myself have found that AIs have hallucinated several papers my colleagues and I might have (and possibly should have) written. In one case, this happened in a grant application, and it turned out the applicant had asked the AI for further relevant papers from our group.
AI researchers are still trying to get a grip on this, and the Association for the Advancement of Artificial Intelligence (AAAI) this year introduced AI reviewing as a supplement to human reviewing (with authors seeing both anonymous human and AI reviews, as well as an AI-generated summary). AAAI-26 saw another massive increase in submissions, and this review system was tested in practice. But recognising AI-authored papers, distinguishing AI-hallucinated 'research' from real work, and assuring the ongoing quality of publication venues remain daunting challenges.
Agents4Science 2025 will provide an opportunity to see papers that are openly AI-written and openly AI-reviewed, and to analyse this data to inform the community's efforts to ensure research integrity and optimised processing in our new AI-driven age. This doesn't mean just identifying AI-generated papers, but exploring the scope for active human-AI teaming in solving important research problems, and for deploying AI help systems, advisors and chatbots. I look forward to seeing the data.
The acceptance rate of ~16% (48 out of 300+) is comparable to many journals and lower than most conferences. This promises to be an interesting and useful dataset for analysis, helping us understand the use of AI in the research world.
Hussein Abbass
Researcher from the School of Engineering and Information Technology at UNSW-Canberra
My 35 years of experience as an AI researcher have taught me that AI does not qualify for academic authorship.
Academic papers are a unique form of publication due to the expectations of innovation and discovery. Authorship is a sacred section of an academic publication. We must pause and ask: what has changed to demand authorship for an AI?
Academic authorship has four corners: contribution, integrity, accountability, and consent. AI can't be held accountable and does not have the will or agency to consent; current AI systems can't guarantee integrity without human oversight. Simply put, authorship of academic papers is a human responsibility and is inappropriate for an AI.
AI has been making scientific discoveries since its inception. Thanks to large language models, significant advances have been made that allow AI to partially or fully automate the scientific method in defined contexts, opening the possibility for AI to automatically generate academic papers.
Authorship is a different ball game! As an advocate for AI and as an AI psychologist who designs and diagnoses AI cognition and behaviour, I hold a sacred line I do not cross: the line that distinguishes humans from machines. Academic authorship is only meaningful for humans, not AI.
Ali Knott
Professor in Artificial Intelligence, Victoria University of Wellington (New Zealand)
I think the important thing is that this conference is recognised as an experiment. Its purpose (as I understand it) is to evaluate the possibility of AI authors and AI reviewers, rather than to advocate for AI systems in these roles. It is far too early for that kind of advocacy - I'm sure most researchers would agree. But evaluations and experiments are fine. It's in the nature of science to run experiments and evaluations. My main worry is that the conference will be understood (by journalists, or the public) as a substantive research conference, rather than as an experiment. That would be a misconception.
I'd like to point out the [New Zealand] Royal Society Te Apārangi's guidelines on the use of Generative AI in research, which I helped to develop. These guidelines are rather high-level - but they include a general principle which basically rules out the use of autonomous AIs as authors or reviewers. It's Principle 3.2.2: [human researchers] should 'be responsible for research outputs'. Specifically, 'GenAI systems are neither authors nor co-authors. Authorship implies agency and responsibility, and therefore lies with human researchers' (my emphasis).
This principle doesn’t preclude a conference of the kind being run, provided it’s understood as experimental in purpose. If the conference is understood as presenting and reviewing actual substantive research, it would contravene the guidelines we laid down.