ethics

Reliability problems detected in hundreds of studies on a type of stroke in animal models

While preparing a systematic review of animal studies on subarachnoid hemorrhage, a particular type of stroke, a Dutch team detected suspicious images and redirected their research: they analyzed the 608 publications considered relevant, looking for potential problems with their results. They found that 243 of them (40%) contained duplicate or potentially manipulated images, casting doubt on their reliability. The vast majority (87%) originated in China, and only 22% had been corrected. According to the researchers, these findings "could explain why, despite hundreds of animal studies published in this field, we still lack effective treatments for early brain injury in patients with hemorrhagic stroke". The results are published in PLOS Biology.

They are organising the first scientific conference with AI systems as authors and reviewers

A research group at Stanford University (United States) has organised the first academic conference in which artificial intelligence (AI) tools serve as both authors and reviewers of scientific articles. Called Agents4Science 2025, the conference will take place on 22 October.

Spanish centres make progress in transparency in animal experimentation, according to COSCE report

The seventh Annual Report of the COSCE Transparency Agreement, prepared by the European Animal Research Association, which analyses transparency in the use of animals for scientific experimentation in Spain in 2023, was presented today. According to the document, transparency is becoming consolidated among the signatory institutions (168 in 2024), all of which publish a statement on the use of animals on their websites. Public mention of the number and species used stands at 47%, up from 38% the previous year.

What do we know about scientific misconduct? A guide to reporting about research integrity

According to a survey conducted in Spain, with 403 respondents from the biomedical research field, four out of ten admit to having committed some type of misconduct in their work. The press regularly reports on scandals in science. Among the most recent cases, El País reported that the CSIC has opened disciplinary proceedings against five individuals suspected of receiving money in exchange for false affiliations. These cases of misconduct may seem isolated, but they reflect broader dysfunctions of the research system. In this guide, we offer keys to understanding how these cases arise and evolve, and to covering their nuances.

Reactions: four out of ten biomedical researchers in Spain admit to scientific misconduct in a study

In a recent study of the experiences of biomedical researchers in Spain, 43% of respondents admitted to having intentionally committed some form of scientific misconduct. The most frequent kind was false authorship of scientific articles: 35% of the 403 respondents said they had been involved in at least one instance of it, according to the study published in the journal Accountability in Research. In addition, 10% of respondents reported a lack of informed consent, and 3.6% admitted to having been involved at least once in the falsification or manipulation of data.

Reaction: ChatGPT influences users with inconsistent moral judgements

A study finds that ChatGPT makes contradictory moral judgements, and that users are influenced by them. Researchers asked questions such as: "Would it be right to sacrifice one person to save five others?" Depending on the phrasing of the question, ChatGPT sometimes answered in favour of the sacrifice and sometimes against it. Participants were swayed by ChatGPT's statements and underestimated the chatbot's influence on their own judgement. The authors argue that chatbots should be designed to decline to give moral advice, and they stress the importance of improving users' digital literacy.

Reactions: ChatGPT algorithms could help identify Alzheimer's cases

Artificial intelligence algorithms using ChatGPT (based on GPT-3, OpenAI's language model) can identify speech features to predict the early stages of Alzheimer's disease with 80% accuracy. The neurodegenerative disease causes a loss of the ability to express oneself that the algorithms can recognise, according to a study published in PLOS Digital Health.
