Open University of Catalonia (UOC)
Lead researcher of the AI and Data for Society group at the UOC
Lecturer in International Relations at the Faculty of Law and Political Science of the Universitat Oberta de Catalunya (UOC)
Lecturer in the Department of Health Sciences at the Open University of Catalonia (UOC), member of the NUTRALiSS Nutrition, Food, Health and Sustainability Research Group at the UOC, coordinator of the Lifestyle Working Group of the Spanish Diabetes Society
Lecturer of Psychobiology and Neuroscience at the Faculty of Psychology and Educational Sciences of the Universitat Oberta de Catalunya (UOC)
Science and Technology Studies Professor
Senior Researcher in Social Sciences, IN3/UOC
Academic Director of the Master's Degree in Business Intelligence and Big Data at the Open University of Catalonia (UOC) and Adjunct Professor at IE Business School
Researcher at the Behavioural Design Lab at the UOC eHealth Centre, member of the board of directors of the Public Health Society of Catalonia and the Balearic Islands, and vice-chairman of the National Committee for the Prevention of Smoking
Associate professor in the Faculty of Health Sciences at the Open University of Catalonia (UOC)
Co-director of the Cognition and Language Research Group
Large language models (LLMs) do not reliably recognise people's false beliefs, according to research published in Nature Machine Intelligence. The study posed around 13,000 questions to 24 such models – including DeepSeek and GPT-4o, the model behind ChatGPT – asking them to respond to a series of facts and personal beliefs. The most recent LLMs were more than 90% accurate at verifying whether factual statements were true or false, but they struggled to distinguish true from false beliefs when responding to sentences beginning with 'I believe that'.
In online debates, large language models (LLMs) – artificial intelligence systems such as ChatGPT – are more persuasive than humans when they can personalise their arguments based on their opponents' characteristics, says a study published in Nature Human Behaviour which analysed GPT-4. The authors urge researchers and online platforms to 'seriously consider the threat posed by LLMs fuelling division, spreading malicious propaganda and developing adequate countermeasures'.
According to a meta-analysis published in Nature Human Behaviour, the widespread use of digital technology may be associated with lower rates of cognitive decline in people over the age of 50. The results of the study — which analysed 57 studies involving more than 400,000 people with an average age of 69 — seem to contradict the hypothesis that the daily use of technology weakens cognitive ability.
An artificial intelligence (AI) model developed by the company Meta is capable of translating speech and text, including direct speech-to-speech translation, from up to 101 languages in some cases. According to the research team, this model – called SEAMLESSM4T – can pave the way for fast universal translation, 'with resources to be made publicly available for non-commercial use'. The work is published in the journal Nature.
The Royal Swedish Academy of Sciences has awarded the 2024 Nobel Prize in Physics to researchers John J. Hopfield and Geoffrey E. Hinton for foundational discoveries that enable machine learning with artificial neural networks. This technology, inspired by the structure of the brain, underpins what we now call 'artificial intelligence'.
Large language models – artificial intelligence (AI) systems based on deep learning, such as the generative AI behind ChatGPT – are not as reliable as users expect. This is one of the conclusions of an international study published in Nature involving researchers from the Polytechnic University of Valencia. According to the authors, in certain respects reliability has worsened in the most recent models compared with earlier ones, for example in GPT-4 relative to GPT-3.
The sperm of men infected with high-risk genotypes of the human papillomavirus (HPV) suffers more damage from oxidative stress and has a weaker immune response, which can lead to reduced fertility. This is one of the conclusions of a study published in the journal Frontiers in Cellular and Infection Microbiology. The research compared the semen of 20 adults infected with high-risk genotypes, seven infected with low-risk genotypes, and 43 adults without infections.
Using artificial intelligence (AI)-generated datasets to train future generations of machine learning models can contaminate their results, a phenomenon known as 'model collapse', according to a paper published in Nature. The research shows that, within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.
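The mechanism behind model collapse can be illustrated with a toy sketch (this is an illustrative analogy, not the paper's actual method): here the 'model' is simply a normal distribution fitted to its training data, and each new generation is trained only on synthetic samples drawn from the previous generation's model. Over many generations the fitted distribution narrows and the diversity of the original data is lost.

```python
import random
import statistics

random.seed(0)

def train_generation(data):
    # "Train" a trivial model: fit a normal distribution to the data.
    return statistics.fmean(data), statistics.pstdev(data)

# Generation 0 trains on real, human-generated data.
data = [random.gauss(0.0, 1.0) for _ in range(50)]

stds = []
for generation in range(2000):
    mu, sigma = train_generation(data)
    stds.append(sigma)
    # Each later generation trains only on synthetic samples
    # drawn from the previous generation's fitted model.
    data = [random.gauss(mu, sigma) for _ in range(50)]

print(f"fitted std at generation 0:    {stds[0]:.3f}")
print(f"fitted std at generation 1999: {stds[-1]:.3g}")
```

Because every generation sees only a finite sample of the previous model's output, estimation error compounds and the fitted spread drifts towards zero: the later 'models' describe almost none of the variety present in the original data.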
The President of the Spanish Government, Pedro Sánchez, announced last night, at the welcome dinner of the GSMA Mobile World Congress (MWC) Barcelona 2024, the construction of a foundational artificial intelligence language model, trained in Spanish and the co-official languages, with open and transparent code, and with the intention of involving Latin American countries. For its development, the Government will work with the Barcelona Supercomputing Center and the Spanish Supercomputing Network, together with the Royal Spanish Academy and the Association of Spanish Language Academies.
Access to safe public spaces in which to meet, along with employment, education and public health, are among the main measures recommended for making cities friendlier to the mental health of adolescents and young people. The analysis, based on surveys of 518 people in several countries, is published in the journal Nature and is intended to serve as a guide for urban planning policies that reduce inequalities and address young people's needs.