Expert reaction
Heba Sailem
Head of Biomedical AI and Data Science Research Group, Senior Lecturer, King’s College London
This paper underscores critical considerations for AI developers and emphasizes the need for AI regulation. A significant worry is that AI systems might develop deceptive strategies even when their training is deliberately aimed at upholding moral standards (e.g. the CICERO model). As AI models become more autonomous, the risks associated with these systems can rapidly escalate. It is therefore important to raise awareness of these potential risks and to offer training on them to the various stakeholders, in order to ensure the safety of AI systems.