TY - JOUR
T1 - Artificial intelligence in healthcare
T2 - Tailoring education to meet EU AI-Act standards
AU - Bignami, Elena
AU - Darhour, Luigino Jalale
AU - Buhre, Wolfgang
AU - Cecconi, Maurizio
AU - Bellini, Valentina
N1 - Publisher Copyright:
© 2025 Fellowship of Postgraduate Medicine
PY - 2025/12
Y1 - 2025/12
N2 - The integration of Artificial Intelligence (AI) in Intensive Care Units (ICUs) has the potential to transform critical care by enhancing diagnosis, management, and clinical decision-making. Generative and Predictive AI technologies offer new opportunities for personalized care and risk stratification, but their implementation must prioritize ethical standards, patient safety, and the sustainability of care delivery. With the EU AI-Act entering into force in February 2025, a structured and responsible adoption of AI is now imperative. This article outlines a strategic framework for ICU AI integration, emphasizing the importance of a formal declaration of intent by each unit, detailing current AI use, implementation plans, and governance strategies. Central to this approach is the development of tailored AI education programs adapted to four distinct professional profiles, ranging from experienced clinicians with limited AI knowledge to new intensivists with strong AI backgrounds but limited clinical experience. Training must foster critical thinking, contextual interpretation, and a balanced relationship between AI tools and human judgment. A multidisciplinary support team should oversee ethical AI use and continuous performance monitoring. Ultimately, aligning regulatory compliance with targeted education and practical implementation could enable a safe, effective, and ethically grounded use of AI in intensive care. This balanced approach would support a culture of transparency and accountability, while preserving the central role of human clinical reasoning and improving the overall quality of ICU care.
AB - The integration of Artificial Intelligence (AI) in Intensive Care Units (ICUs) has the potential to transform critical care by enhancing diagnosis, management, and clinical decision-making. Generative and Predictive AI technologies offer new opportunities for personalized care and risk stratification, but their implementation must prioritize ethical standards, patient safety, and the sustainability of care delivery. With the EU AI-Act entering into force in February 2025, a structured and responsible adoption of AI is now imperative. This article outlines a strategic framework for ICU AI integration, emphasizing the importance of a formal declaration of intent by each unit, detailing current AI use, implementation plans, and governance strategies. Central to this approach is the development of tailored AI education programs adapted to four distinct professional profiles, ranging from experienced clinicians with limited AI knowledge to new intensivists with strong AI backgrounds but limited clinical experience. Training must foster critical thinking, contextual interpretation, and a balanced relationship between AI tools and human judgment. A multidisciplinary support team should oversee ethical AI use and continuous performance monitoring. Ultimately, aligning regulatory compliance with targeted education and practical implementation could enable a safe, effective, and ethically grounded use of AI in intensive care. This balanced approach would support a culture of transparency and accountability, while preserving the central role of human clinical reasoning and improving the overall quality of ICU care.
KW - AI-Act
KW - Artificial intelligence
KW - Education
KW - Policy
UR - https://www.scopus.com/pages/publications/105011830612
U2 - 10.1016/j.hlpt.2025.101078
DO - 10.1016/j.hlpt.2025.101078
M3 - Article
AN - SCOPUS:105011830612
SN - 2211-8837
VL - 14
JO - Health Policy and Technology
JF - Health Policy and Technology
IS - 6
M1 - 101078
ER -