Artificial Intelligence in Clinical Practice: Ethical Dilemmas and Responsible Implementation Pathways

A recent opinion paper outlines key ethical pillars and strategic directions to ensure equitable, transparent, and patient-centered AI in medicine

Medical Affairs

5 min

18 Jul 2025

The integration of artificial intelligence (AI) into clinical practice has fueled significant improvements in diagnostic accuracy and care efficiency. However, the opinion article by Weiner et al., published in PLOS Digital Health (2025), emphasizes that without addressing ethical considerations, such progress may reinforce existing healthcare inequalities. The paper highlights five core domains: justice and fairness, trust and transparency, patient consent and privacy, accountability, and patient-centered care.

Justice and fairness: AI systems often replicate or amplify biases embedded in historical data. A notable example is a widely used algorithm that underestimated the health needs of Black patients because it used healthcare costs as a proxy for medical need; since less had historically been spent on Black patients at the same level of illness, the model scored them as healthier than they actually were. The authors advocate for representative data, inclusion of social determinants of health, and initiatives such as open-source models to mitigate these inequities.
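To make the proxy mechanism concrete, the following sketch (illustrative only; the column names, referral threshold, and data are hypothetical, not taken from the cited study) runs a simple group-level audit: if measured medical need is similar across groups but the share of patients the model refers for extra care is not, the score is likely encoding the proxy rather than need.

```python
# Minimal sketch of a group-level fairness audit; all names are hypothetical.
import pandas as pd

def audit_by_group(df: pd.DataFrame, score_col: str, need_col: str,
                   group_col: str, top_frac: float = 0.2) -> pd.DataFrame:
    """Compare predicted risk against an independent measure of medical need per group."""
    cutoff = df[score_col].quantile(1 - top_frac)
    referred = df[df[score_col] >= cutoff]  # patients the model flags for extra care
    return pd.DataFrame({
        "mean_predicted_risk": df.groupby(group_col)[score_col].mean(),
        "mean_actual_need": df.groupby(group_col)[need_col].mean(),
        "share_of_referrals": referred[group_col].value_counts(normalize=True),
    })

# Similar mean_actual_need but very different share_of_referrals across groups
# suggests the score is tracking the proxy (e.g., past costs) rather than need.
```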

Trust and transparency: Trustworthy AI demands explainable and interpretable models, especially in healthcare. Yet, the “black box” nature of deep learning hinders this. Transparency must extend across data, algorithms, processes, and outcomes. Clinicians should be able to communicate the reasoning and limitations of AI-driven decisions to patients.
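One way to probe a "black box" model after the fact is with post-hoc interpretability tools. The snippet below is a minimal sketch of one such technique, permutation importance from scikit-learn, run on synthetic data with hypothetical feature names; the paper does not endorse any specific method.

```python
# Minimal sketch: permutation importance as a post-hoc interpretability check.
# Data and feature names are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))  # stand-ins for age, HbA1c, systolic BP
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, imp in zip(["age", "HbA1c", "systolic_BP"], result.importances_mean):
    print(f"{name}: drop in accuracy when shuffled = {imp:.3f}")
```

Feature-level summaries like these give clinicians concrete language ("the model leaned heavily on HbA1c") when explaining an AI-supported decision and its limitations to a patient.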

Consent and privacy: AI’s reliance on large datasets raises concerns around data ownership and the scope of informed consent. Patients increasingly demand clarity, especially when AI replaces human judgment. Continuous consent models and opt-out options are critical, but these must be reconciled with AI’s need for ongoing data updates.
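As a rough illustration of how continuous consent could be operationalized, the sketch below (built on a hypothetical record schema) re-checks consent status before every retraining cycle, so opt-outs are honored even as the model keeps ingesting new data.

```python
# Minimal sketch (hypothetical schema): honoring opt-outs at each retraining cycle.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PatientRecord:
    patient_id: str
    consent_given: bool
    consent_withdrawn_on: Optional[date]  # None while consent remains active
    features: dict

def eligible_for_training(records: list[PatientRecord]) -> list[PatientRecord]:
    """Keep only records whose consent is active and has never been withdrawn."""
    return [r for r in records if r.consent_given and r.consent_withdrawn_on is None]
```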

Accountability and automation bias: Assigning responsibility when AI-driven decisions go wrong is complex. The paper warns of automation bias—clinicians blindly trusting AI recommendations. Multi-stakeholder accountability frameworks are essential to ensure ethical safeguards and protect patient safety.
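One practical safeguard that fits such frameworks is a decision audit trail. The sketch below (field names and storage are assumptions for illustration) records each AI recommendation next to the clinician's final decision, so agreement and override rates can be reviewed later; a near-uniform agreement rate can itself flag automation bias.

```python
# Minimal sketch: pairing each AI recommendation with the clinician's final call.
import csv
from datetime import datetime, timezone

def log_decision(path: str, case_id: str, ai_recommendation: str,
                 clinician_decision: str, rationale: str) -> None:
    """Append one audit row per decision for later multi-stakeholder review."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            case_id,
            ai_recommendation,
            clinician_decision,
            clinician_decision == ai_recommendation,  # did the clinician follow the AI?
            rationale,
        ])
```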

Patient-centered and empathetic care: AI must complement, not replace, human clinicians. The paper notes that patients are less likely to trust or follow AI-only decisions due to perceived lack of empathy. Developers should design AI that adapts to individual needs and communicates clearly and sensitively.

The article also points to supporting frameworks such as SHIFT (Sustainability, Human-centeredness, Inclusiveness, Fairness, and Transparency) and to algorithmovigilance, the continuous post-deployment monitoring of algorithms, modeled on pharmacovigilance.
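In practice, algorithmovigilance amounts to routine checks on a deployed model's behavior. The sketch below is one assumed, minimal form of it: the latest batch of cases is scored, discrimination (AUC) is compared with a baseline, and a drop beyond a set tolerance triggers human review, much as a safety signal would in pharmacovigilance. The metric, threshold, and alerting are illustrative choices, not prescriptions from the paper.

```python
# Minimal sketch of post-deployment monitoring ("algorithmovigilance").
from sklearn.metrics import roc_auc_score

def monitor_batch(y_true, y_score, baseline_auc: float, tolerance: float = 0.05) -> bool:
    """Return True (and alert) if performance on recent cases drifts below baseline."""
    current_auc = roc_auc_score(y_true, y_score)
    degraded = current_auc < baseline_auc - tolerance
    if degraded:
        print(f"ALERT: AUC {current_auc:.3f} vs. baseline {baseline_auc:.3f} "
              "- escalate for human review and possible model update.")
    return degraded
```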

Current regulations, including the GDPR and FDA frameworks, represent progress but remain insufficient. Most focus on post-hoc review and lack early ethical validation. Only about a third of FDA-cleared AI devices underwent external validation. The authors call for diverse development teams, inclusion of patient advocates, and long-term impact studies.
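For readers less familiar with the term, external validation means evaluating a model, unchanged, on data from an institution or population it was not developed on. The sketch below illustrates the idea with scikit-learn; the cohorts and how they are loaded are assumptions.

```python
# Minimal sketch of external validation: train on one site, evaluate on another.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def external_validation(X_dev, y_dev, X_external, y_external) -> float:
    """Fit on the development cohort; report discrimination on the external cohort."""
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    return roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
```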

Editorial note: This content was developed with the support of artificial intelligence technologies to optimize the writing and structuring of information. All material was carefully reviewed, validated, and supplemented by human experts before publication, ensuring scientific accuracy and compliance with good editorial practice.

#AIinHealthcare #EthicalAI #HealthEquity #AlgorithmBias #DigitalHealth

Artificial Intelligence

Sources

  • Weiner EB, Dankwa-Mullan I, Nelson WA, Hassanpour S. Ethical challenges and evolving strategies in the integration of artificial intelligence into clinical practice. PLOS Digit Health. 2025;4(4):e0000810.

Written by Medical Affairs