Abstract: AI holds the potential to transform healthcare delivery by improving decision-making and operational efficiency. However, the ethical, governance, and operational issues associated with AI-enabled applications must be addressed before such systems can be safely and effectively deployed. AI-enabled healthcare presents unique challenges for beneficence, non-maleficence, and patient autonomy. The AI development lifecycle often lies outside the control of healthcare institutions, and the outputs of AI systems are not always properly understood. Consequently, the true impact of AI on patient outcomes, equity, and justice cannot be adequately evaluated.
Governance frameworks play an essential role in establishing an initial level of assurance. A well-conceived governance framework, even if imperfectly implemented, can help reduce harm and increase public trust. Institutional review boards (IRBs), in combination with government-sponsored risk management and safety assurance measures, can address most of the requirements of safe and effective product regulation. These bodies are best placed to prevent harm arising from the use of AI-enabled interventions. The next operational steps focus on building the evidence base needed to inform and guide healthcare AI. Proactive information sharing, together with proper documentation and knowledge capture, can mitigate some of the consequences of working without feedback or clinical validation.
Keywords: Artificial intelligence, health, healthcare ethics, ethical theory, beneficence, non-maleficence, patient autonomy.
DOI: 10.17148/IJARCCE.2022.111260
[1] Shashikala Valiki, "Ethical and Governance Challenges in Artificial Intelligence-Enabled Healthcare," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2022.111260.