Artificial Intelligence (AI) is playing an increasingly prominent role in the Irish healthcare system, particularly in areas such as medical imaging and diagnostics.
In response to these technological developments, the Medical Council of Ireland published a Principle-Based Position Statement (Statement) on the Use of AI in Clinical Decision-Making on 21 October 2025. The Statement outlines the core values and ethical considerations that should guide doctors in the responsible use of AI in clinical settings, and follows a briefing paper published in June 2025. It is intended to be read in tandem with the Guide to Professional Conduct and Ethics for Registered Medical Practitioners (9th Edition, 2024) and relevant government guidelines on AI.
The Medical Council supports the integration of AI into clinical settings but emphasises that “doctors ultimately remain responsible for their clinical decisions in the treatment of their patients.” It adds that the use of AI must be “underpinned by an unwavering commitment to patient safety, professional integrity, and public trust.” The Statement reflects the Council’s clear stance that doctors’ knowledge and expertise must continue to guide clinical decision-making, even as AI becomes more prominent.
The Statement is structured around five key principles:
1. Professional Accountability and Clinical Judgment
This principle makes clear that AI should “augment rather than replace, a doctor’s decision making,” reinforcing the doctor’s duty to critically assess and interpret AI-generated recommendations. The Council stresses that “doctors ultimately remain responsible for their clinical decisions” and that their “final authority should be their own critical thinking, reasoning and judgement.” Maintaining comprehensive records is crucial for demonstrating accountability.
2. Transparency, Communication, and Shared Decision-Making
The Statement provides guidance on disclosing the use of AI to patients, noting that they should be informed when AI is involved in diagnosing, treating, or managing their care. Doctors are encouraged to use professional judgment and proportionality to determine the appropriate level of information to share, depending on how and when AI is used in the care process.
3. Equity, Ethics, and the Prevention of Bias
The Statement states that doctors have a duty to promote fairness and equity in healthcare and must ensure that the use of AI supports, rather than undermines, these principles. It warns against the risk of hidden biases in AI systems and calls on doctors to advocate for tools that are “inclusive, representative, and sensitive to the diverse needs of the populations they serve.”
4. Confidentiality, Data Protection, and Information Security
The Council affirms that safeguarding patient confidentiality and ensuring the integrity of patient data must remain a top priority. Any application of AI must adhere to applicable data protection laws and maintain the trust patients place in the medical profession to protect their personal information.
5. Education, Competence, and Continuous Professional Development
As AI becomes more integrated into clinical practice, doctors have a responsibility to pursue appropriate training to understand its capabilities and limitations. They should develop a working knowledge of how AI tools function, when their use is appropriate, and the ethical issues they may raise. Ongoing professional development should reflect the rapidly evolving digital health landscape and include opportunities to build AI literacy.
Conclusion
The Medical Council has confirmed that this guidance will be “periodically reviewed in line with advancements in AI technology and regulatory developments.” As AI continues to reshape healthcare delivery, the Statement provides a timely and principled framework to ensure that innovation is aligned with ethical, legal, and professional standards.
Contributed by: Katie O’Reilly