Explainable AI: PREPARE at the First International Conference on AI Health

The PREPARE project was featured in an abstract submitted by Trilateral Research to the First International Conference on AI Health. Hosted by the International Academy, Research, and Industry Association (IARIA), the AIHealth 2024 conference aimed to encourage the exchange of ideas and results between academia and industry, driving advancements in AI and the health sciences. The publication explored the integration of Explainable AI (XAI) in healthcare, drawing insights from three EU-funded projects, including PREPARE.

A common issue experienced by physicians is the rise in workload pressures attributed to increased administrative burdens, which reduces patient interaction time. Healthcare facilities generate large amounts of data and maintain extensive patient records vital to providing quality care. However, processing this information in a timely manner requires enormous effort, diverting attention from predicting, evaluating, and monitoring patients’ health. AI offers a solution: it can process large volumes of data accurately, precisely, and rapidly. AI-based health systems draw on these advancements to predict patient health conditions, generate valuable analytics, and monitor patients.

While AI has revolutionised healthcare by improving diagnostics, enabling personalised treatments, and enhancing patient outcomes, XAI is essential to ensure transparent and ethical decision-making. Despite its numerous benefits, AI poses a significant risk to healthcare due to its lack of transparency and interpretability, which can undermine trust and hinder adoption. In the publication, titled “Can We Explain AI?: Explainable AI in the Health Domain as Told Through Three European Commission-funded Projects”, Dr Lem Ngongalah and Dr Robin Renwick emphasise the importance of understanding and addressing the complexities of XAI in healthcare in order to foster trust among stakeholders, adhere to ethical principles, and enhance patient care. The authors highlight how each of the three case studies, including the PREPARE project, prioritises transparency, accountability, and accessibility, demonstrating the potential of XAI to enhance decision-making. They also recognise the limitations of XAI, such as the absence of standardised approaches and the challenge of balancing AI complexity with transparency. The publication highlights the need for continuous refinement and adaptation to ensure the successful integration of AI across varied healthcare settings. These ongoing efforts are essential to optimise the benefits of AI while mitigating its risks and upholding ethical, transparent practices in healthcare decision-making.

PREPARE aims to pave the way for personalised and holistic rehabilitation and care by integrating real-world clinical datasets using innovative machine learning techniques, all while safeguarding sensitive patient data. Existing solutions lack transparency and user-friendliness, posing challenges for adoption and integration into clinical workflows. PREPARE seeks to overcome these challenges through clear language, user-friendly interfaces, and visual representations that enhance understanding of AI predictions. Additionally, it will provide comprehensive training for healthcare professionals, with an emphasis on plain language and visual aids to bridge the gap between technical processes and user understanding.

Interested in learning more? Register for our mailing list to stay up to date on insights and developments, and don’t forget to follow us on social media.

Authors: Melanie Rodriguez and Dr Lem Ngongalah, Trilateral Research