
Regulatory perspective of prediction models

AI-driven predictive models hold enormous potential for the world of rehabilitation. They can enable better patient care, empower doctors to make better-informed decisions, and lead to improved health outcomes. However, like all powerful tools, they need to be used responsibly. By ensuring that these predictive models adhere to the relevant legal framework, we can harness the benefits of the technology while minimizing its risks.

Medical Devices: No Longer Just Hardware

As technology has advanced, the very definition of a medical device has expanded. Today, cutting-edge software and AI tools can be classified as medical devices under the recent Medical Devices Regulation (MDR). This is more than just a label: it means these tools must meet stringent quality and safety standards before they touch patient care. The era in which software was side-lined is over; it is central to modern healthcare and should therefore meet the same rigorous standards as any other medical equipment.

AI and the Medical Field: a Complex Regulatory Landscape

As AI seamlessly integrates into the medical sphere, the already complex set of rules that govern the field becomes even more intricate. AI is not just a novel medical tool; it is an enabler of advances in patient care and healthcare management. However, this shift does not come without its share of challenges.

Historically, medical regulations were conceived in a world where ‘medical tools’ meant tangible devices such as stethoscopes or MRI machines. Now that intangible algorithms are increasingly used for diagnosis, treatment, and rehabilitation, how do these older rules apply?

Quite often, they mesh with new legislation, such as the EU’s upcoming AI Act. The result can be confusing. These technologies present challenges both in their own sphere (as standalone technologies) and in their application (enabling better patient care). This section discusses how the plethora of applicable legislation interacts, highlighting the need for a structured road towards compliance.

Navigating the AI Act

The AI Act is not just legislation; it is a compass guiding the responsible development and application of AI technologies. As Europe’s legal framework on AI, it forces everyone to play by the same rules, ensuring that AI systems are safe, transparent, and respectful of existing rights. For predictive models in rehabilitation, this would mean ensuring these systems (see the sketch after this list):

  • Are transparent in their predictions.
  • Don’t discriminate between patients.
  • Are reliable enough to be trusted with patient care.
  • Protect patient data and privacy.
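
To make the non-discrimination point concrete, development teams often compare model performance across patient subgroups as part of validation. The sketch below is purely illustrative: the column names, the metric, and the threshold are assumptions made for the example, not requirements drawn from the AI Act.

```python
# Illustrative sketch only: compare a rehabilitation prediction model's
# accuracy across patient subgroups to flag possible discrimination.
# Column names, metric, and threshold are assumed for the example.
import pandas as pd
from sklearn.metrics import balanced_accuracy_score

def subgroup_performance(df: pd.DataFrame, group_col: str,
                         y_true: str = "outcome", y_pred: str = "prediction") -> pd.Series:
    """Balanced accuracy per subgroup (e.g. per sex or age group)."""
    return df.groupby(group_col).apply(
        lambda g: balanced_accuracy_score(g[y_true], g[y_pred])
    )

def flag_performance_gaps(df: pd.DataFrame, group_cols: list[str],
                          max_gap: float = 0.05) -> dict[str, float]:
    """Report subgroup columns whose best-to-worst accuracy gap exceeds max_gap."""
    gaps = {}
    for col in group_cols:
        scores = subgroup_performance(df, col)
        gap = float(scores.max() - scores.min())
        if gap > max_gap:
            gaps[col] = round(gap, 3)
    return gaps

# Example usage on a hypothetical evaluation set:
# gaps = flag_performance_gaps(eval_df, ["sex", "age_group"])
# if gaps: print("Review needed, performance gaps:", gaps)
```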

The AI Act uses a risk-based classification system to distinguish between applications of AI. For the medical sector, the European Union places specific emphasis on requirements for oversight and transparency when using AI tools. The categories used are: Unacceptable Risk, High Risk, and Limited or Low Risk applications.

Whilst a risk-based system, in comparison to a more traditional ‘rigid’ regulatory approach, theoretically allows for more future-proof and adaptive regulation, it is not without its shortcomings. The categorization of complex AI systems into just three risk categories fails to consider the diverse range of their applications. For instance, any AI model classified as a medical device is deemed high-risk AI. However, this classification scheme does not align directly with the MDR’s system, which differentiates between Class I, Class IIa, Class IIb, and Class III devices. As a result, even if an AI model is categorized as high-risk AI, it might still fall under Class IIa or another class within the MDR’s framework. The precise implications of the AI Act remain unclear, pending further EU decisions.
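
One way to see the mismatch is to lay the two schemes side by side. The mapping below is a deliberately simplified sketch of how MDR device classes relate to the AI Act's risk tiers as described above; it is illustrative only, not a statement of how the final legislation will be applied.

```python
# Simplified, illustrative mapping only: the AI Act's risk tiers do not map
# one-to-one onto MDR device classes, which is exactly the mismatch noted above.
MDR_CLASSES = ["Class I", "Class IIa", "Class IIb", "Class III"]
AI_ACT_TIERS = ["Unacceptable Risk", "High Risk", "Limited or Low Risk"]

def ai_act_tier_for_medical_device(mdr_class: str) -> str:
    """Assumed simplification: AI that qualifies as a medical device is treated
    as high-risk under the AI Act, regardless of its MDR class."""
    if mdr_class not in MDR_CLASSES:
        raise ValueError(f"Unknown MDR class: {mdr_class}")
    return "High Risk"

# A Class IIa prediction model and a Class III device land in the same AI Act tier:
# print(ai_act_tier_for_medical_device("Class IIa"))  # "High Risk"
# print(ai_act_tier_for_medical_device("Class III"))  # "High Risk"
```
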
Further regulatory considerations

Privacy and Data Protection
Every piece of medical data captures a chapter of someone’s life story. While the General Data Protection Regulation (GDPR) stands as Europe’s protector of these personal narratives, the challenge is in the details of its implementation.

It is crucial to note that, under the GDPR, the processing of health data is in principle prohibited. This data can only be processed under exceptional circumstances, emphasizing the sanctity and sensitivity of health-related information. Where data is processed, consent must be detailed, explicit, and specific to the purpose. For example, data initially collected for treatment cannot be repurposed for training prediction models without clear and separate consent. The overarching principle should be that training data remains devoid of any personal identifiers, particularly medical specifics.
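
In practice, this principle often starts with stripping direct identifiers from records before they are used to train a model. The snippet below is a minimal sketch of that idea, assuming a tabular dataset with hypothetical column names; genuine de-identification or anonymization under the GDPR is considerably more involved (indirect identifiers, re-identification risk, and so on).

```python
# Minimal sketch: remove direct identifiers from a training table before model
# development. Column names are hypothetical; GDPR-compliant anonymization must
# also consider indirect identifiers and re-identification risk.
import pandas as pd

DIRECT_IDENTIFIERS = ["patient_name", "national_id", "date_of_birth",
                      "address", "phone", "email"]

def strip_direct_identifiers(records: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the dataset without columns that directly identify a person."""
    present = [col for col in DIRECT_IDENTIFIERS if col in records.columns]
    return records.drop(columns=present)

# Example usage on a hypothetical rehabilitation dataset:
# training_df = strip_direct_identifiers(raw_df)
# assert not set(DIRECT_IDENTIFIERS) & set(training_df.columns)
```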

Ethical Considerations
Medical Ethical Review Committees act as moral compasses, ensuring clinical research respects human dignity and rights. With AI, this also means ensuring that algorithms are transparent and understandable, free from biases, and serve the best interests of patients and professionals.

Challenges of Transparency and ‘Black Box’ AI

While AI systems are celebrated for their capacity to decipher complex patterns and generate results, their inner workings, and especially how they arrive at certain outcomes, remain difficult to explain. This concept of ‘black box’ AI is particularly challenging in the medical field, where transparency and accountability are central.

Firstly, transparency in the decision-making process is required. Patients, practitioners, and regulators need confidence in an AI system’s decisions, and that is contingent on understanding how these decisions are made. The so-called ‘black box’ dilemma thwarts this transparency, making it hard to ensure and prove compliance.
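
Model-agnostic explanation techniques are one way teams try to address this. The sketch below uses permutation importance from scikit-learn to show, for an assumed rehabilitation-outcome model on stand-in data, which input features most influence its predictions; it is an illustrative starting point, not a complete transparency solution.

```python
# Illustrative sketch: use permutation importance to surface which inputs a
# trained prediction model relies on most. The data, model, and feature names
# are stand-ins; this supports, but does not by itself establish, transparency.
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic stand-in for a validated rehabilitation model and its evaluation set.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = ["age", "baseline_mobility", "therapy_hours",
                 "comorbidity_count", "pain_score", "days_since_event"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much shuffling them degrades performance.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```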

Moreover, many AI models, especially neural networks, can continue to evolve as they learn from new data. If an AI system updates its knowledge base and subsequently alters its decision-making process, it can be argued that the system has fundamentally changed. In that case, there is a paradox between alleged compliance at the time of implementation (static) and any subsequent changes (or even improvements) the system undergoes, starkly contrasting with the legal principles of reproducibility and standardization. Consequently, the clinical trials used to prove a medical device’s worth can establish the effectiveness and safety of medical prediction models at a specific point in time during validation. However, this clinical evidence may become obsolete if the model’s self-improving algorithms subsequently yield results that differ from those observed during the initial studies.
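
One pragmatic response is to freeze and fingerprint the exact model artefact that went through clinical validation, and to verify the deployed model against that fingerprint before use. The sketch below assumes a serialized model file and a hypothetical deployment check; it illustrates the idea of tying clinical evidence to a specific model version, not a prescribed regulatory procedure.

```python
# Sketch: tie clinical evidence to one specific model version by fingerprinting
# the validated artefact and refusing to serve a model whose hash differs.
# File paths and the start-up hook are hypothetical examples.
import hashlib
from pathlib import Path

def model_fingerprint(model_path: Path) -> str:
    """SHA-256 hash of the serialized model file that underwent validation."""
    return hashlib.sha256(model_path.read_bytes()).hexdigest()

def verify_deployed_model(model_path: Path, validated_fingerprint: str) -> None:
    """Raise if the deployed artefact is not the one covered by the clinical evidence."""
    if model_fingerprint(model_path) != validated_fingerprint:
        raise RuntimeError(
            "Deployed model differs from the clinically validated version; "
            "re-validation (or a regulated change procedure) is needed."
        )

# Example usage at start-up of a hypothetical prediction service:
# verify_deployed_model(Path("models/rehab_model_v1.pkl"), VALIDATED_SHA256)
```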

Lastly, the existing legal architecture is often underpinned by the principle of accountability. When an outcome occurs, particularly an adverse one, there’s a need to trace back to the cause. Attributing responsibility in ‘black box’ AI systems, which continuously learn, is challenging.

Author: Anne Sophie, NAALA