Researchers are exploring regulatory approaches to artificial intelligence in the healthcare sector.
Patient-centered care is a pillar of the American healthcare system. But as America’s population grows and ages, providers need new ways to handle ever-increasing workloads. Artificial intelligence (AI) offers the opportunity to improve the efficiency of healthcare administration, disease diagnosis and detection, drug development, and more.
Although AI is poised to transform business operations across many sectors of the economy, experts agree that it holds particular promise in the healthcare sector. AI has helped detect breast cancer in mammography screening programs and improved the diagnosis of anxiety disorders. It has even enabled personalized treatments based on an individual patient’s DNA. Efficiency in healthcare can do more than speed up processes: it can save lives.
So why are advances in healthcare AI causing fear?
Healthcare regulations protect patients from shoddy care, breaches of medical confidentiality, and inadequate technical standards, but untrusted and unregulated AI risks undermining these safeguards. Regulators, practitioners, and the public are reluctant to delegate control of patient health to technology. In healthcare, any mistake can lead to injury. AI systems also require access to vast amounts of patient information, and these data can incorporate biases and perpetuate inequalities.
Recognizing the growing role of AI in healthcare, the U.S. Department of Health and Human Services (HHS) established an AI Office in March 2021 and appointed its first AI Officer. The office’s responsibilities include designing a strategic approach to drive AI adoption, creating an AI governance structure, and coordinating HHS’s response to AI-related federal mandates.
Meanwhile, the U.S. Food and Drug Administration (FDA) released an action plan in early 2021 outlining the agency’s goals for encouraging innovation in AI and updating its proposed regulatory framework. The FDA has compiled a list of more than 500 AI and machine learning-enabled medical devices currently marketed in the United States. Yet the complex regulatory landscape remains underdeveloped.
In this week’s Saturday Seminar, experts examine the challenges of regulating AI in the healthcare sector, considering its applications, legal implications, and potential governance structures.
- In an article published in the Yale Journal of Health Policy, Law, and Ethics, Sara Gerke of Penn State Dickinson Law advocates a new regulatory framework for AI-based medical devices. Although the FDA has issued regulations and guidance governing the use of AI in healthcare, Gerke worries that “the FDA is not yet ready for AI in healthcare.” Fearing that current regulations will undermine patient safety and public trust, Gerke advocates statutory reform and changes to the FDA’s premarket pathway for AI-based medical devices. To ensure that AI-based devices are safe and effective, Gerke argues, the FDA needs to refocus its regulatory oversight and enforcement discretion.
- Using personal health information to advance AI innovations in healthcare raises privacy and equity challenges, say Jenifer Sunrise Winter and Elizabeth Davidson of the University of Hawaii at Manoa in an article published in Digital Policy, Regulation and Governance. Winter and Davidson explain that although AI technologies can harness personal health data to address important societal concerns and advance healthcare research, they simultaneously pose new threats to privacy and security given AI’s opaque algorithmic processes. AI thus requires improved regulatory techniques to preserve individuals’ right to control their own personal health information, Winter and Davidson argue.
- Despite AI’s promise to improve healthcare outcomes, algorithmic discrimination in medicine may exacerbate health disparities, say Sharona Hoffman and Andy Podgurski of Case Western Reserve University in a paper published in the Yale Journal of Health Policy, Law, and Ethics. According to Hoffman and Podgurski, AI defects may disadvantage certain groups of patients. To address discrimination and ensure the appropriate use of AI in medicine, Hoffman and Podgurski recommend creating a private cause of action for disparate impact cases, enacting algorithmic liability legislation, and incorporating algorithmic fairness into FDA oversight standards.
- In an article published in The SciTech Lawyer, Nicholson Price II of the University of Michigan Law School explores possibilities for regulating AI in healthcare to ensure the quality and safety of medical algorithms. Although experts debate whether the FDA should classify medical algorithms as medical devices, Price argues that the FDA has the authority to regulate healthcare AI, including complex algorithms. In addition, the FDA should analyze medical algorithms both before and after they reach the market to develop a monitoring and evaluation framework, according to Price.
- The sophisticated nature of AI software raises questions for regulators, according to Sandeep Reddy and several co-authors in an article published in the Journal of the American Medical Informatics Association. Because the decision-making processes of algorithms are often unexplainable and dynamic, it is difficult to determine at what stage regulators should monitor and evaluate AI-enabled services: approval, introduction, or deployment. Reddy’s team suggests a governance model emphasizing “fairness, transparency, reliability and accountability.” For example, regulators can increase fairness by instituting a data governance oversight board to monitor bias in AI software. To improve reliability, regulators could implement policies that educate both healthcare professionals and patients about AI and require healthcare professionals to obtain informed consent from patients before using AI-based healthcare software.
- In an article published in PLOS Digital Health, Trishan Panch of the Harvard T.H. Chan School of Public Health and several co-authors advocate a distinct set of regulations for clinical artificial intelligence. Although regulation is necessary to ensure the safety and fairness of clinical AI, Panch and his co-authors argue that a centralized regulatory system alone, such as FDA regulation, fails to address the ways algorithms can fail and may increase disparities. The Panch team instead advocates a hybrid model of regulation, in which oversight of most clinical AI applications is delegated to local health systems and the riskiest applications are regulated centrally.