In April, the Food and Drug Administration (FDA) issued a discussion paper, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD).” The paper represents the FDA’s response to the growing number of medical device manufacturers using artificial intelligence and machine learning technologies to continuously improve their products. On June 3, PRIM&R submitted comments in response to the discussion paper, thanking the FDA for its consideration of the public health implications of these technologies, but also cautioning that any new regulatory approach in this area must address the protection of individuals whose personal information and data are used in the creation and ongoing testing of these technologies.
The FDA’s discussion paper rightly addresses how continuous learning by AI/ML may affect a device’s performance, safety, and risk profile, and how effective safeguards can be ensured. The agency believes that its current medical device regulatory structure is not suited to adaptive AI/ML technologies that collect and use evidence in real time to improve how devices operate. The FDA is therefore considering a “total product lifecycle” approach to oversight in this area.
PRIM&R’s concerns were prompted primarily by a number of recently published articles describing how Facebook has developed, and is using, AI-based algorithms to identify users who may be at risk for suicide. The company does not consider its algorithms a medical device, but it could be argued, and some have argued, that this application fits the definition of a device, given that the algorithms serve to “diagnose” those at risk for suicide. In a press release, Facebook indicated that its algorithms were developed and tested using research that involved connecting identifiable information from individuals’ Facebook profiles with specific outcomes (e.g., whether or not people attempted suicide). If this is the case, then the development of the algorithms likely involved human subjects research conducted without oversight by a research ethics review committee independent of the research team.
Facebook is, of course, not the only tech company exploring how AI and machine learning can be used to develop products and applications that may fall into a grey area of device regulation. PRIM&R therefore urges the FDA, as it revises its regulatory framework in this area, to consider how the private tech sector is using AI/ML in the development of health and wellness products.
Companies no longer rely solely on archives of existing data or secondary data sets to improve their health and wellness products. They increasingly draw on real-world data and experience, frequently interacting with human beings and collecting data and information from those interactions. AI testing, for example, often involves perturbing aspects of people’s real-world and online engagements and private lives. Companies must constantly collect data to identify patterns of decision-making that can serve as training data for algorithmic models. Such activities go far beyond “market research,” in which companies study consumers’ responses to new or proposed products in order to improve those products or how they are marketed. Typically, individuals are unaware that such interactions are designed to produce data about them and their behavior for research purposes.
PRIM&R points out that the FDA may need to revisit whether its definition of software as a medical device is broad enough to cover the range of health- and disease-related AI/ML-based applications currently being developed. This should include applications that aim to prompt behavioral and psychological responses, whether or not they are identified as “medical” or “health” programs, and whether or not they are developed by entities operating within the traditional medical/pharmaceutical realm. Because AI/ML development requires collecting data on many individuals in order to generalize to the population, the line between basic research and product development is increasingly blurred.
The regulatory framework the FDA develops should therefore include provisions that protect the rights, welfare, and interests of the individuals involved in this process, just as it now does for other human research subjects. The development of AI/ML-based software in the healthcare space raises a number of human research protection issues, including: honoring people’s wishes regarding the use of, and access to, their information; privacy risks, including the risk of re-identifying anonymized data; the understandability of disclosures; and transparency. On the last item, the FDA mentions the principle of transparency in several places in its discussion paper, but we encourage the agency to include binding language in this regard and to consider the entities that supply the data the private tech sector uses to develop its products. Healthcare systems, for instance, increasingly provide the data that companies use to develop their AI/ML software.
What do you think of the FDA’s discussion paper and proposed regulatory framework? Do you agree with PRIM&R that continuous learning algorithms that identify or shape health behaviors should be treated as regulated devices? Why or why not? Leave us a comment below!