Policymakers who want to regulate AI need to start talking about health care


Recently, the fervor over artificial intelligence has given way to fear of the unknown. Excitement about the democratization of the technology has likewise given way to calls for regulation to “control this growing AI beast.” The Federal Trade Commission has even opened an investigation into OpenAI. We are just beginning an important worldwide societal debate about the untapped potential of AI and its risks.

But discussions about how to both integrate AI into society and regulate it are often missing voices from a crucial field: health care.

AI’s potential applications in health care — such as helping create new, more effective drugs with fewer side effects; guiding physicians to optimal treatments for their patients; and robot-assisted surgeries — could redefine our access to treatment, our understanding of diseases, and even our ability to create groundbreaking medicines. It promises to increase accessibility, improve quality, and reduce costs — all critically needed advancements in a country where health care costs are escalating and life expectancies are dropping.

Recently, the CEOs of key AI companies like Alphabet, Microsoft, and OpenAI were invited to the White House to discuss the technology’s potential implications and the need for regulation, with a focus on generative AI. Other AI experts testified before Congress on the same topic. According to reports, President Biden and Vice President Harris have been organizing meetings with technology industry stakeholders, many of whom are fierce critics (and sometimes veterans) of the technology industry. These meetings are a crucial step toward acknowledging the widespread influence of AI and educating policymakers about the many facets they will need to consider if AI is to come under regulatory scrutiny.

But where are representatives and stakeholders of the health care sector in the conversations with policymakers? To the best of our knowledge, so far the only person with a stake in health care innovation in these high-level government meetings has been professor Jennifer Doudna of the University of California, Berkeley, a Nobel laureate and co-discoverer of CRISPR technology. Doudna, who took part in a meeting with President Biden in San Francisco in June, possesses genuine bona fides in leading public dialogues on health tech ethics, particularly in human gene editing. She has also helped found drug discovery and diagnostic companies, all of which undoubtedly are adopting AI in various parts of their workflow. We applaud her inclusion in these meetings.

But one expert voice on the intersection of AI and health care simply isn’t enough. We need more.

This isn’t the only way discussions about AI are overlooking health care. In June, Senate Majority Leader Chuck Schumer announced the SAFE Innovation framework to “support responsible systems in the areas of misinformation, bias, copyright, liability, and intellectual property.” But his introduction to this major policy framework proposal did not mention health care even though it is also susceptible to these concerns.

Now, at least, there are signs that the House is beginning to consider health care. Reps. Ted Lieu, a Democrat from California, and Ken Buck, a Republican from Colorado, are cosponsoring a bill to create a blue-ribbon committee on artificial intelligence. Lieu told the Washington Post that AI “can be disruptive to society, from the arts to medicine to architecture to so many different fields.” (Emphasis ours.)

Both congressional initiatives would do all of us a service by including medicine and health care as a major focus area. Applications of AI in life science, health care, and medicine are truly life-critical and should not simply be wrapped into an overarching, one-size-fits-all framework. By including more varied representatives and stakeholders from the sector, policymakers can better understand the considerations of AI that are most relevant in health care. This will help shape effective and responsible regulations that foster innovation while safeguarding patient well-being. One good place to start would be the Alliance for AI in Healthcare. (We are admittedly a little biased here: Two of us, Sarah and Rafael, are on the AAIH board of directors; all three of us work for companies that are members of the alliance.)

Health care deserves particular consideration because it presents a much broader spectrum of risks than most other uses of AI. While a hallucination by a consumer-facing chatbot may cause a student to get the wrong answer on their homework, an error by an AI program that is used to diagnose or treat a disease could cause physical harm to a patient, even death. How should such systems be tested? And how should the risks associated with their use be communicated to doctors and patients? Even the training of AI systems presents different risks in health care. While some artists are rightfully upset about their artwork being used to train AI systems without their permission, that’s nothing compared with how patients will feel about companies training AIs on their private health information.

Fortunately, sound regulatory frameworks already exist for new medical technologies. In most cases, the FDA oversees these technologies and ensures they are safe and effective for their intended use. AI-based technologies designed to solve medical problems should fall under the same regulatory purview as traditionally discovered medicines, diagnostics, and devices. The principle of do no harm, and the goals of expanding access and improving health care outcomes, can be equally applied to AI-based health care advances. Increasingly, there are calls for the creation of new regulatory bodies to oversee general AI systems. If these bodies are given purview over AI applied to health care, the result could be unnecessary complications: gray areas around jurisdiction and definitions. If a large model is primarily trained on health care data, should it be considered a general-purpose model? If a general-purpose model is applied to a health care problem, is it now a medical device? Who should regulate these systems — the FDA or a new AI regulatory agency?

Determining the regulatory scope of AI models trained on health care data and applied to medical problems requires careful consideration. Health care does not need a blunt-ax regulatory framework designed for general-purpose AI, but rather a concerted effort to educate stakeholders and to extend existing regulation to address the nuances of AI-based innovation. Instead of spending resources on creating new federal agencies, we should empower existing bodies like the FDA to regulate these new medical technologies effectively and collaborate with organizations such as the AAIH on data standardization and sound policy to avoid pitfalls. This approach would ensure the safety and efficacy of AI applications in health care without stifling innovation, ultimately benefiting patients.

It’s crucial that policymakers incorporate the voices of health care stakeholders into AI policy conversations to ensure that any new regulations support responsible innovation. While this certainly includes experts such as nurses and physicians and representatives from biotechnology, digital health, and pharmaceutical industries, it must also include the most important group of stakeholders in health care: patients themselves.

Charles Fisher, Ph.D., is CEO of Unlearn.ai. Sarah Benson-Konforty, M.D., is managing partner at 1010VC and advisor to Pepticom. Rafael Rosengarten, Ph.D., is CEO of Genialis.
