An unexpected source of inspiration for regulating artificial intelligence (AI) in healthcare is the aviation industry, known for its stringent safety standards. A group of MIT scientists, led by Assistant Professor Marzyeh Ghassemi and Professor Julie Shah, is exploring parallels between the historical evolution of aviation safety and the challenges now facing AI models in healthcare.

The key issues are transparency in AI models and concerns about biased algorithms. The MIT researchers believe the aviation industry can serve as a model for ensuring that AI in healthcare does not harm marginalized patients. To explore this, they assembled a cross-disciplinary team of researchers, attorneys, and policy analysts from MIT, Stanford University, Microsoft, and other institutions. The team's findings have been accepted for presentation at the Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) conference.

The researchers draw historical parallels between the aviation industry and the current state of AI in healthcare. They highlight the 1920s, the so-called “Golden Age of Aviation,” as a period also marked by frequent fatal accidents. These prompted President Calvin Coolidge to sign the Air Commerce Act of 1926, which assigned the regulation of air travel to the Department of Commerce.

The subsequent wave of automation in aviation during the 1970s, including autopilot systems, mirrors the current evolution of AI in healthcare. Both fields have faced challenges of transparency and explainability: AI’s “black box” problem parallels the difficulties pilots faced in understanding and interacting with early autonomous systems.

The extensive training required to become a commercial airline captain, which involves 1,500 hours of logged flight time, instrument training, and roughly 15 years in total, is proposed as a potential model for training medical professionals to use AI tools in clinical settings.

The study suggests encouraging reports of unsafe health AI tools, similar to the pilot-reporting system overseen by the Federal Aviation Administration (FAA), which offers “limited immunity” for unintentional errors. The goal is to address challenges of AI explainability and bias by drawing insights from the historical evolution of aviation regulation.

Considering the high incidence of adverse events in healthcare, where roughly one in every 10 patients is harmed, robust governance of health AI is crucial. The researchers propose leveraging existing government agencies, including the FDA, FTC, and NIH, to regulate health AI. They also advocate establishing an independent auditing authority, akin to the National Transportation Safety Board (NTSB), to conduct safety audits of malfunctioning health AI systems.

In the rapidly evolving landscape of AI regulation, the researchers emphasize the importance of balancing safety with innovation. As regulatory efforts continue, the study envisions a future in which technology is developed and governed to ensure safety without stifling progress. The lessons drawn from the aviation industry offer valuable guidance for navigating the complex intersection of AI and ethics in healthcare.