FEATURES

Examining Challenges and the Potential of Integrating AI in Healthcare

Exploring AI integration in healthcare is crucial for improving outcomes and revolutionizing the medical industry.

Source: Adobe Stock

As the healthcare industry continues to embrace artificial intelligence (AI), experts are examining the challenges that hamstring the adoption of this transformative technology.

Understanding the challenges and potential of integrating AI in healthcare can elevate patient outcomes and catalyze a groundbreaking transformation of the medical industry.

In an interview with PharmaNewsIntelligence, Matt Hollingsworth, MSc, MBA, CEO of Carta Healthcare, explained the challenges of integrating AI into healthcare systems and the role of AI in combatting medical misinformation. Hollingsworth also discussed promising applications of AI and highlighted how healthcare providers could address privacy concerns while leveraging the benefits of AI. 

Challenges of Integrating AI in Healthcare 

Change management, the rise of medical misinformation on social media, and data privacy and security concerns are all significant challenges to integrating AI into the healthcare system. 

Change Management 

One major hurdle in the adoption of AI in healthcare, according to Hollingsworth, is change management. Because AI is held to a higher standard than existing healthcare processes, addressing human resistance to change is essential.

“When implementing any kind of AI in healthcare, the primary challenge is always, first and foremost, change management — changing a current practice or process to an AI-enabled variant,” Hollingsworth said. 

According to Hollingsworth, the technology itself is rarely the problem. If AI development were frozen today, there would still be untapped potential behind change management barriers for decades.  

To overcome these challenges and improve AI adoption, Hollingsworth suggests focusing on risk-free adoption of existing AI technology within the healthcare providers' domain and using AI to enhance efficiency alongside human expertise instead of replacing human efforts entirely. 

Medical Misinformation 

The rise of social media and AI chatbots — namely, ChatGPT — has led to the rapid spread of medical misinformation, which can have deadly consequences. When discussing the role of AI in combating medical misinformation, Hollingsworth emphasizes the need for caution. For example, he points out that ChatGPT's algorithm runs without any concept of accuracy, which can be incredibly dangerous.

“ChatGPT has no concept of correctness or incorrectness. It is fundamentally an extremely sophisticated auto-complete/next-word prediction algorithm. Putting unqualified trust in any information that ChatGPT generates is dangerous,” Hollingsworth warned. “Healthcare is an industry where misinformation can kill people.” 

Hollingsworth suggests that AI outputs in the medical realm should be filtered through experts who can fact-check and edit the information before it is considered trustworthy. Carta Healthcare ensures this by employing trained clinical AI experts to bridge the gap between AI algorithm output and the final product. 

“This ensures that any inevitable mistakes the AI makes do not make it into the final product,” he added. 

It is crucial to recognize that AI is merely a tool and should not replace medical advice provided by physicians. Hollingsworth compares AI to a hammer, emphasizing that experts must use it responsibly to add value rather than cause harm. AI should be applied within appropriate contexts and with the right expertise to avoid harmful consequences.

“A hammer can be used by a construction expert to build a house, or it can be misused as a weapon by anyone. It is our job as a society to ensure that AI is a tool used effectively by experts who know how to add value with it — rather than misuse to cause harm,” explained Hollingsworth. 

Privacy and Data Security 

Privacy and data security concerns are inevitable with AI use in healthcare. Hollingsworth assures that any data Carta Healthcare generates or collects remains within the provider's control. The company believes that decisions about sharing data should rest with clinicians who understand when it is appropriate.

“There is too much moral hazard in reselling patient data or other such activities, and it is much better for us as a society to leave that up to the clinicians who know when to share data and when not to,” Hollingsworth shared. 

By respecting providers' control over their data, he aims to strike a balance between leveraging AI's benefits and safeguarding patient privacy.

The Future of AI 

Hollingsworth sees promise in addressing administrative and operational overhead in healthcare using AI. By relying on automation for repetitive or manual administrative tasks, healthcare workers can lessen their workload and curb burnout. 

“Carta Healthcare plans to scale operations and research and development so that we can continue making healthcare data accessible and actionable while reducing the time clinicians spend on manual administrative tasks,” he continued. “Automating data entry and registry submission can support physicians and nurses plagued by burnout, returning them to patient care and improving job satisfaction. It’s a big task — but an important one.” 

Carta Healthcare's Atlas product line, the company's primary focus this year, will drive these efforts. The Navigator product line also aims to leverage captured data for operational improvements, cost savings, and efficient scheduling.

Operational use cases can immediately impact providers' bottom lines, providing relief amid financial challenges. Hollingsworth also looks forward to medium-term applications, such as AI-powered diagnostic and treatment devices and software, now that the FDA has workable frameworks for AI deployment.

“The FDA has only fairly recently come up with a stance that is workable at scale for AI deployment in those realms so that we will see a substantial increase in those applications over the next decade,” he predicted. 

The relationship between AI and human expertise in healthcare is evolving. According to Hollingsworth, a human-in-the-loop is essential for reviewing and approving the work of AI systems. Human experts, ideally those who previously performed the tasks now undertaken by AI, should assess the accuracy of results and determine appropriate actions based on them. 

Hollingsworth suggests a gradual adoption of AI in healthcare to ensure reliability and effectiveness. This allows time for review, improvement, and inclusion of a human-in-the-loop to verify the work performed by AI tools. Technology change takes effort and time, and a cautious approach is necessary to validate accuracy and safety. 

“A human in the loop is essential for reviewing and approving the work of the AI system,” he added. “AI is not at a point where it is flawless and perfectly accurate.” 

Addressing challenges and maximizing AI's potential will be crucial as the technology advances in the healthcare sector. The insights provided by Matt Hollingsworth shed light on the importance of change management, responsible usage, and collaboration between AI and human expertise. With careful implementation and continuous evaluation, AI has the potential to transform healthcare and improve patient outcomes while reducing costs.

“Technology change is often good but typically takes time and effort to implement. Like most new technologies, AI adoption in healthcare will be gradual, and it should be gradual to allot time to review and improve its performance and include a human-in-the-loop to check its work,” Hollingsworth concluded.