Generative AI in Healthcare – Doctors' Perspectives and Statistical Insights

This article explores physician attitudes toward artificial intelligence in healthcare, examining their expectations and potential reservations.

Introduction

With growing internal problems in the healthcare sector, such as a 90% burnout rate among physicians, it seems a natural step to look to technology for a solution.

The most promising direction in recent years is, of course, artificial intelligence. Properly implemented, AI can relieve doctors of some tasks, allowing them to spend more time with patients or simply to rest, and ultimately to improve health outcomes.

But the real question is: what do doctors themselves think about artificial intelligence technologies in healthcare?

Two German researchers, Sophie Isabelle Lambert and Murielle Madi, set out to understand the medical community’s evolving relationship with artificial intelligence and to identify the factors that most influence doctors’ acceptance of AI tools. 

In a comprehensive review published in npj Digital Medicine, a Nature Portfolio journal, they analyzed more than 21,000 (!!) publications, ultimately selecting 42 studies that met their rigorous criteria. Based on these, they identified several factors that are most important for the successful implementation of AI in medicine.

Our blog post explores key findings from Lambert and Madi's work, focusing on three critical questions related to the integration of AI technologies in healthcare organizations:

  • How do doctors feel about using AI in their practice?
  • What worries do doctors have about AI in medicine?
  • What features are doctors hoping to see in AI tools?

Let's dive in!

How do doctors feel about using AI in their practice?

There isn’t a clear consensus on whether doctors and nurses believe AI will help them do their jobs better. 

Some studies suggest that doctors find AI tools helpful. Other studies, however, suggest that in some situations, such tools are more likely to get in the way.

According to a survey conducted by the American Medical Association, 65% of participating physicians see advantages in using artificial intelligence in healthcare.

Specifically, 56% of respondents believe that AI can enhance:

  • care coordination, 
  • patient convenience, 
  • patient safety.

So, the survey above shows that the majority of physicians are positive about the possibilities offered by artificial intelligence. This is also reflected in doctors’ opinions after pilot implementations:

  • Some studies show that physicians believe that clinical decision support systems (CDSS) reduce the rate of medical errors through alerts and recommendations.
  • In another study, 85% of 100 surgeons, anesthesiologists, and nurses found alerts useful for early detection of complications.
  • In an analysis of radiologists’ attitudes toward AI, 51.9% of respondents felt that AI-based diagnostic tools would help save radiologists’ time.

Similarly, in research projects focused on the adoption of AI in radiology and the integration of machine learning into clinical operations, doctors and nursing staff viewed AI as precise and backed by solid scientific evidence when it came to diagnosis, impartiality, and information quality.

However, research suggests that implementing these systems in emergency rooms (ERs) could lead to an increase in errors. A separate study identified usability challenges, with respondents reporting alert fatigue due to the high volume of alerts and a tendency for some physicians to disregard them.

What worries do doctors have about AI in medicine?

In the studies cited, physicians’ concerns about using AI tools relate to 5 areas:

  • Reliability,
  • Transparency,
  • Legal clarity,
  • Efficiency in terms of time and workload,
  • Fear of being replaced.


Concerns about the reliability of AI systems

Regarding reliability, there was particular concern about the quality of the information obtained.

  • In 3 surveys, participants found the information provided insufficient for making a correct diagnosis, which translated into general doubt about the accuracy of diagnostic systems.
  • In another study, respondents noted that decision support systems can be helpful but limited.
  • 49.3% of physicians (277/562) in a survey evaluating the use of artificial intelligence in ophthalmology indicated that the quality of the system is difficult to guarantee.
Top five concerns about artificial intelligence among the 562 respondents: unclear medical responsibilities (317), service quality (277), cost (252), medical ethical risks (240), and lack of policy support (235)
What are your concerns about AI in ophthalmology? (multiple selection)
source: https://link.springer.com/article/10.1186/s12913-021-07044-5/tables/7

The belief that AI systems will create more work

The belief that these tools will create additional workloads rather than streamline work also seems to be a barrier to adoption. 

  • A study exploring how doctors use CDSS found that they felt these systems took up too much of their time.
  • In one study, doctors suggested that, paradoxically, AI-based systems could reduce the time they spend with patients.
  • Doctors involved in a robotic surgery study worried that these procedures might take longer, but nurses and other operating room staff didn’t share this concern.


Limited understanding of how the tool works

Doctors involved in a study on AI adoption highlighted two key concerns: transparency and adaptability.

They felt that if a diagnostic system, whether a CDSS or a machine learning model, lacked clarity in how it worked or couldn't adjust to new information, it would be less likely to be accepted. This concern aligns with another study, in which participants considering a new predictive machine learning system emphasized the need for clear and well-researched guidelines for its development.

Finally, a separate study on implementing a CDSS in pediatrics found that unfamiliarity with the system led to resistance from doctors.


Lack of legal knowledge about liability in case of system errors

Beyond transparency in system function, physicians also have concerns about who is responsible if the AI fails.

This uncertainty can lead to apprehension and hinder adoption.

Alongside liability, studies also point to patient privacy: participants in two studies highlighted the importance of confidentiality, indicating that AI systems should effectively uphold this principle.


Concerns that AI will take over healthcare jobs

A major concern among healthcare professionals is the potential for AI to replace them. 

Over half (54.9%) of those surveyed believe future doctors should choose specialties less likely to be dominated by AI. The same study showed that in radiology specifically, some specialists (6.3%) fear being completely replaced by generative AI in the future.

However, research suggests a more nuanced outlook. A study by Zheng et al. found that while most ophthalmologists (77%) and technicians (57.9%) believe AI will play a role in their field, only a quarter (24%) believe it will completely replace doctors.

These anxieties can create resistance to adopting AI tools. Addressing these concerns should be a key part of implementing AI in healthcare.

What features are doctors hoping to see in AI tools?

The research highlights several factors that influence the adoption of artificial intelligence solutions:

  • Safety,
  • Intuitiveness of the system,
  • Integration with the workflow,
  • Genuine support in daily work,
  • Staff training.


Feeling secure in a system fosters trust

Diagnostic support systems can make errors, but those powered by artificial intelligence generally make fewer than traditional programs (link). This reduction in errors is particularly pronounced in systems designed for straightforward tasks, which fosters trust in the tool's capabilities.

Therefore, when designing an AI system, it is safer to start with a less complex solution, that is, one that supports simpler tasks.


Easy-to-use systems have higher adoption rates

In the study on perceptions of CDSS, healthcare providers highly valued systems that are intuitive, simple, and easy to learn. In a broader study on AI in healthcare, as many as 70% of respondents agreed with every statement about the usability of AI programs.

This also applies to the way such systems interact with the user, i.e. the doctor. Several studies have observed the phenomenon of alarm fatigue: when alerts fire too readily, user satisfaction drops and safety risks follow. Studies have shown that in critical situations, warnings from the system were simply ignored.
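
To make the trade-off behind alarm fatigue concrete, here is a minimal sketch with entirely synthetic numbers (not data from any of the cited studies): it scores hypothetical patient-shifts against an alert threshold and shows that a lower threshold catches more true deteriorations at the price of many more alerts for clinicians to triage.

```python
import random

random.seed(42)

# Simulate 1,000 patient-shifts; about 5% involve a true deterioration.
# Deteriorating patients tend to score higher, but the overlap between
# the two groups is imperfect, which forces the sensitivity/alert-volume
# trade-off that the usability studies describe.
cases = []
for _ in range(1000):
    event = random.random() < 0.05
    score = random.betavariate(6, 3) if event else random.betavariate(2, 6)
    cases.append((score, event))

def alert_stats(cases, threshold):
    """Return (alerts fired, true events caught, total true events)."""
    alerts = [(s, e) for s, e in cases if s >= threshold]
    caught = sum(1 for _, e in alerts if e)
    total = sum(1 for _, e in cases if e)
    return len(alerts), caught, total

for threshold in (0.3, 0.5, 0.7):
    fired, caught, total = alert_stats(cases, threshold)
    print(f"threshold={threshold}: {fired:4d} alerts fired, "
          f"{caught}/{total} true events caught")
```

In a run like this, the lowest threshold catches nearly every event but buries it under hundreds of alerts, while the highest keeps alerts rare at the cost of missed deteriorations. Tuning that balance carefully is exactly what the studies on alarm fatigue call for.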


The system should be integrated with the workflow

Integrating the system into the existing workflow is also important. Participants in the study described "required additional tasks" as "undesirable" when using AI systems. The problem is significant because in another study, nearly 40% of anesthesiologists said they could not integrate the system into their clinical routine. In yet another, radiologists rated an AI system that lacked automation and consistency with common standards as unreliable (link).

In contrast, if the system demonstrates consistency and is tailored to the tasks of a medical professional, it is welcomed (link).

Visualization of the machine learning application in the hospital information system (a) and in a web application presenting patient-specific features for prediction (b). For the sample patient, a very high risk of delirium is predicted (in red)
source: Technology Acceptance of a Machine Learning Algorithm Predicting Delirium in a Clinical Setting: a Mixed-Methods Study
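
As a rough illustration of what such tailored, patient-specific output can look like, here is a toy sketch in the spirit of the delirium-prediction display above. It is not the system from the study: the logistic model, feature names, and weights are all invented for the example.

```python
import math

# Hypothetical feature weights for a toy logistic delirium-risk model.
WEIGHTS = {"age_over_75": 1.2, "icu_stay": 0.9,
           "polypharmacy": 0.7, "prior_delirium": 1.5}
BIAS = -2.0

def predict_with_drivers(patient):
    """Return the predicted risk and the features that drive it."""
    contributions = {f: w for f, w in WEIGHTS.items() if patient.get(f)}
    logit = BIAS + sum(contributions.values())
    risk = 1 / (1 + math.exp(-logit))
    drivers = sorted(contributions, key=contributions.get, reverse=True)
    return risk, drivers

risk, drivers = predict_with_drivers(
    {"age_over_75": True, "icu_stay": True, "prior_delirium": True})
print(f"Predicted delirium risk: {risk:.0%}")  # a UI might flag this in red
print("Driven by:", ", ".join(drivers))
```

Showing the drivers next to the score is one way to address the transparency concerns raised earlier: the clinician sees not only that a patient is high-risk, but also why.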

If the system really helps, it is easier to accept

Research found that nearly 90% of participants expected their workload to increase after the integration of AI technologies. Lack of time, documentation overload, and patient volume are common issues in critical care, so the added burden of dealing with “yet another program” can be a major objection to implementing AI systems.

However, if the system is able to reduce a physician's workload and provide genuine help, there is a better chance of acceptance (link).


Training affects acceptability

Prior staff training is also important to the successful implementation of AI in healthcare. In studies of machine learning systems, users without prior experience with such tools felt overwhelmed.

As training increases, so does the percentage of people using AI tools. 

  • Among nurses who did not use the AI system, 53.49% had not participated in any prior training.
  • Half of those who attended one training session used the system, while two sessions resulted in 83% of trained nurses using it.
  • After three sessions, the percentage increased to 100%. (link)

Who are we?

At NubiSoft, we develop software for the healthcare industry. We are a trusted technology partner for medical institutions as well as for companies whose end users are medical professionals.

We are certain that well-designed solutions can make a real difference in the work of medical professionals.

We are ready to help your company join the AI revolution safely and sustainably. Contact us when you think you’re ready for it, too.

Contact

Leave your email and we’ll reach out to you to arrange a non-committal online meeting to see how we can help each other.
