# Patients are less likely to follow advice from AI doctors that know their names


Engineers often strive to make our interactions with AI more human-like, but a new study suggests a personal touch isn’t always welcome.

Researchers from Penn State and the University of California, Santa Barbara found that people are less likely to follow the advice of an AI doctor that knows their name and medical history.

Their two-phase study randomly assigned participants to chatbots that identified themselves as either an AI, a human, or a human assisted by AI.

The first part of the study was framed as a visit to a new doctor on an e-health platform. 



The 295 participants were first asked to fill out a health form. They then read the following description of the doctor they were about to meet:

**Human doctor:** Dr. Alex received a medical degree from the University of Pittsburgh School of Medicine in 2005, and he is board certified in pulmonary (lung) medicine. His area of focus includes cough, obstructive lung disease, and respiratory problems. Dr. Alex says, “I strive to provide accurate diagnosis and treatment for the patients.”

**AI doctor:** AI Dr. Alex is a deep learning-based AI algorithm for detection of influenza, lung disease, and respiratory problems. The algorithm was developed by several research groups at the University of Pittsburgh School of Medicine with a massive real-world dataset. In practice, AI Dr. Alex has achieved high accuracy in diagnosis and treatment.

**AI-assisted human doctor:** Dr. Alex is a board-certified pulmonary specialist who received a medical degree from the University of Pittsburgh School of Medicine in 2005. The AI medical system assisting Dr. Alex is based on deep learning algorithms for the detection of influenza, lung disease, and respiratory problems.

The doctor then entered the chat and the interaction began.

Each chatbot was programmed to ask eight questions about COVID-19 symptoms and behaviors. Finally, it offered a diagnosis and recommendations based on the CDC Coronavirus Self-Checker.

Around 10 days later, the participants were invited to a second session. Each of them was matched with a chatbot with the same identity as in the first part of the study. But this time, some were assigned to a bot that referred to details from their previous interaction, while others were allocated a bot that made no reference to their personal information.

After the chat, the participants were given a questionnaire to evaluate the doctor and their interaction. They were then told that all the doctors were bots, regardless of their professed identity.


# Diagnosing AI

The study found that patients were less likely to heed the advice of AI doctors that referred to their personal information, and more likely to consider such chatbots intrusive. The reverse pattern was observed for chatbots that were presented as human.

Per the study paper:

In line with the uncanny valley theory of mind, it could be that individuation is viewed as being unique to human-human interaction. Individuation from AI is probably viewed as a pretense, i.e., a disingenuous attempt at caring and closeness. On the other hand, when a human doctor does not individuate and repeatedly asks patients’ name, medical history, and behavior, individuals tend to perceive greater intrusiveness which leads to less patient compliance.

The findings about human doctors, however, come with a caveat: 78% of participants in this group thought they’d interacted with an AI doctor. The researchers suspect this was due to the chatbots’ mechanical responses and the lack of a human presence on the interface, such as a profile photo.


Ultimately, the team hopes the research will lead to improvements in how medical chatbots are designed. It could also offer pointers on how human doctors should interact with patients online.

You can read the study paper here.


