
The computer will see you now: is your therapy session about to be automated?


In just a few years, your visit to the psychiatrist’s office could look very different, at least according to Daniel Barron. Your doctor may benefit from having computers analyze recordings of your interactions, including subtle changes in your behavior and the way you speak.

“I think, without a doubt, having access to quantitative data on our conversations, on facial expressions and intonations, would provide another dimension to the clinical interaction that isn’t being captured at the moment,” said Barron, a Seattle-based psychiatrist and author of the new book Reading Our Minds: The Rise of Big Data Psychiatry.

Barron and other physicians believe that the use of artificial intelligence (AI), including facial recognition and text analysis software, will grow rapidly in psychiatry and therapy, complementing physicians’ efforts to detect mental illness earlier and improve treatments for patients. But the technologies first need to be shown to be effective, and some experts are also wary of biases and other ethical issues.

While telemedicine and digital tools have become increasingly common in recent years, “I think Covid has certainly supercharged and fast-tracked interest,” said John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston.

The technology currently in development may already prove useful, Barron argues. For example, computer programs known as algorithms might notice if a person’s facial expressions change subtly over time, or if they are speaking much faster or slower than usual, which could be an indication of mania or depression. He believes these technologies could help doctors spot such signs sooner than they otherwise would.
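
As a rough illustration of the kind of signal such a program could track, here is a minimal sketch, assuming a simple per-patient baseline and an invented 30% threshold (not any product’s actual code), that flags sessions where a patient’s speaking rate drifts well away from their own norm:

```python
# Hypothetical sketch: flag sessions where a patient's speaking rate
# drifts sharply from their own baseline. Thresholds are illustrative.

def words_per_minute(transcript: str, duration_minutes: float) -> float:
    """Speaking rate for one recorded session."""
    return len(transcript.split()) / duration_minutes

def flag_rate_change(baseline_wpm: float, session_wpm: float,
                     threshold: float = 0.3) -> str | None:
    """Return a note if the rate deviates more than `threshold` (30%)."""
    change = (session_wpm - baseline_wpm) / baseline_wpm
    if change > threshold:
        return f"speaking {change:.0%} faster than baseline"
    if change < -threshold:
        return f"speaking {abs(change):.0%} slower than baseline"
    return None  # within normal variation

# Example: a patient whose baseline is ~120 wpm speaks at 170 wpm today.
print(flag_rate_change(baseline_wpm=120.0, session_wpm=170.0))
# -> "speaking 42% faster than baseline"
```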

The software would collect and organize this data. Between appointments, a doctor could review the data and focus on a clip of a recording flagged by an algorithm. Other information could also be brought in from beyond the doctor’s office.

“There is a lot of data we could get from audio, wearable devices, and other things that track who we are and what we are doing, which could be used to inform treatments and find out how well treatments are working,” said Colin Depp, a psychiatrist at the University of California, San Diego.

If apps or devices show that a person is sleeping poorly or less, or is gaining weight, or if their social media posts reveal depression-like language or a shift in personal pronoun use, these signals could inform a psychiatrist’s diagnosis.


What is AI?


Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system indistinguishable from a human mind, there are several specialized but limited forms of AI that are already part of our everyday lives. AI can be used with cameras to identify someone based on their face, to power virtual companions, and to determine whether a patient is at high risk of disease.

AI should not be confused with other types of algorithms. The simplest definition of an algorithm is a series of instructions for completing a task. A thermostat in your home, for example, is equipped with sensors to detect the temperature and instructions to turn the heating on or off as needed. That is not artificial intelligence.
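
To make the distinction concrete, a thermostat’s logic can be written out as a handful of fixed rules, as in this minimal sketch (the setpoints are invented for illustration); nothing in it learns from data:

```python
# A thermostat is an algorithm, not AI: fixed instructions, no learning.
def thermostat(current_temp_c: float, target_temp_c: float = 20.0,
               tolerance_c: float = 0.5) -> str:
    """Decide whether to switch the heating on or off."""
    if current_temp_c < target_temp_c - tolerance_c:
        return "heating on"
    if current_temp_c > target_temp_c + tolerance_c:
        return "heating off"
    return "no change"  # already within tolerance of the target

print(thermostat(18.2))  # -> "heating on"
```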

The deployment of AI today has been made possible by decades of research on topics including computer vision, which allows computers to perceive and interpret the visual world; natural language processing, which allows them to interpret language; and machine learning, a way for computers to improve as they encounter new data.

AI enables us to automate tasks, glean insights from huge data sets, and complement human expertise. But a plethora of studies has also begun to document its pitfalls. For example, automated systems are often trained on huge amounts of historical digital data. As many widely publicized cases show, these data sets often reflect past racial disparities, which AI systems learn from and replicate.

Furthermore, some of these systems are difficult for outsiders to interpret because of an intentional lack of transparency or the use of genuinely complex methods.


As an example of the potential of AI programs, Depp points to a Veterans Affairs project that examined the clinical records of people who ultimately took their own lives. The computer programs scanned data from their medical records and identified common factors involving a person’s employment and marital status, chronic health conditions, and opioid prescriptions. The researchers believe their algorithm has already identified other at-risk people who had disengaged from care, before they became suicidal and before they were detected through traditional channels.

In recent years, researchers have also suggested that depression and other mental illnesses can be predicted from people’s Facebook and Twitter posts by detecting words often associated with typical depressive symptoms such as sadness, loneliness, hostility, and brooding. Changes in a person’s posting patterns could alert a doctor that something is wrong.
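
The simplest version of that idea is a lexicon count: score each post by how many of its words appear on a list of depression-associated terms, and watch for a sustained shift. Here is a toy sketch (the word list and cutoff are invented; real research systems use far richer models):

```python
# Toy lexicon-based screening of posts. The word list and cutoff are
# illustrative only; published systems use much richer language models.
DEPRESSION_LEXICON = {"sad", "alone", "lonely", "tired",
                      "hopeless", "empty", "worthless"}

def post_score(post: str) -> float:
    """Fraction of a post's words that appear in the lexicon."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    if not words:
        return 0.0
    return sum(w in DEPRESSION_LEXICON for w in words) / len(words)

def sustained_shift(scores: list[float], cutoff: float = 0.15) -> bool:
    """Flag if the average score across recent posts exceeds the cutoff."""
    return bool(scores) and sum(scores) / len(scores) > cutoff

posts = ["so tired and alone lately", "everything feels empty and hopeless"]
print(sustained_shift([post_score(p) for p in posts]))  # -> True
```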

In fact, in 2017, Facebook developed an algorithm that scanned English-language posts for text that included suicidal thoughts. If such language was identified, the police would be alerted about the post’s author. (The move drew criticism, especially since it meant the company had, in effect, gone into the business of mental health interventions without any oversight.)

“Mental illness is at least 50% underdiagnosed, and AI can serve as an early warning and detection system,” said Johannes Eichstaedt, a psychologist at Stanford University. But current screening and detection systems have yet to prove effective, he said. “They have mediocre accuracy by clinical standards, and I include my own work here.”

So far, he gives current AI programs a C grade for accuracy, and they still can’t beat old-fashioned pencil-and-paper surveys, he argues.

One of the problems with the algorithms that Eichstaedt and others are developing, he notes, is that they track a sequence of facial expressions or words, but these are only vague clues to someone’s inner state. It is like a doctor who recognizes apparent symptoms but is not sure what disease is causing them.

Some advocates may be overconfident in AI’s potential to interpret human behavior, warns Kate Crawford, a researcher at New York University and author of the new book Atlas of AI. She pointed to the recent scandal over Lemonade, an insurance company that claimed to use artificial intelligence to analyze video recordings its customers submitted when making claims, which Lemonade said could detect whether a customer was being deceptive or fraudulent.

This “shows that companies are willing to use AI in ways that are not scientifically proven and are potentially harmful, such as trying to use ‘non-verbal cues’ in videos,” Crawford said in an email. (In a statement to Recode, Lemonade later said that its “users are not treated differently based on their appearance, disability or any other personal characteristics, and AI has not been and will not be used to automatically reject claims.”)

Crawford points to a 2019 systematic review of the science, led by psychologist and neuroscientist Lisa Feldman Barrett, which demonstrated that while, under the best recording conditions, AI can detect expressions such as scowls, smiles, and frowns, algorithms cannot reliably infer someone’s underlying emotional state from them. For example, people scowl in anger only about 30% of the time, Barrett says, and may otherwise scowl for reasons other than anger, such as when they are concentrating or confused, or hearing a bad joke, or have gas.

AI research hasn’t improved significantly since that review, she argues. “Based on the available evidence, I am not optimistic.” However, she added that a personalized approach might work better. Rather than assuming a baseline of emotional expressions that are universally recognizable, algorithms could be trained on a single person over many sessions, taking in their facial expressions, voice, and physiological measures such as heart rate, along with the context of those data. They would then have a better chance of developing a reliable AI for that person, Barrett says.

Even if such AI systems can eventually be made more effective, ethical issues will still need to be addressed. In a just-published article, Torous, Depp, and others argue that while AI has the potential to help identify mental health problems more objectively, and could even empower patients in their own treatment, the field must first address issues such as bias.

When some AI programs are trained, they are fed huge databases of personal information so that they can learn to discern patterns on their own; in those databases, white people, men, higher-income people, or younger people are often overrepresented. As a result, the systems may misinterpret atypical facial features or an unfamiliar dialect.
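
One straightforward safeguard is to audit a training set’s demographic balance before training on it. A hypothetical sketch of such a check (the group labels and population shares are made up for illustration):

```python
# Hypothetical audit: compare each group's share of the training data
# with its share of the population the model will serve.
from collections import Counter

def representation_gaps(train_groups: list[str],
                        population_share: dict[str, float]) -> dict[str, float]:
    """Return (training share - population share) for each group."""
    counts = Counter(train_groups)
    total = len(train_groups)
    return {group: round(counts[group] / total - share, 2)
            for group, share in population_share.items()}

# Illustrative data: group B makes up 40% of patients but 20% of the set.
gaps = representation_gaps(train_groups=["A"] * 80 + ["B"] * 20,
                           population_share={"A": 0.6, "B": 0.4})
print(gaps)  # -> {'A': 0.2, 'B': -0.2}: group B is underrepresented
```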

A recent study focusing on the types of text-based mental health algorithms used by Facebook and others found that they “demonstrated significant biases with respect to religion, race, gender, nationality, sexuality, and age.” The researchers recommend involving clinicians whose demographics better match patients’, and training those who label and interpret the data to recognize their own biases.

Privacy concerns also arise. Some people might resist having their social media activity analyzed, even if their posts are public. And depending on how the data from a recorded therapy session is stored, it could be vulnerable to hacking and ransomware.

To be sure, some will be skeptical of the whole push for artificial intelligence to play a bigger role in mental health decisions. Psychiatry is part science and part intuition, Depp said. AI won’t replace psychiatrists, but it could complement their work, he proposes.

“The alliance between the provider and the person receiving the service is of vital importance and is one of the greatest predictors of positive results. We definitely don’t want to lose that, and in some way, the technologies could help support that.”

Given the pace of technological advances, the question is no longer merely academic.

“The question is, can AI or digital tools in general help us collect more accurate data so that we can be more effective doctors?” Barron asks. “That is a testable question.”


