Wednesday, November 13, 2024

One in five doctors use ChatGPT in their practice. Is this serious, doctor?

Hypochondriac or not, it’s not impossible that you’ve already browsed the web, list of symptoms in hand, to figure out what virus or bacteria you just caught. Medical sites like Vidal or WebMD, institutional sites like Ameli, forums like Doctissimo: as the Internet has worked its way into our lives, the possibilities for self-diagnosis have multiplied. Yes, but. If you’ve ever tried the exercise, you’ll know that the sleuthing required to pick out the relevant symptoms and arrive at a definitive, reliable answer is much harder than it sounds. In short, nothing beats the advice of a doctor with ten years of study and even more years of practice behind them.
However, with the advent of generative AI, and especially of chatbots such as ChatGPT, the question deserves to be put back on the table. Is an artificial intelligence able to browse every corner of the web to answer our questions also able to diagnose our ailments more accurately than a forum? And why not better than doctors themselves?


One in five GPs use generative AI

According to recently published studies on the topic, a growing number of health professionals are using ChatGPT in their practice. A survey of 1,000 general practitioners, published in the journal BMJ on 17 September 2024, shows that 20% of respondents use generative artificial intelligence tools, such as ChatGPT, Bing AI or Google’s Bard, in their clinical practice. When asked for details, 29% of respondents explained that they rely on these services to prepare documentation after appointments, while 28% use them to suggest a differential diagnosis.

Biased and limited tools

So, let’s not beat around the bush: this development inevitably raises doubts and questions, and the study’s authors do not fail to address them in the discussion of their article. First, it’s important to note that these tools, which remain in their infancy despite their impressive growth in recent years, have their biases and limitations. On the bias side, for example, a surprising Stanford University study revealed in October of last year that chatbots tend to perpetuate medical myths and racist biases, such as “Mediterranean syndrome”: the false belief that foreign-born populations, and in particular people of African descent, feel less pain than Caucasians. This stereotype has long been scientifically disproved, yet it persists, and is even still taught in some schools. AIs know nothing beyond what humans feed them, so it is hardly surprising that they internalize the biases circulating online among those who use them. It is therefore difficult to consider them impartial judges.


Among the limitations, this time, we can once again mention the fact that these AIs are young and therefore still prone to hallucinations, misunderstandings and gaps in comprehension. They were also not specifically designed for diagnosis and lack the clinical intuition that doctors develop throughout their careers. Unable to touch, feel, hear or observe, an AI must, at best, rely on what the practitioner observes and manages to convey to it.

Pay attention to privacy!

Another area of concern highlighted by the study: privacy. As the authors point out, these AIs are owned by commercial entities, which provide little visibility into how they use the information submitted to their chatbots. While it’s a safe bet that no doctor names their patients when doing research, we now know that data cross-referencing is becoming ever easier, making it possible to identify a person in the blink of an eye and compile a complete file on them: their contact details, their habits, their hobbies, and now, their medical data.

To date, because these new uses are still in their infancy, tools such as ChatGPT are not considered medical devices within the meaning of European legislation, but neither is their use prohibited or regulated. So be careful what information you provide to a generative AI when you ask it for a diagnosis. While flu symptoms are unlikely to put you in a critical situation, the past has already shown that insurers and banks can exploit your data, the famous big data collected by third-party services, to determine your loan rate or the cost of your health insurance. But then, with all these pitfalls in mind, why do doctors still use generative AI in their profession? Do the benefits outweigh the risks, and to what extent can artificial intelligence improve the well-being of patients and healthcare professionals?

ChatGPT: An Unexpected Ally in a System in Crisis

This is nothing new, but we still talk about it too rarely: today, the medical system in mainland France is at a breaking point. Faced with a critical situation, medical students are fleeing to other countries, fueling the shortage of doctors and the spread of medical deserts. The number of drugs out of stock is growing at an alarming rate, two-thirds of EHPADs (care homes) are in deficit, the most vulnerable are being neglected, and community medicine and private institutions are being sacrificed in favor of centralization in public hospitals. A saturated, overworked nursing workforce is reaching the end of its strength, and suicides, which are high in these professions, remain a taboo subject.

Why is there a shortage of doctors in France? © Le Monde, YouTube

So let’s be clear: it will take a lot more than a chatbot to get the French health system back on its feet. But the hope is that generative AI like ChatGPT, alongside ever more advanced diagnostic tools, can at least partially lighten the burden on caregivers. While conversational AI is unlikely to replace a flesh-and-blood doctor, studies have shown that ChatGPT was able to diagnose certain respiratory diseases better than medically trained individuals, that an algorithm can assess the aggressiveness of a sarcoma more effectively than a biopsy, and that an AI can detect dementia before the first outward symptoms appear.


These diagnoses must always be confirmed by specialists, but they can save valuable time for patients and caregivers, limit medical errors, enable better treatment, and reduce both medical and administrative costs. A 2023 study, indexed on PubMed, shows that ChatGPT can also help healthcare professionals communicate with patients, by helping them provide more useful and more empathetic responses. In an opinion piece for Medium, emergency physician Josh Tamayo-Sarver explains how he uses the chatbot to explain his decisions to patients and their loved ones, when their emotions or lack of medical knowledge prevent them from hearing him.

Next step: join the discussion

In short, the potential benefits are numerous and deserve to open a serious debate on the use of generative AI in the clinical environment. Before we accept this development without flinching, or resent our doctor for charging us for a consultation we could supposedly do “all by ourselves in three clicks”, let’s weigh the risks to our health and our private lives on the one hand against the benefits for our care, for overstretched and stressed caregivers, and for their practice on the other.

In the case of AI, as with many other things, many of us have tended to let events unfold without intervening, content to be surprised or outraged as the news breaks, with heated reaction too often taking precedence over dialogue. So we now invite you to open the conversation: with your friends, your loved ones, your doctor, your patients, and the legislators who regulate these technologies and their uses. Start citizen discussions, inform yourselves, and demand calm, informed exchanges before positions become more entrenched and harder to shift. It’s time to think about how we want generative AI and healthcare to work together. It is up to you whether you participate or let things happen. But you won’t be able to say: “If I had known, I would have acted.”
