Explaining complex information to patients

Around 20 years ago, I was asked for a long-term “grand challenge” vision, and suggested building systems that helped members of the public understand complex information about themselves, especially medical information. The focus was on making data understandable and accessible to a wide range of people, including those who struggled with information graphics.

Now that I’m in the last phase of my career (I’m 63.5 years old), I’m trying to come back to this vision, collaborating with my students and colleagues in Aberdeen’s medical school in a variety of areas, including supporting cancer patients, helping people understand nutritional data, and explaining IVF predictions. Very different areas, but in all of them a key goal is to help people understand their medical data and what it means for them.

Incidentally, we are currently looking for a research fellow to work on a project on helping people with melanoma (skin cancer) monitor and manage their condition (job advert). Feel free to contact me if you’re interested!

Anyways, a few weeks ago we had a workshop where computer scientists, clinicians, patients, and other interested parties discussed related topics, including work that one of my students, Mengzuan Sun, is doing on using chatGPT (GPT-4) to explain complex medical notes to patients (blog). I’ll use this experience to illustrate some of the issues we’ve seen (in all domains, not just cancer) and the associated challenges for AI and NLG.

“What does it mean for me”

The first point is that patients at the workshop were very interested in understanding medical notes and knowing more about their condition. Of course, patients willing to participate in a research workshop may not be representative, but I do think a lot of people are keen to understand their medical status, perhaps even more in IVF than in cancer. Cancer is a tragedy which some people may try to ignore, but IVF is a choice, and people in this space are very committed; otherwise they would not be considering IVF.

However, people at the workshop also wanted medical notes explained in a way which highlighted “what it means for me”, and chatGPT in general did not do a good job with this; it explained medical terminology, but (perhaps not surprisingly) was not very good at relating the information to the patient’s circumstances. So doing this remains an important research challenge!

Trust

The issue of trust was raised many times at the workshop. ChatGPT made some mistakes in the summaries. The one that bothered me the most was not a medical error but rather a spam URL; in response to a question about managing anxiety, ChatGPT suggested a relevant local charity (which does excellent work), but then gave a URL which pointed to a spam website rather than the real charity site. I hope this gets fixed; there is scary potential here for con men to take advantage of vulnerable sick people. But the discussion of trust went far beyond specific problems; for example, the level of trust needed for a professional to recommend a tool to a client is much higher than what is needed for the professional to use the tool themselves.

In the IVF area, we have seen patients ask excellent questions about whether a predictive model can be trusted, since it ignored information which they believed was relevant (blog). We need to do a far better job of explaining to people how far and in what circumstances AI models can be trusted; another huge research challenge.

Emotional issues

Personal medical data can have a huge emotional impact on people, and I have written elsewhere (blog) about the challenges this poses in using NLG to communicate health data, even in a 100% accurate, understandable, and trusted manner. Amongst other examples, in the Babytalk project we did many years ago, we worried that informing relatives that a sick baby was struggling could trigger a heart attack (in the relative), and in recent work in the nutrition space we saw that chatGPT could say things to people which were true but potentially (and needlessly) upsetting (paper).

This issue came up in our recent workshop as well, and I again regard it as a major challenge, especially with LLMs; the fact that LLMs often seem to take information and wording from health forums (like Reddit) does not help.

What can people understand

Finally (for this blog), an issue which often comes up in this area is making information understandable to patients. At the workshop, patients commented that chatGPT’s explanations of medical terminology did not always make sense to them. In both nutrition (paper) and IVF (paper), we’ve looked at effective communication of numerical information, which is challenging for patients with limited numeracy skills.
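
To make the numeracy point a little more concrete, here is a minimal sketch (my own illustration, not the approach taken in either paper) of one widely used tactic: rephrasing a probability as a natural frequency such as “about 3 in 10 people”, which many readers find easier to grasp than percentages or decimals. The function name and the choice of a denominator of 10 are assumptions for illustration only.

# Illustrative sketch only (not from the papers above): rephrase a probability
# as a natural frequency for readers with limited numeracy.
def verbalise_probability(p: float, denominator: int = 10) -> str:
    """Turn a probability into a rough natural-frequency phrase."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    numerator = round(p * denominator)
    if numerator == 0:
        return f"fewer than 1 in {denominator} people"
    if numerator == denominator:
        return f"nearly {denominator} in {denominator} people"
    return f"about {numerator} in {denominator} people"

# Example: a predicted chance of 0.28 becomes "about 3 in 10 people".
print(verbalise_probability(0.28))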

Obviously everyone is different, in preferences as well as in ability to understand, and so systems in this space need to be flexible and adapt to their users. This again is an important research challenge.

Looking for a research fellow!

Just as a reminder, we are currently looking for a research fellow to work on a project (funded by Cancer Research UK) on supporting people who have melanoma. This project is a collaboration with clinicians and health psychologists, and aims to develop technology which is useful and can be deployed in the real world. We’ll be using vision (to analyse skin images) as well as NLP. Contact me if interested!

3 thoughts on “Explaining complex information to patients”

  1. I think that the “what does it mean for me”-part is really (really!) hard. Certainly, avoiding medical lingo and using layperson’s terms is an issue, but I’d say the least problematic one (there should be material out there from patient information leaflets which could constitute some of the training material needed).
    Relating a prognosis to the “patient’s circumstances”, which encompass everything from age and comorbidity via professional and social situation to the patient’s values, culture and (religious) beliefs, and doing this in a language adapted to the patient’s cognitive abilities (“What can people understand”), but also with empathy, is a biggie. And one for which there is – I presume – not much training material to be found.

    1. I completely agree. What patients want to know is what information means for them, and this depends on their unique circumstances; we’ve seen this in all of the health areas we work in. Unfortunately current language models (despite all of the hype about passing medical tests) do not seem able to respond adequately to such questions (at least in our experiments). So this is a very interesting and important research challenge!
