The Digital Dose

Commercial determinants and therapeutic chatbots

September 03, 2023 Prof Rhonda Wilson & Oliver Higgins Season 1 Episode 2

Ever wonder how the fascinating world of artificial intelligence intersects with the impactful domain of mental health? Join us as we deconstruct the concept of commercial determinants of health as defined by the World Health Organization, and explore their effects on therapeutic chatbots in mental health nursing. We shed light on the intertwined challenges and potential benefits, particularly the critical aspect of regulation and accountability in this rapidly developing area. We dissect a recent article published in the International Journal of Mental Health Nursing, highlighting the necessity for mental health professionals to stay updated about AI and chatbots' potential benefits and risks. And don't miss our discussion on Rachel Botsman's insightful book, "Who Can You Trust?", a must-read for understanding trust relationships in our digital era. Take part in this riveting dialogue as we navigate the thrilling yet complex landscape of AI in mental health. A gripping conversation awaits!

Commercial determinants and therapeutic chatbots: A mental health nursing perspective. International Journal of Mental Health Nursing, 2023. https://doi.org/10.1111/inm.13199

Rachel Botsman: Who Can You Trust?
https://rachelbotsman.com/books/

Support the Show.

Follow us at @digitaldosenews

Transcript

Oliver:

Welcome everybody to the Digital Dose podcast. So today we're going to be talking about the commercial determinants of health and therapeutic chatbots. As usual, I'm joined by Professor Rhonda Wilson.

Rhonda:

It's fantastic. This is our second podcast and I'm really excited to talk about the commercial determinants of therapeutic chatbots in the mental health nursing context.

Oliver:

I think this is a really interesting topic. We know there's so much happening in the AI and generative AI space, and one of the easiest things to implement, of course, is a chatbot. By its very nature, ChatGPT and those large language models theoretically work like that chatbot-based interaction. So the ability for anybody to work on a model, build a model and then converse with it is arguably easier than it's ever been. But before we get into the technical components and what that really means, could you start by explaining what the commercial determinants of health are?

Rhonda:

Yeah, well, the World Health Organization have defined the commercial determinants of health as a key social determinant that refers to the conditions, actions and omissions by commercial actors that affect health. Commercial determinants arise in the context of the provision of goods or services for payment and include commercial activities as well as the environment in which the commerce takes place. They can have beneficial or detrimental impacts on health. So the World Health Organization have defined these commercial determinants, and it really got us thinking in terms of their influence in the therapeutic space, in AI and chatbots, and what implications that might have in the mental health setting.

Rhonda:

We've seen commercial determinants have quite an impact on other areas of health in the past. For example, big pharma has certainly had big implications around the use of pharmaceuticals. We no longer see lots of sales reps coming and door-knocking in our mental health services and in our primary health settings, giving us mugs and morning teas and dinners and all kinds of incentives to assist us to learn more about the drug they're trying to promote. So that was certainly a commercial determinant, and I think it's quite a topical issue at the moment with the recent Netflix series Painkiller, about opioids and pharmaceutical companies selling and promoting opioids, particularly in the US, where that docuseries is set. So we really want to be upfront around AI and chatbots and understand what the commercial determinants might be, what some of the risks might be, what some of the challenges might be and whether there are any benefits, because we're going to have to work with commercial entities as this develops.

Oliver:

More and more so. I see that commercial partnerships are what we're going to have to do; they're going to bring things to market faster. But what really raises a point here, especially in the digital health arena, is that, unlike pharmacological interventions, we don't really have an FDA for digital health products. So really anybody can slap something together, call it a therapeutic, put a name on it and put it out there with essentially no evidence, and no need to provide evidence, that their particular tool is effective, clinically proven or makes a difference, and they can charge whatever they want.

Rhonda:

We see that all the time in our app stores. You can download hundreds of thousands of apps that theoretically deal with mental health conditions, but almost all of them are commercial and there's very little scientific evidence to support the effectiveness and safety of those products. Yet they may or may not have an influence on health and well-being, or indeed they may actually be harmful in some cases. The problem is there isn't regulation to assist us, and to help the public, in understanding what's safe and what isn't in that space.

Oliver:

I think if we then bring it back to therapeutic chatbots, there is a whole other level of mystique when it comes to AI and large language models and their inner workings. So the potential for the unknown to occur within these products is actually quite high, because a company can essentially build a black-box model and doesn't have to show the inner workings, how it comes to its conclusions, or how it interacts with a particular person in a particular way. So there's essentially a level of unmeasuredness that can occur unless there is some level of... I don't know what the actual term would be... some level of?

Rhonda:

Accountability.

Oliver:

Accountability. There it is.

Rhonda:

There are some real struggles in that domain, and I think some of the risks are around the language training, the training data that goes into developing these chatbots. Can you tell us a little bit more about how you train a chatbot, Oliver?

Oliver:

That's a really good question. Generally these are large language models, such as ChatGPT, which of course is out there now, and various others. They're built on a massive amount of data: they take every bit of data they can find from the Internet and various other places, and essentially it creates a mathematical representation. We supply it with a question or a particular idea, it interprets that and, like magic, gives you a response back. So it generates this response, and whilst it appears somewhat magic, it is essentially just giving you the mathematical representation of what it thinks should come next. So if you ask it X and Y, it says in 99% of cases these particular things follow, and it responds in the way it's been built to.
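
As a loose illustration of "what it thinks should come next", here is a minimal sketch in Python of that counting idea scaled right down to a bigram model. It is not how ChatGPT or any production large language model works internally; the tiny corpus and the function name are invented purely for this example.

from collections import Counter, defaultdict

# Toy "training data": the model only ever sees these three sentences.
corpus = [
    "i feel a bit anxious today",
    "i feel much better today",
    "i feel a bit tired today",
]

# Count which word follows which (a bigram table) -- a scaled-down stand-in
# for the statistical patterns a large language model learns from its data.
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the word the toy model thinks should come next."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("feel"))  # 'a' -- it follows 'feel' most often in the corpus
print(most_likely_next("bit"))   # 'anxious' -- ties are broken by whichever was seen first

Everything the toy model can ever say is already implicit in that little table; however much more sophisticated the real models are, the same dependence on the training data is the point discussed next.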

Oliver:

Now these are very, very complicated large language models. They've got quite a complex structure by which they work and arrive at these conclusions, so it's not always easy to identify how the mathematics arrives at them. But the big component that's important is: what data was it actually trained on?

Oliver:

Because primarily these systems are backwards-facing. They take what we know, what we've learned, and they apply it to the questions we present. So if a particular chatbot is trained on ordinary conversations, it will supply ordinary, conversation-like responses. If it's trained on social media, it will operate in the way that social media platform actually operates. It might become quite negative very quickly, or it might not supply the correct answer from a clinical perspective, or whatever perspective you're trying to use the chatbot for. So when you're using a chatbot for something like ringing up, or typing to, your insurance company, and it's triaging your call, it knows the appropriate responses because you are essentially there wanting to transact or find out a particular piece of information.

Oliver:

But when things become more complicated, and we know humans are more complicated, and we look at it from a therapeutic perspective, it starts to get much harder to actually achieve a positive therapeutic interaction. And I mean that in the way you have with another person, not necessarily a clinical relationship: the way in which you talk to each other, the way a conversation builds and evolves as you go on. That's very, very hard for a chatbot to replicate. You need a little bit of empathetic intelligence to connect and have conversations that are meaningful with other humans.

Oliver:

Right, that's right, and we know that empathy is something that artificial intelligence, machine learning, computers, whatever you want to call it, doesn't possess. Empathy is a human construct and part of that trust relationship which we've talked about before at length, and Rachel Botsman's work is really worth reading for anybody who wants to know more about trust relationships, especially in the age that we're going into. So as we see these therapeutic chatbots being used, we have to start to ask the question: well, what happens if the conversation degrades? What happens if it takes the conversation in a direction that isn't right for that person, but is right for the data it was trained on?

Rhonda:

So sometimes if the data is poor quality data, then you'll get a poor quality clinical decision, but if it's high-grade clinical data, then it's more trustworthy.

Oliver:

Correct. I think a big thing is this.

Oliver:

It's the old adage: garbage in, garbage out. It's only as good as the data it's learned on. Broadly speaking, with machine learning and artificial intelligence, a big part of the way it learns is that data gets labelled or classified. So on your phone, definitely with the iPhone, and I'm sure Google's does it as well, you can actually do a text search: you can search for "dog" and it will show you all the pictures on your phone that have a dog in them, because there is a big library in the back end where thousands and thousands and thousands of pictures of dogs have been labelled. Every time you do one of those sign-ins where it gets you to pick the motorbikes or pick the crosswalks, you're actually helping classify images. So there's this big library, but it's only as good as the people classifying it, or the content of the pictures. So say, for instance, talking about dogs: I've got a Border Collie, you've got a Border Collie, we love Border Collies, and the fur around the house proves it.

Oliver:

But if we had an image classifier that had only ever been presented with Border Collies and told they were dogs, and you then presented it with a different dog, it would struggle to classify it, because there's a bias: it thinks the ground truth for "dog" equals Border Collie, when in fact there are many, many other breeds.

Oliver:

So if you had a data set that maybe had 80% Border Collies and 20% other dogs, the system will inherently err towards one thing because there's much more of that particular information; it will be much more accurate in classifying the Border Collies, but if you give it something that's not a Border Collie, it really struggles. And we see that happening with a lot of AIs: if the bulk of the data is, say, Western, middle-income people, then it's really good when you're classifying within that group. But the moment you step outside of those realms, minority and culturally diverse populations very quickly won't be reflected.
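
To make the Border Collie point concrete, here is a minimal sketch with made-up numbers showing how class imbalance pulls a model toward the over-represented label. The classifier below is deliberately naive (it just predicts the most common label it saw); real classifiers are far more sophisticated, but imbalance pushes them in the same direction.

from collections import Counter

# Made-up labelled training set: 80% Border Collies, 20% other breeds.
training_labels = ["border_collie"] * 80 + ["other_dog"] * 20

# Naive "model": always predict whatever label was most common in training.
most_common_label = Counter(training_labels).most_common(1)[0][0]

def predict(image):
    # Ignores the image entirely -- the imbalance alone decides the answer.
    return most_common_label

test_set = [
    ("photo_of_border_collie", "border_collie"),
    ("photo_of_greyhound", "other_dog"),
    ("photo_of_pug", "other_dog"),
]

for image, true_label in test_set:
    print(f"{image}: predicted {predict(image)}, actually {true_label}")
# Every Border Collie is "classified" correctly; every other breed is missed.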

Oliver:

Correct. So this comes back to the data. When we're actually using it for a therapeutic purpose, such as a chatbot, if the data it has been trained upon is either poor data, like from a social media platform, or it might be from clinical notes but not coded by clinicians or somebody with an understanding of what's actually occurring, then what's presented to the person interacting with the chatbot will only ever be as good as that particular level. And we know, in the case of one experiment that's actually mentioned in our article, they trained it all on Reddit, and when they turned some of the safeties off to make the chatbot a bit more empathetic, to appear a bit more conversational, the conversations degraded very quickly to quite negative outcomes. Of course, it was just from an experimental point of view, but when you then reflect upon Reddit as the source, well, Reddit has a certain reputation.
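
As a rough sketch of "it's done exactly what it's been trained to do", here is a toy retrieval chatbot that simply replies with the closest-matching line from its training corpus. Both corpora and all the replies below are invented for illustration; the point is only that the tone of the answer is entirely determined by the source text.

import string

def words(text):
    # Lower-case and strip punctuation so simple word overlap can be compared.
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

def reply(message, corpus):
    # Answer with the training line that shares the most words with the message.
    return max(corpus, key=lambda line: len(words(message) & words(line)))

# Invented examples of two very different training sources.
clinical_corpus = [
    "it sounds like today has been hard, would you like to talk about it",
    "thank you for sharing that, what usually helps when you feel this way",
]
forum_corpus = [
    "everyone feels like that, just get over it",
    "nobody cares, stop complaining",
]

user_message = "i feel like nobody cares about how hard today has been"
print(reply(user_message, clinical_corpus))  # supportive, because the corpus is supportive
print(reply(user_message, forum_corpus))     # dismissive, because the corpus is dismissive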

Oliver:

So it's done exactly what it's been trained to do.

Rhonda:

There are some harms that could be perpetuated, or even encouraged, through poor quality training in a therapeutic chatbot. Essentially, that could happen. So then, I guess it must be really important that we start to work together. As you know, we're both mental health nurses, and you're a computing scientist as well. I guess it's time for the mental health professions to start thinking about working with and collaborating with non-traditional partners and collaborators: technologists, software engineers, computing scientists. All of those types of people really need to become part of our therapeutic community now.

Oliver:

Very much so, and if they don't, it's going to happen anyway. This is the scary thing: this will go ahead. There is money to be made; there are commercial aspects to this. So if we're not having that clinical input, if we're not stepping up and saying we need to be involved, or making sure that the products being used actually have the research and the evidence behind them to support it, then we're going to end up caring for people using inferior products, or inferior things, that may not work for them.

Rhonda:

Hmm, and as mental health professionals, we really need to be at the table. So we do need to understand a bit of this stuff. We do need to understand the implications of training data for mental health practice and for the mental health of people and populations. If we're not at that table, we can expect that poor quality and potentially even dangerous products will hit the market, if you like, and be either freely or cheaply available, and that could cause significant harm.

Oliver:

Yeah, the potential for harm is quite significant.

Rhonda:

So it's really important that mental health nurses and other mental health professionals educate themselves and attend to their own professional development around understanding the implications of AI and chatbots in the mental health context. That's really, really interesting. Well, as you mentioned, Oliver, we did write an article about this recently and published it in the International Journal of Mental Health Nursing, and we're going to put that in the show material online so that you can access it. It is free to read, so we'd love listeners to go and have a look at it, start to get with the lingo a bit around AI and mental health nursing and chatbots, figure out what you can trust and what you can't trust, understand some of the challenges and harms that might be associated, but also understand how to critique and see where the benefits might lie as well.

Oliver:

Definitely. There are questions that you need to be able to ask of yourself in the context of these tools as we go forward, and everybody will benefit from that. So I think it's been a great discussion, very, very interesting. I will also include Rachel Botsman's Who Can You Trust? in the show notes as well. I'd highly recommend it, a fabulous read.

Oliver:

It is really fascinating for this digital age. When we come to talking about digital health broadly, that book has so much applicability to the way in which we will do our business in the digital health world.

Rhonda:

Another exciting podcast from The Digital Dose, Rhonda Wilson and Oliver Higgins signing off. But stay tuned, because we'll have another podcast dropping soon.

Chapter Markers
Commercial Determinants and Therapeutic Chatbots
AI and Mental Health Implications