Women's Digital Health
Women's Digital Health Podcast is dedicated to learning more about new digital technologies in women's health.
Women determine 80% of US healthcare spending. Yet only 4% of healthcare companies' investment dollars actually go toward researching and developing new products and solutions for women.
Many of us are frustrated with incomplete healthcare experiences and sometimes dismissive responses from healthcare providers. You're probably wondering: is there a more convenient and accessible way to get the health experience that I want? Is there a way to get more control over my healthcare journey?
Dr. Brandi Sinkfield is a Board-Certified Anesthesiologist with over 10 years of experience. Growing up she experienced the shame, secrecy, and lack of transparency surrounding women’s health. This has driven her to imagine a pathway for other women to access information that leaves them feeling empowered and full of confidence.
Every two weeks on this podcast, Dr. Sinkfield will discuss digital health in depth, exploring innovative health solutions that are bridging the women's health gap. She will speak with digital health creators, investors, and technologists who are creating convenient and accessible health solutions for women that are designed to fit their schedules and accommodate their needs.
Whether you're curious about advancements improving women's health or struggling with health issues like obesity, heart conditions, or hormone shifts from pregnancy to menopause, follow Women's Digital Health on your favorite podcast platform and never miss an episode.
Exploring AI's Impact on Mental Health: Bias, Data Privacy, and Women's Health
For the last part of our series aimed at demystifying AI’s role in mental health, we delve deep into the impact of artificial intelligence on women seeking mental health resources. In this episode, we discuss bias in AI, the recent Change Healthcare data breach, and how to protect yourself in the digital health landscape.
If we define bias as leaning towards one outcome over others, it's clear that bias in AI can lead to flawed algorithms and bad decisions. So we look at the concept of bias and three ways it shows up in AI, namely data bias, algorithmic bias, and outcome bias, and how each can undermine the effectiveness of mental health apps and resources.
We also talk about the Change Healthcare ransomware attack, where personal health data was released, leading to distrust and anger among patients. We highlight the need for data privacy protection and staying informed about potential breaches in the healthcare sector.
Topics include:
- The different types of bias in Artificial Intelligence that affect the experience of women seeking mental health resources.
- What health data privacy breaches, like the one at Change Healthcare, tell us about the importance of protecting personal health information, and seeking legal recourse if affected.
- The role of mental health wellness apps and AI in relation to other mental health resources.
- How understanding bias in AI and the risks of health data breaches can help individuals make informed decisions about their mental health care.
- Ways that you can help ensure the protection of your personal health information.
Resources mentioned in this episode:
- Listen to Episode 13 of the Women's Digital Health podcast: Exploring Artificial Intelligence’s Role in Mental Health Through Language Models
- Listen to Episode 14 of the Women's Digital Health podcast: Unveiling the Users of Artificial Intelligence and Mental Health: Who Benefits?
- Visit UnitedHealth Group Updates on Change Healthcare Cyberattack
- Visit Change Healthcare Cyberattack Support
Subscribe to the Women's Digital Health Podcast wherever you're listening right now. And please share the podcast with anyone from your community who will benefit.
Disclaimer
The information in this podcast is for informational purposes only and is not intended as a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified healthcare providers with any questions you may have regarding a medical condition or treatment.
The personal views expressed by guests on Women's Digital Health are their own. Their inclusion here does not constitute an endorsement from Dr. Brandi, Women's Digital Health, or associated organizations.
Visit Women's Digital Health and subscribe to our newsletter.
Connect with Women's Digital Health on Instagram, Facebook, LinkedIn, and YouTube.
Welcome to the Women's Digital Health Podcast, a podcast dedicated to learning more about new digital technologies in women's health. We discuss convenient and accessible solutions that support women with common health conditions. Join us as we explore innovations like mobile health applications, sensors, telehealth, artificial intelligence, and more. Learn from a board-certified anesthesiologist the best tips for filling some of your health experience gaps throughout life's journey.

Welcome back to episode 15. In this episode, we'll be focusing on the impact that artificial intelligence could have as you're searching for mental health resources. This is the third of a three-part series looking at all of the components of artificial intelligence in mental health. The first episode, episode 13, covered vocabulary terms to help everyone catch up on what is even going on in AI. And I get it. Everyone is out there asking, NLP, LLM, what do all these things mean? So check out episode 13 if you just want to catch up on the vocabulary. I think it's a great way to get a handle on what's going on, catch up fast, and understand how this is going to affect your healthcare experience.

In part two, we dived a little deeper into what you can expect from artificial intelligence, which is being used in a lot of mental health wellness apps, as well as in treatment plans in the form of digital therapeutics and chatbots. We also looked at where your mental health professional could be using artificial intelligence to become more efficient in the clinical setting.

This episode, episode 15, puts it all together. We're going to take all those topics and ask the question: if I decide to look for support, what is the impact of using artificial intelligence on my healthcare experience?

So if you listened to the previous episodes and you're getting curious, you're like, "Hey, Dr. Brandi, I've been kind of sort of thinking about this whole mental health thing. Maybe, just maybe, I could consider using some of these technologies like mental health apps." Or maybe you've even tried out ChatGPT. You had a little conversation with it, it almost felt like a non-judgy therapist, and you had a little feel-good moment. Well, congrats. Hooray. I can't tell you how many people talked to me after the last few episodes, and even our workshop back in April, and said, "You know what? I didn't think I needed this. But Dr. Brandi, you got me thinking I might want to consider one of these mental health wellness apps, or maybe one of these chatbots. But I still have some questions. Are these apps really for me?"

It's a good question. And it gets to one of the biggest challenges that AI faces when it comes to healthcare, and that is bias. So we're going to define bias, and we're going to give you three forms of bias that typically show up in artificial intelligence, and how each might impact your experience if you are seeking mental health support.

There are two topics relevant to artificial intelligence that you're going to hear a lot as people discuss AI in the news, in debates, and in conversations. Those two topics are, one, bias, and two, health data and privacy protection.
We're going to answer the question: what types of bias exist that impact a woman's experience in finding mental health resources? And then, number two: how does artificial intelligence impact health data, and what are some tips you can use to protect yourself in light of all of these health data breaches? We'll definitely discuss the major ransomware attack that occurred at Change Healthcare and offer some tips to help protect yourself if you think you were affected.

Okay, before we start discussing bias, I want to make sure we're all on the same page about how AI shows up in mental health apps, because I think that's the part that isn't obvious. When you open up any of these wellness apps, even the ones not focused on mental health, there's probably some algorithm or some form of artificial intelligence in the background that's automatically taking the information you put in and making decisions, on behalf of another human, to shape your experience. Whether it's "we recommend that you exercise in this way" or "we recommend this type of meditation" in a mental health wellness experience, artificial intelligence is being used in some way in most of the wellness apps we use today.

So to get to the question of how this impacts a woman's experience when she's looking for mental health resources, we have to discuss bias. First, let's define it. Bias simply means that, by mistake, you tend to lean towards one outcome over other possible outcomes. You prefer one outcome so much that you cannot be objective; you can't be neutral in your judgment. In research, this happens all the time. In fact, on a quality research project, you just assume that it's there, and you use statistical tools to try to remove it the best way you can, but some of it will always remain. If we apply the term bias to artificial intelligence, it means that certain algorithms are flawed and they lend themselves to producing one outcome over another.

So let's go back to our question: what bias exists that can impact a woman's experience using artificial intelligence to find a mental health resource? There are many types of bias seen in artificial intelligence; we're going to talk about three.

The first type of bias is data bias. Data bias means that, by mistake, you omitted a certain set of data, which means you're underrepresenting one group and overrepresenting another. In artificial intelligence, the data is what's used to train the algorithm. With all that said, keep in mind that up until the 1990s, the National Institutes of Health did not require women to be a part of clinical trials. So women often weren't a part of clinical trials, which means we weren't generating data about important topics for women, like mental health. It also means that when we do use data in healthcare, because there's such a large gap and disparity in the data we have about women, the algorithms we create are going to have gaps and flaws. And many of these algorithms are already being applied to healthcare decision-making.
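To make the idea of data bias concrete, here is a minimal sketch, not from the episode, showing how a model trained on data that underrepresents one group tends to perform worse for that group. Everything here is invented for illustration: the synthetic "groups," the single stand-in feature, and the 95/5 split are assumptions, not real health data.

```python
# Hypothetical illustration of data bias: a model trained on data that
# underrepresents one group performs worse for that group.
# All data below is synthetic; no real app or dataset is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate one group: a single feature whose link to the outcome
    differs by `shift`, standing in for group-specific symptom patterns."""
    x = rng.normal(0, 1, size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(0, 0.5, size=n) > 0).astype(int)
    return x, y

# Training set: group A heavily overrepresented (echoing pre-1990s trials).
xa, ya = make_group(950, shift=0.0)   # group A: 95% of training data
xb, yb = make_group(50, shift=1.5)    # group B: only 5%
model = LogisticRegression().fit(np.vstack([xa, xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: the accuracy gap exposes the data bias.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    x_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(x_test, y_test), 3))
```

Because the learned decision boundary is dominated by the overrepresented group, the held-out accuracy for group B comes out noticeably lower, even though nothing in the algorithm itself was "unfair"; the gap was baked in by the training data.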
You know, no matter what sector you work in, whether law, finance, education, beauty, or marketing, you understand that most systems are built on the data you collect: that data helps you form decisions, and those decisions ultimately become part of a larger system. As you can see, if there are problems with the data, the decisions are going to be flawed, and therefore the system may be flawed, because the data was flawed, or had gaps, from the beginning. So whenever you hear me asking where the data is coming from, or where a data set is coming from, this is why: you can see very quickly how starting with flawed data can have huge impacts on the other forms of bias we're going to talk about.

Okay, the second type of bias that can really impact a woman's ability to find mental health resources is algorithmic bias. Once you're using flawed data to teach artificial intelligence to make decisions, it probably means that the decisions it makes will unfortunately not be great ones for the women you're trying to help. The algorithm may not offer the ideal support or treatment plan because the data it was trained on either didn't include women or, even if it did, had so many gaps that the algorithm makes suboptimal decisions for women looking for mental health resources. It may exclude really important information like where they live, their age, their gender identity, their ableness, and their race. And that undermines the whole beauty of algorithms and artificial intelligence, which is to help humans make decisions. These algorithms, when done correctly, can help humans make decisions quickly and help a lot of people. But an algorithm that isn't inclusive may make decisions for a few, yet it's very limited in its effectiveness and cannot scale.

Algorithmic bias is probably one of the most controversial types of bias discussed in academics and in clinical practice, and it's what a lot of patients and health professionals become very distrustful about when it comes to artificial intelligence. But the good news is that there's a lot of dialogue happening, and experts all over the world are sounding the alarm on the risk of using algorithms that are not inclusive and cause harm because the data has such large gaps. I definitely have to give kudos to Dr. Joy Buolamwini, a leading MIT PhD-trained computer scientist who uncovered both gender and racial algorithmic bias in systems used by large tech companies like Microsoft, Amazon, and IBM. Her message about algorithmic bias in computer vision, a specialized type of AI used in facial recognition software, was heard by the World Health Organization as well as the U.S. Congress. So kudos to her and all the other experts who are leading the charge and increasing awareness about the harms of algorithmic bias.

Hey, listeners, it's Dr. Brandi. Thanks for listening to this episode of Women's Digital Health. Subscribe to Women's Digital Health on your favorite podcast platform. If you want to know even more about how to use technology to improve your health, subscribe to our newsletter on womensdigitalhealth.com. Follow us on Instagram, Facebook, YouTube, and LinkedIn. Enjoy the rest of this episode.
All right, we're coming up on our last type of bias that can impact a woman's experience of finding mental health resources, and that is outcome bias. I'm going to switch perspectives for a second and speak as someone who might be creating a mental health wellness app or a digital therapeutic, something that might be prescribed by a mental health professional. They come to the table and say, "Okay, I hear that there's data bias; I've cleaned that up. I understand there's potential for algorithmic bias; I've tried my best to make my algorithm as inclusive as possible. So I don't think there's any bias in my data, and I don't think there's any bias in my algorithm." Outcome bias says that even if there's no bias in your data and no bias in your algorithm, if certain groups are getting different outcomes, then either there's a flaw in your data, or a flaw in your algorithm, or the way the data and the algorithm are being applied does not reflect what's happening in a real-world scenario. In other words, something is missing in the way the algorithm is being applied in a live setting.

So let's go back to the perspective of a woman trying to find mental health resources. If the mental health algorithms that are set up are still resulting in women not getting optimal mental health resources, it could mean that the data used to teach the artificial intelligence how to make decisions didn't include women, or enough women, who were looking for mental health resources. Or it assumed a lot of things about these women: that they had some degree of health literacy or health awareness, that they were able to recognize they should be asking for treatment, or that they had the economic means to pay for ongoing treatment.

There's one takeaway from all of these biases when you're considering mental health apps, which we now know mostly have artificial intelligence running in the background whether we realize it or not: you may get some benefit from many of them, but you're probably feeling, to some degree, that it's just not enough. And I hear this a lot. Some people say, "When I use a mental health wellness app, it works for a little bit," or "it sorta works," but I never hear people say they completely rely on one of these apps. Part of that is because of the things we just discussed. These mental health wellness apps were always intended to be supplementary to the other mental health resources in your toolbox. That could be a mental health professional, mindfulness techniques, exercise, community, or a collection of things. They were never intended to be standalone, because many of them have gaps in how they understand how humans work and how they make decisions for humans. So it's okay, I think, to use them. Just consider that if you feel like they're not working, you may want to add other things to your toolbox.
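As a rough illustration of how outcome bias can be checked in practice, here is a minimal sketch, again not from the episode: even when the data and the algorithm look clean, comparing how often each group actually receives a helpful result can surface a disparity. The group names, log format, and counts below are all invented for illustration.

```python
# Hypothetical outcome-bias audit: even with "clean" data and a "clean"
# algorithm, compare real-world outcome rates across groups.
# All counts below are invented for illustration.
from collections import Counter

# (group, got_helpful_resource) pairs as an app might log them in production.
logs = [("group_a", True)] * 420 + [("group_a", False)] * 80 \
     + [("group_b", True)] * 210 + [("group_b", False)] * 190

totals, successes = Counter(), Counter()
for group, helped in logs:
    totals[group] += 1
    successes[group] += helped  # True counts as 1

rates = {g: successes[g] / totals[g] for g in totals}
print("outcome rates:", rates)              # here: A 0.84 vs. B 0.525
gap = max(rates.values()) - min(rates.values())
print("disparity:", round(gap, 3))          # a large gap flags possible outcome bias
```

The point of an audit like this is that it measures outcomes directly, so it can catch problems that inspecting the training data or the algorithm alone would miss.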
All right, as a segue into our conversation on health data privacy and data breaches, I just want to say that if you are using ChatGPT as a form of mental health therapy, know that ChatGPT is not HIPAA compliant. It is in no way obligated to keep the health data you disclose safe, and it has no obligation to keep that information private. So keep that in mind.

Now, if you're looking for ways to increase the number of tools in your mental health toolbox, consider coming to one of the Women's Digital Health workshops. We just had one in April where we talked about ways to uplift mood, increase energy, and deal with personal challenges. We had a really engaging conversation about loneliness and how we can take advantage of technology to address this national epidemic that everyone is feeling. So subscribe to the newsletter to get the first opportunities to join future workshops that help us all learn how technology can enhance our well-being together.

All right, let's talk about healthcare data privacy breaches. If you are a woman seeking mental health resources, the recent Change Healthcare ransomware attack that we're going to talk about may be pushing you to say, "Nope, I'm not interested." And I get it. There's just a lack of trust after a data privacy breach like this one, which I'm going to give you more details about. The challenge I had in even making this episode is that I keep seeing digital health improve, and I don't see it going away. So I'm choosing to inform you about what's going on and help you understand what you can do to protect yourself as digital health continues to grow and transform healthcare.

So with that, let's discuss the Change Healthcare ransomware attack. Change Healthcare was a subsidiary of UnitedHealth Group, responsible for patient payments all across U.S. healthcare. This was a big tech company: it reported handling about 15 billion healthcare transactions annually, making it one of the largest healthcare tech companies. And just off that number, 15 billion, you know that artificial intelligence is definitely being used to process claims. With amounts of data that large, machine learning, natural language processing, and deep learning are all being used to automate claims processing and make it much faster and easier. You can check out a link to Forbes, which does a great job of explaining how artificial intelligence is used in medical claims processing. But without a doubt, Change Healthcare was using artificial intelligence.

Now, with regard to this ransomware attack: this was a very sophisticated cyberattack, and there were actually two cyberattacker groups involved, not to get too far into the details. Typically, ransomware attacks hold a system hostage, blocking its ability to do business, until a fee is paid. But the attack on February 21st of 2024 was different, because this was the first time the attackers not only held the Change Healthcare system hostage but also began to release personal health data. That had never been done before.
And it really shines a light on how sophisticated cyberattackers' approaches to gaining access to patient health information have become. The impact of this attack is going to be felt for a while, and it definitely elicits a lot of distrust and anger. First, patient health data has been released to the public on the dark web, and that will have long-term impacts; I'm going to tell you how you can protect yourself if you think you were affected. Second, hundreds of millions of dollars of healthcare professional revenue, payments meant to help healthcare professionals take care of patients, was frozen. Those healthcare professionals had to take out high-interest loans and find alternative ways to process claims so they could keep their clinics open and keep caring for patients. And lastly, UnitedHealth lost several billion dollars, and it will continue to absorb losses as we learn more about the long-term impacts.

But I want to focus on what to do if you think you were affected by this attack. There are a couple of things I want you to understand about health data privacy protection.

Number one, I'm going to include a link to UnitedHealth's guidance on what to do if your data was breached, but I want to make sure you understand what to know in general if you think you were ever part of a health data breach. The first thing to understand is that, at this time, the data has already been released. UnitedHealth is a covered entity, required to follow the Health Insurance Portability and Accountability Act (HIPAA) regulations, which require it to notify you within 60 days of the breach, and it has made that disclosure. UnitedHealth is also undergoing a risk assessment to determine the root cause of the breach, which it is slowly disclosing; we're learning which cyberattackers made the attack and more about how they did it, and the company must notify you if your data has been compromised.

The next thing: if you don't already have some sort of credit monitoring through Equifax or another credit monitoring service, get one. UnitedHealth may offer some protection for a limited time to guard against misuse of your patient health information, but I would get one independently, because when companies offer this, it's often temporary.

Lastly, you have a right as a patient to file a complaint with the United States Department of Health and Human Services and to seek further legal recourse. Of course, I am not an attorney, so I would strongly recommend that you seek legal counsel and maybe even follow some of the legal professionals who are keeping us updated on this particular data breach.

So this is a very unfortunate turn of events, and I hope you're able to take advantage of all these tips and to follow what the government and legal professionals reveal about the consequences for those whose data was breached, as well as the legal consequences for UnitedHealth, which was responsible for safeguarding all of this data. We'll keep you posted on what's going on. At Women's Digital Health, we continue to push for data privacy.
We know that keeping your information private can sometimes feel like a lot of extra steps. But when things like this happen, this is why we want to make sure you stay protected, because you never know who is going to be responsible for the next leak.

So that concludes episode 15. I hope you learned something about how artificial intelligence is impacting the experience of women who are trying to find mental health resources. We discussed the impact of bias, covering three types, data bias, algorithmic bias, and outcome bias, as they relate to finding mental health resources. We also talked about the implications of health data privacy breaches like the one at Change Healthcare, what we're learning about the use of artificial intelligence in these large healthcare data technology companies, and ways to protect yourself if you think you were affected by that data breach.

With that, thank you so much for your time. If you liked this episode, please, please, please give us a five-star review wherever you listen to your podcasts. Follow us on LinkedIn, YouTube, TikTok, Instagram, and Facebook. Subscribe to our newsletter to learn more about health technology and how it can really improve your overall well-being while you stay protected.

This concludes the three-part series on artificial intelligence in mental health. As we continue our conversation about mental health resources, we'll interview an expert on neuroscience and how non-invasive brain technologies are being used to improve mental well-being in those with anxiety and burnout. Thank you so much for your time, and bye for now.

Although I'm a board-certified physician, I am not your physician. All content and information on this podcast is for informational and educational purposes only; it does not constitute medical advice, and listening to this podcast does not establish a doctor-patient relationship. Never disregard professional medical advice or delay in seeking it because of something you heard on this podcast. The personal views of our podcast guests on Women's Digital Health are their own.