There has been a lot of buzz around ChatGPT and how it will change the world as we know it. One recent article in the Washington Post discussed AI in mental health, both its problems and its potential applications. As a therapist who has worked in tech, I have some thoughts worth noting.
Potential applications of AI in mental health:
- Insurance companies will use AI and APIs to connect with electronic medical records systems and analyze client records in bulk. They will claim they are doing this to ensure therapists are providing optimal care and to study patterns in positive outcomes. The reality is that insurance companies are for-profit entities and will use it as yet another tool to refuse care. We already see this in the annoying habit of certain insurance companies requiring us to put start and stop times on each session; if these times are too similar, they will audit and deny care. Most therapists do hour-long sessions and aren’t paid for this added documentation. If AI engineers and clinicians/experts allow wedges to be driven between them, big companies with goals that have nothing to do with human outcomes (i.e., better care) will take over the innovation, with adverse effects (a focus on maximum profit). Clients will need to be informed of this and have the option to opt out without penalty. In my mind, this is a negative impact of AI on mental health.
- AI could be used to analyze physical health histories and then direct care toward specific programs that are more appropriate. Our current healthcare system doesn’t look at physical health symptoms and appropriately link them to mental health. Better data could surface that correlation and allow care to be coordinated more appropriately, with earlier interventions. If doctors’ offices were better equipped to assess for trauma and refer to appropriate care, we wouldn’t need AI to assist. The ACE Study has been pioneering here: “The ACE study found a direct link between childhood trauma and adult onset of chronic disease, incarceration, and employment challenges.”
- The Washington Post article mentions that chatbots could help teach skills to people in need, or help train people to support populations that need care but lack resources. CBT or motivational interviewing would be accessible applications, and a chatbot could mimic human responses to help train people to provide care (a rough sketch of how such a practice chatbot might be wired up appears after this list). This could be helpful for AI mental health therapy, as it could help rural communities prepare their residents to support those in need.
- Companies like Nirvana Health are doing fantastic work with machine learning, using their large volume of data to more accurately predict copays, reimbursements, and other health benefits (a toy illustration of this kind of prediction also appears after this list). As silly as this might sound, benefits uncertainty is a massive pain for medical practices. We will contact your insurance company to ask about the specifics of your coverage, and what we are quoted is often incorrect. The error is often discovered within several months, but it can leave the client owing more money or needing a large refund – which can be frustrating to both parties. Because companies like Nirvana Health see this data at such a large scale, they can make predictions and catch these problems before they happen.
- AI could assist therapists or doctors in finding available and appropriate referral sources. While I like the idea of this, it comes with a lot of caveats. If it’s on the front end of a website intake to get a client into care, people often give the minimum required information, and we miss relevant details we would get in a phone call. It also can be frustrating to people. Anyone who has tried using Amazon support lately knows it’s nearly impossible to get a human, and voice recognition systems and apps have limits to their usefulness. AI also has limitations and inherent biases depending on who trains the model. It would be easier to trust AI if we had more transparency in algorithm design, training, funding, and data connectivity (I am sure there are other things to consider as well). This lack of transparency in the US concerns me for all AI mental health startups. I say all of the above because I believe transparency is a precondition for trust in any relationship with AI.
- AI could be used to generate content and improve the accessibility of care for people who can’t afford treatment. A huge caveat applies here as well: generative AI is great at producing output but doesn’t consider what is true or false, or what is most appropriate for a particular person. It just outputs data. If this were done in a peer-reviewed way and made accessible, it could help those in need.
- People must trust that their data won’t be leaked or shared without their knowledge. If you are using a telehealth app or anything related to your mental or physical health, you should not have to worry about your data ending up, in user-identifiable form, with Facebook or other big tech companies. Recently there have been multiple reports of health tech companies sharing personal and health data with large tech companies, and this must stop.
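To make the chatbot-as-trainer idea a bit more concrete, here is a minimal sketch of a role-play bot a trainee could practice motivational-interviewing responses against. It assumes the `openai` Python package and an OpenAI-style chat API; the model name, prompts, and overall setup are illustrative assumptions, not a clinical tool or any particular product’s implementation.

```python
# Minimal sketch of a role-play chatbot for practicing motivational-interviewing
# style responses. Assumes the `openai` package and an OPENAI_API_KEY in the
# environment; the model name and prompts are illustrative only, and this is
# not a clinical tool.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are playing the role of a client who is ambivalent about making a "
    "change (for example, cutting back on drinking). Respond realistically so "
    "a trainee can practice open questions, reflections, and affirmations. "
    "Stay in character after each trainee message; do not give advice."
)


def run_practice_session() -> None:
    """Loop a text conversation until the trainee types 'quit'."""
    history = [{"role": "system", "content": SYSTEM_PROMPT}]
    print("Practice session started. Type 'quit' to end.\n")
    while True:
        trainee = input("Trainee: ").strip()
        if trainee.lower() == "quit":
            break
        history.append({"role": "user", "content": trainee})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=history,
            temperature=0.7,
        )
        message = reply.choices[0].message.content
        history.append({"role": "assistant", "content": message})
        print(f"Simulated client: {message}\n")


if __name__ == "__main__":
    run_practice_session()
```

A tool like this would only be useful alongside supervision and feedback; the sketch shows the plumbing, not the pedagogy.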
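The benefits-prediction point is also easy to illustrate. Below is a toy sketch of the general approach: train a regression model on historical claims and use it to estimate a new client’s likely out-of-pocket cost before the claim is filed. Every payer name, column, and number here is a made-up placeholder; this is not Nirvana Health’s actual model or data.

```python
# Toy sketch of benefits prediction: a regressor trained on (hypothetical)
# historical claims estimates a client's likely out-of-pocket cost.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical historical claims (a stand-in for a large claims database).
claims = pd.DataFrame(
    {
        "payer": ["Aetna", "BCBS", "Aetna", "Cigna", "BCBS", "Cigna"],
        "plan_type": ["PPO", "HMO", "PPO", "PPO", "HMO", "EPO"],
        "cpt_code": ["90837", "90834", "90837", "90791", "90834", "90837"],
        "deductible_remaining": [0.0, 500.0, 250.0, 0.0, 1200.0, 100.0],
        "out_of_pocket": [25.0, 80.0, 60.0, 30.0, 134.0, 55.0],  # target
    }
)

features = ["payer", "plan_type", "cpt_code", "deductible_remaining"]
model = Pipeline(
    steps=[
        ("encode", ColumnTransformer(
            [("cats", OneHotEncoder(handle_unknown="ignore"),
              ["payer", "plan_type", "cpt_code"])],
            remainder="passthrough",
        )),
        ("regress", GradientBoostingRegressor()),
    ]
)
model.fit(claims[features], claims["out_of_pocket"])

# Estimate what a new client is likely to owe before the claim is filed.
new_client = pd.DataFrame(
    [{"payer": "Aetna", "plan_type": "PPO",
      "cpt_code": "90837", "deductible_remaining": 150.0}]
)
print(f"Estimated out-of-pocket cost: ${model.predict(new_client)[0]:.2f}")
```

In practice, the value comes far more from the scale and freshness of the claims data than from the particular model.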