Your end-of-life decisions, predicted by AI
There is perhaps nothing harder in the world than watching a loved one pass away. End-of-life decisions can be a special kind of torture for those faced with the responsibility—the medical surrogates.
It’s hard to imagine how technology could possibly improve this situation. Of course, nothing will take away the devastation of loss. But what if innovation could provide a bit more structure or guidance to the end-of-life decision-making process?
The story: A new AI tool is being developed to make end-of-life decisions easier for family members: the personalized patient preference predictor (P4).
- So far, the system is more of a theory than a concrete product. The researchers are working out how to build the machine learning algorithms they need to make it a reality.
- The P4 would work like a “digital psychological twin,” predicting the decision the patient would most likely make if they could speak for themselves.
- It would be built from the patient’s personal data, such as medical records, text messages, and social media posts, to model how the person thinks and how their beliefs are formed.
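The researchers haven’t published an implementation, but at its core the P4 is a supervised-learning problem: take a person’s own words as input and predict the treatment choice they would make. Below is a minimal, purely illustrative sketch in Python; the training data, labels, and scikit-learn pipeline are assumptions made for the example, not the researchers’ actual design.

```python
# Illustrative sketch only: a toy "preference predictor" trained on a patient's
# own writing. The data, labels, and model choice here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: snippets of the patient's past writing
# (messages, posts, notes) paired with preferences they expressed elsewhere
# (1 = would want aggressive intervention, 0 = would not).
documents = [
    "I never want to be kept alive on machines if there's no hope.",
    "Quality of life matters more to me than how long I live.",
    "I'd want the doctors to try everything possible, no matter what.",
    "Every day of life is worth fighting for.",
]
preferences = [0, 0, 1, 1]

# A simple text-classification pipeline: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(documents, preferences)

# For a new decision scenario, the model outputs the probability that the
# patient would choose intervention -- the "digital twin's" best guess.
scenario = "Would you want to be placed on a ventilator with a low chance of recovery?"
prob = model.predict_proba([scenario])[0][1]
print(f"Predicted probability of choosing intervention: {prob:.2f}")
```

A real version of the P4 would presumably need far richer data, careful validation against what patients actually choose, and a way to convey uncertainty; the sketch above only shows the basic shape of the prediction problem.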
How well do we know our loved ones?: As it turns out, when it comes to end-of-life decisions, we’re not amazing at predicting what our loved ones would want. Studies suggest that surrogates predict their loved ones’ treatment preferences with only about 68% accuracy.
Wait, isn’t this why we have advance directives?: Yes, in theory.
- Many patients do sign documents that give their medical surrogates instructions on what to do at the end of their life, should they not be able to make those decisions themselves.
- However, many advance directives are signed well in advance, and people’s opinions change. By the time a directive is needed, its instructions may be outdated.
- Plus, in the U.S., only one in three adults has a formal advance directive for family and providers to work from.
Complicated ethics: As you can imagine, this project is a bioethicist’s playground. Here are some of the questions complicating how this tool may work in the real world:
- Culture: End-of-life decisions don’t happen in a vacuum. Around the world, cultural beliefs about death and family shape how these decisions unfold. The P4’s approach reflects an American sensibility, wherein a person’s individual autonomy is prioritized over the wishes of the family.
- Vulnerable populations: This tool may also be especially useful for cases where a patient doesn’t have a family or the family can’t be found, such as with patients who are unhoused, undocumented, or incarcerated. However, given how vulnerable these groups are, using technology to make critical decisions for them is likely to face significant ethical and regulatory pushback.
- Our real vs. online selves: Is the person you are on social media reflective of the ‘real’ you? If you’re shaking your head, you wouldn’t be alone in feeling that way. Therein lies the trouble with using social media (or even private text messages) to model how a person truly thinks. What we express to others or to the world may not actually reflect who we are inside.
Human-machine interaction: When asked whether they would defer to the P4 or to a human surrogate with a different opinion, even the P4 researchers say they’d probably side with the human surrogate. That reflects a gut instinct many of us share: humans know us in ways machines can’t, regardless of the statistics.