Can You Trust Digital Health Firms To Trust You?

By MICHAEL MILLENSON

What kind of health care organization would let a 10-year-old child make an instructional video for patients? And what might that decision teach health tech companies trying to gain the trust of consumers? 

I found myself pondering those questions while listening to Dr. Peter Margolis, co-chair of a National Academy of Medicine committee on health data sharing and stakeholder trust and a speaker at the recent (virtual) Health Datapalooza annual conference. Margolis is also co-director of the Anderson Center for Health Systems Excellence at Cincinnati Children’s, the institution that allowed 10-year-old kids with a condition necessitating a feeding tube to create videos showing other children how to insert one. Parents, meanwhile, were recruited to help develop new technology for their children.

This and similar efforts by the shared learning communities Cincinnati Children’s has birthed have paid off in significantly improved outcomes and national renown. But for this type of initiative to succeed, Margolis told me when I visited a few years ago, clinicians and administrators “have to be comfortable with a very different kind of role.”

As consumers gain access to information once limited to medical insiders, that advice seems increasingly prescient. Federal rules requiring providers to make electronic health data available to patients at no cost take effect April 5. Meanwhile, voluntary electronic sharing of physician clinical notes is rapidly morphing from the unthinkable to the unremarkable, while apps to make all this data actionable are proliferating. As a result, long-simmering issues related to transparency and trust are coming to the fore.

“Change is hard,” cautioned Catherine DesRoches, executive director of the OpenNotes initiative. For doctors uncomfortable with having patients essentially peer over their shoulder, she advises simplicity: “Write in your notes what you talk about [with the patient] and talk about what you write.”

Can accepting the validity of the consumer’s knowledge about their own health also start to reshape corporate actions? Perhaps. Heather Cox, chief digital health and analytics officer at Humana, related how the insurer’s Medicare Advantage plans checked in with members at the start of the COVID-19 pandemic. An analysis of those conversations found that many of the elderly were afraid to leave their homes even to shop for food. Proactively, said Cox, “we were able to deliver more than a million meals to our members in weeks,” as well as refer those who needed counseling to behavioral health specialists. (Humana was not alone: Oscar Health undertook a similar food-delivery program, as did Anthem Blue Cross and perhaps others.)

The “human touch” may not even need humans. London-based digital health futurist Maneesh Juneja, isolated at home with COVID-19, described waking up at two in the morning, exhausted, worried by “weird symptoms” and acutely aware that friends and family were all fast asleep. He turned to an AI chatbot, and, to his surprise, found the conversation reassuring. “Even though it was pseudo-compassion, it was still some form of compassion,” Juneja ruefully acknowledged.

As my friend and colleague Jane Sarasohn-Kahn has long emphasized, trust is a key element of successful health care consumer engagement, and even more so in light of the pandemic. Yet as comforting as algorithm-driven empathy might be, sustainable trust requires much more. In particular, a relationship of trust requires a genuine transparency that remains glaringly absent in most of health care.

For instance, health plans, digital health firms and others routinely make use of secretly mined non-medical data, such as credit reports, to guide outreach related to social determinants of health. In contrast, online advertisers have developed a voluntary code that requires “clear, meaningful and prominent” notice of what types of data are being collected for what purpose and for what length of time, as well as data transfer and use practices. No such candid disclosure exists in health care even on a voluntary basis.

“I’d argue right now we’re behind the curve” in addressing consumer data privacy issues, Sen. Bill Cassidy (R-LA), a physician and member of several influential Senate committees overseeing health care, told the conference.

Safeguarding health data privacy and security may well require new laws. At the front lines of care, however, “trust is not something that can be regulated, but needs to be cultivated,” said Margolis. I agree and, as I’ve argued elsewhere, “data liberation” (a term first popularized at Health Datapalooza) requires rethinking the relevance of “patient-centered care,” a concept coined in the late 1980s. In its place, I’ve suggested “collaborative health.” 

Collaborative health (not “collaborative care,” which refers to a relationship among providers) is rooted in three core principles: shared information, including opening up the EHR for patients to read, comment upon and share; shared engagement, involving non-traditional actors such as online communities and technology vendors, as well as clinicians; and shared accountability, where all stakeholders have well-defined roles in regard to care continuity, communication, privacy and other questions.

Are digital health firms and others prepared for a relationship of genuine trust with consumers?

Collaborative health represents consumers proclaiming, “Nothing about me without me – but sometimes without you.” It’s a message that demands trust become a two-way street. To earn it, you have to give trust and give up some control. It’s still unclear who in the health tech world is ready to listen to that message.

Michael L. Millenson is president of Health Quality Advisors LLC and adjunct associate professor of medicine at Northwestern University Feinberg School of Medicine. This article originally appeared on Forbes.