
The Patient Will See You Now

Are Doctor Reviews Good Medicine?


ISSUE:  Winter 2022


Illustration by Rachel Levit-Ruiz

“He was OK.”

This anonymous comment stung more than the blatantly negative reviews that patients occasionally post about me online. Sure, I get good reviews about the impact of my care on patients’ lives (“The doctor was very good and I felt heard…I actually walked out feeling understood and I felt a weight had been lifted.” “I love Dr. Telhan.” Or simply: “Muy bueno.”). And I readily accept feedback that points me toward things I need to improve (“I felt very rushed through the visit.”). I’ve even grown accustomed to the unjustified criticism that sometimes trickles in from the internet. But this pale pronouncement—just OK—landed differently. 

Doctor reviews have been around for a while, but they have only become more popular over the last decade. One study found that “hundreds if not thousands of online reviews appear daily on crowdsource platforms of patient review websites and carry growing influence in patients’ medical decision making.” To be sure, people still rely on word of mouth, and insurance coverage ultimately tends to govern most patients’ doctor selections. And yet, in one consumer survey, 70 percent of respondents said they consider a positive online reputation to be “very or extremely important” when choosing a health care provider.

Given the prevalence of online reviews, what is it that we—patients, doctors, and institutions—think we’re measuring with them? What do they leave out? Why do they generate such strong reactions from physicians, who, according to one study, reported that online reviews would increase their stress (78 percent) and strain patient relationships (46 percent)? And why, in my case, did being “OK” feel so unsatisfactory?

The answer, I think, has less to do with what the review said about me than with what it didn’t. Despite my doubts about reviews, I generally accepted them as a kind of barometer—if not of my medical expertise, then at least of how that expertise was perceived by patients. The distorted quality of some reviews seemed like a price that just had to be paid for access to some meaningful insight into how the quality of my practice made patients feel—a price not entirely dissimilar to that which workers in countless other fields have to pay. “He was OK,” however, seemed so comically incomplete that it threw this assumption into question. Terse, bordering on apathetic, this review denied the satisfaction that comes with a conclusive judgment, of letting me know where I stood. A vivid judgment—no matter how skewed—at least offers the consolation of clarity. To the contrary, the vagueness of being “OK” began to reveal a certain absurdity underlying the use of fragmented reviews and larger quantified scoring systems, as well as our attachment to them. How was it that a measurement system I was skeptical of nevertheless held my attention for as long as it did? If, as some doctors believe, star ratings were so obviously uninformed, why did I care at all what they said?

C. Thi Nguyen, a professor of philosophy at the University of Utah who thinks about games and point systems, has observed that “quantified measures are extremely good tools for large-scale bureaucracies to organize themselves….You have some number—some measure—and we know it’s not measuring the real thing, but it becomes dominant anyway.” Could doctor ratings fall into this same flattening, but useful, trap? I asked Nguyen how such markers come to eclipse the reality they’re supposed to be measuring and why their shadows are so difficult to escape. For one thing, he told me, on an individual level, metrics that promise to distill and simplify our values are pleasing to use because they create a “common standard” to “explain to another person how well you’re doing.” Nguyen has written about the allure of such systems being rooted in their ability to take something complex (like what defines “good” performance, and how well one is living up to it) and boil it down so that it is manageable, even enjoyable. Knowing what to care about in public discourse is hard, for example, but Twitter motivates us to equate retweets and follower counts with salience. Every “like” functions as a minor proxy for satisfaction. Nguyen also told me that even though “a lot of valuations don’t capture the thing that matters,” systems are hungry for data that can be gathered quickly and at scale, the kind of data collection “which by its nature can’t capture the richness of the input.” It’s the portability of these simple measurements that gives them so much sway. 

 

My initial response to online reviews was scientific curiosity. I tried small experiments to see what worked when it came to connecting with patients. I would go into exam rooms wearing my white coat some days and other days without it. I went by my first name at times to see if it built more rapport than my full, formal title. To try to give patients more uninterrupted time to share their stories, I set my smart watch to buzz at the five-minute mark after entering the room, as a reminder not to interject for at least that long. I adjusted the chairs and tables in my exam rooms so I could review patients’ scans without having to turn my back to them. I worked on spending less time looking at my computer in the exam room and I got patients their refills faster. Some of these interventions seemed to help—my star ratings improved. The pleasurable feedback loop Nguyen had described was real. 

But a question started to nag at me: What exactly was I improving at? Sure, sometimes I felt like a “better” doctor when my ratings went up. But was I? I grew wary that reviews could become outsized drivers of how I practiced, that they might incentivize changes that were good for ratings, but not necessarily good for patients. If one of my patients needed more attention, would I hesitate to spend additional time with them for fear of hurting my on-time appointment rating? If I’m scored on how likely a patient is to recommend me, would I avoid treatment decisions that might leave the patient feeling unhappy? A paper in the Journal of General Internal Medicine explored precisely this question. The authors sought to examine the issues that arise for physicians in responding to patient requests for tests and treatments that aren’t, in the practitioner’s judgment, medically necessary. They note the challenge of making these decisions when doctors know that “[a]greeing to patients’ requests may increase their satisfaction,” but could also “expose patients to the risks associated with unnecessary tests or procedures.”

Using a standardized patient that doctors could not identify as part of the experiment, the paper’s authors found that 8 percent of physicians ordered a patient-requested MRI they didn’t think was necessary on the first visit; another 22 percent told the patient they might order the MRI in the future. This analysis was done in 1997, and it would be interesting to see how those results have changed now that online reviews have become so pervasive. 

 

Effective communication is often emphasized as an antidote to mistrust and misunderstandings that come up in interactions between experts and nonexperts. But is it enough to persuade concerned (and convinced) parents that antibiotics aren’t necessarily needed for their kid’s ear infection? Can communication bridge the gap between a doctor recommending a vaccine and a patient who’s dead set against receiving it? If not, what do we learn from a one-star review about a physician’s communication skills or the likelihood of being recommended to others?

The knowledge gap separating experts from nonexperts is of tremendous interest to Nguyen, who described communication in patient-doctor interactions to me as “one of the deep epistemic problems out there.” The way he thinks about it, “the doctor has medical expertise, whereas what the patient has is expertise over their own values about their own life.” The rub is that only someone possessing both forms of expertise—the necessary medical knowledge and a full understanding of the patient’s values—could actually make the decisions that good care requires. As Nguyen pointed out, “there is no such entity.” 

Add to this dilemma the long and troubling history of patient autonomy being overridden or subsumed by medical experts who have too often exploited their authority or abused patients. A lot of medicine happens behind closed doors to protect the dignity of the patient. But those same doors can conceal mistreatment, discrimination, and gaslighting that take place behind the veneer of expertise. This is the challenge at the heart of medical practice. Daily, patients rely on doctors to make complex decisions with imperfect knowledge while trusting that they use their authority as experts responsibly. It’s natural that we—experts and nonexperts alike—would want a measurement of how well doctors are doing this.

Patient reviews offer at least one small point of entry for trying to gauge what works and what doesn’t. In 2012, the University of Utah became the first health care system in the country to post online patient reviews of its doctors. Vivian Lee, former dean of the university’s school of medicine and CEO of University of Utah Healthcare (which collects almost $5 billion in annual gross patient revenue), has written about the benefits. Central to her argument is the idea that reviews allow consumers to leverage their collective power to effect change. Citing companies like Uber and Nordstrom that respond to customers’ opinions, she contends that hospitals need to follow suit. In her article explaining Utah’s decision to introduce reviews, Lee quotes the Harvard Business Review on the role of consumer feedback in “keep[ing] the customer front and center” and creating a culture of improvement that offers the kind of transparency that she views as necessary for building trust between patients and health care systems.

Nguyen has some serious reservations about transparency. “My worry,” he told me, “is that transparency is important. Trusting experts is important. They’re [both] genuinely important things, and they’re in conflict.” In his paper, “Transparency is Surveillance,” he writes about the “desperate complexity” of this tension, which he believes “admits no neat resolution, but only painful compromise.” The argument is simple: Transparency asks experts to explain themselves to nonexperts, but many of experts’ reasons aren’t comprehensible to nonexperts. “When we trust experts, we permit them to operate significantly out of our sight,” Nguyen writes. “There, they can use the full extent of their trained sensitivities and intuitions—but they are also freed up to be selfish, nepotistic, biased, and careless. On the other hand, we can distrust experts and demand that they provide a public accounting of their reasoning. In doing so, we guard against their selfishness and bias—but we also curtail the full powers of their expertise. Transparency pressures experts to act within the range of public understanding. The demand for transparency can end up leashing expertise.” 

Combine this observation with the fact that we live in a hyperspecialized knowledge society in which we can’t possibly know everything we would need to know in order to replace expert decisions with our own—a world in which we rarely know enough even to identify the right expert, one we can trust completely, to remedy our ignorance.

Given this vulnerability, what exactly does expertise owe to those it claims to serve? The answer depends partly on the field, and also on the risks at hand. In some cases, a solution to a problem needs to be wholly comprehensible to those affected by that problem. In others, a solution simply needs to work. In others still, the need for transparency may be real, but its fulfillment remains out of reach. I can understand how you change the tires on my car, but I don’t fully understand how an engine timing belt runs, nor do I possess the tools to know if it was fixed properly. I simply trust my mechanic (until something stops working). The stakes in medicine are far greater.

So, the question remains: In a world in which not all problems are easily visible and not all solutions are readily understandable, how do we decide which explanations are good enough? Political theorist Ben Martin, writing about public policy in the 1970s, says that “expertness is an ascribed quality, a badge, which cannot be manufactured and affected by an expert himself, but rather only can be received from another, a client.” In other words, an expert is only an expert insofar as she is recognized as one by the outsider. As we’ve come to understand very well during the COVID-19 pandemic, simply being considered an expert can be dangerous, especially in an era of unregulated social media that makes it difficult to parse misinformation from truth. Still, Martin’s emphasis on recognizable usefulness as a mark of expertise holds at least some promise as a safeguard against the misuse or waste of expert knowledge. In Martin’s view, transparency is not a threat to, but rather a prerequisite for, valid claims to expertise. But the demand for discernibility baked into this stance can, paradoxically, erode the defining use of expertise: its ability to ask and answer questions that the nonexpert public cannot.

 

As is true with most doctors, the majority of my reviews are “good.” Still, I often feel a little anxious when I sit down to read them. Public judgment is a difficult thing to get used to. In my case, it’s tied partly to the fear of being judged on aspects of the patient’s experience that I can’t control, but which the patient nevertheless associates with me and thus takes into account when deciding upon a rating. I also wonder at times what the patients who ask me where I’m really from will say about me. But mostly, I worry that the reviews will tell me that, despite over a decade’s worth of education and training, I’m not good enough—that neither competence nor credentials is enough to compensate for my human foibles and limitations; that I’m not succeeding at my job of making people feel better. 

I asked Adrienne Boissy, a practicing neurologist and former chief experience officer at the Cleveland Clinic, who is now chief medical officer at Qualtrics, one of the leading vendors of survey tools used to measure patient experience, why reading reviews can sometimes be so miserable. For many doctors, she told me, how they are perceived by their patients is connected to their identity. “So, if [my] view is, ‘I take tremendous pride in being a clinician and a healer,’ and then the feedback I get is that ‘I’m not very good at that,’ it can feel very personal. To clinicians, it hurts more because this is part of ‘who I am,’ this is part of ‘who I want to be in the world.’” For Boissy, this pain reflects a deep caring on the part of physicians for their patients. We wouldn’t be uncomfortable if we didn’t care. The second answer, Boissy told me, is that there is also “discomfort with the idea that you could productize the relationship, rate it with a star when it feels so much more complex and intimate than a star rating.” 

Boissy was thoughtful about the role of metrics in health care. “I wonder sometimes,” she told me, right off the bat, “whether the enthusiasm for measurement runs fast in directions that maybe miss the meaning. Most relationships I know…defy measurement. It’s like asking…‘How much do you love your patients?’ I hope, actually, we never get to a number on that.” She used the word love “intentionally,” she said, precisely because it resists quantification.

“That said,” she added, “I understand and appreciate the enthusiasm to try to get at what is the secret that makes clinicians behave differently as a result of the relationship. What makes patients take their medicine, or tell you the truth and, you know, come back.” 

Boissy has seen health care organizations around the globe use reviews and metrics to track institutional loyalty and considers them a useful way to build trust while measuring how well organizations are keeping their promises to patients over time. “Not all patients have a choice about where they go, and when they do, they’re choosing based on what they’re hearing other patients say about their experience or their family members at the table. And that’s a very strong influencer, more so than advertising probably.”

The idea of using doctor ratings for health care systems, Boissy told me, “certainly came from outside health care; other industries were doing it and rating their products as a way to better sell…and position [their] products. And health care was probably very slow to that.” Eventually, health care systems had to decide whether to jump in and adopt the ratings process or continue to allow third parties to gather review data. “Most organizations made the decision to manage it internally,” Boissy said. “And that’s critically important, because if you look at some of those third-party raters, they may only have four or five comments…not necessarily from a verified patient…and yet [these reviews] would get seven million views. And so, the organization…really has a quality question on its hands. Because if we want high-quality data put out there so that patients can make the best decision about where to go, then we probably need to have a stake in that conversation.”

 

Having patients review doctors is a reversal of a longstanding power dynamic. Historically, it’s been the patient who presents himself for inspection before the gaze of expertise. Or, as Foucault, always acutely attuned to the geometry of power, put it: “beneath the observing eye of a forearmed doctor.” Reviews, at least in theory, authorize patients to do some of the seeing. In a field that has a long history of bias, the power to see (and speak about) what happens is a democratizing gesture. In The Presentation of Self in Everyday Life, the sociologist Erving Goffman describes daily life as a series of contests in which various actors constantly assert a definition of reality to one another and to themselves. Through our speech, our dress, our actions, we perform the truth we desire others to accept. In other words, we are constantly trying to define the situation in front of us.

In the clinical setting, where “the doctor knows best,” this definitional contest has typically not been much of a contest at all. The definitive version of reality is often marked by the physician’s vocabulary, which tends to crowd out the patient’s story. I imagine that at least part of the anxiety (or satisfaction) that surrounds reviews stems from their potential to reorder this imbalance by giving patients an opportunity to assert publicly their own definition of reality, with a voice that cannot be as easily ignored.

But is the insight offered by reviews of much use when it comes to selecting a competent physician? Boissy told me that in her experience with review systems, physicians wanted data about treatment success and complication rates to be posted, whereas patients wanted to know how other patients felt. Patients want the best for themselves and their family members, Boissy said, but “how they define ‘the best’…is reliant on how people make them feel… It may be unexplainable, unmeasurable, and irrational—and it’s how most humans make decisions.” 

The blurring of distinctions between these two very different types of criteria—whether care is effective versus how an experience makes you feel—can sometimes create an illusion of knowledge and control for patients. The emotional experience of patients is an undeniably vital part of medical care—ignoring something that doesn’t feel right to the patient risks undermining their autonomy and authority in dangerous ways. At the same time, if patients are coming to reviews to understand whether the standard of care was met, leaning too heavily on the emotional experience can leave a big blind spot that might not be recognized as such. To help fulfill their implicit promise of solving the expert-selection problem—Who can I trust?—reviews should include both qualitative narratives and concrete outcome data that patients can use to make meaningful decisions.

Without both, reviews in medicine risk becoming just another symptom of a larger discomfort with asymmetrical expertise. The patient becomes just one more “citizen consumer” who the sociologist Tressie McMillan Cottom, building on the work of historian Lizabeth Cohen, has described in another context as someone who “thinks [they are] an expert in all manner of everyday decisions.” Cottom says that such a person “is the perfect mark for an endless string of scams.” Scam may be too strong a word for the role of reviews in health care, but an uncritical embrace of reviews—sometimes filtered or “curated”—creates pitfalls for both patients and doctors. Such an embrace can mask the existence of a persistently uneven playing field between layman and expert. It also furthers the consumerization of medical care and the notion that ratings based on individual consumer self-interest can function as a proxy for how well institutions—like hospitals or clinics, often funded by the public or seeing patients who are insured by government programs—are actually performing for the communities they are meant to serve. One could argue that not only are we past the point of no return when it comes to this consumerization, but also that we crossed it quite a while ago, that Cohen’s example of the citizen consumer who views her relationship to the state apparatus like any other market transaction is just a plain fact.

But it’s worth thinking about the ways in which a rating system designed by corporations to measure and accelerate consumption can corrupt a patient-doctor relationship. For one thing, while patients are empowered by the ability to post public appraisals of their care, the review also takes a discrete encounter where privacy is paramount and transplants it from the exam room to the internet. At times, this seems to change something fundamental about the visit itself, to alter what is meant to be a therapeutic relationship, one that’s different in important ways from typical retail transactions. Some of these changes are obviously salutary (patients need and deserve more weight given to their experiences), but others (especially consumerist concerns that center on the marketing needs of institutions more than they serve individual patients and doctors) should make us wary lest we succumb to a system in which, as Goffman describes it, a “certain bureaucratization of the spirit is expected so that we can be relied upon to give a perfectly homogenous performance at every appointed time.” As we’ve seen with social-media platforms, moving our organic personal interactions from the private sphere to the public isn’t always healthy.

Consumerization also risks transforming doctors from public servants (however imperfect) into customer-service representatives. I asked a physician at a large academic hospital in Washington, DC, who requested anonymity, about how these pressures play out at his institution, where negative reviews can reduce doctor compensation. When that happens, he said, the hospital frames it “as almost like ‘inspiration’ or ‘incentive’ to say ‘we really want you to do a lot better with your patient satisfaction.’” For many physicians, he said, this kind of incentive feels like an unfair punishment tied to aspects of the consumer experience over which they have little control. Given that reviews are also plagued by bias against women and minorities, this burden is not carried equally.

Far more important than the impact on doctors, this kind of consumerist pressure also turns patients into content creators for an industry competing for their business. Increasingly, the onus falls on patients to dutifully evaluate their care online. An entire consulting industry exists around helping health care systems collect more reviews from patients. One such firm explicitly states that, “[a]s one of the most effective health care marketing strategies today, asking patients for reviews helps you…achieve competitive differentiation.” The methods encouraged by these firms are designed to “generate the kind of social proof that helps patient acquisition.” Unlike billboards or TV ads, these marketing tactics wearing the garb of transparency may not be recognized as such by patients. Also, it’s notable that medical school is a place where the distinction between patients and customers seems to matter, even now. A place where the not-so-secret calculus used to determine which patients are worth acquiring—Are they insured? Can they pay?—still feels at odds with the ethos of public service that student doctors are encouraged to embrace before they go out to practice on their own.

Danielle Ofri has written about how “Madison Avenue buzzwords” used in the mission statements of health care systems ultimately depend on the work ethic and professionalism of the physicians and nurses who labor within such institutions. “By now, corporate medicine has milked just about all the ‘efficiency’ it can out of the system,” Ofri writes. “With mergers and streamlining, it has pushed the productivity numbers about as far as they can go. But one resource that seems endless—and free—is the professional ethic of medical staff members. This ethic holds the entire enterprise together. If doctors and nurses clocked out when their paid hours were finished, the effect on patients would be calamitous. Doctors and nurses know this, which is why they don’t shirk. The system knows it, too, and takes advantage.”

Boissy, refreshingly, recognizes the risks posed by review systems that aren’t tied to structural changes that help both patients and doctors. “We have a responsibility,” she told me, “if we ask people for their feedback, to act on it.” Among many other best practices, she has also implemented innovations that allow physicians to contest or respond to the feedback they receive online, a move that struck me as humanizing. “I also think we have a responsibility to clinicians,” Boissy said. “If we are going to publicly report ratings of them based on feedback, then we support them in being their best selves every day in the workplace…We have a responsibility to our people to make sure they have the training competency to do the best job that they want to do.”

It’s hard for patients and doctors to escape the pull of review systems, even when they’re not implemented with the sense of care and duty that Boissy underscores. For patients, the idea that a problem as complex and fraught as choosing the right doctor can be made somewhat easier by a star-rating system is appealing. These are hard decisions for a nonexpert to make, and any help seems better than none. For physicians, the draw of reviews is tied, at least in part, to the way we’ve been acculturated through our training to accept metrics as a part of life, going all the way back to medical-school tests and board exams. As a group, we are used to (and even welcome) measuring and being measured, often weighing our self-worth against our performance on the tests and challenges that we face. Not only do we see metrics as a way to prove ourselves, but many doctors have been conditioned to believe that embracing new metrics is a sign of commitment. If you were actually competent or really cared, the thinking goes, you would do this happily, you would jump through every new hoop, and do it faster and better each time. 

But is this really a measure of excellence? Or is it precisely the kind of pliable attitude that corporate managers would love to leverage? The kind that Ofri feels is being “cynically manipulated”? If so, maybe reviews are not wholly the proper objects of our attention and we—doctors and patients, both—would do well not to confuse compliance with compassion. 
