Organizational Behavior | Peer-Reviewed Research

A Fickle Crowd: How Social Pressures Shape Online Product Evaluations

Product reviewers need peers and audiences to see them as credible. But new research indicates that pursuing credibility may compromise the objectivity of their evaluations.

Based on research by Minjae Kim and Daniel DellaPosta

  • Evaluators strike a balance between agreeing with and deviating from the opinionated masses.
  • Several key factors appear to contribute to evaluators’ decision to stand out.
  • Companies and consumers should know that ratings can be skewed by reviewers’ efforts to appear “legitimate yet skillful.”

Theoretically, product evaluations should be impartial and unbiased. However, this assumption overlooks a crucial truth about product evaluators: They are human beings who are concerned about maintaining credibility with their audience, especially their fellow evaluators. Because product reviewers must care about being perceived as legitimate and skillful, certain social pressures are at play that can influence their reviews.

Research by Minjae Kim (Rice Business) and Daniel DellaPosta (Penn State) takes up the question of how product evaluators navigate these social pressures. They find that in some cases, evaluators uphold majority opinion to appear legitimate and authoritative. In other contexts, they offer a contrasting viewpoint so that they seem more refined and sophisticated. 

Imagine a movie critic who publishes a glowing review of a widely overlooked film. By departing from the aesthetic judgments of cinema aficionados, the reviewer risks losing credibility. The audience of fellow film buffs might think: "Not only does this reviewer fail to understand the film; they fail to understand film and filmmaking, broadly."

On the other hand, depending on context, an audience might perceive dissenting evaluators as uniquely perceptive.

What makes the difference between these conflicting perceptions? 

Partly, the difference lies in how niche or mainstream the product is. With large-audience products, Kim and DellaPosta hypothesize, evaluators are more willing to contradict widespread opinion. (If a product does not have a large audience, a contradicting viewpoint won't make much of an impact.)

The perceived classiness of the product can also affect the evaluator’s approach. It’s easier to dissent from majority opinion on products deemed “lowbrow” than those deemed “highbrow.” Kim and DellaPosta suggest it’s more of a risk to downgrade a “highbrow” product that seems to require more sophisticated taste (e.g., classical music) and easier to downgrade a highly rated yet “lowbrow” product that seems easier to appreciate (e.g., a blockbuster movie).

Thus, the “safe spot” for disagreeing with established opinion is when a product has already been thoroughly and highly reviewed yet appears easier to understand. In that context, evaluators might sense an opportunity to stand out rather than fit in. But disagreeing purely for the sake of disagreeing can make an evaluator seem unfair or unreasonable; to avoid that perception, it may be safer to align with majority opinion.

To test their hypotheses, Kim and DellaPosta used data from BeerAdvocate.com, an online platform where amateur enthusiasts review beers while also engaging with other users. Reviewers publicly rate and describe their impressions of a variety of beers, from craft to mainstream.

Their data set includes 1.66 million user-submitted reviews of American-produced beers, covering 82,077 unique beers from 4,302 brewers, written by 47,561 reviewers across 103 beer styles. The reviews span December 2000 to September 2015.

When the researchers compared scores given to the same beer over time, they confirmed their hypothesis about the conditions under which evaluators contradict the majority opinion. On average, reviewers were more inclined to contradict the majority opinions for a beer that had been highly rated and widely reviewed. When evaluators considered a particular brew to be “lowbrow,” downgrading occurred to an even greater extent.
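The core comparison — whether a new review downgrades an established, highly rated consensus — can be illustrated with a small sketch. This is not the authors' actual statistical model; the thresholds and function name below are invented for illustration. For each review of a beer (in chronological order), we compare its score against the running average of prior reviews and flag it as a "contradiction" when it lands well below a high average backed by several prior reviews.

```python
from statistics import mean

def contradiction_flags(scores, min_prior=3, high_avg=4.0, drop=0.75):
    """Flag each review that downgrades an established, highly rated
    consensus. `scores` is one beer's ratings in chronological order.
    Thresholds are illustrative, not from the paper."""
    flags = []
    for i, s in enumerate(scores):
        prior = scores[:i]  # all reviews submitted before this one
        if len(prior) >= min_prior:
            avg = mean(prior)
            # contradiction: prior consensus is high, new score is well below it
            flags.append(avg >= high_avg and s <= avg - drop)
        else:
            flags.append(False)  # too few prior reviews to define a consensus
    return flags

# Hypothetical scores for one widely reviewed, highly rated beer (1-5 scale)
scores = [4.5, 4.4, 4.6, 4.5, 3.5, 4.4]
print(contradiction_flags(scores))  # only the 3.5 review is flagged
```

Aggregating such flags by how widely reviewed a beer is, and by its "lowbrow" versus "highbrow" style, would mirror the comparison the researchers describe.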

Kim and DellaPosta’s research has implications for both producers and consumers. Everyone should be aware of the social dynamics involved in product evaluation. The research suggests that reviews and ratings are as much about elevating the people who make them as they are about product quality.

The benefit of making evaluators identifiable and non-anonymous is that it holds people accountable for what they say — a seemingly positive thing. But Kim and DellaPosta reveal a potential downside: Knowing who evaluators are, Kim says, “might warp the ratings in ways that depart from true objective quality.”


Minjae Kim is Assistant Professor of Management – Organizational Behavior at Rice Business.

Daniel DellaPosta is Associate Professor of Sociology and Social Data Analytics at Pennsylvania State University. 

To learn more, see: “The Fickle Crowd: Reinforcement and Contradiction of Quality Evaluations in Cultural Markets.” Organization Science, 33.6 (2022): 2496-2518. DOI: https://doi.org/10.1287/orsc.2021.1556.
