User Reviews
User reviews are the lifeblood of consumer information, so we know our choice not to put them at the forefront of our product reviews makes us unusual among our peers. We understand that many consumers look to other consumers for the rawest, most unfiltered accounts of a product, so why would we not factor that into our reviews at Tastemaker? It’s a fair question. The simplest answer we can give is that we want to avoid bias.
The long-form answer is easier to digest if we break it into three sections, one for each type of bias we see as most prevalent in the wider internet review space: bias by ignorance, bias by conflicting standards, and bias by tampering.
Bias by Ignorance
Bias by ignorance refers to when a user reviews a product in an overly positive or negative way based only on what they know about the sphere in which it exists. For example, if a user has only ever experienced one product, their metric for measuring its performance can only be that one product, which creates an inherent bias. The user cannot know whether the product is performing at, below, or above the industry standard, only that it is succeeding or failing at its intended purpose, which often leads to reviews that are more positive than an accurate reflection of the whole product would warrant. If you had never used a phone before and were handed one, you would not be able to judge whether the call you made was quicker or clearer than average, so when asked to compress your experience into a format like the 5-star scale, you might be inclined to give it 5 stars simply because it was the best experience you had ever had with a phone, regardless of whether it was actually good compared to its peers.
The version of this we see more frequently is the subtler bias by naivete, wherein a user has experienced some, but not all, of the offerings within a suite of products and assumes they can make a general statement on how a product performs within the space. The most common form occurs when a consumer upgrades from a poorly performing product to one with average performance, which leads them to believe the new, better product is performing at the peak of what is possible. This manifests as glowing, enthusiastic reviews for products that, within the space and at their general price point, are nothing more than average, inflating the scores of a mediocre performer until it steals market share from more competent peers. Nothing about this is inherently bad. If an upgrade genuinely makes you feel that things couldn’t get any better, that’s a great thing; we simply believe that a score grounded in the entire product space is a much stronger resource for people looking to others’ reviews to decide what they themselves should purchase. We combat this in our own reviews with dynamic scoring that reflects every new peer that enters the space: better performing products will cause previously reviewed products to scale down, while weaker contenders will boost the scores of peers that continue to perform well.
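To make the idea concrete, here is a minimal sketch, in Python, of how a peer-relative score might be computed. The product names, raw benchmark values, and 0-to-10 scale are illustrative assumptions for this example only, not a description of our production scoring model.

```python
# Simplified sketch of peer-relative scoring: raw benchmark results are
# rescaled against the best performer in a category, so a stronger peer
# entering the space lowers everyone else's displayed score.
# All names and numbers here are invented for illustration.

def rescale_scores(raw_results: dict[str, float]) -> dict[str, float]:
    """Map each product's raw benchmark result to a 0-10 score
    relative to the current best performer in its category."""
    best = max(raw_results.values())
    return {name: round(10 * value / best, 1) for name, value in raw_results.items()}

category = {"Phone A": 72.0, "Phone B": 54.0}
print(rescale_scores(category))    # {'Phone A': 10.0, 'Phone B': 7.5}

category["Phone C"] = 90.0         # a stronger peer enters the space
print(rescale_scores(category))    # previously reviewed products scale down
```

The point of the sketch is the behaviour, not the formula: any new entrant changes the frame of reference, so earlier scores are re-evaluated rather than frozen at the moment of review.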
Bias by Conflicting Standards
Bias by conflicting standards plays into user reviews when rating systems like stars or letter grades are present. While subjective experience is important, it becomes hard to quantify as an overall grade when each individual’s grading guidelines are also subjective. A 2-star experience for a user heavily invested in one specific function of a product may be a 5-star experience for a user who will never touch that function at all.
Wouldn’t consensus win out via averages? Maybe in theory, but in practice many reviews at either extreme are less empirical and more emotional. A very negative experience that a user feels wasted their time may objectively merit more than the 1-star rating they leave in the heat of their frustration, and a very positive experience driven by satisfaction with a single aspect of a product may not reflect its average performance across the board. Look at products with middle-of-the-road objective performance and it is often hard to find a review that actually walks the line between the two extremes. We believe specific user data like this should not distort the reflection of a product’s overall performance; it should instead be offered as separate usage data for those looking for the relevant use cases.
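As a small, hypothetical illustration of how a burst of emotion-driven ratings can drag a simple average away from the typical experience, consider the following; the numbers are invented purely for demonstration.

```python
# Hypothetical example: a product most users rate 4 stars,
# plus a burst of frustration-driven 1-star ratings.
from statistics import mean, median

ratings = [4] * 40 + [5] * 10 + [1] * 15   # invented counts for illustration
print(round(mean(ratings), 2))   # 3.46 -- the average sags well below 4
print(median(ratings))           # 4    -- the typical experience
```

A relatively small cluster of extreme ratings is enough to pull the headline average away from what most users actually experienced.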
Bias by Tampering
Tampering is the largest reason we do not advocate caveat-free endorsement of raw user data. On many large platforms where a better average user rating earns higher algorithmic placement, some brands choose to bot, buy, or otherwise falsify large quantities of positive user reviews, producing ratings that are plainly biased. We would like to name the platforms we see as most susceptible to these bad actors, but for legal reasons we will not be specific. We have, however, added a few tips below for spotting clear tampering.
The second form of tampering we see often lies with reviewers who are sponsored by manufacturers. Under contractual arrangements like these, reviewers are inclined to give positive reviews in order to preserve the relationship, or to gain notoriety by offering more bombastic, marketable praise. This, of course, produces clear bias that renders the resulting data useless for anything other than selling the product, which is unhealthy for a competitive market. Advertisements that hinge on a reviewer exploiting their audience’s trust in their impartiality or expertise are not reviews; they are product placement, and should not be advertised otherwise.
That naturally raises the question of how our own partnerships and product endorsements differ. Our key difference lies in the dynamic of how we interact with brands. We review a product first; if it performs excellently, we then approach the brand to see whether they would like to partner with us in promoting it, based purely on the objective data we generate. Our contracts always stipulate that our name will not be used in conjunction with any false performance claims, nor will their product be marketed as better than a peer that objectively performs better. We always reserve the right to void a contract if we see product quality decline or peers appear that do the job better. We do not work with brands or products that do not align with, and continually commit to, the values of quality and sustainability we see as paramount within the consumer sphere.
Tips for Identifying Review Tampering
Are there a suspicious number of reviews compared with peers? This is often the easiest giveaway: one brand’s product will have hundreds of thousands of positive reviews while the next comparable name brand has barely a hundred. When reviews are botted or purchased wholesale, they often arrive in quantities so large that negative reviews cannot meaningfully affect the overall rating for the entirety of the product’s run.
Do the reviews actually have substance? Though more case-by-case, bought or botted reviews often use AI-generated copy, or quick lines from humans that read as incoherent strings of vaguely related words. Examples we have encountered on large platforms include overuse of phrases aimed at search engines and algorithms that users would rarely say casually, like ‘great product quality,’ ‘best [item],’ or ‘better than [competitor product].’ These are harder to identify, but the best advice we can offer is this: if it doesn’t read like a human wrote it, it probably wasn’t.
Does the review mention any real negatives about the product? Paid and botted reviews are most often employed for products that fail to perform well on their own merits, requiring a sales-pitch approach that mentions no negatives. The most advanced versions will pretend to address negatives by writing them off as too minor to keep the overall performance from being stellar, a subtle tactic that deflects criticism while convincing the reader their questions have been answered. Real users, even when they score a product generously, will rarely hesitate to mention the downsides that damaged their experience.
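For readers who like to see the logic spelled out, here is a rough, hypothetical sketch of how the first two tips above might be automated. The thresholds, phrase list, and function names are illustrative assumptions, not a tool we actually run.

```python
# Hypothetical heuristics mirroring the first two tips: flag a product whose
# review count dwarfs its peers, and flag review text stuffed with
# marketing-style phrases that real shoppers rarely type.
# Thresholds and phrases are illustrative only.

MARKETING_PHRASES = ("great product quality", "best", "better than")

def count_is_suspicious(review_count: int, peer_counts: list[int], ratio: float = 20.0) -> bool:
    """Flag a review count many times larger than the typical peer's."""
    typical = sorted(peer_counts)[len(peer_counts) // 2]   # median peer count
    return review_count > ratio * typical

def sounds_like_ad_copy(text: str) -> bool:
    """Flag review text packed with search-engine-friendly phrases."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in MARKETING_PHRASES)
    return hits >= 2

print(count_is_suspicious(250_000, [120, 300, 95]))                  # True
print(sounds_like_ad_copy("Best blender! Great product quality!"))   # True
```

No single check is conclusive on its own; it is the combination of an outsized review count, ad-copy language, and an absence of genuine negatives that points toward tampering.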
When reflecting on these biases within user data, we hope our choice not to present consumer reviews as equal to the objective data we generate is one you understand and agree with. We are actively developing systems to better integrate user data so it can be showcased alongside our own reviews, because your input is always appreciated and important. Our hope is to gather usage data in a process that directs your input in more tailored ways, rather than forcing your complex perspective into the narrow mold of a single score. Your perspectives are unique, and we look forward to hearing more about them. If you have any thoughts or ideas you’d like us to hear, your input is always welcome at comments@tastemaker.online.