Metacritic Reviews: Metascore vs. Userscore

For an upcoming project, I had to determine whether or not Metacritic’s ‘Metascore’ (aka the ‘critic score’) and ‘Userscore’ were different enough that it was worth analyzing both datasets. The data is for video game reviews, and each point represents the final score given to a game.

It turns out they are similar, but less so than you might think.

[Scatter plot: Metacritic Metascore vs. Userscore]
Statisticians and data nerds can note that the correlation coefficient was 0.536, with a p-value of <0.05.

Results:

Metacritic Metascores and Userscores have a moderate positive correlation with one another. These results are statistically significant.
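
For anyone who wants to run this kind of check themselves, here is a minimal sketch of how the correlation could be computed with pandas and SciPy. The file name and column names (metacritic_games.csv, metascore, userscore) are placeholders for illustration, not the actual dataset used here.

```python
# Minimal sketch: Pearson correlation between Metascores and Userscores.
# Assumes a hypothetical CSV "metacritic_games.csv" with a "metascore" (0-100)
# column and a "userscore" (0-10) column; names are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("metacritic_games.csv").dropna(subset=["metascore", "userscore"])

r, p = pearsonr(df["metascore"], df["userscore"])
print(f"Pearson r = {r:.3f}, p-value = {p:.3g}")
# The analysis above reported r ≈ 0.536 with p < 0.05 on its dataset.
```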

To simplify that statement: Metascores and Userscores generally line up with one another. Most of the time, critic and user scores are pretty close to one another. When one group gives a high score to a game, the other generally does too. The same goes for low scores, or scores that fall somewhere in the middle.

Because this is only ‘moderately’ correlated, there are still a lot of times when the two scores differ. The little dots that are further away from the main “swarm” are games where one score is vastly different from the other.

Given the number of data points and their placement, the chance that this result happened due to “dumb luck” is exceptionally low; that is what the p-value of <0.05 indicates.

You may notice that critic scores are out of 100, while user scores are only out of 10. Aside from giving users a little less ‘wiggle room’, this doesn’t affect the analysis too heavily. Users can only rate something out of 10, but a game’s overall user rating can still carry a decimal point (e.g. 8.7/10). Pearson correlation is also unchanged by simply rescaling one of the scores, as shown in the sketch below.
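
As a quick sanity check (again a sketch, reusing the hypothetical dataframe from the earlier snippet), multiplying the Userscore by 10 to put both scores on a 0-100 scale leaves the Pearson correlation identical:

```python
# Pearson r is invariant to linear rescaling, so putting Userscores on a
# 0-100 scale (x10) yields the same correlation as the raw 0-10 values.
r_scaled, _ = pearsonr(df["metascore"], df["userscore"] * 10)
print(f"Pearson r with rescaled userscores = {r_scaled:.3f}")  # matches r above
```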

Interpretation:

Even beyond looking at a specific genre, most users develop a ‘taste’ for certain games, based on franchises and game studios. Gamers learn what games they enjoy, and what games to avoid. Critics often lack that freedom, and may be circumstantially forced to play games they have a negative predisposition towards, or would otherwise expect to be no better than “middle of the road”.

Critics may have to “take one for the team” and review the occasional game they know is going to be awful. Users just know to avoid them.

Another possible explanation is simply that critics score lower because they are…critics. Their goal is to assess the quality of a game as accurately as possible.

Theories aside, the results of this correlation told me what I needed to know. ‘Metascores’ and ‘Userscores’ are different enough that I can include both in my next analysis.
