How are the ratings of wine critics drawn up?

Understanding the rating out of 100

The 100-point rating was popularized by Robert Parker and has now established itself as the global standard. Even the very traditional Revue du Vin de France converted to it this year. Among the major world critics, only Jancis Robinson still scores out of 20.

 


The 100-point scoring principle adopted by almost all critics

Wines rated by Parker receive a score between 50 and 100: every wine starts with a minimum of 50/100, even if it is undrinkable. When rating a wine, Parker therefore awards it 0 to 50 additional points across several criteria, giving a final score between 50 and 100.

 

The 50 points are distributed as follows (a short sketch of the arithmetic follows the list):

- 5 points for the appearance ("la robe"): color, intensity, clarity

- 15 points for the bouquet: cleanness, intensity, diversity and complexity

- 20 points for the palate: intensity, balance, complexity and length

- 10 points for the development potential, in other words the ability to age.
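
To make the arithmetic concrete, here is a minimal sketch in Python. The function simply restates the base of 50 points plus the criterion weights listed above; it is an illustration, not Parker's actual tool:

```python
def parker_score(appearance, bouquet, palate, potential):
    """Parker-style 100-point score: a base of 50 plus up to 50 criterion points.

    appearance: 0-5, bouquet: 0-15, palate: 0-20, potential: 0-10.
    """
    for value, cap in [(appearance, 5), (bouquet, 15), (palate, 20), (potential, 10)]:
        if not 0 <= value <= cap:
            raise ValueError(f"criterion out of range: {value} (max {cap})")
    return 50 + appearance + bouquet + palate + potential

# Example: a very good wine.
print(parker_score(appearance=4, bouquet=12, palate=17, potential=8))  # 91
```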

 

The wines thus rated can be described as follows:

    • 96-100 (A+): Extraordinary wines with a deep, complex character, showing all the attributes expected of a classic wine of their variety. Wines of this caliber deserve a special effort to find, buy and drink. These are the great iconic wines of the world.
    • 90-95 (A): Exceptional wines, with great complexity and personality, worth seeking out. In short, superb wines!
    • 85-89 (B+): Very good wines, showing varying degrees of finesse and flavor as well as character, which makes them enjoyable to drink.
    • 80-84 (B): Average wines, flawless, but without distinction.
    • 70-79 (C): Average or below-average wines, possibly showing a flaw.
    • 50-69: Wines with several flaws, possibly to the point of being unacceptable.
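
These bands translate directly into a simple lookup. A minimal sketch, with labels abbreviated from the list above:

```python
def parker_band(score):
    """Map a 50-100 score to the descriptive bands listed above."""
    if not 50 <= score <= 100:
        raise ValueError("Parker scores run from 50 to 100")
    if score >= 96:
        return "A+: extraordinary"
    if score >= 90:
        return "A: exceptional"
    if score >= 85:
        return "B+: very good"
    if score >= 80:
        return "B: flawless but without distinction"
    if score >= 70:
        return "C: average or below average"
    return "flawed, possibly unacceptable"

print(parker_band(91))  # A: exceptional
```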

 

The differences between the major global critics

Each major world critic has their own appreciation of a wine, and although they all rate according to the principles stated above, each has their own tastes and preferences. The same wine can therefore receive different scores from different critics.
What is interesting is how much these scores differ from one critic to another: whether there are systematic trends, such as over- or under-valuing wines by consistently giving scores that are too high or too low, or a tendency to give scores that are very far from (or, on the contrary, very close to) the critics' average. The average rating therefore aims to give an objective, consensual score by aggregating the ratings of the major world critics.
But with these different scoring habits, and the fact that some critics do not use the 100-point scale at all, how do you obtain a reliable and representative average score?

The scores given by these various critics must be standardized, that is, all wines must be placed on a common 100-point scale. This standardization must also compensate for each critic's biases (for example, lowering a critic's scores if we have measured that they systematically overrate wines, or, on the contrary, raising them if they tend to underrate).
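
A minimal sketch of this idea in Python, using hypothetical scores. The linear rescaling of 20-point marks and the bias-as-mean-offset estimate are assumptions made for illustration; the article does not detail the actual aggregation method:

```python
from statistics import mean

def to_100(score, scale=20):
    """Rescale a mark on another scale (e.g. a /20 mark) to 100 points."""
    return score * 100 / scale

# Hypothetical scores, already on the 100-point scale.
scores = {
    "Critic A": {"wine 1": 92, "wine 2": 88, "wine 3": 90},
    "Critic B": {"wine 1": 91, "wine 2": 86, "wine 3": 89},  # tends to score low
    "Critic C": {"wine 1": 94, "wine 2": 89, "wine 3": 92},  # tends to score high
}
wines = ["wine 1", "wine 2", "wine 3"]

# Consensus: the per-wine average across critics.
consensus = {w: mean(scores[c][w] for c in scores) for w in wines}

# Each critic's bias: the average signed gap between their scores and the consensus.
bias = {c: mean(scores[c][w] - consensus[w] for w in wines) for c in scores}

# Standardized scores: subtract each critic's bias before averaging.
standardized = {c: {w: scores[c][w] - bias[c] for w in wines} for c in scores}
```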

The graph below summarizes how each critic's way of rating differs from the aggregated average rating.


Average value

For example, we see that the average score given by James Suckling is one point lower than the average score of all the critics. A point should therefore be added to his scores when they are included in the calculation of the overall average. Jancis Robinson under-scores by half a point. La Revue du Vin de France, on the contrary, very slightly overrates its wines (by a quarter of a point). The average score is used to normalize all these differences.
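
Applied to the offsets quoted above, the correction is a simple shift. A sketch, continuing the earlier example (the offset values are those read from the text; the function is purely illustrative):

```python
# Mean offsets relative to the consensus (negative = critic scores low).
offsets = {
    "James Suckling": -1.0,
    "Jancis Robinson": -0.5,
    "La Revue du Vin de France": +0.25,
}

def corrected(critic, score):
    """Remove a critic's measured bias before the score enters the average."""
    return score - offsets.get(critic, 0.0)

print(corrected("James Suckling", 92))  # 93.0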

Dispersion

Another piece of information in the graph shows whether a critic is consensual or, on the contrary, divisive. A large dispersion (critics on the right of the graph) means that the critic sometimes gives scores very different from the average of their peers.
This does not mean that the critic overrates or underrates (the gaps can balance out on average); it just means that their tastes differ markedly from those of their peers. For example, Jeff Leve and Decanter appear to be the most consensual critics (on the left of the graph), while Tim Atkin and Jancis Robinson are probably the most singular, and therefore the most divisive.
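
One plausible way to measure this, sketched here with hypothetical gaps (the exact metric behind the graph is not specified), is the standard deviation of a critic's gaps to the consensus:

```python
from statistics import mean, pstdev

# Hypothetical per-wine gaps between a critic's scores and the consensus.
gaps = {
    "Consensual critic": [0.2, -0.3, 0.1, -0.2],
    "Divisive critic":   [3.0, -2.5, 2.8, -3.1],
}

for critic, g in gaps.items():
    # The mean gap captures bias; the standard deviation captures dispersion.
    # Note the divisive critic's bias is near zero even though every score is far off.
    print(critic, "bias:", round(mean(g), 2), "dispersion:", round(pstdev(g), 2))
```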

Conclusion

After reading this article and looking at the graph above, we understand the reassurance provided by averaging the ratings of all the critics. We also see the value of tasting a wine ourselves in order to determine where our own palate stands relative to that average rating.

In conclusion, only the average score reflects the true level of "quality" of a wine, and it should then be weighed against your own tastes, which matter most.

