WA - Jay Miller - is his rating scale only 90-100?

I know I'm preaching to the converted here in complaining about mainstream wine reviewers in general and numerical ratings in particular, but this just flabbergasts me.

Jay Miller, who I gather is Robert Parker's longtime drinking buddy, just started doing reviews for Wine Advocate in 2007. In less than a year, he has managed to find no fewer than 8 "100 point" - presumably "perfect" - wines. And, I should add, more than another 40 wines worthy of being rated 98 points or above. This is all in about 3 rounds of tastings, I believe.

In the narrative commentary on the current reviews of 2004 and 2005 Oregon wines, he suggests that neither is a particularly strong vintage. As for 2004: "The resulting wines tend to be light and forward, with the best of them possibly evolving for a few years and becoming user-friendly, seamless, and easy to drink. Almost none of them will make old bones." 2005 likewise sounds like a pretty mixed vintage: "In 2005, all too many Willamette Valley Pinot Noirs lack that balancing fruit. Almost all of them reveal lower alcohol, elevated acidity, and firm structures. Only the top examples have enough fruit to merit cellaring. Most of them will need to be drunk near-term while the fruit remains intact. However, a number of the top producers were able to solve the problems of the vintage, hit a few home runs, and make some outstanding wines. These will be worth buying and cellaring." You would think from this less-than-stellar description that not that many wines should be highly rated - only those from a few top producers, perhaps.

Well ... of the 177 wines that were rated, 93 - more than half - were rated 90 points or higher. (Supposedly, a rating of 90 points or higher indicates "An outstanding wine of exceptional complexity and character. In short, these are terrific wines.") That sure seems like more than a "few home runs." All but 16 of the wines reviewed (roughly 90%) were rated 87 pts or higher. The comments do indicate that there were about another 100 wines tasted that did not get reviews, but how can these ratings possibly be squared with these supposedly being weak vintages?

Just one more lesson on why numerical ratings are arbitrary and pointless.

    1. From the posts I've been reading on the CH board about many wine reviewers, it sounds like it's not just the numerical portion that is arbitrary and pointless. : {

      1. re: monkuboy

        Didn't the GURU himself give 100 points to something that later turned out to be a fake?

      2. Here's some more information which will surely help to reconcile the disconnect between the discouraging Oregon vintage comments and the plethora of 90+ point reviews.

        Miller also reviewed Australia in the most recent Parker issue. Unlike Oregon, he loves the recent Aussie vintages: "The state of Australian wines has never been better. ... Most of the wines in this report are from the 2004, 2005, and 2006 vintages. 2005 appears to have been uniformly excellent across this giant continent while 2004 and 2006 are a notch below." So how do the reviews stack up?

        Of 1,114 wines reviewed (how do you possibly do that?!), more than 100 were 95+, more than 660 (more than half) were 90+, and all but 114 of the wines reviewed (90%) were rated 87 pts or higher.

        Isn't it good to know that in great vintages and bad vintages alike, there are still just as many "outstanding" wines to choose from?

        I popped into my favorite wine shop this afternoon and mentioned it, thinking they'd be thrilled to have such a plethora of 90+ point wines to sell. The reaction? "Now it just means nobody wants to buy anything under 95 points!" Oy vey.

        1. >> Just one more lesson on why numerical ratings are arbitrary and pointless.

          It may be arbitrary, but I'm not sure about 'pointless' - I don't see evidence of that in the text above. His scoring may be consistent based on his own criteria and his own 'arbitrary' scale. Whether he scores from 90 to 100 or 0 to 10, or elects not to give any rating to some group of wines judged 'inferior', is beside the point. Someone else may develop his/her own rating system and may judge the same wine differently - that's obvious. To find a 'smoking gun' you would have to show that he violated his own criteria and badly misjudged some wines.

          1. re: olasek

            To me as a potential wine purchaser it simply destroys credibility.

            Theoretically, the ratings are not arbitrary.
            - A wine rated 90-95 is supposed to be "An outstanding wine of exceptional complexity and character. In short, these are terrific wines."
            - A wine rated 80-89 is supposed to be "A barely above average to very good wine displaying various degrees of finesse and flavor as well as character with no noticeable flaws."
            - As a further gloss, WA notes that "Robert Parker's rating system employs a 50-100 point quality scale. It is my belief that the various twenty (20) point rating systems do not provide enough flexibility and often result in compressed and inflated wine ratings. The Wine Advocate takes a hard, very critical look at wine, since I would prefer to underestimate the wine's quality than to overestimate it." (all quotes from the eRobertParker.com website).

            There is nothing to suggest that a vintage will be graded "on a curve," so to speak. Indeed, that would be silly, as it would make it basically impossible to compare reviews from year to year.

            Returning to the example, the vintage notes would suggest (to me, at least) that most 04/05 Oregon wines would fall into the 80-89 range, with a relative few rating 90 or higher.
            - "the best of them possibly evolving for a few years and becoming user-friendly, seamless, and easy to drink"
            - "all too many Willamette Valley Pinot Noirs lack that balancing fruit. Almost all of them reveal lower alcohol, elevated acidity, and firm structures".
            And indeed, the vintages were rated an 86 and 85 respectively (WA rates vintages as well as individual wines).

            But instead, more than HALF of the wines reviewed were given a 90+ score! Even if you throw in the hundred or so that were tasted but not reviewed, it's still fully one third (93 out of roughly 277) that were judged to be "outstanding wines of exceptional complexity and character." I can't reconcile that with the vintage descriptions, and it makes me distrust the entire review.

            Comparison to the Aussie reviews presents the same difficulties. Somehow, even though the Aussie vintages were excellent, and the Oregon vintages weak, almost identical percentages of wines were rated 90+ and 87+ in both sets of reviews? How can that be?
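
            To spell out the arithmetic (my own back-of-the-envelope math, using the counts quoted above): Oregon works out to 93/177 ≈ 53% at 90+ and 161/177 ≈ 91% at 87+, while Australia works out to 660/1,114 ≈ 59% at 90+ and 1,000/1,114 ≈ 90% at 87+. Essentially the same distribution, for vintages rated 85-86 on the one hand and 88-96 on the other.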

            That, to me, is what makes it (forgive the pun) pointless.

            1. re: Frodnesor

              The problem with your analysis is that you latch onto a few words trying to prove your point. Had a few words been changed, the meaning would be totally different. Your "outstanding wines" are in fact "terrific" wines (per the very definition), and in my opinion 'terrific' actually sounds better than 'outstanding', since the latter is awfully close to 'extraordinary'. Likewise, "Almost all of them reveal lower alcohol..." could have said "many of them" or "a majority", etc. In other words, it may be a matter of semantics. The whole exercise reminds me of analyzing statistics on the death penalty and race - you get equally skewed results trying to put a global spin on it. I am not about to change your dim view of this particular wine critic's rating system - he may very well be a poor reviewer, but the data presented above is just not enough (in my opinion) to prove the point.

              1. re: olasek

                I am pretty confident that the quotes I've provided are representative of the full commentary. Indeed, the quotes I used in the OP are pretty much the full summary/recap given for each vintage, without editing. Most of the rest of the commentary is historical information on weather during the vintage, along with a rather ponderous comparison of 2005 Oregon to 1993 Burgundy. Moreover, the words are not ones I chose; they are the ones chosen by the reviewer and the publication. Surely if someone meant "many" or "a majority," they would have said that rather than "almost all."

                I don't believe it's semantics at all. Forget the commentary, if you will, and just look at the numbers. If a vintage is rated an 85 or an 86, how can it possibly be that more than half of the wines reviewed are deemed "outstanding" or "terrific" and roughly 90% are rated 87+? How can it simultaneously be that roughly the same percentages hold true for Aussie vintages rated significantly higher (between 88 and 96, with no rating yet for 2006)?

                As a couple of side notes:
                - it's interesting that "lower alcohol" is either intended or perceived as a criticism.
                - I actually tasted and subsequently purchased a fair number of 2004 Oregon Pinots during a visit last spring and do not share the negative assessment. Most wines I tried were far more than "easy to drink," and quite a few might be described as "terrific." So I suppose in that sense perhaps I disagree less with the ratings than I do with the assessment of the vintage.

          2. '05 may not have been a strong vintage, but the '05 and '06 Girardet Baco Noir (Umpqua Valley, Oregon) were exceptional wines.