Napa Valley ranking in the world?
I was sent a Wall Street Journal article about Bill Harlan (from 2012), who is certainly a premier Napa figure and a very interesting guy. I'll insert a link below.
One statement really caught my eye. Near the beginning of the article Harlan says:
"This area has the potential to be a national treasure," he says, "of someday being recognized as one of the finest wine-growing regions in the world. That's why we're here and not somewhere else."
I realize that I've gotten into some rather dicey exchanges here when comparing old world and new world perspectives, but I'd really like some thoughts on his apparent placement of Napa Valley as NOT a national treasure YET, and SOMEDAY being recognized as "one of the finest wine-growing regions".
Have at it!
Bill was "out-of-date" when he made that statement. Napa Valley -- the region, not necessarily the AVA -- has LONG been recognized as "one of the finest wine-growing regions in the world." That said, I think something must be said re: potential. IMHO, the finest wines of the Napa Valley have yet to be produced.
I am not speaking of wine's ever-increasing quality (e.g.: wines from Lodi are much better today than ever before; so, too, the wines from the Languedoc, Chile, and South Africa, etc., etc., etc.). Rather, Napa has been making wine for only a relatively short period of time after several "re-starts." What happened in the 19th century (the 1st Wave) was lost to Prohibition. What happened in the 1930s, '40s, and '50s (2nd Wave) was largely an attempted continuation of the 1800s. The first brand-new-from-the-ground-up winery to be built after Prohibition was Louis M. Martini in 1933-34; the second was the Robert Mondavi Winery in 1966, and with it came the 3rd Wave.
Since the late-1960s, California in general, and Napa in particular, has experimented with varietal wines of various types, with "food wines," with proprietary wines of various types, with harvesting at "physiological maturity" rather than by-the-numbers (think "Parkerized wines," though let me quickly add that's unfair and not meant to be a blanket statement). They have experimented with different types of yeast, different types of oak, different fermentation temperatures and duration of maceration. They have experimented with different trellising systems, different yields, different pruning techniques, and pest control. The list goes on and on . . .
My point is that the California wine "industry" -- while still based on a more industrial model rather than an agricultural one -- is still learning what to do. This is not to suggest that, one day, all experimentation will stop; far from it! Rather, we are still in a largely experimental period overall -- albeit less experimental than, say, in the 1960s or 1980s.
For example: in 1982, there were 375 acres of White Riesling planted in Napa County; as of 2013, there were only 133 acres. In 1982, there were 17 acres of Syrah, but by 2013, it was up to 983. In 1982, there were 4,455 acres of Cabernet Sauvignon planted; by 2013, 19,812. (In 1982, the TOTAL plantings in Napa County were 19,421; in 2013, 45,990 acres.) The size of the county hasn't changed, but the acreage planted sure has! It has not only more than doubled, but the variety of grape varieties has been fine-tuned as people discover which variety does best where.
Certainly agree with what you have written, Jason, and thanks for posting it. Europe has quite a head start on Northern California when it comes to agriculture; but N. California has better weather. So, as we catch up on the viticulture, watch out. I was impressed when, at the end of the article, Harlan indicated he wanted to produce a red wine flavor that was new to the world. Hope he or his children are successful.
Respectfully, I disagree. One tasting doesn't prove anything; nor does two. So while I am in no way discounting the IMPACT of the "Judgement of Paris" -- which was HUGE -- neither do I believe for a moment that the results of the tasting *proved* that California is the equal of France.
What was proved (as opposed to demonstrated) by the Judgement of Paris was that these wines were preferred over those wines by this group of tasters. Other wines would have yielded different results. So, too, would other tasters.
What was demonstrated (as opposed to proven) by the Judgement of Paris was that the world could not afford to overlook California wines in general, and Napa wines in particular . . . though I still prefer the Santa Cruz Mountains. ;^)
"What was proved (as opposed to demonstrated) by the Judgement of Paris was that these wines were preferred over those wines by this group of tasters. Other wines would have yielded different results. So, too, would other tasters."
If the discussion accepts "ranking" as the premise, then "who is doing the ranking?" is the answer all day long.
If the parrot isn't dead, it's pining for the fjords, stunned, . . .
re: Robert Lauriston
Posted ca midnight 8-9July14: http://chowhound.chow.com/topics/9811...
"The 1976 tasting proved beyond a reasonable doubt that the best California wines were not as easy to distinguish from great French wines as almost everyone had assumed. Mike Grgich might have been the only person who wasn't surprised."
The latter speculation ("might have been the only person not surprised") is latter-day mythos, already discredited in the major 1980s book on the subject of California wine as I earlier mentioned in this thread.
It may be a lost cause to keep pointing out the following in the face of Hollywood and pop-culture, but (a) the 1976 Spurrier tasting was NOT the first clear demonstration that expert blind-tasters preferred some of the best California wines to some great French counterparts -- it was, rather, the first such demonstration to get the notice of Time magazine, and basically the last, too; that momentary interest from mainstream media led to today's disproportionate public awareness of the 1976 tasting; (b) "almost everyone" above excludes many people following the development of serious California wines in the 1950s, 60s, 70s, including people writing at the time; (c) the published record of all this is, and always has been, available to anyone interested enough to read up on it -- that's how I know about it, for instance. That record is, in fact, the standard, authoritative story of California wine history.
Just not the decades-later Hollywood or MSNBC or Wikipedia take.
re: Robert Lauriston
Robert? No offense meant, but that's bull$#|+ -- for so many reasons!
1. It didn't "prove" anything; you know just as well as I do that a) another group of tasters with the same wines would have provided different results; and b) the same tasters with the same wines on another day would have provided different results. This variation has been demonstrated time and time again by universities across the globe, including UC Davis, CSU Fresno, and Cornell, as well as universities in France, Germany, Australia, and on and on and on. Indeed, Steven Spurrier has been quoted as saying, "The results of a blind tasting cannot be predicted and will not even be reproduced the next day by the same panel tasting the same wines." See http://www.liquidasset.com/tasting.html (among others).
2. No one in the California wine trade was surprised; we ALL knew CA wines could more than hold their own. Many people in the British wine trade, too, were not surprised, as California wines already had a reasonable following there. What *was* surprising was the seeming one-sidedness of the results, not the results themselves.
3. If you look at the raw data, Château Montrose received FIVE 1st place votes from the 11 judges; Château Haut Brion received TWO; Château Mouton Rothschild, Ridge Monte Bello, Heitz Martha's, and Stag's Leap each received only ONE vote for 1st.
4. As Steven Spurrier told me in August 1977 prior to a Bollinger RD vertical tasting held at his Cave Madeline, "I knew California would come out on top. I 'rigged' the tasting."
What gets me (reading avidly about all this over the years) is how the 1976 tasting's public perception has shifted and even been exploited ever since.
It had its moment of popular attention at the time, then the public moved on to the next amusement (Abscam, or President Carter's "image consultant").
Wine professionals soon put the 1976 tasting into perspective, even aside from the randomness issue of single tasting events.
It was soon overshadowed internationally by a larger, "much more informative" such comparative tasting in France, which caused even greater commotion in that country, but which US mass media ignored (the Wikipedia entry on the 1976 tasting doesn't even mention it, incredibly). In the epic 1984 California-wine overview book, Bob Thompson's chapter on critical appraisals discredited the popularly perceived "novelty" of the 1976 results and largely put the topic to rest in major writing. It was basically forgotten then for 20 years, outside of reference books.
Then George Taber, author of the 1976 "Time" article (who has since been accused, unjustly I think, of building his whole career on that one event) published his 2005 book expanding on the original article. The book was promoted; a 30-year reenactment was announced; and hordes of younger journalists, who'd never before heard of Spurrier or any "judgment of Paris" -- and who certainly hadn't bothered to learn anything about the history of other comparative tastings -- resuscitated the long-debunked "surprising," "first-ever comparison" myth and ran with it around 2006, achieving its current new lease on life. Despite some of us pointing out the easily verified facts to some of them.
The California reds as a group did not come out on top. The Stag's Leap beat Mouton by 1/20 of a point. Ridge came in fifth and the other four were at the bottom.
I'm quite certain any similar group of expert French tasters comparing a similar group of wines (which on the California side would have meant pretty much the same group of wines) would have had more trouble identifying the California wines than they expected.
That was more about the inexperience and arrogance of the French than about the California wines themselves.
re: Robert Lauriston
" That was more about the inexperience and arrogance of the French than about the California wines themselves."
Then how do you explain the similar, "much more informative," but US-media-ignored results in the Gault-Millau "Wine Olympiade" three years later, OR, that tastings in various countries had been producing similar results for years?
After the wake-up call of the 1976 tasting nobody was so surprised.
Whatever other tastings you're referring to weren't similar. The Stag's Leap and Chateau Montelena were both their wineries' second vintages. Tchelistcheff, Lee Stewart, and Mondavi (the winners' mentors) had been making good wines for a while, but those remained unknown internationally.
re: Robert Lauriston
"Whatever other tastings you're referring to weren't similar."
Robert, would you please at least learn more real history of this subject before posting characterizations like that? To start with: history of comparative tastings in the major book I keep mentioning (which you appear unfamiliar with although it is maybe the most important single source in print on California wine). Then come back and talk about what was or wasn't "similar" in tastings both before and after 1976. ("Whatever other tastings" indeed.)
The 1976 tasting caused a brief sensation, then it became part of a larger context well analyzed by Thompson in the 1980s, and pretty much forgotten. Other journalists periodically rehashed it to new readers, as in the Asher you linked.
I am trying to point out more context, for people who do not know it. The full record shows that folks only perceive the 1976 tasting TODAY as such a unique "wake-up call" thanks to the sensationalizing and myopic re-publicity the event received around 2006.
You have an odd habit of picking out one old and out of date book, making it out to be the preeminent authority, avoiding mentioning the title, and never quoting directly. If you think some tastings before 1976 woke the world up to the quality of California wines, stop bluffing and name them.
re: Robert Lauriston
I often cite standard sources prominent over the years and known to many well-read CA wine enthusiasts. I cited the 1984 UC-Sotheby tome explicitly earlier in this thread to anyone interested, more than once, and have even quoted wording from it explicitly. That short reference just now (the latest in this thread) is enough to easily find it, and many people who already know of the book will recognize it under the "UC-Sotheby" shorthand. Most wine writers and much of the trade (not *I*) regarded it as the major 20th-century work on California wine -- the 44 authors were most of the experts in the field. It is less "old and out-of-date" (!) than the 1976 tasting. (If you don't know about all this already, please don't blame me.)
The larger point: I've touched on some well-documented realities, accessible to anyone else willing (as I did) to do the WORK of reading up on them -- even, if it should come to that, doing just a little more research to learn what are the major California wine books of each era (there are several such). "Proving" things to people while standing on one foot isn't what this is about.
re: Robert Lauriston
1) Yes, that's why BV and Robert Mondavi wines were never heard of, let alone sold in Britain . . . and why I never saw a bottle of Souverain or Hanzell on European wine lists . . . .
2) The fact that SLWC and Montelena were only in their second vintages means nothing, Robert. Both had great grape sources and experienced winemakers . . . Are you seriously suggesting that one can only produce a great wine after 200 years?
A wine that did not yet exist could not have been entered in the tasting. Take the Stag's Leap off the list and the results look very different.
The 1976 tasting came as no surprise to anyone because those French experts were all familiar with BV, Charles Krug, Hanzell, and Souverain?
re: Robert Lauriston
If you use rank scoring (i.e.: a 1st place vote equals 1 point, a 10th place vote equals 10 points; the wine with the fewest points is the "winner"), rather than base things on a 20-point scale, the results are as follows:
1. Château Montrose -- 33 points
2. Stag's Leap Wine Cellars -- 34 points
3. Château Mouton-Rothschild -- 37 points
4. Château Haut-Brion -- 44 points
5. Ridge Vineyards -- 52 points
6. Heitz Cellars -- 67 points
7. Château Léoville Las Cases -- 68 points
8. Freemark Abbey -- 73 points
9. Clos du Val -- 74 points
10. Mayacamas -- 75 points
Note: the minimum possible score (i.e.: if all 11 tasters ranked the same wine their 1st choice) was 11 points; the maximum, 110.
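For anyone who wants to play with the method itself, the rank-scoring tally above can be sketched in a few lines of Python. The scores below are invented purely for illustration -- they are NOT the actual 1976 judging sheets, which used the 20-point scale discussed next.

```python
# Rank scoring: each judge's 20-point scores are converted to ranks
# (1 = that judge's favorite), and a wine's total is the sum of its
# ranks across all judges; the LOWEST total wins.
# Note: ties within one judge's sheet are broken arbitrarily here,
# which (as noted later in this thread) is one place where tallying
# choices can change the outcome.

def rank_points(scores_by_judge):
    """scores_by_judge: list of {wine: 20-point score} dicts, one per judge.
    Returns {wine: summed rank points} (lower is better)."""
    totals = {}
    for judge in scores_by_judge:
        # Sort this judge's wines from highest to lowest score;
        # 1-based position in that order is the rank.
        ranked = sorted(judge, key=judge.get, reverse=True)
        for position, wine in enumerate(ranked, start=1):
            totals[wine] = totals.get(wine, 0) + position
    return totals

# Three hypothetical judges scoring three hypothetical wines:
judges = [
    {"A": 16, "B": 12, "C": 10},
    {"A": 11, "B": 15, "C": 13},
    {"A": 14, "B": 13, "C": 9},
]
print(rank_points(judges))  # {'A': 5, 'B': 5, 'C': 8}
```

Note how wines A and B tie on rank points even though their raw 20-point averages differ -- one small illustration of why rank scoring and score averaging can reorder a leaderboard.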
The original scoring was based upon the (then commonplace) "modified 20-point scale." The results were as follows:
1. 14.14 Stag's Leap Wine Cellars
2. 14.09 Château Mouton-Rothschild
3. 13.64 Château Montrose
4. 13.23 Château Haut-Brion
5. 12.14 Ridge Vineyards Monte Bello
6. 11.18 Château Leoville Las Cases
7. 10.36 Heitz Wine Cellars Martha's Vineyard
8. 10.14 Clos Du Val Winery
9. 9.95 Mayacamas Vineyards
10. 9.45 Freemark Abbey Winery
Keep in mind, too, that according to the "powers that be" at UC Davis a score +/- 1.5 is within the margin of error (i.e.: statistically the same).
That said, anyone familiar with the 20-point scale knows it's virtually impossible to score below 10 -- just as with Parker's infamous 100-point scale, it's virtually impossible to score below 50. And yet, some of the judges gave wines scores of "2" and "3" out of 20 points. I have NO IDEA how they did that!
Here are the ranges of scores (lows and highs) from the 20-point scale:
1. Stag's Leap Wine Cellars -- 10 to 16.5
2. Château Mouton-Rothschild -- 11 to 16
3. Château Montrose -- 11 to 17
4. Château Haut-Brion -- 8 to 17
5. Ridge Vineyards Monte Bello -- 7 to 15.5
6. Château Leoville Las Cases -- 8 to 14
7. Heitz Wine Cellars Martha's Vineyard -- 2 to 17 (!)
8. Clos Du Val Winery -- 2 to 16.5 (!)
9. Mayacamas Vineyards -- 3 to 14
10. Freemark Abbey Winery -- 5 to 15
One more comment in the Food for Thought Dept., from the paper I cited above:
>>> It is also useful to consider how successful the judges were in appraising the wines. One measure of the success of a judge is the extent to which an individual judge's ranking is a good predictor of the group's ranking (where the group's ranking excludes the particular judge in question.) By this measure the judges would be ordered as follows (from best predictor to worst): A. de Villaine (.70 correlation), J.-C. Vrinat (.65), Ch. Millau (.61), Steven Spurrier (.47), Pierre Brejoux (.46), Ch. Vanneque (.42), Odette Kahn (.29), and Raymond Oliver (.25). Ironically, the preferences of the remaining judges (Dovaz, Gallagher, and Tari), two of whom were French, are unrelated to the group preference. <<<
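The "judge as predictor" measure quoted above can be sketched as a leave-one-out correlation: correlate each judge's scores with the average of the OTHER judges' scores over the same wines. This is a rough reconstruction of the idea, not the paper's exact computation, and the data is invented.

```python
# Leave-one-out agreement: how well does each judge's scoring track
# the rest of the panel? (Pearson correlation, computed by hand so
# no third-party libraries are needed.)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def leave_one_out_agreement(scores_by_judge, wines):
    """For each judge, correlate their scores with the average of the
    remaining judges' scores over the same wines."""
    out = {}
    for i, judge in enumerate(scores_by_judge):
        others = [j for k, j in enumerate(scores_by_judge) if k != i]
        group_avg = [sum(j[w] for j in others) / len(others) for w in wines]
        own = [judge[w] for w in wines]
        out[i] = round(pearson(own, group_avg), 2)
    return out

# Hypothetical panel of three judges and three wines:
judges = [
    {"A": 16, "B": 12, "C": 10},
    {"A": 11, "B": 15, "C": 13},
    {"A": 14, "B": 13, "C": 9},
]
print(leave_one_out_agreement(judges, ["A", "B", "C"]))
# {0: 0.33, 1: -0.45, 2: 0.98}
```

Judge 1 here is even negatively correlated with the rest of the panel -- the same phenomenon the quoted paper found for some of the actual 1976 judges.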
I wrote above
>>> 3. If you look at the raw data, Château Montrose received FIVE 1st place votes from the 11 judges; Château Haut Brion received TWO; Château Mouton Rothschild, Ridge Monte Bello, Heitz Martha's, and Stag's Leap each received only ONE vote for 1st. <<<
That was incorrect, as well as misleading in that the judges did not vote (rank) the wines; they only scored them on a 20-point scale.
In terms of being incorrect, all I can say is that I was in a hurry this morning, and I failed to take into account tie scores (i.e.: one judge could have more than one 1st place wine). I apologize for the error. Here, then, is the actual number of first-place rankings for each wine (i.e.: the highest point score from a judge on the 20-point scale):
Château Montrose -- 5
Château Haut-Brion -- 3
Stag's Leap Wine Cellars -- 3
Château Mouton-Rothschild -- 2
Ridge Vineyards -- 2
Freemark Abbey -- 1
Heitz Cellars -- 1
Mayacamas -- 1
Clos du Val -- none
Château Léoville Las Cases -- none
What about last place "votes" (i.e.: lowest point scores)? Here are those figures:
Château Montrose -- none
Château Léoville Las Cases -- none
Stag's Leap Wine Cellars -- none
Château Haut-Brion -- 1
Heitz Cellars -- 1
Château Mouton-Rothschild -- 1
Ridge Vineyards -- 1
Freemark Abbey -- 3
Mayacamas -- 3
Clos du Val -- 3
So, if you just look at the number of "firsts" and subtract the number of "lasts", you get:
1. Montrose (5-0=5)
2. Stag's Leap (3-0=3)
3. Haut-Brion (3-1=2)
4. Mouton (2-1=1)
4. Ridge (2-1=1)
6. Heitz (1-1=0)
6. Léoville (0-0=0)
8. Freemark (1-3= -2)
8. Mayacamas (1-3= -2)
10. Clos du Val (0-3= -3)
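The firsts-minus-lasts tally above is easy to mechanize, tie scores included. Again, the judges' scores below are invented for illustration, not the real 1976 data.

```python
from collections import Counter

def firsts_minus_lasts(scores_by_judge):
    """Count each wine's first-place showings (a judge's top score,
    ties included) minus its last-place showings (a judge's bottom
    score, ties included)."""
    firsts, lasts = Counter(), Counter()
    for judge in scores_by_judge:
        top, bottom = max(judge.values()), min(judge.values())
        for wine, score in judge.items():
            if score == top:
                firsts[wine] += 1
            if score == bottom:
                lasts[wine] += 1
    wines = {w for judge in scores_by_judge for w in judge}
    return {w: firsts[w] - lasts[w] for w in sorted(wines)}

# Hypothetical panel of three judges scoring three wines:
judges = [
    {"A": 16, "B": 12, "C": 10},
    {"A": 11, "B": 15, "C": 13},
    {"A": 14, "B": 13, "C": 9},
]
print(firsts_minus_lasts(judges))  # {'A': 1, 'B': 1, 'C': -2}
```

Counting ties as full firsts (or lasts) is exactly the wrinkle acknowledged in the correction above; a different tie-breaking rule would shift the tallies.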
I have lived in France for 12 years now. I drink French wine every night from all corners of the country, and I am amazed at how little it costs me to drink good wine every night. For the money I don't think you can beat French wine.
However, I come to California for a few days every year, and I am astounded at how wonderful California wine has become. It's not cheap, but it has to be #1 in the world today.
The wines I drink in Spain and in Portugal every time I visit are exquisite, too, and it wasn't that way 30 years ago, either . . . nor was it that way in France in 1977 (for example) . . . so, I have it all on the Iberian Peninsula, or in Western Europe . . . .
For me, the Bottom Line (as it has ***always*** been) is that California makes the BEST California wines in the world! Spain produces the best Spanish wines in the world, and France -- well, no one makes French wines the way they do in France!
I think it's a decent argument that Napa is *one* of the prime wine-producing regions.
But #1? I hope we never actually name "the best" -- because it's not all about volume...and there are too many variations -- reds, whites, varietals, vintages, terroirs -- to ever be able to choose one region over another as "the best".
(I'm really getting tired of the phrases "the best" and "#1" -- in everything, not just wine -- because they simply reflect who best fits an arbitrary set of qualities chosen by a limited set of opinions.)
Harlan sounds like he's quoting, nearly verbatim, Schoonmaker and Marvel's book, on the subject of the already-established promise of the Napa Valley. (The book appeared in 1941.)
I agree strongly with Jason in this thread, and one thing I noticed already in the 1970s and still notice today is a willingness of wine consumers to make broad comparative conclusions about "California wine," or even just "the Napa valley," based on anecdotal sampling of a few (not even necessarily representative) wines in what was already quite a _diverse_ wine-producing region in the 1970s, and is even more so today.
Speaking of judgments, latter-day media fixation on the 1976 Spurrier tasting obscured some history that was formerly common knowledge among the wine trade and US wine enthusiasts in the years following 1976, and prominent too in the principal general book on California wines from that era (the UC-Sotheby compendium): Spurrier's tasting not only was far from the first public example of blind tasters preferring some California wines to some European counterparts; it wasn't even the most decisive such example to occur in the late 1970s.
The 1976 Spurrier tasting acquired its public importance not so much from substantive results as because [quoting the same book] _Time_ reported it. The mainstream news media demonstrated very little interest in such tastings both before and after that one.
It's understandable, of course, that people repeat pop-culture clichés they pick up, but I think it's less understandable that more writers in recent years chose not to go beyond those clichés and restore the 1976 tasting to something more like its historical context.