A recent news article about the inconsistency of wine competition results, which Jamie Goode commented upon in his blog, has put me in a bit of an analytic mindset. In short, a rigorous study found that judges were not only wildly inconsistent from year to year, but also inconsistent when tasting the same wine from the same bottle at different times during the same competition. I've been to a couple of blind tastings recently with a large quantity of inexpensive wines tasted back to back. This strikes me as not all that dissimilar to the sort of situation a judge at a wine competition encounters. Without doing any serious statistical analysis, I figured it would be worth looking at how my own ratings stack up when plotted against price.
The plot on the left shows the results of two separate tastings of Chardonnay, Sauvignon Blanc, Shiraz/Syrah, Cabernet Sauvignon and Cabernet Franc. There were about six examples of each varietal ranging from $2 to $20 in price, with the exception of Cabernet Franc, which was only sampled twice. I've included a trend line that averages the scores of multiple wines sampled at a single price point. The most interesting and obvious feature of this plot occurs at the $2 price point, which consists of Charles Shaw (aka Two Buck Chuck) Chardonnay, Sauvignon Blanc, Shiraz and Cabernet Sauvignon. Although my written notes include a few qualitatively negative descriptions such as "wet dog" for the Shiraz, "sweetish, mold/funk" for the Cabernet and "flabby" for the Sauvignon Blanc, the average score was 82.75 with a standard deviation of 3.59. A score in the low 80s from my perspective means more or less that a wine is simple, easy to drink and has no really obnoxious flaws. It seems that the Two Buck Chuck is consistently mediocre based on these tastings, which is a pretty good accomplishment for $2. Although I wouldn't buy Charles Shaw even as a cheap daily drinking wine because it's essentially boring, its light body, low alcohol and minimal to non-existent oaking seem to endow it with a mysteriously consistent, innocuous quality that allows it to outperform wines with more serious imbalances like excessive alcohol heat and over-oaking.
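For the curious, the trend line is nothing fancier than a per-price-point average. Here's a minimal sketch of that calculation in Python; the (price, score) pairs below are invented placeholders, not my actual tasting data.

```python
import numpy as np
from collections import defaultdict

# Hypothetical (price, score) pairs standing in for the blind-tasting
# data; my actual scores are not reproduced here.
tasting = [(2, 84), (2, 86), (2, 79), (2, 82), (6, 80), (8, 75),
           (12, 85), (14, 87), (15, 72), (18, 90), (20, 76)]

# The trend line simply averages the scores of all wines sampled
# at the same price point.
by_price = defaultdict(list)
for price, score in tasting:
    by_price[price].append(score)
trend = {price: float(np.mean(scores))
         for price, scores in sorted(by_price.items())}
print(trend)

# Mean and sample standard deviation for one group, e.g. the $2 pours.
chuck = np.array([score for price, score in tasting if price == 2])
print(chuck.mean(), chuck.std(ddof=1))
```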
The second most obvious feature is that there is no strong trend in the data toward higher scores at higher price points. If you exclude the Charles Shaw data as an outlier, you might argue that there's a slight positive correlation between score and price (a quick sketch of that calculation follows below). The $12 to $14 range does appear to do better than the lower price range, but the scores at $15 and above simply show wildly large deviations from wine to wine. Part of the explanation might be traced to my qualitative notes on these wines. Those that I rated especially poorly typically had too much obvious alcohol. Over-oaking and flabbiness were also common complaints, though these flaws tend to make me indifferent to a wine, whereas high alcohol and the sinus burn that follows induce outright hate regardless of other merits. I suspect that the $10 to $20 range is where producers start to have grapes ripe enough to reach high alcohol levels and where they start putting money into oak treatment. In most cases, though, I'd rather have a simple, light wine than an overdone one. Less ripe grapes with less oak tend to work well for me in a cheap wine. Hypothesizing aside, there just isn't much of a price-to-enjoyment correlation among the wines I tried in these two blind tastings.
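If you wanted to put a number on that claim, a Pearson coefficient computed with and without the $2 wines would do it. Again, the data here is made up for illustration:

```python
import numpy as np

# The same kind of hypothetical data as above; again, not my real scores.
prices = np.array([2, 2, 2, 2, 6, 8, 10, 12, 14, 15, 18, 20])
scores = np.array([84, 86, 79, 82, 78, 75, 79, 85, 87, 72, 90, 76])

# Pearson correlation between price and score over all wines,
# then again with the $2 Charles Shaw pours excluded as outliers.
r_all = np.corrcoef(prices, scores)[0, 1]
keep = prices > 2
r_no_chuck = np.corrcoef(prices[keep], scores[keep])[0, 1]
print(round(r_all, 2), round(r_no_chuck, 2))
```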
Maybe the only really telling result is that the mean score over all 26 wines was 80.12 with a standard deviation of 6.64. In fact, only 5 wines scored better than 85 points, and one of those was the only wine that I selected. In my personal ratings, 86 points is the threshold where I tend to consider a wine worth buying if the price is reasonable. I simply was not excited by many of these wines, though the majority were serviceable.
Wines that I tasted outside of these two blind tastings tell an entirely different story. The plot on the right shows score versus price on a quasi-100-point scale (ratings given on a non-100-point scale have been rescaled to look like a 100-point scale) with the price plotted logarithmically. There's good reason to plot the price on a logarithmic scale. Robin Goldstein, author of The Wine Trials, and his collaborators assumed that price grows exponentially with quality in a fairly rigorous study of how tasters rate wine in comparison to price. This is a very reasonable model, since a $100 wine is not ten times better than a $10 wine, or even twice as good as a $50 bottle. (Incidentally, Goldstein found a slight anti-correlation between price and the scores assigned in blind tastings by non-professionals, particularly in the under-$20 range where most of the data was collected, which seems eerily similar to the results I saw in my own scores.) If you assume this sort of exponential price model, then ratings plotted against the logarithm of price should fall on a straight line. Indeed, the data trends to nearly a perfect straight line except at the lowest price points.
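To see why the exponential model implies a straight line on a log-price axis: if price ≈ p0 · exp(k · quality), then taking logarithms gives quality as a linear function of ln(price). Here's a quick sketch with invented numbers showing a simple linear fit recovering that line:

```python
import numpy as np

# Invented (price, score) pairs illustrating the shape of the model;
# these are not my actual ratings.
prices = np.array([5, 8, 10, 12, 15, 20, 25, 30, 40, 60, 100])
scores = np.array([82, 83, 85, 84, 86, 88, 87, 89, 90, 91, 93])

# Goldstein's assumption: price grows exponentially with quality,
# price ~ p0 * exp(k * score). Taking logs of both sides gives
# score ~ a + b * ln(price), i.e. a straight line against log price.
b, a = np.polyfit(np.log(prices), scores, 1)
print(f"score = {a:.1f} + {b:.1f} * ln(price)")
```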
Of course, it is important to point out that these were all wines I selected to taste one way or another, and particularly above a certain price threshold (say $10) I tend to buy wines produced in styles that suit my taste. Additionally, these wines were not tasted blind. If anything, this data suggests I'm willing to pay more for wines that I like or at least expect to like. It may also be true, though, that if I pay more for a wine, I'm more willing to forgive a small flaw. I also tend to taste wines at home over several hours and use a decanter, which is vastly different from tasting blind, where a taster typically gets one taste right after the bottle is opened. A wine that improves with air time simply will not perform at its best under those conditions.
Regardless of the potential methodological flaws in my "study," what suggests the results are not complete bunk is the realistic variance of scores across the dozens of data points. Some of the more expensive wines around $20 performed quite poorly, while some inexpensive $10 wines scored quite well. While I suspect price does influence my non-blind scores, other factors such as style and quality clearly influence my personal ratings more strongly, especially when I can sample a wine over an extended time period.
Before wrapping up this post, I'll add a couple of Cab Franc reviews from the last blind tasting I attended. I haven't had any Franc content on this blog in a while and am quite overdue. In the interest of disclosure, I knew which wine was the Buttonwood 2004 Cabernet Franc because I brought the bottle and immediately recognized it based on the characteristic bouquet and style of the wine.
Happy Canyon 2007 Chukker: This is a blend of 70% Cab Franc with 15% Merlot and 15% Cabernet Sauvignon. The nose is suggestive of Cab Franc, with candied raspberries and some forest floor. But the taste of this wine was almost comically bad. Miles described a particular Cab Franc in Sideways as "hollow, flabby and overripe," and he might as well have been describing this wine. It was sweet and utterly lacked any tannic or acidic structure. I've tasted this wine before, and that time there was some spritz when I opened the bottle, suggesting some in-bottle fermentation. It certainly tasted like it was loaded with residual sugar, so this seems plausible. Regardless, this wine drank like cough syrup and bore only the slightest resemblance to Cabernet Franc. I rated it 77 at the tasting, probably because the bouquet was decent, but I suspect drinking more of this wine would have caused the score to plummet out of sheer exasperation. Not good at $15, or at any price for that matter.
Buttonwood 2004 Cabernet Franc: This wine was dark and tannic. There were the typical floral, leathery, earthy and mushroomy qualities, but the fruit was dark as night. There is much more black currant than red fruit in this vintage compared to others I've tasted from Buttonwood. The body is not particularly heavy, but the acidity and tannins are both relatively high, which again seems specific to the vintage. This actually drank more like a serious Cabernet Sauvignon than any of the actual Cabernet Sauvignons, most of which were too jammy or oaked to express the varietal. I rated it 88 at the time, which I think is a fair score. This wasn't the purest Cabernet Franc expression, but it was one of the few honest, well-made wines in the entire blind tasting. Recommended, especially if you like a little rustic edginess to your wine.