Tuesday, December 23, 2014

Climate capers at Cato

NOTE: The code and data used to produce all of the figures in this post can be found here.

Having forsworn blogging activity for several months in favour of actual dissertation work, I thought I'd mark a return to Stickman's Corral in time for the holidays. Our topic for discussion today is a poster (study?) by Cato Institute researchers, Patrick Michaels and "Chip" Knappenberger.

Michaels & Knappenberger (M&K) argue that climate models predicted more warming than we have observed in the global temperature data. This is not a particularly new claim and I'll have more to say about it generally in a future post. However, M&K go further in trying to quantify the mismatch in a regression framework. In so doing, they argue that it is incumbent upon the scientific community to reject current climate models in favour of less "alarmist" ones. (Shots fired!) Let's take a closer look at their analysis, shall we?

In essence, M&K have implemented a simple linear regression of temperature on a time trend,
\begin{equation}
Temp_t = \alpha_0 + \beta_1 Trend_t + \epsilon_t.
\end{equation}
This is done recursively, starting from 2014 and extending the sample backwards one year at a time until it reaches the middle of the 20th century. The key figure in their study is the one below, which compares the estimated trend coefficient, $\hat{\beta}_1$, from a bunch of climate models (the CMIP5 ensemble) with that obtained from observed climate data (global temperatures as measured by the Hadley Centre's HadCRUT4 series).
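For anyone wanting to replicate the basic idea, the recursive trend calculation boils down to something like the following sketch. (This is my own construction, not M&K's code -- their actual code and data are linked at the top of the post. Assume `temps` is a pandas Series of annual global mean temperature anomalies indexed by year, e.g. HadCRUT4 annual means.)

```python
# Minimal sketch of the recursive trend calculation (illustrative only).
# `temps` is assumed to be a pandas Series of annual temperature anomalies
# indexed by year.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def recursive_trends(temps, end_year=2014, earliest_start=1951, min_years=10):
    """OLS warming trend (deg C per decade) for each start year, end year fixed."""
    trends = {}
    for start in range(earliest_start, end_year - min_years + 2):
        window = temps.loc[start:end_year]
        X = sm.add_constant(np.arange(len(window)))  # intercept + linear time trend
        beta1 = sm.OLS(window.values, X).fit().params[1]
        trends[start] = 10 * beta1                   # per-year slope -> per-decade
    return pd.Series(trends)
```

The same function can then be applied to each CMIP5 model series over identical windows, and the observed trends compared against the model spread.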



Since the observed warming trend consistently falls below that predicted by the suite of climate models, M&K conclude:  "[A]t the global scale, this suite of climate models has failed. Treating them as mathematical hypotheses, which they are, means that it is the duty of scientists to reject their predictions in lieu of those with a lower climate sensitivity."

Bold words. However, not so bold on substance. M&K's analysis is incomplete and their claims begin to unravel under further scrutiny. I discuss some of these shortcomings below the fold.

Tuesday, April 22, 2014

On economic "consensus" and the benefits of climate change

Note: Slight edits to graphs and text to make things clearer and more comparable.

Richard Tol is a man who likes to court controversy. I won't deign to analyse his motivations here -- suffice it to say that I respect his professional research at the same time as I find his social media interactions maddeningly churlish and inconsistent. However, I'm pretty sure that he relishes the role of provocateur in the climate change debate and will admit no shame in that.

Little wonder, then, that his work acts as grist to the mill for sceptical op-eds of a more -- shall we say -- considered persuasion. That is, opinion pieces that at least try to marshal some credible scientific evidence against decisive climate change action, rather than just mouthing off some inane contrarian talking points (it's a giant communist conspiracy, etc.). Bjørn Lomborg and Matt Ridley are two writers who have cited Richard's research in arguing forcefully against the tide of mainstream climate opinion. I want to focus on the latter's efforts today, since they tie in rather nicely with an older post of mine: "Nope, Nordhaus is still (mostly) right."

I won't regurgitate the whole post, but rather single out one aspect: the net benefits that climate change may or may not bring at moderate temperature increases. The idea is encapsulated in the following figure from Tol (2009), which shows estimates of economic damages due to temperature increases relative to the present day.

Fig. 1
Note: Dots represent individual studies. The thick centre line is the best fit stemming from an OLS regression: D = 2.46T - 1.11T^2, with an R-squared value of 0.51. The outer lines are 95% confidence intervals derived according to different methods. Source: Tol (2009)

Now, there are various points to be made about the implications of Fig. 1. People like Matt Ridley are wont to point out that it demonstrates how climate change will bring benefits to us long before it imposes any costs. Ergo, we should do little or nothing about reducing our emissions today. Of course, there are multiple responses to this position and I tried to lay out various reasons in my previous post as to why this is a very misleading take (sunk benefits and inertia in the climate system, uncertainty and risk aversion, unequal distribution of benefits and costs, tipping points, etc.).

However, I have two broader points to make here, for which Ridley will prove a useful foil. For example, here he is in The Spectator last year, arguing "Why Climate Change Is Good For The World":
To be precise, Prof Tol calculated that climate change would be beneficial up to 2.2˚C of warming from 2009[... W]hat you cannot do is deny that this is the current consensus. If you wish to accept the consensus on temperature models, then you should accept the consensus on economic benefit.
The bold text is my emphasis. Now it should be pointed out that Ridley's article elicited various responses, including one by Bob Ward that uncovers some puzzling typos in Richard's paper. Ward goes on to show that in fact only two out of the 14 studies considered in Tol (2009) reveal net positive benefits accruing due to climate change, and one of these was borderline at best. Specifically, Mendelsohn et al. (2000) suggest that 2.5˚C of warming will yield a tiny net global benefit equivalent to 0.1% of GDP. (It should also be noted that they do not account for non-market impacts -- typically things like ecosystems, biodiversity, etc. -- which would almost certainly pull their estimate into negative territory.) That leaves one of Richard's own papers, Tol (2002), which suggests that 1˚C of warming will yield a 2.3% gain in GDP, as the sole study showing any meaningful benefits due to climate change.

This is all very well-trodden ground by now, but it underscores just how tenuous -- to put it mildly -- Matt Ridley's appeal to economic consensus is. However, we are still left with a curve that purports to show positive benefits from climate change up until around 2˚C of warming, before turning negative. So here are my two comments:

Comment #1: Outlier and functional form

Given that only one study (i.e. Tol, 2002) among the 14 surveyed in Tol (2009) shows large-ish benefits from climate change, you may be inclined to think that the initial benefits suggested by Fig. 1 hinge on this "outlier"... And you would not be wrong: individual observations will always stand to impact the overall outcome in small samples. However, I would also claim that such a result is partially an artefact of functional form. What do I mean by this? I mean that predicting positive benefits at "moderate" levels of warming is in some sense inevitable if we are trying to fit a quadratic function[*] to the limited data available in Tol (2009). This is perhaps best illustrated by re-estimating the above figure, but (i) correcting for the typos discovered by Bob Ward and (ii) excluding the outlier in question.

Fig. 2
Based on Figure 1 in Tol (2009), but corrected for typos and including an additional best-fit line that excludes the most optimistic estimate of benefits due to moderate climate change (i.e. Tol, 2002).

Remember that our modified sample includes only negative -- or neutral at best -- effects on welfare due to climate change. And yet, the new best-fit line (dark grey) suggests that we will still experience net benefits for a further 1.75˚C of warming! Thus we see how the choice of a quadratic function to fit our data virtually guarantees the appearance of initial benefits, even when the data themselves effectively exclude such an outcome.[**] You'll note that I am following Ridley's lead here in ignoring the confidence intervals. This is not a particularly sound strategy from a statistical perspective, but let's keep things simple for the sake of comparison.
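For the record, the re-estimation behind Fig. 2 amounts to something like the following sketch. (Assume `tol` is a DataFrame with columns `T` for warming in ˚C and `D` for the welfare impact in % of GDP, transcribed from the corrected Tol, 2009 estimates; the column names and the filter used to drop the Tol, 2002 point are mine, not anyone's published code.)

```python
# Sketch of the Fig. 2 exercise (illustrative). `tol` is assumed to hold the
# corrected Tol (2009) estimates with columns 'T' (deg C) and 'D' (% of GDP).
import statsmodels.formula.api as smf

def fit_quadratic(df):
    # D = b1*T + b2*T^2, no intercept: zero warming implies zero impact.
    return smf.ols("D ~ T + I(T ** 2) - 1", data=df).fit()

full_fit = fit_quadratic(tol)
# Drop the Tol (2002) estimate (roughly +2.3% of GDP at 1 deg C of warming);
# the exact filter depends on how the data are transcribed.
trimmed_fit = fit_quadratic(tol[~((tol["T"] == 1.0) & (tol["D"] > 2.0))])

# Warming level at which each fitted curve crosses back below zero: T = -b1/b2.
for label, fit in [("full sample", full_fit), ("excl. Tol (2002)", trimmed_fit)]:
    b1, b2 = fit.params
    print(f"{label}: net benefits until about {-b1 / b2:.2f} deg C")
```

The zero-crossing calculation makes the point explicit: so long as the fitted curve is forced through the origin with a positive linear term, some stretch of "initial benefits" falls out of the arithmetic almost regardless of the data.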


Comment #2: New data points

As it happens, several new estimates of the economic effects of climate change have been made available since Tol (2009) was published. Richard has updated his Fig. 1 accordingly and included it in the latest IPCC WG2 report. You can find it on pg. 84 here. (Although -- surprise! -- even this is not without controversy.) However, this updated version does not include a best-fit line. That is perhaps a wise choice given the issues discussed above. Nevertheless, like me, you may still be curious to see what it looks like now that we have a few additional data points. Here I have re-plotted the data, alongside a best-fit line and 95% confidence interval.
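For the curious, the fit and confidence band in Fig. 3 can be generated along the following lines. (Again, `ipcc` is a hypothetical DataFrame with columns `T` and `D` holding the updated estimates; nothing here is taken from the IPCC's own materials.)

```python
# Sketch of the Fig. 3 fit and its 95% confidence band (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

fit = smf.ols("D ~ T + I(T ** 2) - 1", data=ipcc).fit()

grid = pd.DataFrame({"T": np.linspace(0.0, 5.5, 100)})
pred = fit.get_prediction(grid)
ci = pred.conf_int(alpha=0.05)       # 95% confidence band for the fitted curve

grid["D_hat"] = pred.predicted_mean
grid["lower"], grid["upper"] = ci[:, 0], ci[:, 1]
# `grid` now holds everything needed to draw the centre line and the band.
```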

Fig. 3
Based on Figure 10 in IPCC WG2 (2014). As before, the best-fit line is computed according to a quadratic function using OLS. This yields D = 0.01T - 0.27T^2, with an R-squared value of 0.49.

Whoops. Looks like those initial benefits have pretty much vanished!

So... What odds on Matt Ridley reporting the updated economic "consensus"?

UPDATE: Richard points me towards a recent working paper of his that uses non-parametric methods to fit a curve to the data. This is all well and good, and I commend his efforts in trying to overcome some of the issues discussed above... Except for one overwhelming problem: Non-parametric methods -- by their very nature -- are singularly ill-suited to small samples! Even Wikipedia manages to throw up a red flag in its opening paragraph on the topic: "Nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates." Arguably even more problematic is the fact that non-parametric estimations are particularly misleading in the tails. I simply don't see how a non-parametric approach can be expected to produce meaningful results, given that we are dealing with a rather pitiful 20-odd observations. Ultimately, it is not so much a question of parametric versus non-parametric. The real problem is a paucity of data.
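If you want to see the small-sample problem for yourself, run a standard local smoother over the same 20-odd points and note how few observations each local fit actually rests on, particularly at the ends of the temperature range. A minimal sketch using lowess (not necessarily the estimator Richard uses, but the issue is generic), reusing the hypothetical `ipcc` DataFrame from above:

```python
# Illustration of why nonparametric fits are fragile here (illustrative only).
from statsmodels.nonparametric.smoothers_lowess import lowess

smoothed = lowess(ipcc["D"], ipcc["T"], frac=0.6)
# `smoothed` is an array of (T, fitted D) pairs. With only ~20 observations,
# frac=0.6 means each local fit leans on roughly a dozen points, and the
# fitted values at the ends of the temperature range are effectively
# one-sided extrapolations -- precisely where the policy interest lies.
```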

UPDATE 2: An erratum to Tol (2009) has finally been published. The updated figure is, of course, much the same as the one I have drawn above. [Having looked a bit closer, I see the erratum includes an additional data point that isn't in the IPCC report (Nordhaus, 2013). In addition, the damage figure given for another study (Roson and van der Mensbrugghe, 2012) has changed slightly. Yay for typos!]

UPDATE 3: Ouch... and double ouch. Statistician Andrew Gelman takes Richard out to the woodshed (making many of the same points that I have here). The result isn't pretty. Make sure to read the comments thread too.


___
[*] Tol (2009) uses a simple regression equation of D = b1*T - b2*T^2 to fit the data. He finds b1 = 2.46 and b2 = 1.11, which is where the thick, central grey line in Fig. 1 comes from.
[**] For the record, I don't wish to come across as overly pedantic or critical of the choice of a quadratic damage function. Indeed, it is hard to think of another simple function that would better lend itself to describing the effect of moderate temperature increases. (Albeit not for higher levels of warming.) I am merely trying to show how limited data and the choice of functional form can combine to give a misleading impression of the risks associated with climate change.

Sunday, March 2, 2014

A lemon market for poachers

[My new post at the Recon Hub, which I'll repost in full here...]

Ashok Rao has a provocative suggestion for stopping the rampant poaching of elephant and rhino. Drawing on insights from George Akerlof’s famous paper, “The Market for Lemons”, he argues that all we need to do is create some uncertainty in the illegal ivory trade:
Policymakers and conservationists need to stop auctioning horns and burning stockpiles of ivory, they need to create this asymmetry [which causes markets to break down under Akerlof's model]. And it’s not hard. By virtue of being a black market, there isn’t a good organized body that can consistently verify the quality of ivory in general. Sure, it’s easy to access, but ultimately there’s a lot of supply chain uncertainty. 
There is a cheap way to exploit this. The government, or some general body that has access to tons of ivory, should douse (or credibly commit to dousing) the tusks with some sort of deadly poison, and sell the stuff across all markets. Granting some additional complexities, the black market could not differentiate between clean and lethal ivory, and buyers would refrain from buying all ivory in fear. The market would be paralyzed.
I really like Ashok's proposal… Not least of all, because it is virtually identical to an idea that Torben and I had whilst out for a few drinks one night! (This includes the invocation of Akerlof, by the way.) The big difference being that we didn't go so far as to suggest that the ivory should be poisoned: In our minds, flooding the market with “inferior”, but hard-to-detect fake product would do the trick.

To see why this might be the case, consider the economic choices of an individual poacher. Poaching is a risky activity and there is a decidedly non-negligible probability that you will be imprisoned, severely injured, or even killed as a result of your illegal actions. However, it still makes sense to take on these risks as long as the potential pay-off is high enough… And with rhino horn and ivory presently trading at record prices, that certainly happens to be the case. However, all that an intervention like the one proposed above needs to achieve is to drive down the price of ivory to a level that would cause most rational agents to reconsider the risks of poaching. What level would this be exactly? That’s impossible for me to say, but I'm willing to bet that poachers are highly price sensitive.
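To put some (purely illustrative) numbers on that logic, the poacher's gamble can be sketched as a simple expected-payoff calculation. None of the figures below are sourced from anywhere; they just show how a falling price flips the decision:

```python
# Back-of-the-envelope version of the poacher's gamble (illustrative numbers).
# `penalty_cost` stands in for the monetised risk of prison, injury or death.
def expected_payoff(price_per_kg, kg_per_animal, p_caught, penalty_cost):
    gross = price_per_kg * kg_per_animal
    return (1 - p_caught) * gross - p_caught * penalty_cost

# At a high black-market price the gamble looks very attractive...
print(expected_payoff(price_per_kg=60_000, kg_per_animal=3, p_caught=0.2,
                      penalty_cost=100_000))   # 124,000
# ...but if fear of poisoned or fake product drags the price down far enough,
# the same risks leave almost nothing on the table.
print(expected_payoff(price_per_kg=10_000, kg_per_animal=3, p_caught=0.2,
                      penalty_cost=100_000))   # 4,000
```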

A final comment before I close, inspired by another blog post that has also commented on Ashok's proposal. Jonathan Catalán correctly points out that one of the most valuable aspects of the original “lemons” paper is to force us to think carefully about why asymmetric markets don’t generally collapse into the degenerate equilibrium implied by Akerlof's theory. Perhaps the best answer to that is one hinted at by Akerlof himself: institutions like money-back guarantees, brand reputation, etc. In light of this, Jonathan wonders whether the black market wouldn't simply adopt practices to weed out the counterfeit goods. My feeling, however, is that it is misleading to compare the ivory and rhino horn trade to other illegal markets in this respect. In the drug industry, for example, cartels are able to test the quality of a cocaine shipment simply by trying it themselves. Drugs have a very definite effect on us physiologically and so “quality control” (so to speak) is relatively easy to do. In comparison, we know that crushed rhino horn has no medical efficacy whatsoever… whether that is with respect to treating cancer or healing regular aches and pains. I therefore strongly suspect that it would be much harder for a buyer of powdered rhino horn to verify whether their product is the real deal or not. The placebo effect will be as strong, or weak, regardless.

PS -- Legalisation of the ivory and horn trade is another economic approach to solving the poaching problem. Proponents of this view see it as creating opportunities for a sustainable market that both incentivises breeding and undercuts poachers. I am favourably predisposed towards this particular argument, at least in the case of rhinos, since they are easier to farm. However, I am not convinced that it will put an end to poaching, which will continue as long as the rents are there to be captured. There also remains the question of how demand will respond to a surge in supply, as well as issues related to a biological monoculture (i.e. rhinos would be bred solely on the basis of increasing their horn size). However, those remain issues for another day.

Friday, February 14, 2014

Why are economic journal articles so long?

Seriously.

I've been reading a lot of (hard) scientific studies for my current dissertation chapter and it is striking how concise they are. The majority of published articles, at least the ones that I have come across, are typically no more than 4-6 pages in length. The pattern holds especially true for the most prestigious journals like Nature, Science or PNAS.

In contrast, your average economic article is long and getting longer. [Note: New link provided.]

Now you could argue that science journals achieve this feat by simply relegating all of the gritty technical details and discussion to the supplementary materials. And this is true. For a good example, take a look at this article by Estrada et al. (2013).[*] The paper itself is only six pages, while the supplementary material is over 40 pages long. Equally telling is how similar this supporting information is to the working paper that the final article is apparently based upon.

To be honest, I don't really see how this can be a bad thing. It is primarily the job of the editors and referees to vouch for the technical merits and internal consistency of a published study. Regular readers are mostly interested in the broad context (i.e. significance of the research) and the actual findings. As much as it is important to make the technical details -- and data! -- available to those who want to go through them, the clutter largely detracts from the key messages. I'm also willing to bet good money that many (most?) people currently just skip through the entire mid-section of your typical economics paper anyway, concentrating on the introduction, results and conclusion.

So, is this a weird case of physics envy, where economists feel the need to compensate for lack of quality through quantity? Or does it say something special about the nature of economics, where the limited extent of true experimental data makes methodology more precarious and prone to bias?

Either way, do we really lose anything by making economic journal articles much shorter and saving all the technical details for the supplementary materials?

PS - Yes, I know that most economic journals already reserve a lot of information for the technical appendices. I'd also say that a number of the top journals (e.g. AER) are pleasantly readable -- perhaps surprisingly so for outsiders. But we're still a long way off what the sciences are doing.

UPDATE: It just occurred to me that the frequency of publication plays a reinforcing role in all of this. Nature, Science and PNAS are all issued on a weekly basis. The top economic journals, on the other hand, are typically only bi-monthly (at best) or quarterly publications. The higher volume of science publications encourages concise articles for pure reasons of practicality, as well as readability.

___
[*] The authors argue that climate data are better viewed as statistical processes that are characterised by structural breaks around a deterministic time trend... i.e. as opposed to non-stationary stochastic processes comprising one or more unit roots. (This is important because it has implications for the ways in which we should analyse climate data from a statistical time-series perspective.) In so doing, they are effectively reliving a similar debate regarding the statistical nature of macroeconomic time-series data, which was ignited by a very famous article by Pierre Perron. Perron happens to be one of the co-authors on the Estrada paper.
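For readers unfamiliar with the distinction, the following toy simulation contrasts the two views of the data. It is purely illustrative and has nothing to do with Estrada et al.'s actual series or methods:

```python
# Toy contrast: a trend-stationary series with a structural break in its slope
# vs. a unit-root process (random walk with drift). Purely simulated.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
n, break_at = 150, 75
t = np.arange(n)

trend_with_break = 0.005 * t + 0.02 * np.maximum(t - break_at, 0) + rng.normal(0, 0.1, n)
random_walk = np.cumsum(0.01 + rng.normal(0, 0.1, n))

for name, series in [("trend + break", trend_with_break), ("random walk", random_walk)]:
    pvalue = adfuller(series, regression="ct")[1]   # ADF test with constant and trend
    print(name, "ADF p-value:", round(pvalue, 3))
# Standard unit-root tests do not allow for the break, which is one reason the
# two camps can read the same series so differently (cf. Perron's original
# critique in the macro literature).
```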

Ed Byrne on wedding planning

As the man says, you wouldn't understand unless you've been there.

(Start from 2:45)


Friday, January 24, 2014

Of Vikings and Credit Rating Agencies

Yesterday's post reminded me of a story that encapsulates much of my own feelings about credit rating agencies (and, indeed, the naivete that characterised the build-up to the Great Recession).

The year was 2008 and I was working for an economics consultancy specialising in the sovereign risk of emerging markets. We primarily covered African countries, but also had a number of "peripheral" OECD countries on our books. One of these was Iceland and I was assigned to produce a country report that would go out to our major clients.

By this time, the US subprime market had already collapsed and it was abundantly clear that Europe (among others) would not escape the contagion. With credit conditions imploding, it was equally clear that the most vulnerable sectors and countries were those with extended leverage positions.

Iceland was a case in point. The country had a healthy fiscal position, running a budget surplus and public debt of only around 30% of GDP. However, private debt was an entirely different story. Led by the aggressive expansion of its commercial banks into European markets, total Icelandic external debt was many times greater than GDP. Compounding the problem was a rapid depreciation in the Icelandic króna, which made servicing those external liabilities even more daunting. (Iceland was the world's smallest economy to operate an independently floating exchange rate at that time.) Wikipedia gives a good overview of the situation:
At the end of the second quarter 2008, Iceland's external debt was 9.553 trillion Icelandic krónur (€50 billion), more than 80% of which was held by the banking sector.[4] This value compares with Iceland's 2007 gross domestic product of 1.293 trillion krónur (€8.5 billion).[5] The assets of the three banks taken under the control of the [Icelandic Financial Services Authority] totalled 14.437 trillion krónur at the end of the second quarter 2008,[6] equal to more than 11 times of the Icelandic GDP, and hence there was no possibility for the Icelandic Central Bank to step in as a lender of last resort when they were hit by financial troubles and started to account asset losses.
It should be emphasised that everyone was aware of all of this at the time. I read briefings by all the major ratings agencies (plus reports from the OECD and IMF) describing the country's precarious external position in quite some detail. However, these briefings more or less all ended with the same absurd conclusion: Yes, the situation is very bad, but the outlook for the economy as a whole remains okay as long as Government steps in forcefully to support the commercial banks in the event of a deepening crisis.(!)

I could scarcely believe what I was reading. What could the Icelandic government possibly hope to achieve against potential liabilities that were an order of magnitude greater than the country's entire GDP? Truly, it would be like pissing against a hurricane.

Of course, we all know what happened next.

In the aftermath of the Great Recession, I've often heard people invoke the phrase -- "When the music is playing, you have to keep dancing" -- perhaps as a means of understanding why so many obvious danger signs were ignored in favour of business-as-usual. It always makes me think of those Icelandic reports when I hear that.

PS- Technically, Iceland never did default on its sovereign debt despite the banking crisis and massive recession. It was the (nationalised) banks that defaulted so spectacularly. The country has even managed a quite remarkable recovery in the scheme of things. The short reasons for this are that they received emergency bailout money from outside and, crucially, also decided to let creditors eat their losses.

Thursday, January 23, 2014

Home bias in sovereign ratings

 [Rather irritatingly, I wrote the below post at the end of last week and had been meaning to publish it on Monday. Unfortunately, I got snowed in with work and now see that Tyler Cowen and a bunch of other people have already covered the paper in question. Still, in a bid to get some blogging activity going around these parts again, here's my two cents.]
"The Home Bias In Sovereign Ratings" 
Fuchs and Gehring conduct empirical analyses of variation in nine different credit ratings agencies around the world that offer ratings of at least 25 sovereigns[...] The paper is motivated by two good questions: (1) Do ratings agencies assign better ratings to their home countries? (2) Do they assign better ratings to countries that have close cultural, economic, or geopolitical ties to their home country? 
[...] 
Fuchs and Gehring find clear evidence of “home bias”. Specifically, their analysis finds that agencies do indeed assign higher ratings to their home country governments compared to other countries with the same characteristics. This result was especially strong during the global financial crisis (GFC) years–nearly a 2 point “bump” in ratings.
As someone who has been both a consumer and producer of sovereign rating reports prior to starting a PhD, I find this sort of thing very interesting. The role of inherent biases in the industry offers scope for both bemusement and alarm. This paper by Fuchs and Gehring would at least seem to go some of the way in explaining why, say, Fitch places the United States in its highest credit ratings category... while (China-based agency) Dagong only places the US in its third highest category.

That being said, Daniel McDowell (author of the above blog post) points out that it is not especially clear how such findings actually stand to affect future ratings. For one thing, changes in sovereign ratings sometimes have zero, or even paradoxical, effects, such as when the demand for US treasuries actually rose following the country's downgrade by Standard & Poor's in 2011.[*]

On the other hand, it should also be noted that if one of the other major agencies -- i.e. Fitch or Moody's -- had followed S&P's lead in downgrading the US credit score in 2011, then that probably would have had fairly major financial implications. Most obviously, a large number of investment funds have specific mandates regarding the type of securities they must hold... as determined by the average score among the big three credit ratings agencies. For example, a fund might be legally required to hold a minimum proportion of "triple-A-rated" bonds. Given how ubiquitous US treasuries are, some major portfolio rebalancing would almost certainly be required if the US lost its "average" credit rating. You may recall that this is something that a lot of people were worried about at the time. It is also one reason that the ratings agencies continue to have a practical (and potentially deleterious) relevance to financial markets.

Anyway, apologies for getting sidetracked. Interesting paper and blog post. Check them out.
___
[*] A popular explanation at the time was that the downgrade provided the shake-up that Congress needed in order to overcome the political impasse over the debt ceiling...

Thursday, January 2, 2014

Top posts on Stickman's Corral for 2013

As if you care!

I'll limit it to three because: a) Parsimony, b) Blogger is being a recalcitrant b*tch at the moment and is not letting me see any more than that.

1) Econ blogosphere comment form. Way out in front -- with nearly 7,000 views thanks to links from FT Alpha, Barry Ritholtz and a bunch of people on Twitter -- was this spoof of the various personalities, trolls and anti-trolls that populate today's economic blogs. Some of the additional suggestions provided by commentators were golden.

2) Thinking of doing a PhD? My post advocating for postgraduate studies in Norway accumulated around 1,000 views over the course of the year. It would appear that the idea of drawing a healthy salary whilst having access to great data and clean air could prove a viable alternative to crushing student debt (or miserly graduate stipends) and 80-hour work weeks. As an aside: A fairly large number of hits on this one were referrals from a single comment that I left under one of Noah Smith's blog posts. Thus is the way of the blogosphere layer cake.

3) Review - Extreme Environment (Ivo Vegter). It may not have reached the top spot, but my review of Ivo Vegter's anti-environmentalism screed was certainly the longest post of the year. Five thousand words for around 1,000 views may sound like a poor return to some, but it does occupy second place in the Google search rankings for Vegter's book, behind only Amazon. Speaking of which: Rather irritatingly, my abridged review on Amazon has garnered more "unhelpful" votes than "helpful" ones. I emphasise the "helpful" bit. This isn't about whether you agree with an author -- or reviewer for that matter -- but rather a question of whether a review enables you to obtain a better understanding of a book and its relative merits. I guess in-depth, considered criticism is less helpful than gushing, one-line endorsements. (Or lazy dismissals for that matter.) Haters gonna hate. Derpers gonna derp.

If memory serves me correctly, I have a couple posts with 500-odd views here at Stickman's... while my most widely read articles of the year were almost certainly the series I did for the Energy Collective on natural gas. So, it's been a respectable year blogging-wise for yours truly. I might get around to listing my personal favourite posts of the year later during the week.