Thursday, April 28, 2016
This will be my last post at Stickman's Corral.
I recently built a new website and all future blogging activity will be hosted here: grantmcdermott.github.io
The move has been a while coming. I'd been maintaining several different web portals -- this blog, a Google Sites page, a GitHub account, the odd WordPress account -- all of which ended up becoming redundant and confusing. The new website collapses all of these disparate strands into a single entity. I intend for it to serve as both my professional webpage and a repository for any future blog posts.
I have also migrated/copied all of the old posts from this blog (including comments) over to the new website. The full range of discussions will be preserved there even as I no longer maintain this blog.
Still, it does feel like the end of a (mini) era. Stickman's ended up showing much greater stamina than I ever envisaged when first hitting the "create" button back in 2010. The dearth of recent posts notwithstanding, it has basically served as a travelogue of my journey through grad school and beyond. Along the way I've learnt a lot, enjoyed many great discussions, fed the odd troll, emigrated several times, managed to earn a PhD (probably should have mentioned that on the blog), got a new job (ditto) and even started a family (ditto).
I want to thank everyone who has ever taken the time to read something that I have posted here, or been compelled to leave a comment of their own. I hope that you will be encouraged to do the same at my new website, and please let me know if anything looks out of place.
See ya around.
G.M.
Wednesday, June 17, 2015
Centralised versus decentralised
I was quoted in a recent article about bringing power to sub-Saharan Africa: "How do you bring electricity to 620 million people?" The journalist, Tom Jackson, did a good job of summarising my position (although I am mildly annoyed that he didn't send me a copy before publication, something I had asked for). That being said, some additional context never hurts, so I thought I'd publish my full email response to his questions.
Two minor footnotes: First, this was framed as a "centralised versus decentralised" debate. There are of course many variations on the decentralisation theme. (Do you really mean distributed generation, rather than transmission? Does this include microgrids? Etc.) Given the way the questions were asked, I simply took it to mean the absence of a centralised electricity grid. Second, when I talk about first-best and second-best alternatives, I don't quite mean in the strict economic sense of optimality conditions. Rather, I am trying to convey the idea that one solution is only really better when the other is unavailable due to outside factors.
---
Why are grids still vital? Why is a functioning electricity grid necessary for economic growth?
These two questions are more or less the same, so I'll take them together. Large, centralised grids constitute the most efficient and cost-effective way of delivering (and consuming) electricity in modern economies. Decentralised options are not only substantially more expensive and (generally) less reliable; there is also no intrinsic reason to believe that they will be better at delivering a clean energy future.
Is the centralised argument being lost in places like SA where the grid is so poor and not being improved?
I wouldn't say that South Africa's present electricity woes are the result of grid failure. Rather, the problem is primarily one of generation capacity and government mismanagement. On that note, the grid is the one component of the electricity system that is best thought of as a "natural monopoly". (The other components of the electricity value chain -- i.e. generation and distribution -- should then be left to competitive forces.) Your question highlights an irony. Eskom's mismanagement on the generation side (huge overspends and delays on the Medupi and Kusile power stations, etc.) is undermining confidence in its ability to manage a centralised grid, the one aspect that government can legitimately claim needs to be operated as a regulated monopoly.
That all being said, Eskom is falling behind the required investment goals for maintaining an adequate grid infrastructure into the future. A deficient grid network has also constrained economic growth in many other developing countries, from Nigeria to India. And yet, this is not to say that the decentralised alternative offers an intrinsically superior solution. A grid system remains the first-best option. Decentralised solutions are really a second-best option in the absence of the former. The distinction is crucial.
What role for de-centralised solutions?
I think that decentralised solutions will remain a second-best, niche alternative for the next few decades. There are several things that cause me to take this position, of which intermittency and local storage are probably the most pronounced. Now, there do happen to be a number of exciting developments on the storage issue, but nothing that I would expect to fundamentally change the equation. More to the point, I believe that a resilient decentralised generation system will still require a functioning grid. The increased intermittency and smaller scale of decentralised power production will necessitate excellent access to similar, small-scale generation in other regions. This can only be achieved through a robust grid network. (An example may help to make my point: Germany's much-fêted Energiewende was supposed to involve a fundamental shift towards the decentralised paradigm. What we've seen in practice, however, is that the Germans are investing hugely in extending their inter-regional grid capacities to places like Norway, whose hydropower resources offer the most cost-effective means of accommodating the intermittency of wind and solar.) Similarly, the parallels that people inevitably draw between a decentralised electricity system and the communications sector (i.e. where fixed-line telephones were leap-frogged by cell phones) are misplaced. Beyond various other differences, cell phone networks are fundamentally centralised in nature: Cell phone towers are the grid equivalent of the modern-day communications sector.
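I should probably conclude by saying that I fully support experimentation with decentralised systems. I just wouldn't want to put my own money on it.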
Tuesday, December 23, 2014
Climate capers at Cato
NOTE: The code and data used to produce all of the figures in this post can be found here.
Having forsworn blogging activity for several months in favour of actual dissertation work, I thought I'd mark a return to Stickman's Corral in time for the holidays. Our topic for discussion today is a poster (study?) by Cato Institute researchers, Patrick Michaels and "Chip" Knappenberger.
Michaels & Knappenberger (M&K) argue that climate models predicted more warming than we have observed in the global temperature data. This is not a particularly new claim and I'll have more to say about it generally in a future post. However, M&K go further in trying to quantify the mismatch in a regression framework. In so doing, they argue that it is incumbent upon the scientific community to reject current climate models in favour of less "alarmist" ones. (Shots fired!) Let's take a closer look at their analysis, shall we?
In essence, M&K have implemented a simple linear regression of temperature on a time trend,
\begin{equation}
Temp_t = \alpha_0 + \beta_1 Trend + \epsilon_t.
\end{equation}
This is done recursively, starting from 2014 and incrementing backwards one year at a time until the sample extends back to the middle of the 20th century. The key figure in their study is the one below, which compares the estimated trend coefficient, $\hat{\beta_1}$, from a bunch of climate models (the CMIP5 ensemble) with that obtained from observed climate data (global temperatures as measured by the Hadley Centre's HadCRUT4 series).
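To make the procedure concrete, here is a minimal sketch of that recursive trend regression in Python. It is purely illustrative: the file name, series name and cut-off years are my own assumptions, and the post's actual replication code (linked in the note at the top) may well differ.

```python
# Minimal sketch of the recursive trend regression described above.
# Assumes (hypothetically) that `temps` is a pandas Series of annual global
# temperature anomalies indexed by year, e.g. HadCRUT4 annual means.
import numpy as np
import pandas as pd


def recursive_trends(temps: pd.Series, end_year: int = 2014,
                     earliest_start: int = 1951) -> pd.Series:
    """OLS warming trend (deg C per year) for each start year, end year held fixed."""
    trends = {}
    for start in range(earliest_start, end_year - 9):  # keep at least ~10 observations
        window = temps.loc[start:end_year]
        years = window.index.values.astype(float)
        # Fit Temp_t = alpha_0 + beta_1 * Trend + e_t; np.polyfit returns [slope, intercept]
        slope, intercept = np.polyfit(years, window.values, deg=1)
        trends[start] = slope
    return pd.Series(trends, name="trend_per_year")


# Hypothetical usage:
# temps = pd.read_csv("hadcrut4_annual.csv", index_col="year")["anomaly"]
# obs_trends = recursive_trends(temps)
# The same function could then be applied to each CMIP5 model series for comparison.
```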
Since the observed warming trend consistently falls below that predicted by the suite of climate models, M&K conclude: "[A]t the global scale, this suite of climate models has failed. Treating them as mathematical hypotheses, which they are, means that it is the duty of scientists to reject their predictions in lieu of those with a lower climate sensitivity."
Bold words. However, not so bold on substance. M&K's analysis is incomplete and their claims begin to unravel under further scrutiny. I discuss some of these shortcomings below the fold.
Tuesday, April 22, 2014
On economic "consensus" and the benefits of climate change
Note: Slight edits to graphs and text to make things clearer and more comparable.
Richard Tol is a man who likes to court controversy. I won't presume to analyse his motivations here -- suffice it to say that I respect his professional research at the same time as I find his social media interactions maddeningly obscure, churlish and inconsistent. However, I'm pretty sure that he relishes the role of provocateur in the climate change debate and will admit no shame in that.
Little wonder, then, that his work acts as grist to the mill for sceptical op-eds of a more -- shall we say -- considered persuasion. That is, opinion pieces that at least try to marshal some credible scientific evidence against decisive climate change action, rather than just mouthing off some inane contrarian talking points (it's a giant communist conspiracy, etc). Bjørn Lomborg and Matt Ridley are two writers who have cited Richard's research in arguing forcefully against the tide of mainstream climate opinion. I want to focus on the latter's efforts today, since it ties in rather nicely with an older post of mine: "Nope, Nordhaus is still (mostly) right."
I won't regurgitate the whole post, but rather single out one aspect: The net benefits that climate change may or may not bring at moderate temperature increases. The idea is encapsulated in the following figure of Tol (2009), which shows estimates of economic damages due to temperature increases relative to the present day.
Fig. 1 Note: Dots represent individual studies. The thick centre line is the best fit stemming from an OLS regression: D = 2.46T - 1.11T^2, with an R-squared value of 0.51. The outer lines are 95% confidence intervals derived according to different methods. Source: Tol (2009)
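Taking the fitted quadratic in the caption at face value, the familiar headline numbers drop straight out: estimated net benefits peak just above 1˚C of warming and cross zero at roughly 2.2˚C,
\begin{equation}
D(T) = 2.46\,T - 1.11\,T^2 \quad\Rightarrow\quad D'(T) = 0 \;\text{at}\; T = \tfrac{2.46}{2 \times 1.11} \approx 1.1, \qquad D(T) = 0 \;\text{at}\; T = \tfrac{2.46}{1.11} \approx 2.2.
\end{equation}
That 2.2˚C zero crossing is presumably the source of the figure Ridley cites in the quote below.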
Now, there are various points to be made about the implications of Fig. 1. People like Matt Ridley are wont to point out that it demonstrates how climate change will bring benefits to us long before it imposes any costs. Ergo, we should do little or nothing about reducing our emissions today. Of course, there are multiple responses to this position and I tried to lay out various reasons in my previous post as to why this is a very misleading take (sunk benefits and inertia in the climate system, uncertainty and risk aversion, unequal distribution of benefits and costs, tipping points, etc).
However, I have two broader points to make here, for which Ridley will prove a useful foil. For example, here he is in The Spectator last year, arguing "Why Climate Change Is Good For The World":
To be precise, Prof Tol calculated that climate change would be beneficial up to 2.2˚C of warming from 2009[... W]hat you cannot do is deny that this is the current consensus. If you wish to accept the consensus on temperature models, then you should accept the consensus on economic benefit.

Bold emphasis mine. Now it should be pointed out that Ridley's article elicited various responses, including one by Bob Ward that uncovers some puzzling typos in Richard's paper. Ward goes on to show that in fact only two out of the 14 studies considered in Tol (2009) reveal net positive benefits accruing due to climate change, and one of these was borderline at best. Specifically, Mendelsohn et al. (2000) suggest that 2.5˚C of warming will yield a tiny net global benefit equivalent to 0.1% of GDP. (It should also be noted that they do not account for non-market impacts -- typically things like ecosystems, biodiversity, etc -- which would almost certainly pull their estimate into negative territory.) That leaves one of Richard's own papers, Tol (2002), which suggests that 1˚C of warming will yield a 2.3% gain in GDP, as the sole study showing meaningful benefits due to climate change.
This is all very well-trodden ground by now, but it underscores just how tenuous -- to put it mildly -- Matt Ridley's appeal to economic consensus is. However, we are still left with a curve that purports to show positive benefits from climate change up until around 2˚C of warming, before turning negative. So here are my two comments:
Comment #1: Outlier and functional form
Given that only one study (i.e. Tol, 2002) among the 14 surveyed in Tol (2009) shows large-ish benefits from climate change, you may be inclined to think that the initial benefits suggested by Fig. 1 hinge on this "outlier"... And you would not be wrong: individual observations will always stand to impact the overall outcome in small samples. However, I would also claim that such a result is partially an artefact of functional form. What do I mean by this? I mean that predicting positive benefits at "moderate" levels of warming is in some sense inevitable if we are trying to fit a quadratic function[*] to the limited data available in Tol (2009). This is perhaps best illustrated by re-estimating the above figure, but (i) correcting for the typos discovered by Bob Ward and (ii) excluding the outlier in question.
Fig. 2 Based on Figure 1 in Tol (2009), but corrected for typos and including an additional best-fit line that excludes the most optimistic estimate of benefits due to moderate climate change (i.e. Tol, 2002).
Remember that our modified sample includes only negative -- or neutral at best -- effects on welfare due to climate change. And yet, the new best-fit line (dark grey) suggests that we will still experience net benefits for a further 1.75˚C of warming! Thus we see how the choice of a quadratic function to fit our data virtually guarantees the appearance of initial benefits, even when the data themselves effectively exclude such an outcome.[**] You'll note that I am following Ridley's lead here in ignoring the confidence intervals. This is not a particularly sound strategy from a statistical perspective, but let's keep things simple for the sake of comparison.
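For anyone wanting to replicate this kind of exercise, a bare-bones version of the quadratic fit might look something like the sketch below. To be clear, this is my own illustration with a hypothetical input file and column names, not the code behind the figures above.

```python
# Sketch of the no-intercept quadratic fit, D = b1*T + b2*T^2, used in this post.
# Under the paper's sign convention D = b1*T - b2*T^2, the fitted coefficient on
# T^2 simply comes out negative here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per study, warming in deg C and impact in % of GDP
df = pd.read_csv("impact_estimates.csv")             # assumed columns: "temp", "impact"

X = np.column_stack([df["temp"], df["temp"] ** 2])   # no constant term, so D(0) = 0
fit_all = sm.OLS(df["impact"], X).fit()

# Fig. 2-style robustness check: drop the most optimistic estimate (Tol, 2002,
# i.e. roughly +2.3% of GDP at 1 deg C) and re-fit.
keep = ~((df["temp"] == 1.0) & (df["impact"] == 2.3))
fit_excl = sm.OLS(df.loc[keep, "impact"], X[keep.values]).fit()

print(fit_all.params)   # full-sample b1, b2
print(fit_excl.params)  # b1, b2 excluding the outlier
```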
Comment #2: New data points
As it happens, several new estimates of the economic effects of climate change have been made available since Tol (2009) was published. Richard has updated his Fig. 1 accordingly and included it in the latest IPCC WG2 report. You can find it on pg. 84 here. (Although -- surprise! -- even this is not without controversy.) However, this updated version does not include a best-fit line. That is perhaps a wise choice given the issues discussed above. Nevertheless, like me, you may still be curious to see what it looks like now that we have a few additional data points. Here I have re-plotted the data, alongside a best-fit line and 95% confidence interval.
Fig. 3 Based on Figure 10 in IPCC WG2 (2014). As before, the best-fit line is computed according to a quadratic function using OLS. This yields D = 0.01T - 0.27T^2, with an R-squared value of 0.49.
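Whoops. Looks like those initial benefits have pretty much vanished!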
So... What odds on Matt Ridley reporting the updated economic "consensus"?
UPDATE: Richard points me towards a recent working paper of his that uses non-parametric methods to fit a curve to the data. This is all well and good, and I commend his efforts in trying to overcome some of the issues discussed above... Except for one overwhelming problem: Non-parametric methods -- by their very nature -- are singularly ill-suited to small samples! Even Wikipedia manages to throw up a red flag in its opening paragraph on the topic: "Nonparametric regression requires larger sample sizes than regression based on parametric models because the data must supply the model structure as well as the model estimates." Arguably even more problematic is the fact that non-parametric estimations are particularly misleading in the tails. I simply don't see how a non-parametric approach can be expected to produce meaningful results, given that we are dealing with a rather pitiful 20-odd observations. Ultimately, it is not so much a question of parametric versus non-parametric. The real problem is a paucity of data.
UPDATE 2: An errata to Tol (2009) has finally been published. The updated figure is, of course, much the same as the one I have drawn above. [Having looked a bit closer, I see the errata includes an additional data point that isn't in the IPCC report (Nordhaus, 2013). In addition, the damage figure given for another study (Roson and van der Mensbrugghe, 2012) has changed slightly. Yay for typos!]
UPDATE 3: Ouch... and double ouch. Statistician Andrew Gelman takes Richard out to the woodshed (making many of the same points that I have here). The result isn't pretty. Make sure to read the comments thread too.
___
[*] Tol (2009) uses a simple regression equation of D = b1*T - b2*T^2 to fit the data. He finds b1 = 2.46 and b2 = 1.11, which is where the thick, central grey line in Fig. 1 comes from.
[**] For the record, I don't wish to come across as overly pedantic or critical of the choice of a quadratic damage function. Indeed, it is hard to think of another simple function that would better lend itself to describing the effect of moderate temperature increases. (Albeit not for higher levels of warming.) I am merely trying to expand on the way in which the interplay of limited data and choice of functional form can combine to give a misleading impression of the risks associated with climate change.
Friday, April 11, 2014
While I'd normally apologise for the lack of posting...
I've been somewhat preoccupied with more pressing matters...
Sunday, March 2, 2014
A lemon market for poachers
[My new post at the Recon Hub, which I'll repost in full here...]
Ashok Rao has a provocative suggestion for stopping the rampant poaching of elephant and rhino. Drawing on insights from George Akerlof’s famous paper, “A Market for Lemons“, he argues that all we need to do is create some uncertainty in the illegal ivory trade:
Policymakers and conservationists need to stop auctioning horns and burning stockpiles of ivory, they need to create this asymmetry [which causes markets to break down under Akerlof's model]. And it’s not hard. By virtue of being a black market, there isn’t a good organized body that can consistently verify the quality of ivory in general. Sure, it’s easy to access, but ultimately there’s a lot of supply chain uncertainty.
There is a cheap way to exploit this. The government, or some general body that has access to tons of ivory, should douse (or credibly commit to dousing) the tusks with some sort of deadly poison, and sell the stuff across all markets. Granting some additional complexities, the black market could not differentiate between clean and lethal ivory, and buyers would refrain from buying all ivory in fear. The market would be paralyzed.

I really like Ashok's proposal… Not least of all, because it is virtually identical to an idea that Torben and I had whilst out for a few drinks one night! (This includes the invocation of Akerlof, by the way.) The big difference being that we didn't go so far as to suggest that the ivory should be poisoned: In our minds, flooding the market with "inferior", but hard-to-detect fake product would do the trick.
To see why this might be the case, consider the economic choices of an individual poacher. Poaching is a risky activity and there is a decidedly non-negligible probability that you will be imprisoned, severely injured, or even killed as a result of your illegal actions. However, it still makes sense to take on these risks as long as the potential pay-off is high enough… And with rhino horn and ivory presently trading at record prices, that certainly happens to be the case. All that an intervention like the one proposed above needs to achieve, then, is to drive down the price of ivory to a level that would cause most rational agents to reconsider the risks of poaching. What level would this be exactly? That's impossible for me to say, but I'm willing to bet that poachers are highly price sensitive.
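To put the same logic in stylised terms (my own illustrative notation, not Ashok's), a poacher only heads out if the expected reward beats the expected cost:
\begin{equation}
\underbrace{p \cdot x}_{\text{expected ivory/horn revenue}} \;>\; \underbrace{\pi \cdot D}_{\text{expected cost of prison, injury or death}} \;+\; \underbrace{w}_{\text{forgone legal income}},
\end{equation}
where $p$ is the black-market price, $x$ the expected haul, $\pi$ the probability of being caught or hurt, $D$ the monetised cost of that outcome, and $w$ the poacher's outside option. Poison scares and undetectable fakes work entirely through $p$ on the left-hand side: push the price low enough and the inequality flips, even if enforcement ($\pi$ and $D$) never changes.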
A final comment before I close, inspired by another blog post that has also commented on Ashok's proposal. Jonathan Catalán correctly points out that one of the most valuable aspects of the original "lemons" paper is that it forces us to think carefully about why asymmetric markets don't generally collapse into the degenerate equilibrium implied by Akerlof's theory. Perhaps the best answer to that is one hinted at by Akerlof himself: institutions like money-back guarantees, brand reputation, etc. In light of this, Jonathan wonders whether the black market wouldn't simply adopt practices to weed out the counterfeit goods. My feeling, however, is that it is misleading to compare the ivory and rhino horn trade to other illegal markets in this respect. In the drug industry, for example, cartels are able to test the quality of a cocaine shipment simply by trying it themselves. Drugs have a very definite effect on us physiologically and so "quality control" (so to speak) is relatively easy to do. In comparison, we know that crushed rhino horn has no medical efficacy whatsoever… whether that is with respect to treating cancer or healing regular aches and pains. I therefore strongly suspect that it would be much harder for a buyer of powdered rhino horn to verify whether their product is the real deal or not. The placebo effect will be as strong, or weak, regardless.
PS -- Legalisation of the ivory and horn trade is another economic approach to solving the poaching problem. Proponents of this view see it as creating opportunities for a sustainable market that both incentivises breeding and undercuts poachers. I am favourably predisposed towards this particular argument, at least in the case of rhino, since they are easier to farm. However, I am not convinced that it will put an end to poaching, which will continue as long as the rents are there to be captured. There also remains the question of how demand will respond to a surge in supply, as well as issues related to a biological monoculture (i.e. rhinos will be bred solely on the basis of increasing their horn size). But those remain issues for another day.
Friday, February 14, 2014
Why are economic journal articles so long?
Seriously.
I've been reading a lot of (hard) scientific studies for my current dissertation chapter and it is striking how concise they are. The majority of published articles, at least the ones that I have come across, are typically no more than 4-6 pages in length. The pattern holds especially true for the most prestigious journals like Nature, Science or PNAS.
In contrast, your average economic article is long and getting longer. [Note: New link provided.]
Now you could argue that science journals achieve this feat by simply relegating all of the gritty technical details and discussion to the supplementary materials. And this is true. For a good example, take a look at this article by Estrada et al. (2013).[*] The paper itself is only six pages, while the supplementary material is over 40 pages long. Equally as telling is how similar this supporting information is to the working paper that the final article is apparently based upon.
To be honest, I don't really see how this can be a bad thing. It is primarily the job of the editors and referees to vouch for the technical merits and internal consistency of a published study. Regular readers are mostly interested in the broad context (i.e. significance of the research) and the actual findings. As much as it is important to make the technical details -- and data! -- available to those who want to go through them, the clutter largely detracts from the key messages. I'm also willing to bet good money that many (most?) people currently just skip through the entire mid-section of your typical economics paper anyway, concentrating on the introduction, results and conclusion.
So, is this a weird case of physics envy, where economists feel the need to compensate for lack of quality through quantity? Or does it say something special about the nature of economics, where the limited extent of true experimental data makes methodology more precarious and prone to bias?
Either way, do we really lose anything by making economic journal articles much shorter and saving all the technical details for the supplementary materials?
PS - Yes, I know that most economic journals already reserve a lot of information for the technical appendices. I'd also say that a number of the top journals (e.g. AER) are pleasantly readable -- perhaps surprisingly so for outsiders. But we're still a long way off what the sciences are doing.
UPDATE: It just occurred to me that the frequency of publication plays a reinforcing role in all of this. Nature, Science and PNAS are all issued on a weekly basis. The top economic journals, on the other hand, are typically only bi-monthly (at best) or quarterly publications. The higher volume of science publications encourages concise articles for pure reasons of practicality, as well as readability.
___
[*] The authors argue that climate data are better viewed as statistical processes that are characterised by structural breaks around a deterministic time trend... i.e. As opposed to non-stationary stochastic processes comprising one or more unit roots. (This is important because it has implications for the ways in which we should analyse climate data from a statistical time-series perspective.) In so doing, they are effectively reliving a similar debate regarding the statistical nature of macroeconomic time-series data, which was ignited by a very famous article by Pierre Perron. Perron happens to be one of the co-authors on the Estrada paper.
Sunday, January 26, 2014
Ed Byrne on wedding planning
As the man says, you wouldn't understand unless you've been there.
(Start from 2:45)
Friday, January 24, 2014
Of Vikings and Credit Rating Agencies
Yesterday's post reminded me of a story that encapsulates much of my own feelings about credit rating agencies (and, indeed, the naivete that characterised the build-up to the Great Recession).
The year was 2008 and I was working for an economics consultancy specialising in the sovereign risk of emerging markets. We primarily covered African countries, but also had a number of "peripheral" OECD countries on our books. One of these was Iceland, and I was assigned to produce a country report that would go out to our major clients.
By this time, the US subprime market had already collapsed and it was abundantly clear that Europe (among others) would not escape the contagion. With credit conditions imploding, it was equally clear that the most vulnerable sectors and countries were those with extended leverage positions.
Iceland was a case in point. The country had a healthy fiscal position, running a budget surplus with public debt of only around 30% of GDP. However, private debt was an entirely different story. Led by the aggressive expansion of its commercial banks into European markets, total Icelandic external debt was many times greater than GDP. Compounding the problem was a rapid depreciation in the Icelandic króna, which made servicing those external liabilities even more daunting. (Iceland was the world's smallest economy to operate an independently floating exchange rate at that time.) Wikipedia gives a good overview of the situation:
At the end of the second quarter 2008, Iceland's external debt was 9.553 trillion Icelandic krónur (€50 billion), more than 80% of which was held by the banking sector.[4] This value compares with Iceland's 2007 gross domestic product of 1.293 trillion krónur (€8.5 billion).[5] The assets of the three banks taken under the control of the [Icelandic Financial Services Authority] totalled 14.437 trillion krónur at the end of the second quarter 2008,[6] equal to more than 11 times of the Icelandic GDP, and hence there was no possibility for the Icelandic Central Bank to step in as a lender of last resort when they were hit by financial troubles and started to account asset losses.

It should be emphasised that everyone was aware of all of this at the time. I read briefings by all the major ratings agencies (plus reports from the OECD and IMF) describing the country's precarious external position in quite some detail. However, these briefings more or less all ended with the same absurd conclusion: Yes, the situation is very bad, but the outlook for the economy as a whole remains okay as long as Government steps in forcefully to support the commercial banks in the event of a deepening crisis.(!)
I could scarcely believe what I was reading. What could the Icelandic government possibly hope to achieve against potential liabilities that were an order of magnitude greater than the country's entire GDP? Truly, it would be like pissing against a hurricane.
Of course, we all know what happened next.
In the aftermath of the Great Recession, I've often heard people invoke the phrase -- "When the music is playing, you have to keep dancing" -- perhaps as a means of understanding why so many obvious danger signs were ignored in favour of business-as-usual. It always makes me think of those Icelandic reports when I hear that.
PS- Technically, Iceland never did default on its sovereign debt despite the banking crisis and massive recession. It was the (nationalised) banks that defaulted so spectacularly. The country has even managed a quite remarkable recovery in the scheme of things. The short reasons for this are that they received emergency bailout money from outside and, crucially, also decided to let creditors eat their losses.