
Sunday, March 2, 2014

A lemon market for poachers

[My new post at the Recon Hub, which I'll repost in full here...]

Ashok Rao has a provocative suggestion for stopping the rampant poaching of elephants and rhinos. Drawing on insights from George Akerlof’s famous paper, “The Market for Lemons”, he argues that all we need to do is create some uncertainty in the illegal ivory trade:
Policymakers and conservationists need to stop auctioning horns and burning stockpiles of ivory, they need to create this asymmetry [which causes markets to break down under Akerlof's model]. And it’s not hard. By virtue of being a black market, there isn’t a good organized body that can consistently verify the quality of ivory in general. Sure, it’s easy to access, but ultimately there’s a lot of supply chain uncertainty. 
There is a cheap way to exploit this. The government, or some general body that has access to tons of ivory, should douse (or credibly commit to dousing) the tusks with some sort of deadly poison, and sell the stuff across all markets. Granting some additional complexities, the black market could not differentiate between clean and lethal ivory, and buyers would refrain from buying all ivory in fear. The market would be paralyzed.
I really like Ashok's proposal… not least because it is virtually identical to an idea that Torben and I had whilst out for a few drinks one night! (This includes the invocation of Akerlof, by the way.) The big difference is that we didn't go so far as to suggest that the ivory should be poisoned: in our minds, flooding the market with “inferior”, but hard-to-detect, fake product would do the trick.
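To see how the "lemons" logic would play out, here is a minimal sketch of the unravelling mechanism. Every number (the buyer's valuations, the genuine sellers' reservation price) is chosen purely for illustration, not estimated from anything:

```python
# Stylised Akerlof-style unravelling applied to the ivory market.
# All parameter values are illustrative assumptions, not estimates.
GENUINE_VALUE = 100.0   # buyer's value of a unit of real ivory
FAKE_VALUE = 0.0        # buyer's value of a counterfeit unit
RESERVATION = 60.0      # minimum price genuine sellers will accept

def equilibrium(fake_share):
    """Return (price, market_active) once buyers account for fakes."""
    # Risk-neutral buyers can't tell units apart, so they will only
    # pay the expected value of a randomly drawn unit.
    price = (1 - fake_share) * GENUINE_VALUE + fake_share * FAKE_VALUE
    if price < RESERVATION:
        # Genuine sellers exit; only fakes remain and buyers pay nothing.
        return 0.0, False
    return price, True

print(equilibrium(0.2))  # modest flooding: market survives at a lower price
print(equilibrium(0.5))  # heavy flooding: the market collapses entirely
```

The point of the sketch is the discontinuity: once enough hard-to-detect fakes are in circulation, the expected value of a unit falls below what genuine suppliers will accept, and the whole market shuts down rather than merely shrinking.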

To see why this might be the case, consider the economic choices of an individual poacher. Poaching is a risky activity and there is a decidedly non-negligible probability that you will be imprisoned, severely injured, or even killed as a result of your illegal actions. However, it still makes sense to take on these risks as long as the potential pay-off is high enough… And with rhino horn and ivory presently trading at record prices, that is certainly the case. All that an intervention like the one proposed above needs to achieve is to drive down the price of ivory to a level that would cause most rational agents to reconsider the risks of poaching. What level would that be exactly? That’s impossible for me to say, but I'm willing to bet that poachers are highly price sensitive.
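The poacher's calculation can be sketched as a simple expected-payoff comparison. To be clear, every parameter here (arrest probability, monetised penalty, outside option, haul size) is hypothetical; the only point is that the decision flips at some price threshold:

```python
# Back-of-the-envelope poaching decision under risk.
# All parameter values are hypothetical, for illustration only.
P_CAUGHT = 0.3        # probability of arrest/injury per expedition
PENALTY = 20_000      # monetised cost if caught (prison, injury, ...)
ALT_INCOME = 1_000    # safe outside option per expedition

def expected_payoff(price_per_kg, kg=5):
    """Expected net pay-off from one poaching expedition."""
    return (1 - P_CAUGHT) * price_per_kg * kg - P_CAUGHT * PENALTY

def will_poach(price_per_kg):
    """Poach only if the risky expedition beats the safe alternative."""
    return expected_payoff(price_per_kg) > ALT_INCOME

print(will_poach(3000))  # high prices: the gamble is worth taking
print(will_poach(1500))  # depressed prices: the outside option wins
```

Under these made-up numbers the break-even price sits somewhere around R2,000/kg; an intervention doesn't need to drive the price to zero, only below whatever the real-world threshold happens to be.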

A final comment before I close, inspired by another blog post that has also commented on Ashok's proposal. Jonathan Catalán correctly points out that one of the most valuable aspects of the original “lemons” paper is that it forces us to think carefully about why asymmetric markets don’t generally collapse into the degenerate equilibrium implied by Akerlof's theory. Perhaps the best answer to that is one hinted at by Akerlof himself: institutions like money-back guarantees, brand reputation, etc. In light of this, Jonathan wonders whether the black market wouldn't simply adopt practices to weed out the counterfeit goods. My feeling, however, is that it is misleading to compare the ivory and rhino horn trade to other illegal markets in this respect. In the drug industry, for example, cartels are able to test the quality of a cocaine shipment simply by trying it themselves. Drugs have a very definite physiological effect on us and so “quality control” (so to speak) is relatively easy to do. In comparison, we know that crushed rhino horn has no medical efficacy whatsoever… whether that is with respect to treating cancer or healing regular aches and pains. I therefore strongly suspect that it would be much harder for a buyer of powdered rhino horn to verify whether their product is the real deal or not. The placebo effect will be as strong, or weak, regardless.

PS -- Legalisation of the ivory and horn trade is another economic approach to solving the poaching problem. Proponents of this view see it as creating opportunities for a sustainable market that both incentivises breeding and undercuts poachers. I am favourably predisposed towards this particular argument, at least in the case of rhino, since they are easier to farm. However, I am not convinced that it will put an end to poaching, which will continue as long as the rents are there to be captured. There also remains the question of how demand will respond to a surge in supply, as well as issues related to a biological monoculture (i.e. rhinos will be bred solely on the basis of increasing their horn size). But those remain issues for another day.

Friday, February 14, 2014

Why are economic journal articles so long?

Seriously.

I've been reading a lot of (hard) scientific studies for my current dissertation chapter and it is striking how concise they are. The majority of published articles, at least the ones that I have come across, are typically no more than 4-6 pages in length. The pattern holds especially true for the most prestigious journals like Nature, Science or PNAS.

In contrast, your average economic article is long and getting longer. [Note: New link provided.]

Now you could argue that science journals achieve this feat by simply relegating all of the gritty technical details and discussion to the supplementary materials. And this is true. For a good example, take a look at this article by Estrada et al. (2013).[*] The paper itself is only six pages, while the supplementary material is over 40 pages long. Equally telling is how similar this supporting information is to the working paper on which the final article is apparently based.

To be honest, I don't really see how this can be a bad thing. It is primarily the job of the editors and referees to vouch for the technical merits and internal consistency of a published study. Regular readers are mostly interested in the broad context (i.e. significance of the research) and the actual findings. As much as it is important to make the technical details -- and data! -- available to those who want to go through them, the clutter largely detracts from the key messages. I'm also willing to bet good money that many (most?) people currently just skip through the entire mid-section of your typical economics paper anyway, concentrating on the introduction, results and conclusion.

So, is this a weird case of physics envy, where economists feel the need to compensate for lack of quality through quantity? Or does it say something special about the nature of economics, where the limited extent of true experimental data makes methodology more precarious and prone to bias?

Either way, do we really lose anything by making economic journal articles much shorter and saving all the technical details for the supplementary materials?

PS - Yes, I know that most economic journals already reserve a lot of information for the technical appendices. I'd also say that a number of the top journals (e.g. AER) are pleasantly readable -- perhaps surprisingly so for outsiders. But we're still a long way off what the sciences are doing.

UPDATE: It just occurred to me that the frequency of publication plays a reinforcing role in all of this. Nature, Science and PNAS are all issued on a weekly basis. The top economic journals, on the other hand, are typically only bi-monthly (at best) or quarterly publications. The higher volume of science publications encourages concise articles for pure reasons of practicality, as well as readability.

___
[*] The authors argue that climate data are better viewed as statistical processes that are characterised by structural breaks around a deterministic time trend... i.e. as opposed to non-stationary stochastic processes comprising one or more unit roots. (This is important because it has implications for the ways in which we should analyse climate data from a statistical time-series perspective.) In so doing, they are effectively reliving a similar debate regarding the statistical nature of macroeconomic time-series data, which was ignited by a very famous article by Pierre Perron. Perron happens to be one of the co-authors on the Estrada paper.

Friday, January 24, 2014

Of Vikings and Credit Rating Agencies

Yesterday's post reminded me of a story that encapsulates much of my own feelings about credit rating agencies (and, indeed, the naivete that characterised the build-up to the Great Recession).

The year was 2008 and I was working for an economics consultancy specialising in the sovereign risk of emerging markets. We primarily covered African countries, but also had a number of "peripheral" OECD countries on our books. One of these was Iceland and I was assigned to produce a country report that would go out to our major clients.

By this time, the US subprime market had already collapsed and it was abundantly clear that Europe (among others) would not escape the contagion. With credit conditions imploding, it was equally clear that the most vulnerable sectors and countries were those with extended leverage positions.

Iceland was a case in point. The country had a healthy fiscal position, running a budget surplus and public debt only around 30% of GDP. However, private debt was an entirely different story. Led by the aggressive expansion of its commercial banks into European markets, total Icelandic external debt was many times greater than GDP. Compounding the problem was a rapid depreciation in the Icelandic króna, which made servicing external liabilities even more daunting. (Iceland was the world's smallest economy to operate an independently floating exchange rate at that time.) Wikipedia gives a good overview of the situation:
At the end of the second quarter 2008, Iceland's external debt was 9.553 trillion Icelandic krónur (€50 billion), more than 80% of which was held by the banking sector.[4] This value compares with Iceland's 2007 gross domestic product of 1.293 trillion krónur (€8.5 billion).[5] The assets of the three banks taken under the control of the [Icelandic Financial Services Authority] totalled 14.437 trillion krónur at the end of the second quarter 2008,[6] equal to more than 11 times of the Icelandic GDP, and hence there was no possibility for the Icelandic Central Bank to step in as a lender of last resort when they were hit by financial troubles and started to account asset losses.
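The headline ratios in that quote are easy to verify from the figures given (all in trillions of krónur):

```python
# Checking the ratios implied by the quoted Wikipedia figures (ISK trillions).
external_debt = 9.553   # end of Q2 2008
gdp_2007 = 1.293        # 2007 gross domestic product
bank_assets = 14.437    # assets of the three nationalised banks, Q2 2008

print(round(external_debt / gdp_2007, 1))  # external debt ≈ 7.4x GDP
print(round(bank_assets / gdp_2007, 1))    # bank assets ≈ 11.2x GDP
```

Whichever measure you prefer, the banking sector's liabilities dwarfed the resources of any conceivable lender of last resort.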
It should be emphasised that everyone was aware of all of this at the time. I read briefings by all the major ratings agencies (plus reports from the OECD and IMF) describing the country's precarious external position in quite some detail. However, these briefings more or less all ended with the same absurd conclusion: Yes, the situation is very bad, but the outlook for the economy as a whole remains okay as long as the government steps in forcefully to support the commercial banks in the event of a deepening crisis.(!)

I could scarcely believe what I was reading. What could the Icelandic government possibly hope to achieve against potential liabilities that were an order of magnitude greater than the country's entire GDP? Truly, it would be like pissing against a hurricane.

Of course, we all know what happened next.

In the aftermath of the Great Recession, I've often heard people invoke the phrase -- "When the music is playing, you have to keep dancing" -- perhaps as a means of understanding why so many obvious danger signs were ignored in favour of business-as-usual. It always makes me think of those Icelandic reports when I hear that.

PS- Technically, Iceland never did default on its sovereign debt despite the banking crisis and massive recession. It was the (nationalised) banks that defaulted so spectacularly. The country has even managed a quite remarkable recovery in the scheme of things. The short reasons for this are that they received emergency bailout money from outside and, crucially, also decided to let creditors eat their losses.

Thursday, January 23, 2014

Home bias in sovereign ratings

 [Rather irritatingly, I wrote the below post at the end of last week and had been meaning to publish it on Monday. Unfortunately, I got snowed in with work and now see that Tyler Cowen and a bunch of other people have already covered the paper in question. Still, in a bid to get some blogging activity going around these parts again, here's my two cents.]
"The Home Bias In Sovereign Ratings" 
Fuchs and Gehring conduct empirical analyses of variation in nine different credit ratings agencies around the world that offer ratings of at least 25 sovereigns[...] The paper is motivated by two good questions: (1) Do ratings agencies assign better ratings to their home countries? (2) Do they assign better ratings to countries that have close cultural, economic, or geopolitical ties to their home country? 
[...] 
Fuchs and Gehring find clear evidence of “home bias”. Specifically, their analysis finds that agencies do indeed assign higher ratings to their home country governments compared to other countries with the same characteristics. This result was especially strong during the global financial crisis (GFC) years -- nearly a two-point “bump” in ratings.
As someone who has been both a consumer and producer of sovereign rating reports prior to starting a PhD, I find this sort of thing very interesting. The role of inherent biases in the industry is cause for both bemusement and alarm. This paper by Fuchs and Gehring would at least seem to go some of the way towards explaining why, say, Fitch places the United States in its highest credit ratings category... while the China-based agency Dagong only places the US in its third highest category.

That being said, Daniel McDowell (author of the above blog post) points out that it is not especially clear how such findings actually stand to affect future ratings. For one thing, changes in sovereign ratings sometimes have zero, or even paradoxical effects, such as when the demand for US treasuries actually rose following the country's downgrade by Standard & Poor's in 2011.[*]

On the other hand, it should also be noted that if one of the other major agencies -- i.e. Fitch or Moody's -- had followed S&P's lead in downgrading the US credit score in 2011, then that probably would have had fairly major financial implications. Most obviously, a large number of investment funds have specific mandates regarding the type of securities they must hold... as determined by the average score among the big three credit ratings agencies. For example, a fund might be legally required to hold a minimum proportion of "triple-A-rated" bonds. Given how ubiquitous US treasuries are, some major portfolio rebalancing would almost certainly be required if the US lost its "average" credit rating. You may recall that this is something that a lot of people were worried about at the time. It is also one reason that the ratings agencies continue to have a practical (and potentially deleterious) relevance to financial markets.

Anyway, apologies for getting sidetracked. Interesting paper and blog post. Check them out.
___
[*] A popular explanation at the time was that the downgrade provided the shake-up that Congress needed in order to overcome the political impasse over the debt ceiling...

Wednesday, November 6, 2013

Why economists love auctions

Some background first: South Africa's power market is utterly dominated by (a) coal and (b) Eskom, the parasitical parastatal monopoly. In a bid to encourage both fuel diversification and competition, the government has determined that 3,725 MW of new capacity up until 2030 should consist of renewable sources operated by independent power producers (IPPs). This translates to roughly 10,000 GWh of actual future electricity generation.

Ignoring the fact that this is small potatoes in the scheme of things -- less than 5% of the country's current 240 TWh annual electricity consumption -- the point that I want to make here is mostly about how those IPPs are chosen.
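As a rough check of the capacity-to-energy conversion above (the ~30% blended capacity factor is my own assumption; the official figure may differ):

```python
# Sanity-checking the MW -> GWh conversion in the text.
# The 30% blended wind/solar capacity factor is an assumption on my part.
capacity_mw = 3725
hours_per_year = 8760
capacity_factor = 0.30

gwh = capacity_mw * hours_per_year * capacity_factor / 1000
print(round(gwh))                      # close to the quoted ~10,000 GWh
print(round(gwh / 240_000 * 100, 1))   # share of 240 TWh annual consumption (%)
```

At that capacity factor you get roughly 9,800 GWh a year, i.e. the quoted ~10,000 GWh, and indeed a little over 4% of current consumption.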

Having played with various schemes, authorities eventually settled on something called the Renewable Energy Independent Power Producer Procurement Programme (REIPPPP). This is effectively a competitive bidding process, whereby applicants submit a guaranteed price that they are willing to accept for electricity that they generate in the future. In other words, it looks a lot like the idealised auction market advocated in economic textbooks.

So how have things turned out? Well, the results from the third round of bidding have just come in. (The previous bidding rounds were held in 2011 and 2012, respectively.) It would appear that things are progressing rather well:
The six successful solar PV bidders, which shared an allocation of 435 MW, were particularly aggressive with their pricing. Fully indexed prices using April 2011 as the base year showed that the average solar PV price fell from R2.75/kWh in bid-window one to 88c/kWh in the third round.
[...]Similarly, the price of onshore wind fell from R1.14/kWh in round one to 89c/kWh in round two and to only 66c/kWh in the latest round. A total of 787 MW was allocated across the seven wind projects
[...]Prices for the two 100-MW-apiece CSP [concentrated solar power] projects declined from R2.68/kWh in the first window to R1.46/kWh. 
We must of course be careful not to draw too many conclusions from the above figures. For one thing, the average price that South Africans currently pay for electricity remains lower than any of the above bids, although it is scheduled to approach (and even exceed, in the case of wind) them in coming years:


Even then, just because a small portion of wind or solar energy is expected to "reach grid parity" within the next five years, doesn't mean that the game is up for fossil fuels. There are major problems with peak balancing, intermittency and load-following constraints that renewables need to overcome, which I and many others have discussed at length before.

However, a 68% drop in the bid price of solar PV since 2011 -- to say nothing of wind and CSP bids falling by roughly 45% -- is clearly impressive. Many people will see this as evidence of how quickly renewable technologies are progressing. I wouldn't dispute that, but I also see it as a vindication of the auction system that was used for determining the winning bids.
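Those percentage declines can be checked directly against the quoted bid prices (all in rand per kWh):

```python
# Verifying the quoted percentage declines in REIPPPP bid prices (R/kWh).
def pct_drop(first, latest):
    """Percentage fall from the first bid window to the latest, rounded."""
    return round((first - latest) / first * 100)

print(pct_drop(2.75, 0.88))  # solar PV: round one -> round three
print(pct_drop(1.14, 0.66))  # onshore wind: round one -> round three
print(pct_drop(2.68, 1.46))  # CSP: first window -> latest
```

Solar PV comes out at exactly 68%, wind at about 42% and CSP at about 46%, consistent with the figures cited in the text.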

It is something that South Africa's electricity sector could use a lot more of.

Monday, October 21, 2013

Joe Romm's cognitive dissonance on renewables, nuclear and shale gas

I used to be an avid reader of Joe Romm's "Climate Progress" blog. However, my enthusiasm has waned dramatically over the years due to his selective presentation of facts and data, stark intolerance for any opposing ideas and dogmatic stance on nuclear power. (On the plus side, his blog remains an excellent repository for climate news and he can be great fun when mocking the likes of Christopher Monckton.)

Probably the biggest problem that I have with Romm, however, is that he appears to suffer from acute cognitive dissonance. For example, the overriding theme of his blog is one of impending climate doom, yet he regularly proclaims that renewables are already at grid parity, getting cheaper by the second and ready for mass deployment. So, problem solved surely? Frustratingly, this is a recurrent theme on many green blogs, where Cassandra complexes are hard to square with wildly overstated -- or misleading at best -- claims about current renewable energy performance.

Such cognitive dissonance is again on display in one of Romm's recent posts, entitled "Major Study Projects No Major Long-Term Benefit From Shale Gas Revolution". The study in question is by Huntington et al. (2013) and contains projections from a broad suite of integrated climate models. In addition to GHG emissions, the researchers looked at the wider economic impacts of shale gas and their conclusions are rather more nuanced than Romm's excitable headline would suggest. In short, the final projections depend on a complex set of model assumptions and variable interactions. This is evident from the following paragraph that Romm actually cites from the study (emphasis his):
…this trend towards reducing emissions becomes less pronounced as natural gas begins to displace nuclear and renewable energy that would have been used otherwise in new power plants under reference case conditions. Another contributor to the modest emissions impact is the somewhat higher economic growth that stimulates more emissions. Reinforcing this trend is the greater fuel and power consumption resulting from lower natural gas and electricity prices.
Does anyone else see the irony here? Romm is lauding a study which questions the climate credentials of shale gas... and yet that largely depends on whether cheap gas displaces nuclear power -- a technology that he maligns at every opportunity.

More importantly, to say that shale gas confers no long-term climate benefits (in and of itself) is extremely misleading. It all depends on whether it is complemented by a carbon price, as anyone interested in this debate (at least anyone that I am aware of) readily acknowledges. You get a sense of this from the very figure that Joe Romm chooses to include in his blog post:

Comparison of low shale scenario (light blue), high shale scenario (dark blue), and a scenario depicting a reference case combined with a carbon price (green). This reference case is in between the low and high shale scenarios, while the carbon price starts at $25/tonne in 2013 and increases at 5% each year. Source: Huntington et al. (2013).

The dramatic reduction in emissions due to a carbon price is clearly evident. However, the above figure is still not really comparing apples with apples, since the carbon price is not adapted to the high shale scenario. (It is applied to a reference scenario that is somewhere in between the high and low shale cases.) Luckily, the data that would allow us to make the correct comparison is available here. I have therefore reconstructed the above graph, this time adding a new column that specifically combines the high shale scenario with a carbon price.
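For reference, the carbon price path assumed in the study ($25/tonne in 2013, rising at 5% per year) compounds as follows:

```python
# The carbon price path described in the figure caption:
# $25/tonne in 2013, growing at 5% per year.
def carbon_price(year, base=25.0, base_year=2013, growth=0.05):
    """Assumed carbon price in a given year, in dollars per tonne."""
    return base * (1 + growth) ** (year - base_year)

for year in (2013, 2030, 2050):
    print(year, round(carbon_price(year), 2))
```

By mid-century the assumed price has risen to roughly $150/tonne, which is worth keeping in mind when interpreting the emission reductions in the reconstructed graph.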

Based on Figure 13 of Huntington et al. (2013). The figure now includes a fourth column (purple) where a high shale scenario is combined with a carbon price.

This updated graph makes perfectly clear that the shale revolution can be fully compatible with deep long-term emission reductions, as long as it is complemented by a carbon price. To his credit, Romm does mention this briefly in the article and has also commented on the issue previously. Yet, by continuing to disparage shale gas and pretend that its supporters ignore the need for a carbon price, he simply serves to further polarise the climate debate.

THOUGHT FOR THE DAY: Adapting to the threat of climate change will require a broad suite of interventions. Nobody should claim that the proliferation of shale gas is a sufficient development for de-carbonising the global economy. However, together with a carbon price and other technological breakthroughs, it will likely form a very necessary component.

PS - It probably goes without saying that the economy also benefits from cheap and abundant shale. Huntington et al. state as much in their report (p. 7):
Higher shale resources reduce the costs of natural gas development and expand opportunities throughout the economy. Relative to its path in the low-shale case, [real GDP] is higher in all models that track the economy’s aggregate output. The cumulative aggregation of these GDP gains over all years is significant standing at $1.1 trillion (2010 dollars).
Showing this in graphical form is a little trickier, since some of the models actually take economic growth as an exogenous assumption, or don't extend all the way until 2050. Nonetheless, here is a graph showing a selection of models that compare changes in real GDP up until 2035.

Tuesday, October 15, 2013

Obligatory comment on the 2013 Nobelists

Seeing as it is very du jour to comment on this sort of thing in the econ blogosphere, here is a quick personal take:

I know that this year's laureates have raised eyebrows -- not least of all because people think that Fama and Shiller are at complete odds with one another. This doesn't strike me as especially correct. (Hansen is really the odd one out in this triumvirate, but we'll get to him in a second.) For starters, and as pointed out many times over the last two days, Fama was one of the first people to publish results that ran counter to EMH predictions. Mark Thoma is exactly right in pointing out that the EMH remains a really useful benchmark/framework for thinking about markets in an empirical sense. I've used it a fair bit when looking at energy and commodity markets for my own research and also when asked to advise/comment on market trends.

Shiller has played less of a formal role for me personally, though his housing index and his "dividend returns" data have been extremely handy tools in the blogosphere. The former is better known, but the latter is especially useful when, say, debating your average goldbug. (E.g. When dividends are taken into account, U.S. stocks have enjoyed inflation-adjusted returns of +/-1,000% since 1974. Gold, on the other hand, has yielded a rather more modest 130% over the same time period...)
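For what it's worth, those cumulative figures translate into annualised real returns as follows (taking 1974-2013 as roughly 39 years; the +/-1,000% and 130% figures are as quoted above):

```python
# Converting cumulative real returns into annualised rates, 1974-2013.
# The cumulative figures are those quoted in the post; 39 years is approximate.
def annualised(cumulative_pct, years=39):
    """Annualised real return (%) implied by a cumulative return (%)."""
    growth = 1 + cumulative_pct / 100
    return (growth ** (1 / years) - 1) * 100

print(round(annualised(1000), 1))  # US stocks with dividends, real %/yr
print(round(annualised(130), 1))   # gold, real %/yr
```

That works out to roughly 6% a year in real terms for stocks versus a little over 2% for gold, which puts the goldbug comparison in fairly stark perspective.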

The 2013 Nobelist who has had the most relevance for me, however, is Lars Peter Hansen. I suspect that this is true for many people working in economic research today, simply because the tools that he bequeathed us are so widely used in modern empirical work. Alex Tabarrok has one of the best "layman" explanations of GMM that I've seen here. Guan Yang has a more wonkish (but still accessible to anyone who is familiar with basic econometrics) exposition here.
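For readers who want a feel for the mechanics, here is a toy GMM estimation in the spirit of those explanations: two moment conditions of an exponential distribution pin down a single parameter (an over-identified system), with an identity weighting matrix and a crude grid search standing in for Hansen's full apparatus. This is a stylised sketch under those simplifications, not his general method:

```python
import random

# Toy GMM: estimate the mean mu of an exponential distribution using
# two moment conditions, E[x] = mu and E[x^2] = 2*mu^2 (over-identified:
# two conditions, one parameter). Identity weighting matrix; no optimal
# two-step weighting, no asymptotic theory.
random.seed(42)
data = [random.expovariate(1 / 3.0) for _ in range(10_000)]  # true mu = 3

m1 = sum(data) / len(data)                  # sample first moment
m2 = sum(x * x for x in data) / len(data)   # sample second moment

def gmm_objective(mu):
    """Quadratic form g(mu)' W g(mu) with W = identity."""
    g = (m1 - mu, m2 - 2 * mu * mu)
    return g[0] ** 2 + g[1] ** 2

# A crude grid search over [0.01, 10] stands in for a proper optimiser.
mu_hat = min((mu / 100 for mu in range(1, 1001)), key=gmm_objective)
print(round(mu_hat, 2))  # should land close to the true value of 3
```

The appeal of GMM in practice is precisely that you only need moment conditions like these, not the full likelihood, which is why the tools show up so widely in empirical work.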

Predictable Nobel Prizes in the Economic Sciences?

The subject line is taken from an email sent around my department by one of the finance profs. Here's the email itself:
A curiosity: In the very first edition of their textbook Financial Theory and Corporate Policy (Addison-Wesley, 1979), the authors Thomas E. Copeland and J. Fred Weston dedicated the book to 15 named “pioneers in the development of the modern theory of finance”. Out of these 15 pioneers, eight have since been awarded the Nobel Prize (viz. Debreu (‘81), Modigliani (‘85), Miller (‘90), Markowitz (‘90), Sharpe (‘90), Merton (‘97), Scholes (‘97), and Fama (‘13)), one was already a Nobel laureate (Arrow (‘72)), and three are dead and thus not eligible (Lintner, Black, Hirshleifer). So what about the chances of the three remaining finance pioneers Michael Jensen, Richard Roll, and Stephen Ross? By the way, this year’s laureates Hansen, Shiller, and Fama (as well as the previous laureates Engle, Lucas, Arrow, and Samuelson) are all among the twelve elected Fellows of the American Finance Association, recognized as having made a distinguished contribution to the field of finance.
So I guess it's even money on Jensen, Roll and Ross then...

PS - Here's the evidence.

Wednesday, September 4, 2013

The more you read about ABCT... the more you read about ABCT

Chris responded last week to my previous post on the empirical (ir?)relevance of ABCT. I've been too busy to reply properly until now. (To be honest, I think that my original points remain intact.) I should also say that neither of us can afford to keep this dialogue going on for much longer. Still, here are some excerpts from his latest post, followed by my comments.

First, on the challenge of trying to distinguish between business processes that are fundamentally short-term in nature versus those of the longer-term:
My dad’s business, for instance, does multiple short-term contracting projects within long-term property development projects. In the normal production structure distribution his irrigation installations would be classified near the consumer level as it sits very close to final consumption, but he prices projects at the outset of long-term investment projects when the developer begins to plan and commence his project. My dad’s business therefore adjusts prices early in the business cycle at the same time that projects more remote of the consumer do, and will continue to price for projects throughout the period of the long-term project.
Unlike Chris' initial post, where he was bemoaning the use of statistical indices, I regard this as a more interesting observation. Yes, it is true that firms with short-term production horizons will in some sense be dependent on the activity of (other) firms with longer-term production horizons. However, I still don't regard this as a decisive barrier to an empirical investigation into ABCT.[*] First note, however, that Chris' objection could be seen as a theoretical critique of ABCT as much as an empirical one. For if his remarks hold true, then it is extremely difficult even in principle to distinguish the way in which, say, products closer to the end consumer are made less attractive by a fall in interest rates. The mechanics of the classic (naive?) Hayekian triangle begin to unravel, since the underlying distortions -- the switch into capital goods at the expense of consumption goods during an initial period of credit expansion -- may not even occur in a qualitative sense. Indeed, if processes all along the chain of production benefit from credit expansion then we are closer to a theory of economic growth than of business cycles.

Nevertheless, what really matters in this case is the change in relative prices. If you buy the insights provided by ABCT, then it seems extremely implausible that conditions inherently favourable to long-term production processes could benefit short-term processes to a near (or even greater) extent, merely through the creation of auxiliary demand. This is particularly true if the economy is operating at anywhere near full capacity, which is what Hayek and Mises typically emphasise as the starting point of their analysis... i.e. any increase in capital goods production must increasingly come at the expense of consumer goods.[**] The focus of Lester and Wolff (2013) was the changing nature of such relative prices. It therefore seems a perfectly valid approach from my perspective and, moreover, the failure of the data to conform to the theory's broad predictions, or show signs of economic/statistical significance, is indeed cause for scepticism of ABCT's relevance. A final point on this matter is that L&W trace the evolution of these relative prices over time, which further accounts for the dynamic shifts between sequential processes in the economy.

Chris also made a few other remarks that I thought were worthy of comment, so here are some brief(ish) observations on other parts of his post:
Of course we have only had around 5/6 business cycles since 1972, which to my mind can’t produce any statistically significant results either.
Okay, and how many monetary policy interventions have we had in that time? Again, I would think that this reflects rather poorly on a theory that places central bank interventions at the (inevitable) heart of all swings in the business cycle.
ABCT does not claim to be a theory that can explain all observed economic phenomena, which is what Grant thinks it claims to do.
Strawman. I have been very clear -- directly following the paper by L&W -- that this was entirely a question of how relevant ABCT is for explaining observed business cycles in the macroeconomy. Nothing more, nothing less. (Although, one wonders about the usefulness of a theory on business cycles if it seemingly fails to achieve that primary goal.)

On the subject of cycles, here is a beautiful example of circularity:
Let me emphasize that the relevance of the Austrian theory can only increase the more one engages and learns about[...] Austrian theory.
I love this sentence and have re-worked the title of my post in its honour.

On theory versus data:
So to Grant’s point, it is more than just a tendency of Austrians to dismiss empirical ‘evidence’ that runs counter to ABCT and related concepts, because their theories are not built on empirical data but on rigorous logical deduction.
Firstly, I challenge anyone to show me that ABCT follows solely and directly from the action axiom alone. The list of subsidiary axioms and assumptions becomes enormous once we reach the full scope of the theory. This idea of an immaculately conceived business cycle theory, of pure logical cogency and free of any auxiliary pillars is, to be frank, so fanciful that not even the most zealous praxeologist could believe it. More importantly, the "choice" between theory and empirics is a false dichotomy. The above paragraph betrays a misunderstanding of how theory in mainstream economics (or elsewhere) is developed and exactly why it is mutually reinforcing to empirical observation. All economic theory is essentially deductive in nature. You start with some primary axioms or propositions and work through to the implications and consequences. Yet, how do we arbitrate between competing theories or measure their importance? Well, the same way that we do for any scientific field; we test them using data from the real world. Rejection of empirical scrutiny, validation and testing means that we are no longer debating economics or any kind of science for that matter. We are now in the realm of religion.

Chris ends his post in decidedly Churchillian mode:
But Grant should know, in our professions as economists and in the practice of economic forecasting, we are continuously, nay, every week, refining and enhancing our forecasting methods and theories based on what’s available and recent experience. Economic theory and economic forecasting are, of course, very different things.
Typing up that final paragraph must have been difficult whilst holding a bowler hat over his breast and staring defiantly into the distance. Just kidding, bud. I agree with the sentiments here. I ask only that theory shape our forecasting efforts and that we avail ourselves of the opportunity to reconsider these theories when the facts do not match the predictions.

___
[*] As a technical point, there is also some confusion about data classification in the above paragraph. The PPI stage-of-process data is classified by commodities, not firms. Chris' dad's business -- hi Len! -- could therefore have goods classified in various stage-of-process categories, depending on where and who the end consumer was.

[**] This is analogous to an argument made by Tyler Cowen on the co-movement of investment and consumption over the business cycle. See pp. 8-9 of Daniel Kuehn's paper on the Hayekian version of ABCT, which I also mentioned in my previous post.

Tuesday, August 20, 2013

Empirical evidence and the relevance of ABCT

A new study by Lester and Wolff (2013), hereafter L&W, is set to cause a bit of a stir in Austrian circles. [HT: Daniel Kuehn]

The paper, which was published in the Review of Austrian Economics no less, finds that Austrian Business Cycle Theory (ABCT) is not particularly relevant from an empirical standpoint. In short: The unique predictions made by ABCT, concerning the relative price and output changes of goods in different stages of production, are not borne out by the data. It is therefore very difficult to argue that such dynamics are driving the business cycle of the macroeconomy.

I've read through the paper and think that it is a very thorough and technically sound piece of analysis. More importantly, it fills a gap in the literature by using good data to ask the right questions. The conclusion closely matches my own view on ABCT, which is that it constitutes an internally consistent framework for the most part, yet has limited relevance as an overarching macro theory. (That said, L&W also acknowledge that Austrians emphasise a number of concepts, from the coordinating role of market prices and the inter-temporal allocation of resources, that are very valuable to broader economics. Mainstream macro is certainly richer for incorporating these insights.)

Arguing with Austrian-types is something of a side hobby for yours truly and I sent a copy of the paper to Chris Becker, my friend since school days and staunch proponent of all things ABCT. He has written a thoughtful blog post on what he sees are the "flaws and shortcomings" of the study. However, I am not persuaded by his arguments.

Chris starts out by calling into question the various data and metrics used by L&W. For instance, he says that PPI is "only a proxy" for actual economic activity and market prices as "no statistical measure is 100% accurate". Wait a minute, that is simply a tautology. Statistics are by definition imperfect representations of the true state of nature based on probabilistic laws and frequency distributions. To claim that this invalidates their use in scientific research is to a) betray a misunderstanding of how statistics actually works and b) discard a great majority of scientific discoveries and technological advancements since before even the Enlightenment. All that really matters in this case is that these indexes constitute accurate representations of the underlying variables and populations that they refer to. I see no reason to think that they are biased in a manner that systematically renders them uninformative (or misleading) -- particularly if the proposed dynamics were truly the main drivers of large swings in economic activity. I should also say that Chris' objections here would strike me as more convincing if I didn't see Austrians constantly referring to PPI, money supply data, etc. in support of their own arguments.

Next, Chris walks through the various monetary policy variables used in the study and what he perceives as their shortcomings. For the record, L&W use the Federal Funds Rate (FFR) as their main monetary policy variable, while a number of other metrics (M0, M1, M2, etc.) are utilized for robustness checks. In each case, these various monetary policy variables return the same broad set of results that ultimately fail to find vindication for ABCT. (That's the point of running robustness checks after all; they should produce results that are consistent with each other.) Chris does seem to agree with L&W in regarding the FFR as the most appropriate variable to proxy for changes in monetary policy. He even writes: "It is instructive that distortions of the FFR provide the most significant response in favour of ABCT, as it is the divergence between this interest rate and the natural rate of interest that sets in motion the business cycle, according to the Mises-Hayek theory." Except it isn't really instructive at all, because even if some of the coefficients have the same sign as predicted by the theory, they are almost uniformly insignificant from an economic and statistical perspective! L&W are very clear about this and make the point several times throughout their paper. For example (and with emphasis added):
It is critical to note that the results lack statistical significance. In each IRF [Impulse Response Function], the 80 % confidence interval bands suggest that none of the four IRFs demonstrate impact or dynamic responses which differ significantly from zero for more than a few months. This point is particularly relevant when ABCT would otherwise rely on the large shifts in capital to drive business cycle dynamics
Once again, we are trying to discern whether ABCT is a plausible candidate for explaining the business cycle at large. According to this evidence, that doesn't appear to be the case.

Chris continues his discussion on monetary policy variables by describing ways in which they may or may not be directly relevant to ABCT, and how the theory can ostensibly accommodate findings that run counter to predictions made by the standard ABCT model. I won't go too deeply into these issues except to say that I think he runs dangerously close to describing ABCT in pseudoscientific terms. As Popper correctly pointed out many years ago, a theory which claims its strength is to account for any possible outcome is no real scientific theory at all. On the flipside, to say that there are other factors that mitigate how the dynamics of ABCT play out in the economy, is tantamount to admitting that it has limited relevance for explaining observed economic phenomena! (Further, given that the dataset runs from 1972 to 2011 and the empirical analysis tracks variables over a 60-month period following a policy shock, I personally don't think that appealing to "credit injection points" and "historical contingencies" holds much water.)

The post ends with a helpful (ahem) reading list. I have taken the liberty of noting down the respective page numbers for each of the books that Chris recommends: "To really understand ABCT, one should read Ludwig von Mises’ “Human Action” [924 pages], Friedrich von Hayek’s “Prices and Production” [594 pages], Murray Rothbard’s “Man, Economy, and State” [1,441 pages], and Jesus Huerta de Soto’s “Money, Bank Credit, and Economic Cycles” [777 pages]". Now, I know that people like to make fun of some Austrians for inevitably referring them to incredibly lengthy treatises during internet debates (often in lieu of making actual arguments). I don't usually think of my friend as falling into that category, but come on... 3,736 pages! If that's what it takes to truly understand ABCT, then I sincerely doubt that anyone has a coherent grip on it.

Allow me to conclude by making two general observations:

1) I may be wrong, but I can't quite escape the feeling that econometrics is seen by many as the preserve of academics and government. That couldn't be further from the truth. Econometrics and statistical analysis have been fundamental to virtually every private company and industry that I have ever worked in, worked with, or am aware of... from the energy sector to finance to consulting to media. If empirical methods were truly misleading, then surely the evolutionary dynamics of the market would have brought about their demise long ago?

2) As with any scientific field or theory, no single study -- no matter how well done -- is enough to invalidate an entire research programme. Similarly, I am hardly claiming that econometrics and empirical studies are infallible. (In addition to discussing the vexing problems of identification many times on this blog, I have also argued that theory and data are mutually reinforcing so that one acts as a check on the other.) However, I do wonder what evidence would be sufficient for Austrians to reconsider their theories. I detect a remarkable tendency to dismiss any empirical evidence that runs counter to ABCT and its related concepts. It should be said that all major schools of economic thought have had to face up to the challenges presented by the data... And are better for it. Keynesians made significant adjustments to their theories in the face of 1970s stagflation, as well as the intellectual challenges of the Lucas Critique and microfoundations movement. For their part, recent events have forced Monetarists to confront the limitations of Friedman's money supply growth rule and the potential ineffectiveness of monetary policy at the zero lower bound. Theory cannot advance if it is impervious to data.

___
PS - An ungated version of the L&W paper can be downloaded here.
PPS - Those interested in this subject should also read Daniel's excellent overview of Hayek's version of ABCT, which I believe is forthcoming in Critical Review.

Wednesday, July 31, 2013

Inflation-targeting, CPI measures and the poor

In a bid to get back to regular blogging, below is a comment that I've just left under a Daily Maverick column by Paul Berkowitz, entitled "Who benefits from inflation-targeting?" It includes the following provocative graph on the relative inflation levels faced by different consumer groups in South Africa.


___

There are a number of legitimate reasons to critique an inflation-targeting regime. Further, the relatively high inflation burden felt by poorer segments of society certainly merits attention in and of itself. That said, there are some persistent misconceptions among the public about inflation-targeting: What are its goals and what are the (theoretical) mechanisms by which these are achieved -- particularly w.r.t. a central bank's chosen CPI measure? My gripe with this article is that it may perpetuate such misconceptions by failing to acknowledge an important principle of inflation-targeting.

Rather than simply being an end in and of itself, an inflation target is also seen as a means of stabilising the output gap (and ultimately smoothing the business cycle). Theoretical justification for this so-called "divine coincidence" -- which implies no trade-off between the twin goals of stabilising inflation and maintaining optimal economic output -- is derived from the standard new Keynesian DSGE models favoured by a large segment of macroeconomists. See Blanchard and Gali (2005), or as Blanchard has written elsewhere:
This is a really important result. It implies that central banks should indeed focus just on inflation, and can sleep well at night. If they succeed in stabilizing inflation, they will automatically generate the optimal level of activity. (p. 3)
The upshot is that, if we are going to criticize the representativeness of headline CPI for all income groups, then we should at least acknowledge its (intended) wider role in stabilising the output gap. (The CPI basket is chosen, after all, because it corresponds to average purchases within the economy as a whole.) You can certainly argue about the conditions under which "the divine coincidence" is satisfied, as well as the theoretical underpinnings of the DSGE models. However, closing the output gap is probably very much in the poor's interest as well.
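To make the mechanism explicit -- this is the textbook reduced form, not Blanchard and Gali's full derivation -- the baseline new Keynesian Phillips curve ties inflation directly to the output gap:

```latex
% Baseline new Keynesian Phillips curve:
%   \pi_t = inflation, x_t = output gap, u_t = cost-push shock
\pi_t = \beta\, \mathbb{E}_t[\pi_{t+1}] + \kappa\, x_t + u_t
% With u_t = 0, holding \pi_t = \mathbb{E}_t[\pi_{t+1}] = 0 forces
% \kappa x_t = 0: the output gap closes automatically, which is the
% "divine coincidence". A non-zero cost-push term u_t breaks the
% result and reintroduces a trade-off between the two goals.
```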

On a final and related note, there are severe drawbacks to the SARB targeting a predominantly commodities-based basket that would more closely accord with the average purchases of low-income families. Most obviously, the SARB is more or less powerless to control the inherent volatility of commodity groups, not to mention the potential for causing very counterproductive amplifications in the consumption cycle. (See here, esp. end of point 4.)

Sunday, June 2, 2013

Are there any four-minute miles in economics?

3:59.4

Roger Bannister's four laps of the Iffley Road Track on 6 May 1954 have been immortalised in the annals of sporting lore and human achievement. By becoming the first man to run a sub-four minute mile, he had broken the "impossible" barrier and so made clear the importance of mind over matter. Athletes from all over the world would soon replicate Bannister's feat now that he had liberated them from their mental shackles...

Except... no. The problem with this romantic narrative is that it has been hopelessly embellished. The idea of a four-minute "barrier" was almost entirely the invention of the media, which fanned the idea to sell papers as runners increasingly closed in on the mark. Wikipedia (indulge me) puts it quite nicely:
The claim that a 4-minute mile was once thought to be impossible by informed observers was and is a widely propagated myth created by sportswriters and debunked by Bannister himself in his memoir, The Four Minute Mile (1955). The reason the myth took hold was that four minutes was a nice round number which was slightly better (1.4 seconds) than the world record for nine years, longer than it probably otherwise would have been because of the effect of World War II in interrupting athletic progress in the combatant countries.
I was reminded of this yesterday, as my Twitter and Facebook feeds were flooded by excitable and angry complaints about the Dollar-Rand exchange rate breaching the symbolic threshold of 1:10.

Source: Bloomberg

Now, to be sure, the Rand is at its weakest level for several years following a number of social upheavals, government scandals and political infighting, questionable economic policy, and wider trends in emerging markets. (Here, here and here for more context.) I should also say that I am not endorsing a "weak Rand" strategy here in any shape or form. I am, however, interested in the question of whether a 1:10 exchange ratio is significant in and of itself.

Put differently, do we have reason to believe that the Rand's rate of depreciation will accelerate further as a result of having passed this threshold? I must confess that I don't see it. That's not to say that further depreciation can't happen, but rather: a) That would be the result of existing economic fundamentals rather than surpassing some magic metric mark, b) A full-blown currency crisis seems very unlikely from my perspective. (If nothing else, the South African Reserve Bank is on record as saying that they will tighten policy in the event of further weakening, although that remains very open to interpretation.)

Moving beyond the case of the USD-ZAR exchange rate, the notion of thresholds pervades much of economics and finance... Or, at least, it pervades talk about economics and finance. Consider, for example, some of the headlines from recent weeks concerning the fall of gold prices to below $1,500 and then $1,400 per ounce... or the brouhaha surrounding that Reinhart-Rogoff paper, with its fabled "90 percent" cut-off for debt-to-GDP ratios and supposedly dire consequences for economic growth.

Some of this -- let's call it -- threshold affinity in economics and finance could be justified by underlying factors, such as physical laws, regulatory limits, etc. However, most of it is probably just good copy for selling financial news. At worst, it may even be self-referential nonsense designed to confuse lay investors and the general public. Here are two stylised explanations for why "round number" thresholds shouldn't matter in and of themselves:
  1. Valuations should ultimately be set according to economic fundamentals. These would not be much different for a stock or trade that is valued at, say, R9.90 versus R10.10.
  2. An alternative reason is that traders don't target levels per se. Rather, they target the levels implied by momentum and trend lines (with predefined margins of safety), or algorithmic strategies (which are similar in principle). There's no a priori reason to think that these implied levels will accord with nice round numbers.
Having said that, market psychology can obviously work very differently to the cool, rational calculations implied by standard theory. "Round numbers" will become important, as long as enough people believe them to be important. More precisely, symbolic levels will gain significance if I believe that other people regard them as being significant. (Ye old beauty contest story.) It should also be said that even standard theory does not suppose that change should evolve in a linear fashion...

Let me end this post by saying that I haven't bothered with any kind of literature research; I'd be interested in hearing about studies investigating this type of phenomenon. Alternatively, if not much has been done and someone is interested in looking at it further... drop me a line. Two possibilities for checking the existence of "four-minute mile" numbers are that they should act as focal points or thresholds. For the former, we would expect data to bunch around particular levels from both above and below. For the latter, we would expect a discontinuity in the rate of change for a particular stock or currency valuation (i.e. once a threshold is breached). Several ways of testing this empirically immediately spring to mind.
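To make the focal-point idea concrete, here is a rough sketch of a bunching check -- toy data and an illustrative function of my own naming, not an established test statistic. The idea is simply to compare the mass of observations in a narrow band around a round number with what local uniformity would predict:

```python
import numpy as np

def bunching_ratio(prices, level, window=0.05):
    """Share of observations within +/- window of a round-number level,
    relative to the share expected if prices were locally uniform.
    A ratio well above 1 suggests bunching (a focal point) at the level."""
    near = np.abs(prices - level) <= window * level      # narrow band
    wide = np.abs(prices - level) <= 5 * window * level  # reference band
    if wide.sum() == 0:
        return np.nan
    # Under local uniformity, the narrow band holds 1/5 of the wide band.
    return (near.sum() / wide.sum()) / (1 / 5)

# Toy example: uniform prices, plus extra mass "parked" at the round number 10
rng = np.random.default_rng(0)
uniform_prices = rng.uniform(5, 15, 10_000)
bunched_prices = np.concatenate([uniform_prices, np.full(1_000, 10.0)])

print(round(bunching_ratio(uniform_prices, 10.0), 2))  # close to 1
print(round(bunching_ratio(bunched_prices, 10.0), 2))  # well above 1
```

The threshold (discontinuity) version would instead compare rates of change just before and after a level is breached, in the spirit of a regression discontinuity design.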

Thursday, April 11, 2013

Monetary regimes and economic outcomes

Eric Rauchway has a post over at Crooked Timber that's generating a fair bit of interest, since it compares real economic growth and inflation for the "G7" countries over various monetary regimes.

His data is taken from a 1993 paper by Michael Bordo and consequently doesn't cover information from the last two decades. ("I made you some charts. Because I love you that much. (But not enough to extend the floating exchange rate regime data down to the present; that’s actual work.)")

Well, that sounds like a challenge. And if anyone is fit to do mind-numbing compilation of data[*], that would be your typical economic graduate student...

Behold: I give you Rauchway's charts brought forward to the present day!

Fig. 1

Fig. 2

Compared to Rauchway's charts, the updated versions bring both good and bad changes from the perspective of floating (fiat) currency proponents. On the positive side, inflation has come down quite a bit. On the negative side, so has real GDP growth -- although to a lesser extent. You can better see this by looking at the next two charts, which compare the 1974-1989 (i.e. as in Bordo's paper) and 1990-2011/12 periods of the post-Bretton Woods era.

Fig. 3

Fig. 4

Of course, this is more or less as one would have guessed. We know that late '70s and early '80s were a period of high inflation -- with various shocks and loose monetary policy to blame. On the GDP side, it's interesting to note that Japan appears to be the primary driver of slower growth in the latter part of the post-Bretton Woods era (Fig. 3). Given that its stalling economy is probably suffering from a lack of monetary accommodation to drag it out of liquidity trap conditions -- Note: recent events may provide the decisive policy experiment to prove whether this is the case or not -- it's far from obvious to me that the strictures imposed by the alternative monetary regimes would have yielded better outcomes. (I've had my say at various times on this blog as to why I think returning to a gold standard is a rotten idea, so I won't go into that now.)

And, on that note, I should say that I fully agree with the various commentators in the Crooked Timber thread, who have been pointing out that these charts don't nearly suffice i.t.o. counterfactuals, etc., etc. Still, these eyeball comparisons remain an intriguing bit of blogosphere fun.
___
[*] It actually wasn't that much work thanks to our friendly FRED friends.

Wednesday, February 13, 2013

More on inflation, violence and identification

Chris has responded to my previous post, which he frames as a criticism of his research. I should state upfront that this does not strike me as entirely accurate, since I emphasized at various points that my concerns lay in the possible journalistic interpretation of his work. Some email correspondence between the two of us suggests that I am not alone in expressing such trepidations, but I digress. On then, to Chris’s response…

1) He begins by taking issue with my decision to focus on food prices, politely suggesting that I “may have missed” the fact that his non-discretionary index of living costs includes various other components (including rent, electricity, water, etc).

As it happens, I don’t think that I missed this at all. My reasons for focusing on food prices are quite simple. First, they provide relevant context to the real effects that I highlight in my post, i.e. agricultural shocks stemming from massive drought. This was done deliberately with the aim of illustrating the overriding message of my post: Attributing causation to any particular event is often very difficult, and we certainly have to bear real effects in the front of our minds when discussing the sources of inflation. (To reiterate, this is something that the Business Day article failed to do entirely.) Second, food prices provide an obvious segue to the other article that I discuss in my post, which concerns the role that monetary expansion had in driving up food prices and thus precipitating the Arab Spring. Such matters notwithstanding, however, I did happen to include the following passage in my original post:
To be clear, South Africans have also experienced sharp increases in the cost of amenities like electricity and water provision due to some boneheaded policy decisions and as a legacy of inefficient parastatal monopolies.
Chris may have missed that, though. (wink)

2) His second objection is that I am unfairly interpreting his research as a suggestion that food hikes are the only cause of violence. He quotes his references to “political grievances” as evidence that I haven’t read the article properly.

Again, however, this seems to be a misunderstanding of what I have written and the major point of my post. In the passage that he quotes, I'm not concerned with alternative causes of violence, but rather the underlying drivers of one particular cause, i.e. inflation. At the risk of repeating myself: To the extent that inflation does act as a trigger for social unrest and violence – and irrespective of whether that occurs alongside other factors such as political grievances or not – we need to understand what the underlying forces behind that inflation are. Any analysis that focuses only on the nominal effects of (quote unquote) “delinquent” monetary policy is simply misleading. Why? Well, because there may be very significant real price drivers occurring at the same time! This is something that the Business Day article completely failed to mention, and the same is true for The Telegraph article that I quoted in the second half of my post. I see nothing wrong with taking exception to such slipshod analysis.

3) Next issue: On my suggestion that one might baulk at the definitive description of this research as “proof” of the relationship between inflation and violence...  Well, I don’t have much to add here, since – again – this is a criticism of how the journalist chose to frame his article. “Proof” is simply too strong and simplistic a word to use given all the issues that I have raised. (Note: I see that this has happened elsewhere.)

4) The penultimate point that Chris makes in his reply extends beyond the article featured in Business Day.  I will summarize his argument as saying that the South African Reserve Bank (SARB) should abandon its focus on the headline CPI, because a) Non-discretionary inflation has been rising much faster, b) It cannot control which specific goods rise and fall in price, and c) It would better facilitate an environment of civil harmony by stabilizing the Rand against a basket of commodities.

Now, interestingly enough, subsequent to yesterday’s post I found this column that Chris has penned himself. (I’ll take it that we can safely assume away the possibility of incorrect interpretation by a third party here.) He produces the below graph and proceeds to write:
Seeing as Non-Discretionary goods price inflation has averaged well above the SARB’s price inflation target of 6% for most of the past seven years, low income groups’ standards of living are falling at a compounded rate relative to high income earners. [Emphasis mine.]

I don’t have the raw data to hand, but eye-balling the chart it doesn't seem at all obvious to me that non-discretionary goods have “averaged well above” the 6% inflation target. (Does it seem obvious to anyone else?) In fact, I’d hazard a guess that it averages a shade below the 6% mark. Certainly, the strongest statement that we can probably make about this series is that it fluctuates around that general level.
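Whatever the true averages turn out to be, the compounding mechanism that Chris invokes is easy enough to illustrate with assumed numbers (a 7% non-discretionary rate against the 6% target, over seven years -- these figures are purely illustrative, not taken from his chart):

```python
# Assumed rates for illustration only -- not read off the chart.
nd_inflation, headline_inflation, years = 0.07, 0.06, 7

# Cost of the non-discretionary basket relative to the headline basket
# after `years` of compounding.
relative_cost = (1 + nd_inflation) ** years / (1 + headline_inflation) ** years
print(f"{relative_cost - 1:.1%}")  # a one-point gap compounds to roughly 6.8%
```

So the compounding point is real if the gap is real; my quibble is precisely over whether the data show a sustained gap in the first place.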

We all agree that no single measure of price changes is perfect. Indeed, it is precisely for this reason that we have constructed so many different indices in the hopes of getting a better sense of how “inflation” is playing out in the economy. Central banks like the SARB choose to follow a preferred metric – like the CPI – for a number of reasons, most of them very sensible. As it happens, the sheer volatility of commodities is a key reason why some CBs prefer “core” to “headline” inflation measures. Trying to conduct monetary policy in response to a simple basket of commodity prices would not only be incredibly difficult due to the inherent volatility (and the fact that the CB is more or less powerless to stop these short-run swings), but potentially counterproductive because of the amplifying effect that it could have on consumption cycles. (For more discussion, see Matt Rognlie’s excellent posts on this subject: here, here, and here.)

5) As for his final point, that peer review is not superior to insights that bring in paying clients… Well, clearly that is not what I meant by “cracking” the problems of identifying a causal link between price increases and the uprisings in the Arab world. (Mind you, if he did accurately predict these events in advance of them happening then I certainly am impressed.) So, while I regard the profit mechanism as essential as the next economist, that has nothing to do with my concerns about getting to grips with some very obvious identification problems. That said, allow me to make a broader concluding remark: Just as no-one should suggest that peer-review is infallible, we should never confuse profitability with validity. Even psychics have been doing a roaring trade for centuries. It doesn't make them right.

Wednesday, January 30, 2013

Review - Economics Evolving (Agnar Sandmo)

Following an email exchange with Dan Kuehn and Jon Catalán, I decided that it was finally time to write up a review for Agnar Sandmo's "Economics Evolving" (which I have been punting for some time). Full disclosure is that I know Agnar personally and think that he is a tremendous economist. That said, I started his book before I had actually spoken much to him and honestly believe that I judged it on nothing else but the merits of its content.

And with that, here is the review which I have just posted on Amazon:

...


In his masterful Wealth & Poverty of Nations, economic historian David S. Landes opens with a quip that "Geography has fallen on hard times." I have often wondered whether the same might be said of history -- at least when it comes to cataloguing the development of economic thought. Despite the efforts of Landes and co. (who arguably tend to focus more on events than on thinkers), this subject is sorely absent from the modern economics curriculum.

Agnar Sandmo's excellent Economics Evolving (EE) will hopefully go some way towards remedying that. The book is a compelling history of economic thought, told through the lives and works of the key figures that have shaped the field. The text is lucid and jargon free, so that even complex ideas are conveyed with a clear simplicity. My impression is that any lay person with an interest in economics could pick up the book and gain a deep understanding of the subject. (I personally happened to read EE while doing my graduate studies in economics and it really helped to keep the overarching ideas clear in my head. This can be surprisingly difficult at times, when getting wrapped up in the mathematics or technical arguments of a particular theory might hinder you from seeing the wood for the trees. The concise description of various concepts -- from Walrasian Equilibrium to growth theory -- thus provided a welcome foil to the analytical rigour required by my core grad courses.)

Each chapter or subsection opens with a brief biography of the featured economist(s). These provide valuable context to the overall discussion and are typically interspersed with interesting vignettes and anecdotes. One of my favourites occurs on p. 90, where Sandmo reproduces a letter from John Stuart Mill to the philosopher Jeremy Bentham. The former is enquiring after the 3rd and 4th volumes of Hooke's Roman History, having "recapitulated" the 1st and 2nd volumes. Sandmo points out that this seemingly unremarkable correspondence between two leading intellectuals of the time was actually written when Mill had only just turned six! Mill's almost impossible precocity serves as the ideal backdrop for describing his many later contributions -- in both economics and philosophy -- during the pages that follow.

Sandmo, a fairly eminent economist in his own right, is never less than evenhanded in his discussion of the key figures and thinkers that have shaped the development of economics. His writing is admirably free of ideological bias, and I appreciated not being able to tell which side the author would personally lean towards on different economic questions. That is not to say that he is never critical, however, as EE succinctly highlights the faults in many arguments and theories. (For example, in an interesting chapter on the economic theories of Karl Marx, we are told how a falling rate of profit is supposedly an inevitable feature of capital accumulation, and how this in turn would eventually lead to the entire system collapsing. Sandmo counters (p. 133): "Each element in his chain of reasoning may be criticized", and convincingly proceeds to do exactly that.)

Of course, not everyone's favourite economist can feature prominently (or even at all) in a book that is designed above all to be concise and readable. However, I think it is fair to say that the major players are all covered in admirable depth, alongside numerous others. I particularly enjoyed the sections on the classicists (Malthus, Say, Ricardo, and Mill) and the forerunners to the "Marginal Revolution" (Gossen, Dupuit, Cournot, and Thünen). These are the kinds of tremendously important figures that are normally relegated to the footnotes in most modern economic curricula, and it was refreshing to get a full sense of their contributions and beliefs. I found it intriguing, for instance, to see how well they had often anticipated later developments in the science, and how they continue to offer relevant insights for our present-day economic circumstances. (It was equally interesting to get a sense of how their views have either been distorted or faithfully reproduced by later thinkers.)

In summary, this book is a wonderful companion to any student of economics, and many others besides. I can easily recommend it.

Monday, October 22, 2012

Debt and utility come alive!

As promised, here is an interactive Excel spreadsheet showing how utility changes in a debt-financing scenario versus the laissez faire case. Read my previous post to get a sense of the motivation.

The spreadsheet is based on Bob Murphy's initial table, albeit with some slight changes. Perhaps most importantly, I assume that old people may now earn less than young people (where the Old:Young income ratio is determined by the parameter "a"). And we can obviously vary all other parameters as desired.

As we can see, it is ultimately the combination of variables and parameters -- to say nothing of the assumptions used within this simple model -- that determines whether people are better off as a result of deficit financing and transfers. E.g. All else equal, a lower "a" will mean that government is able to transfer more from Young Bob to Old Al in the first period (i.e. higher "t"), whilst not lowering the utilities of future generations. Indeed, "t" is the most interesting parameter from my perspective, since this is the one that government must decide on when all the others are given exogenously.

For a more specific example, plug in the following values: r = 100%, g = 500%, a = 0.5 and t = 3. Clearly these large growth rates are absurd, but they serve to illustrate how subsequent generations can actually suffer a relative loss in utility if GDP growth ("g") is significantly higher than the interest rate ("r"). As I explained in my previous post, this is because I am assuming a diminishing marginal utility (DMU) function. However, you can offset things by now plugging in a = 0.2. This gives you an idea of how important the interplay between our parameters is.
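For readers without Excel to hand, the same comparison can be sketched in a few lines of Python. This is only a stylized single-generation version of the spreadsheet (the function name, the baseline income of 100, and the assumption of log utility over the two periods are my own labels, not part of the original table), but it reproduces the flip described above:

```python
import math

def lifetime_utility(x, a, g, r, t):
    """Two-period log utility: consume x - t when young, and
    a*(1+g)*x plus the repaid bond (1+r)*t when old."""
    return math.log(x - t) + math.log(a * (1 + g) * x + (1 + r) * t)

X = 100  # young-period income (arbitrary baseline)

# The example's parameters: r = 100%, g = 500%, a = 0.5, t = 3
lf = lifetime_utility(X, a=0.5, g=5.0, r=1.0, t=0)  # laissez faire
df = lifetime_utility(X, a=0.5, g=5.0, r=1.0, t=3)  # with the transfer
print(df < lf)    # True: with g this high, the transfer is a net utility loss

# Offsetting with a = 0.2 (the old are now much poorer than the young)
lf2 = lifetime_utility(X, a=0.2, g=5.0, r=1.0, t=0)
df2 = lifetime_utility(X, a=0.2, g=5.0, r=1.0, t=3)
print(df2 > lf2)  # True: the same transfer now improves utility
```

The sign of the comparison hinges entirely on the interplay between a, g, and r, which is exactly the point of the spreadsheet.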

I'd obviously encourage you to play around with things and see how your results change. As an added bonus, I've included a formula (in the light blue cell) that works out the maximum transfer for maintaining a neutral effect on utility, when we assume that utility is logarithmic.[*]



Please feel free to share and adapt as you wish. Of course, a pointer towards the original source would not go amiss.

THOUGHT FOR THE DAY: Relative to the non-intervention case, government deficits may or may not have negative consequences for the utilities of future generations... Like everything else in life, it depends on our starting assumptions. I sense that this might not be the definitive answer that some of you were hoping for, but so be it. That said, I'd suggest that plausible values for our parameters would be r = g = 3% and a = 0.75. In this case, one notes that government in our simple model could actually improve utilities quite easily by using deficit finance. (In the logarithmic utility case, they can transfer anything up to 25 units and still yield an improvement in the utility levels of future generations.)

___
[*] The formula is based on equating the laissez faire and deficit utilities:
ln(X) + ln[a(1 + g)X] = ln(X - t) + ln[a(1 + g)X + (1 + r)t]

Taking exponents and a bit of algebra allows us to solve for t:
t = [1/(1 + r)] * X[(1 + r) - a(1 + g)]
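As a quick sanity check, the formula can be verified numerically. This is only a sketch assuming the same log-utility setup (the function names and the base income X = 100 are my own labels); with r = g = 3% and a = 0.75 it also reproduces the 25-unit maximum transfer quoted in the post:

```python
import math

def t_max(x, a, g, r):
    """Maximum utility-neutral transfer under log utility."""
    return x * ((1 + r) - a * (1 + g)) / (1 + r)

def lifetime_utility(x, a, g, r, t):
    # ln of young consumption plus ln of old consumption
    return math.log(x - t) + math.log(a * (1 + g) * x + (1 + r) * t)

X, r, g, a = 100, 0.03, 0.03, 0.75
t = t_max(X, a, g, r)
assert abs(t - 25) < 1e-9  # the 25 apples quoted above

# At t_max, deficit-financed utility equals the laissez-faire level
assert abs(lifetime_utility(X, a, g, r, t)
           - lifetime_utility(X, a, g, r, 0)) < 1e-9
```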

Sunday, October 21, 2012

The plot thickens in the debt debate

Since joining this debate, I have consistently argued that deficit financing is no less sustainable than taxation if economic growth is at least equal to the interest rate (i.e. g >= r). The models that I have discussed so far always seemed to assume that the reverse was true, so it was completely unsurprising that future generations ran into trouble in these scenarios at some point. It was an inevitable outcome of the design.

As a corollary of this, I also assumed that individuals' utility would not be adversely affected if g >= r. To be sure, some important qualifications need to be made here. Most notably, in economics we typically assume that people have diminishing marginal utility (DMU) with respect to consumption. This effectively means that they value losses more highly than equivalent gains at any given income level. (If I have 100 apples, then I would lose more utility from having three apples taken away from me than I would gain if someone gave me three apples.) Of course, this is why we generally think it is better to tax rich people and give that money to poor people, rather than the other way around.
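To make the apples example concrete, here is the same comparison under log utility (log is just one convenient DMU functional form, and the one used in my spreadsheet):

```python
import math

# Starting from 100 apples, losing 3 hurts more than gaining 3 helps,
# because marginal utility diminishes as the stock grows.
loss = math.log(100) - math.log(97)   # utility forgone by giving up 3
gain = math.log(103) - math.log(100)  # utility gained by receiving 3
print(loss > gain)  # True
```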

In this case, deficit financing would improve individual utilities if it meant transferring money (or apples) from a young generation that was wealthier than the old generation.[*] The relatively poor old generation would value the gain in apples more than the rich younger generation would value their loss. And, of course, it certainly seems reasonable to assume that old people will be earning less direct income than young people at any moment in time (due to retirement, etc). This is at least a standard assumption in the OLG literature to the best of my knowledge. I tried to make these points more explicitly in this comment to Nick Rowe... 

However, I had something of an epiphany walking home from the pub last night (two epiphanies, if you include the realisation that I really should have brought an umbrella with me):

What if high GDP growth is actually bad for individual utility when a government is using deficit financing? More specifically, what if g > r is the very thing that causes the utilities of future generations (at some point) to fall relative to what they would have been under the laissez faire scenario? This may seem counter-intuitive -- at least it was to me until last night -- but the reasoning is actually quite simple. Again, it comes back to our old friend: diminishing marginal utility (DMU).

If g exceeds r, then at some point a "poor" old generation will be relatively better off than their "rich" young selves. Economic growth will outstrip the relative increase in bond repayments. In this case, DMU kicks in such that the transfer from young to old becomes a net "loss"... at least relative to the non-intervention scenario.
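In symbols (a sketch assuming the log-utility setup from my spreadsheet, with young income X and old income a(1+g)X), the effect on lifetime utility of a marginal transfer t from young to old, evaluated at t = 0, is:

```latex
\frac{d}{dt}\Big[\ln(X - t) + \ln\big(a(1+g)X + (1+r)t\big)\Big]\Big|_{t=0}
  = -\frac{1}{X} + \frac{1+r}{a(1+g)X}
```

This is positive if and only if 1 + r > a(1 + g). Holding a fixed, a high enough g flips the sign, so that even a small transfer lowers utility relative to laissez faire.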

I'm not sure whether this is an easy point for people to digest in written form. I sense that it would be much easier to show this in mathematical terms than the long verbal description that I have given above. Nonetheless, I've actually made an Excel spreadsheet that shows that the intuition is correct. As soon as someone is able to tell me how I can upload an active Excel sheet to a blog, I'll do so. [UPDATE: Here it is.]

Make no mistake, the relationship between g and r is not the only thing that matters here. Another key issue, for example, is the ratio of old people's incomes to those of young people. That's why I want to upload an interactive version of the spreadsheet, so that people can play with different parameter values to see how relative utilities are affected.

I don't know what this means in the context of the original blogosphere debt blowout. Frankly, I'm not even particularly interested in who said what at this point. The assumptions that we've been working with are pretty far removed from many of the real-life reasons for taking on debt in any case (e.g. debt could spur innovation or actually boost economic growth relative to the counterfactual). However, it was an interesting "theoretical" result for me. Nick Rowe may have been making a more profound point than even he realised.

___
[*] Strictly speaking, what matters in this case is that any young individual (or cohort) is wealthy relative to their older selves. I have more disposable income available when I am working than when I am retired.

Friday, October 19, 2012

Even more debt and inheritance (and sales)

Bob Murphy and another commentator have left some interesting observations underneath my last post. They basically want to distinguish between straight-up bequests versus the sale of bonds to the next generation. I was going to leave a response there, but figured that this may be long enough to warrant its own post.

Bob writes:
You're right, if people in the future are literally bequeathed the bonds from the previous generation, then they are OK (holding all other bequests constant). But what if the previous generation *sells* the bonds to them? Then they're screwed.
My immediate response is to say: "Okay, but what if these young people buy bonds only to resell them to the next generation in the following period?" That seems perfectly consistent with the other assumptions of this model. Taking it for granted that this option is available to every subsequent generation, we would be in exactly the same position as we started with. i.e. This is ultimately a problem of GDP growth being lower than the interest rate... Something which everyone seems to agree upon.

As a thought experiment, however, let's consider the alternative: What if the younger generation refuse to buy the bonds off the old generation? These old timers are now stuck with bonds that they can't sell and, assuming that they decide not to leave any bequests out of spite, what happens next? Well, surely both the bonds and corresponding government debt are extinguished at the start of the next period. In this case, government no longer has a need to finance any outstanding debt burden. We are back to the laissez faire outcome for all future generations.

To be sure, in this scenario one particular generation -- e.g. Frank -- will be made worse off, even as everyone else is fine. However, having said that, government could step in at period 6 to maintain Frank's lifetime utility. It does this by taxing Young George an eye-watering 96 apples and transferring them to Old Frank. Of course, now the government is finally at an impasse in period 7. It physically cannot tax Young Hank enough to offset (Old) George's initial losses, since 96 * 2 = 192 exceeds the 100 apples of annual production. However, that is an artefact of the model set-up, in which any form of debt financing is de facto unsustainable, given that we have imposed a positive interest rate and zero economic growth!
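The rollover arithmetic here is easy to check. A tiny sketch (assuming, as in the earlier posts' example, an initial transfer of 3 apples, r = 100%, and a fixed annual production of 100):

```python
# Debt doubles every period at r = 100%; production is fixed at 100 apples.
debt, production = 3, 100
period = 1
while debt * 2 <= production:
    debt *= 2
    period += 1
print(period, debt)  # 6 96 -- period 6 is the last in which the debt can be serviced
```

After period 6 the outstanding claim (192 apples) exceeds total production, so the scheme must collapse, exactly as described above.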

The way I see it, this keeps returning to one unavoidable conclusion: The "bad" outcomes of Bob's model can all be traced back to the fact that the interest rate exceeds GDP growth. Everything else is paper fodder.