Friday, February 14, 2014

Why are economic journal articles so long?


I've been reading a lot of (hard) scientific studies for my current dissertation chapter, and it is striking how concise they are. The majority of published articles, at least the ones that I have come across, are typically no more than 4-6 pages in length. The pattern holds especially true for the most prestigious journals like Nature, Science or PNAS.

In contrast, your average economic article is long and getting longer. [Note: New link provided.]

Now you could argue that science journals achieve this feat by simply relegating all of the gritty technical details and discussion to the supplementary materials. And this is true. For a good example, take a look at this article by Estrada et al. (2013).[*] The paper itself is only six pages, while the supplementary material is over 40 pages long. Equally telling is how similar this supporting information is to the working paper that the final article is apparently based upon.

To be honest, I don't really see how this can be a bad thing. It is primarily the job of the editors and referees to vouch for the technical merits and internal consistency of a published study. Regular readers are mostly interested in the broad context (i.e. significance of the research) and the actual findings. As much as it is important to make the technical details -- and data! -- available to those who want to go through them, the clutter largely detracts from the key messages. I'm also willing to bet good money that many (most?) people currently just skip through the entire mid-section of your typical economics paper anyway, concentrating on the introduction, results and conclusion.

So, is this a weird case of physics envy, where economists feel the need to compensate for lack of quality through quantity? Or does it say something special about the nature of economics, where the limited extent of true experimental data makes methodology more precarious and prone to bias?

Either way, do we really lose anything by making economic journal articles much shorter and saving all the technical details for the supplementary materials?

PS - Yes, I know that most economic journals already reserve a lot of information for the technical appendices. I'd also say that a number of the top journals (e.g. AER) are pleasantly readable -- perhaps surprisingly so for outsiders. But we're still a long way off what the sciences are doing.

UPDATE: It just occurred to me that the frequency of publication plays a reinforcing role in all of this. Nature, Science and PNAS are all issued on a weekly basis. The top economic journals, on the other hand, are typically only bi-monthly (at best) or quarterly publications. The higher volume of science publications encourages concise articles for pure reasons of practicality, as well as readability.

[*] The authors argue that climate data are better viewed as statistical processes that are characterised by structural breaks around a deterministic time trend... i.e. As opposed to non-stationary stochastic processes comprising one or more unit roots. (This is important because it has implications for the ways in which we should analyse climate data from a statistical time-series perspective.) In so doing, they are effectively reliving a similar debate regarding the statistical nature of macroeconomic time-series data, which was ignited by a very famous article by Pierre Perron. Perron happens to be one of the co-authors on the Estrada paper.


  1. I think it's entirely independent of physics envy - or at least if there is any dependence on physics, it's indirect, and a function of the historical demands of referees.

    The peer review process, I reckon, lends itself to semi-useless reviews by reviewers strapped for time. What's the easiest thing to do if it's an empirical study? Suggest a few more "robustness checks", other variables the authors might add. And so that happens, and so to pre-empt this the wiser authors control for everything plus its dog, which makes for longer papers.

    That and data not being experimental so we need to spend bloody ages describing what the data is, how it came about, how we believe we can identify that Y happened only because X happened and not because of any other possible factor.

    So: referees and being non-experimental get my vote.

  2. My explanation. Economics is mostly a popularity contest because of the extreme difficulty in obtaining controlled 'scientific' results. A popularity contest requires methods to signal to others that they are popular, and encourage some back-scratching from them a little later.

    There are two main signalling methods. The first is via citations, and in particular the usual method in economics of in-text (Author, Date) citation rather than numbered footnotes. People want to see their names in articles - right there in the text so readers know they are important people.

    The second is related. Economics is quite tribal, which is again a result of scientific evidence being hard to come by. Thus the wordiness of articles is very much a signal to others that you know the language of the tribe, and by putting all the methods/maths etc. in the document you are opening doors into the club. Sure, they could go in the footnotes, but that demotes the importance of methodology, whereas methodology is in fact a defining characteristic of economic tribes. For example, a micro model must be derived from a representative agent's utility-maximising problem. Imagine the horror of discovering that in fact there are many ways to represent behaviour, and then to aggregate it, in a model hidden down in the footnotes. You wouldn't want to accidentally give it credit if that's not how it's done in your tribe.

    Of course, I could be making the fallacy of attributing the cause of this pattern to the individuals themselves, and offering utility-maximising motives where there really are none. It could just be a kind of path dependence - economists got used to seeing articles presented in a particular way, so they expect new articles and new journals to conform to the established norms. This explanation implies that it really is mostly chance that we don't currently observe the reverse pattern.

    Again, this is a problem of not being able to provide decent scientific evidence. So I have made this comment extra-long in the hope of giving credibility to my arguments, which I hope will improve my popularity in the blogosphere.

  3. James and Cameron,

    You both threaten to make this comments section superior to my original post. Great thoughts from each of you and I have very little substantive disagreement.

    Some minor (and probably unnecessary) clarifications:

    Of course, I'm not suggesting that the peer-review process is infallible or that referees will always catch all the faults (real or imagined) of a paper. However, surely we should be able to explain our identification strategy in a few short paragraphs and without regurgitating well-known issues at length in every paper? I'm also not suggesting that we omit all the extra stuff required to convince referees (or regular readers) of a study's robustness... but merely that we can stick it in a meaty appendix without compromising the narrative and key messages at hand. (Indeed, quite the opposite if it helps to get rid of clutter in the paper itself.)

    You raise an excellent point about the role of tribal language in economics. As for the question of whether its origins lie with signalling or simply path dependence, may I prevaricate by saying both? Not that either is a particularly compelling reason to stay with the status quo, mind you. Speaking of which...

    "So I have made this comment extra-long in the hope of giving credibility to my arguments, which I hope improve my popularity in the blogosphere."

    Performative analysis in action!

  4. As per the signalling claim:

    Path dependence, maybe.