What is Noah Smith talking about?

In a post last week, Noah Smith discusses what he views as a problem in economics: empirical papers tacking on theory sections to appease editors. I thoroughly disagree with this complaint. For a blog that hopes to be about econ papers – written by a fan of Dr. Smith’s – complaining about his piece seems like the perfect way to start off.

Before I attack him, let me start with some appreciation of Noahpinion. It is without a doubt my favorite blog, and in many ways it served as my introduction to economics. Noah’s popcorn-chewing attitude towards larger debates and his efforts as a referee (seen as recently as here) make reading the catalog of posts on Noahpinion a true pleasure. It doesn’t hurt that Noah himself is very sharp. As a result, I’ve read Noahpinion for at least six years, and I aspire to this blog being nearly as well written or as well thought out as Noah’s efforts. Perhaps because of this, I’m quick to pounce on disagreements.

Noah discusses three categories of econ papers, finding none to his liking. The first category – purely theoretical papers – is dismissed as (mostly) unhelpful. Fine. I think those papers have tremendous value, but I’ll leave that alone for now. The second category – purely empirical papers – is hard to learn from. These papers may not have any external validity, and even when they do, their effects can typically be dismissed as local to an unidentifiably small area. This I mostly agree with. People tend to rely heavily on their priors when deciding how widely applicable this type of evidence is. The third category is empirical papers that build some theory of their own and apply it to their data. This yields external predictions, but at the cost of an explosion of single-use models – many of which seem to have been added only to appease an editor or referee. Apparently this is terrible. I firmly disagree.

Why do we think this is a problem? If I go to the literature to try to make policy choices and find an overwhelming array of options, that is clearly unhelpful. However, it’s not obvious to me that this is what happens. Typically, over time, the more useful models stick around and the old ones die out. If we look at literatures that are 10 or 20 years old, I’m sure we can find a generally agreed-upon base model or two, with a few optional wrinkles depending on the situation.

Further, Noah himself argues that purely empirical papers don’t carry the same information value as an empirical paper attached to a model. The complaint is rather that what appears is a string of one-time models. But the reason for that is simple: whenever I add a complication to a model in an empirical paper, I am in a sense rejecting the base model in favor of the more complex one. This is an implicit falsification of an old model that failed to explain the circumstances adequately. Almost by definition, each of these models rejects an old model – so the net growth in the world of models may be much lower than Noah claims.

These models accomplish other useful things too. Much like a bet between bloggers, they pin down the researcher, forcing her to say what her beliefs about the world are. There’s nothing wrong with those beliefs changing, but writing them down makes the process of criticizing conclusions and inferences much easier. Without these models we get neither the ability to make non-local claims nor any understanding of where those claims came from.

These models also help newcomers to a field by giving them insight into how smart people think things work. When you see a model being applied to data in a paper, there is a researcher (or several) who has thought long and hard about the topic behind that model. While it may be a toy model – designed to be solvable, or estimable, or intuitive – it conveys what the authors think are the important dynamics in an area. This communication feature is something we should not overlook. Lowering the bar for researchers to enter and understand other fields is genuinely important for progress.

From a theory-of-science perspective, this model explosion is also good. Incentivizing hundreds or thousands of minds to come up with novel, counterintuitive theories with high explanatory power makes the whole field more likely to develop towards the truth. As search algorithms go, this closely resembles an evolutionary algorithm. I wouldn’t say that evolutionary search techniques are optimal, but when trying to optimize on a complex surface with many local optima, they are very good. This may be a strange way to think about research, but it’s not a bad metaphor at all. A related point: when we look at any optimization routine, do we ever complain that someone used too many starting points? No. That would be ridiculous. We want as many starting points as possible.
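
To make the starting-points intuition concrete, here is a minimal sketch in Python. Everything in it is invented for illustration – the bumpy “research landscape”, the step size, the search range – but it shows the basic point: greedy local search from a single start routinely gets stuck on a nearby bump, while many independent starts reliably turn up something better.

```python
import math
import random

# An invented, bumpy "research landscape" with many local optima.
# The quadratic term pulls the global peak towards x = 2.
def explanatory_power(x):
    return math.sin(5 * x) + 0.5 * math.cos(2 * x) - 0.1 * (x - 2) ** 2

def hill_climb(x, step=0.05, iters=2000):
    """Greedy local search: accept a small random move only if it improves."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if explanatory_power(candidate) > explanatory_power(x):
            x = candidate
    return x

def multi_start(n_starts):
    """Run independent local searches from random starts; keep the best."""
    starts = [random.uniform(-4.0, 8.0) for _ in range(n_starts)]
    return max((hill_climb(x0) for x0 in starts), key=explanatory_power)

random.seed(0)
for n in (1, 10, 100):
    best = multi_start(n)
    print(f"{n:>3} starts -> x = {best:.2f}, value = {explanatory_power(best):.3f}")
```

The mapping to research is loose – each “start” is a new single-use model, and the objective is explanatory power – but it captures why a profusion of independent attempts looks wasteful up close and is valuable in aggregate.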

I do want to make a few caveats. There are definitely papers for which the theory section is purely pro forma and doesn’t help the field at all. That doesn’t mean that the referees going around requiring theory sections aren’t helping the field. For that, the net benefits – including from papers whose theory sections, added as a result of this pressure, turn out to be useful – would have to be non-existent.1 I don’t think these papers are super common – I think most people plan a theory section anyhow – but Noah would know more than I about that. Further, my argument is that this is optimal in the long run. Clearly it could be quite damaging in the short run. It could even be detrimental in the long run if theory sections act as barriers to entry because newcomers don’t know enough to write them – this is related to Noah’s mud moat concept. There is also a strong form of path dependence here, so something painful in the short run could in theory never attain long-run optimality. Again, I think this is probably not the case. I’m near certain that Lakatos would have a lot to say here – but I haven’t read enough of his work to comment. I suspect it would go something like: a profusion of theories is in fact how we should define a successful research program. Or maybe not. A final caveat: I’m mostly reading papers recommended by reading lists at the moment, and they may be differentially good relative to the literature as a whole in this respect.2

  1. Or lower than the costs of that research? But the goal of all research should be moving towards the truth – so unless this is a hideously inefficient mechanism, it shouldn’t be a problem. I think. This needs a bit more thought. 

  2. On the other hand – if reading lists can separate out the good papers, then the model explosion is maybe less concerning? A few public reading lists and suddenly all the downsides might disappear.