The Great Austerity Error

Back when I used to go to journalism conferences, I remember there frequently being workshops offered on the theme of “math for journalists.” In other words, how not to screw up, or get tricked by, nuances of percentages, ratios, and statistics. We could all always use a brush-up on those things, but I remember being surprised and disheartened at how many people who went into journalism came from the self-professed math-hating or scared-of-numbers camps.

Now, I don’t think that you need to be able to work complicated calculus problems in your spare time in order to report the news. But numbers matter. And popular understanding of what they mean matters.

The recent austerity Excel error debacle is a great case in point. In case you missed it, two economists, Carmen Reinhart and Kenneth Rogoff, published a paper in 2010, right at the heart of the Great Recession, claiming to show that when debt hit 90 percent of GDP, there was a significant tipping point toward reduced economic growth. Ideological supporters of austerity, looking for things to bolster their inhumane policies, latched onto the paper, saying it showed that even if their policies caused pain now, they were the right policies for the long term.

And despite skepticism by many, too many journalists let them get away with it, presenting their theory as confirmed and widely accepted fact. As Paul Krugman points out, the Washington Post Editorial Board, in calling for continued aggressive action on the deficit, referred to “the 90 percent mark that economists regard as a threat to sustainable economic growth.”

Two problems. First, there’s the ever-present correlation vs. causation problem. In other words, if two things are correlated, you don’t know which caused the other, or whether both are related to some third factor. Slow growth could be causing high debt, and not the other way around. Now, if journalists reported responsibly about correlation and causation, some large percentage of health and science news would disappear entirely, or at least its headlines would get a lot less interesting. But anyone who has read about the publication bias toward studies that show positive results over those that don’t (studies that are likely just as accurate and important) might conclude that we’d probably be better off that way.

But there was another problem with the whole austerity paper. It was just plain wrong. They screwed up. Left out some countries. Made a typo in an Excel formula. And when those mistakes are corrected, their tipping point disappears. The justification for a set of policies that have had real, tangible, destructive consequences at the scale of nations as well as families was created by a math error. Pretty great reason not to exempt the policy folks and journalists from math class, I’d say.

Now, I’m tempted to feel a little sorry for the researchers. Everyone screws up sometimes, even when they are trying to be diligent. I copyedited academic journals for a while, and while I never uncovered mistakes of that magnitude, I can tell you the charts were very often full of smaller errors. However, my sympathy is of course muted by what the authors were trying to prove and what their work was used to justify, certainly with no protest from them.

Writing in the New Scientist, Velichka Dimitrova called this a good reason to support open data: everyone who publishes something on economics should make their data sets available to other researchers so their results can be replicated and built upon. I thoroughly agree, and I expect that doing so would have uncovered this mistake sooner. But I have to wonder if any business journalists and editorial boards would have bothered to look, even if the data sets were there.

It’s easy to impatiently twitch about the anti-intellectual, anti-science factions in this country who want to deny established facts about planetary orbits, overwhelming evidence for evolution and climate change, or the complete lack of evidence for any damage to children of gay parents. But it’s just as anti-intellectual to make no distinction between scientific consensus on repeatedly and rigorously studied topics and one-off, unreplicated, politically convenient analyses. It’s anti-science not to take responsibility for knowing how the scientific process works and for understanding probability and statistics well enough to play our role as skeptics.

And it’s a problem if we let researchers imply what moral choices we should make based on their conclusions. After all, what if Reinhart and Rogoff had been right? There has been a loud chorus of us who have for decades been saying that economic growth can’t be our measure of health—that we live on a finite planet, and infinite growth based on limited natural resources with externalities not well accounted for is the wrong model, practically and morally. So even if growth would slow, should we have made knee-jerk policies that cause suffering now in order to try to capture that old dream of forever growth in the future? Or could this moment be, in either case, a time to reframe our investments toward a lower growth, lower consumption, yet more sustainable, more fair, more humane future?

(This column was originally published in Metroland, the Capital Region of New York’s former alt-weekly, under the title “Study Says What?” on May 23, 2013.)

 
