Tim Harford, whom I respect greatly for his ability to write lucidly about economics, has written about Angrist and Pischke's latest paper on econometrics, which builds on the supposedly ground-breaking ideas of Ed Leamer from back in the 1980s. David Roodman takes up the story here.
As a Christian and an econometrician I can't stand his last comment about Christians in America. Why is believing in a God who creates unscientific? Many Christians, myself included, bridge the gap David struggles with: we explore the wonder of God's creation using science, and the two answer entirely different questions - the how and the why. But let's get past that.
More importantly, the con has long been taken out of econometrics, and it was done before Angrist and Pischke - in fact I'd say they perpetuate the con. I went to a launch for the Angrist-Pischke book, Mostly Harmless Econometrics, at the 2009 RES conference in Surrey. Pischke said, in essence: back in the day (the 1970s) nobody trusted anybody else's econometrics, but now it's different (oh really?). Now, if we have heteroskedasticity (or autocorrelation) in our residuals, we just correct for it with robust standard errors.
Excuse me, but how is that not conning the audience? We introduce a botched correction that rests on a whole stack of assumptions and is only valid in large samples, and somehow everything is fine? If I had a gaping wound in my knee and someone said, "Here's a plaster" (a band-aid, for the Americans), would you be happy? That's the equivalent.
Autocorrelation invalidates the usual standard errors (and biases the coefficient estimates themselves when lagged dependent variables are present); heteroskedasticity causes imprecision. Both are symptoms of a greater malaise - model misspecification.
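To see what this after-the-fact "correction" actually does, here is a minimal sketch (my own illustration, not from Angrist and Pischke; the simulated data, seed, and sample size are all assumptions) of White's heteroskedasticity-robust "sandwich" standard errors. Note that the plaster changes only the reported standard errors - the misspecified variance model underneath is left untouched:

```python
import numpy as np

# Simulate a regression whose error variance grows sharply with x,
# i.e. strong heteroskedasticity from a misspecified variance model.
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(1, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, x**2 / 10)  # error sd proportional to x^2

X = np.column_stack([np.ones(n), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)      # OLS point estimates (still consistent)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical SE: assumes a constant error variance - wrong here.
sigma2 = resid @ resid / (n - 2)
se_classical = np.sqrt(np.diag(sigma2 * XtX_inv))

# White's robust "sandwich" SE: the large-sample plaster. It reweights
# by the squared residuals but never questions the model itself.
meat = X.T @ (X * resid[:, None] ** 2)
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
```

Under this design the robust standard error for the slope comes out larger than the classical one, which is exactly the point: the naive standard errors were understating the uncertainty, and the sandwich estimator patches the number without diagnosing why the variance was misspecified in the first place.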
So how can it be acceptable to paper over the cracks rather than address the more serious underlying problem?
That, for me, is the con in econometrics, and it is continually perpetuated, especially in the econometric sensibility of Angrist and Pischke.
Monday, March 29, 2010
Monday, March 15, 2010
Huh? Why Bayesian is better, I don't understand
So it seems Ziliak and McCloskey are getting a lot of attention at the moment, having published a new book. Tim Harford has blogged about them, and now Science News has run an article on them.
I suppose I should really be pleased that scepticism about statistical testing is getting a more widespread audience. There's little doubt a lot of dubious stuff is done, especially in economics by people wedded to their theories.
But there's something very off-putting about the virulence of Ziliak and McCloskey. They are no better than the academics they criticise: they take some argument or rule and push it to the point of dogma. If you have a moment, read their responses to critiques from two of the finest minds in statistical methodology, Aris Spanos and Kevin Hoover (Google link). They also don't refrain from taking a pop at Clive Granger because he dared not fully agree with them. If anyone disagrees with them, it's the other guys who have misunderstood - it's never that Ziliak or McCloskey could need to learn anything new. They've already cornered the entire field and made sense of all the mistakes the rest of us have ever made.
But anyhow, the Science News article puzzles me. It spends its whole length describing how statistics are manipulated and how this undermines their credibility, then finally says: the way forward is Bayesian! Why is that? Because Bayesians use priors. Now, priors are our prior hypotheses about the thing we're investigating. We warp the statistical results by our prior belief about what the answer should look like.
Isn't that manipulation?????
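To make the worry concrete, here is a minimal sketch (my own illustration, not from Ziliak and McCloskey or the Science News article; the data and prior settings are invented) of a conjugate normal-normal update, where the posterior mean is a precision-weighted average of the prior mean and the sample mean - so a dogmatic prior literally drags the estimate toward what the analyst believed beforehand:

```python
import numpy as np

# Conjugate normal-normal model with known data variance: the posterior
# mean averages the prior mean and the sample mean, weighted by precision.
data = np.array([4.8, 5.2, 5.1, 4.9, 5.0])  # sample mean = 5.0
sigma2 = 1.0                                # known data variance (assumed)
n = len(data)
xbar = data.mean()

def posterior_mean(prior_mean, prior_var):
    prec_prior = 1.0 / prior_var   # precision contributed by the prior
    prec_data = n / sigma2         # precision contributed by the data
    return (prec_prior * prior_mean + prec_data * xbar) / (prec_prior + prec_data)

# A vague prior barely moves the estimate off the sample mean;
# a dogmatic prior drags it toward the analyst's belief of 0.
print(posterior_mean(0.0, 100.0))  # vague prior: stays near the data
print(posterior_mean(0.0, 0.01))   # dogmatic prior: pulled toward 0
```

With the vague prior the posterior mean sits next to the sample mean of 5.0; with the tight prior it collapses to below 0.25. Whether that counts as principled belief updating or as the very manipulation the article complains about is exactly the question.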