Monday, October 11, 2010

Old but worth commenting on...

I've just come across some interesting views on macroeconomics and macroeconometrics.  The most recent thing is some comments by Laurence Meyer picked up by a number of bloggers.  One blogger in particular, a staunch Austrian who sees everything in economics as having originated in one person (Hayek), points readers back to Arnold Kling's thoughts on empirical macroeconomics, posted back in February 2009.

What gets me most, and I think this is common for all economists who sit back and criticise another field, is that this guy hasn't got a clue what macroeconometricians do these days.  This is why I've stopped making such rash statements about DSGE models.  I have a feeling that they are progressing along certain lines I don't like and so I make general statements which one of my co-authors will constantly tell me are way off the mark.  So until I can make a proper assessment I've stopped.

Arnold Kling talks about how the empirical macroeconomist goes about his work.  He looks at data, and because of serial correlation, he takes differences - Kling goes as far as to say that to do otherwise "would be utterly unsound practice".  Thankfully Robert Bell in the comments takes him to task on this - as does any sensible Econometrics textbook, as Bell says.  Differencing destroys massive amounts of information on the levels of data series of interest, and economic theories consider the levels of variables (inflation, nominal GDP, consumption, etc).
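To see what's lost, here's a little simulated sketch (Python, invented data, not any particular country's series): two series that share a stochastic trend have a perfectly well-behaved spread in their levels, and differencing throws exactly that away.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# x is a random walk, i.e. I(1); y tracks x up to stationary noise,
# so the LEVELS of y and x are tied together (they are cointegrated).
x = np.cumsum(rng.normal(size=n))
y = x + rng.normal(scale=0.5, size=n)

# In levels, the spread y - x is stationary: it hovers around zero
# with small variance, however far the series themselves wander.
spread = y - x
print(f"std of levels:  {x.std():.2f}")
print(f"std of spread:  {spread.std():.2f}")

# After differencing, each series is just noise; the information that
# y and x share a common level - the economically interesting bit -
# is no longer in the data you analyse.
dx, dy = np.diff(x), np.diff(y)
```

The point of the sketch: any theory about the level relationship between y and x is testable in levels and invisible in differences.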

The modern macroeconometrician looks into whether cointegrating relationships exist in the levels of the data.  Kling has concerns about imposing priors on the analysis.  Until recently I thought the Pesaran et al. approach to cointegration (bounds testing etc.) was essentially no different from the Johansen/Hendry et al. approach - but now I realise there is a fundamental difference.  Pesaran et al. assert that you must have strong theoretical priors about the cointegrating relationships before you even begin - the Johansen school is instead more agnostic about it.

Basically, if stationary relationships exist in the data, you can look at them.  You can see whether they make economic sense, and if they do (and even if they don't) you can start to make economic inferences about them - though only in a qualified sense, of course: this was a particular country in a particular time period.  But that doesn't render this kind of information useless, as a lot of economists claim (Austrians and theoretical macroeconomists, from very different starting principles).  It gives us some idea about the size of effects.  We get a rough bound on how big an effect is - and if we want to apply it elsewhere we have to consider, critically, whether that application is appropriate.  Probably the most harm is done to econometrics by its proponents claiming far too much for it.
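For the uninitiated, here's a bare-bones, Engle-Granger-flavoured sketch of what I mean (Python; the variable names and the 0.8 are invented for illustration, and a real application would use proper cointegration critical values rather than my eyeball check on the residual):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Simulated levels: "income" is a random walk; "consumption" responds
# to it with a long-run coefficient of 0.8 plus a stationary error.
income = 10 + np.cumsum(rng.normal(size=n))
consumption = 0.8 * income + rng.normal(scale=0.4, size=n)

# Step 1: estimate the candidate cointegrating relationship by OLS.
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
resid = consumption - X @ beta

# Step 2: a quick stationarity check on the residual - its first-order
# autocorrelation should be well below 1 if the relationship is real.
def ac1(z):
    z = z - z.mean()
    return (z[1:] @ z[:-1]) / (z @ z)

print(f"estimated long-run effect: {beta[1]:.2f}")   # ~0.8
print(f"residual AC(1): {ac1(resid):.2f}  vs  levels AC(1): {ac1(income):.2f}")
```

The estimated coefficient is exactly the "rough bound on how big an effect is" - qualified, of course, by where and when the data came from.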

Wednesday, August 25, 2010

Academic Snobbery?

A few weeks back I wrote about what I thought was academic snobbery - I get a little agitated when TV and radio interview "economists" who turn out to be commercial economists, employed by some company, rather than academic economists.

Over at Worthwhile Canadian Initiative Mike Moffatt has written on something very similar: the fact that non-economists perceive economists as being something they are not, because of these guys who appear in the media and give black and white answers to questions which any serious economist knows have many shades of grey.  Mike says that academic economists thus are not invited for these kinds of interviews because we don't give an exciting, opinionated sound-bite.

Tuesday, August 17, 2010

Lack of Knowledge, or a Conspiracy?

So many papers I read, like this one, make the assertion that when we can't be sure about whether a data series is I(0) or I(1) (so stationary or non-stationary, integrated of order zero or one), it's ok because we can use the Pesaran, Shin and Smith (1999) approach and all is well.

Implicitly, then, according to these guys, the Johansen (1996) approach requires some previous knowledge about the order of integration of the time series.  Is this true?  Of course it's not.
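For anyone who hasn't met the pretesting problem: the worry about "not knowing whether a series is I(0) or I(1)" comes from unit-root pretests like the Dickey-Fuller.  Here's the basic DF regression sketched in Python on invented data (no augmentation lags, so nothing you'd use in anger - just the shape of the thing):

```python
import numpy as np

def df_tstat(y):
    """t-statistic on rho in the Dickey-Fuller regression
    dy_t = alpha + rho * y_{t-1} + e_t.
    Very negative => reject the unit root (series looks I(0))."""
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

rng = np.random.default_rng(2)
walk  = np.cumsum(rng.normal(size=500))   # I(1): should NOT reject
noise = rng.normal(size=500)              # I(0): should reject

# Compare each statistic to roughly -2.86, the 5% DF critical value
# (with a constant, large samples).
print(df_tstat(walk), df_tstat(noise))
```

The low power of tests like this in borderline cases is precisely the selling point claimed for the bounds testing approach - but it doesn't follow that Johansen requires you to have settled the question first.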

Saturday, August 14, 2010

One Data Point, Exactly

I happen to have an old university friend who is a keen Austrian, in the economics sense.  Austrians appear to reject all forms of empirical investigation, and hence are unlikely to attract much warmth from someone of my econometric standpoint.

They emphasise theoretical reasoning based on "self-evident" axioms, but also emphasise how complicated the world is, populated by terribly difficult-to-model human beings, and hence how we cannot possibly hope to test any economic theory.  So they tell us how the world is, but aren't prepared to have their "self-evident" theories tested against real economic data.

They also appear to be generally libertarian in nature, and hence hold the price mechanism in the utmost regard: It can solve all problems.  Even where the price mechanism won't yield the optimal solution, it will still always yield a better solution than government intervention.

So this post on Cafe Hayek is particularly shocking on so many levels.  The author does make the point that one data point is not sufficient, and that's the point I should emphasise on this econometric sceptic blog.  But moreover, why is an Austrian defending a social democracy with an extensive social safety net?

The defence against this assertion, though, is simple: it's one data point, and you can't prove anything with one data point.

Am I an Academic Snob?

When listening to Radio 4 on the BBC driving into work, and when reading newspaper articles, I'm increasingly noticing that when news sources interview economists, it is rarely the academic type they interview, but usually the commercially employed ones, and I wonder why that is, and whether it's always been that way.  An example is in this article about the Bank of England, where Simon Ward is called upon.

I often do get emails at work (university) about opportunities to talk to the press, but they aren't that frequent.  Maybe it's because my university is a provincial, non-London one?  Maybe London-based academic economists do get a lot more opportunities?

Or is it that commercially based economists are a different breed to us academics?  More happy to state their opinion, and happy to take strong positions?  Academic economists generally are more reserved types (I say generally; of course there are plenty of exceptions, like Andy Rose, Steve Levitt, Paul Krugman and Brad DeLong, to name but a few) who will hedge their responses ("on the one hand... but on the other hand...").  The classic one-handed economist is hard to find.

I don't know if I feel offended by this, but one thing I know is that the strong opinions expressed on the economy by more commercially based economists have a greater chance of being plain wrong than the more couched and qualified responses of your academic economist.  Of course, for the Torygraph, an academic economist is also probably far too left-wing to be considered...

What Can Be Learnt From Econometrics?

The title of this blog is, of course, somewhat arrogant.  It suggests the econometric method is superior to whatever economic theory can throw up.  It also suggests that my attempts at doing econometrics are superior to everyone else's.  Of course, I don't subscribe to either viewpoint.

However, I do have a high regard for econometrics.  I'm not a sceptic in the sense that, like Falkenblog and his commenters, I have a low regard for econometrics and in fact regard economic theory more highly.  It's probably also worth emphasising I'm not at all sympathetic to what is somewhat vaguely described as the Austrian School of Economics, whose proponents disregard all empirical analysis as valueless.

I see economic theory and econometrics as being complementary in learning more about the world around us.  Just as I create theories in my head about my everyday existence (my car can drive 500 miles without me refuelling), when I'm confronted with hard data, I'm forced to rethink.  I made 490 miles in the end before coming to a sticky end on the M42.

I think this is the least arrogant position.  It says: I don't know everything, it's very complicated out there.  I'll construct theories about the world, and I'll test them.  If the theories are rejected, I will probe how well the theories have been tested and if the testing passes muster, I'll reformulate the theory.

An attempt to post more regularly...

With the advent of Google Reader, I've become a much more avid reader of blogs.  I like to think it's keeping me up to date with what's relevant and interesting in the world out there, although I do worry it takes up a little too much of my time.

It turns out I also have a number of blogs myself, not least this one.  For some reason I'm not 100% sure about, I've tried to make this one anonymous, whereas I have my teaching blogs, and other previously aborted, more general attempts at blogging, under my own name.

However, there comes a point where I get pretty tired of being a consumer and want to make my point in a better way than some small comment, probably missed, at the bottom of a blog post.

I'll never match the prolific output of some bloggers (Matt Yglesias, Brad DeLong, etc.), but I'd like to think one or two posts a week should suffice.  I'm about to write one of the few for this week now...

Monday, March 29, 2010

Irritating and I Don't Agree

Tim Harford, whom I respect greatly for his ability to write lucidly and clearly on economics, has written about Angrist and Pischke's latest paper on econometrics, building on the supposedly ground-breaking ideas and suggestions of Ed Leamer back in the 1980s.  David Roodman takes up the story here.

As a Christian and an econometrician I can't stand his last comment about Christians in America (why is believing in a God who creates un-scientific?  Many Christians, myself included, are able to bridge this gap David struggles so greatly with: we're discovering the wonder of God's creation using science, and the two answer totally different questions - the how and the why), but let's get past that.

More importantly, the con has long been taken out of econometrics, and it was done before Angrist and Pischke - in fact I'd say they perpetuate the con. I went to a launch for the Angrist-Pischke book, Mostly Harmless Econometrics, at the 2009 RES conference in Surrey.  Pischke said, basically: back in the day (the 1970s) nobody trusted anybody else's econometrics, but now it's different (oh really?).  Now if we have heteroskedasticity (or autocorrelation) in our residuals, we just correct for it using robust standard errors.

Excuse me, but how is that not conning the audience?  We introduce some botched correction which makes a whole stack of assumptions, and is only valid in large samples, and somehow it's all ok?  If I have a gaping wound in my knee, and someone says: Here's a plaster (band aid for the Americans), would you be happy?  That's the equivalent.

Heteroskedasticity invalidates the usual standard errors and makes estimates imprecise; autocorrelation does the same, and biases the estimates outright once lagged dependent variables enter the model.  Both are symptoms of a greater malaise - model misspecification.
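To make the plaster concrete, here's a simulated sketch (Python, invented data and numbers) of what the White correction actually does: it leaves the coefficient estimates exactly as they were and merely recomputes their claimed precision - the misspecification itself goes untreated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(1, 10, size=n)
# Error variance grows sharply with x: blatant heteroskedasticity.
y = 2.0 + 0.5 * x + rng.normal(scale=0.1 * x**2)

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Classical OLS standard errors: assume one constant error variance.
s2 = resid @ resid / (n - 2)
se_classical = np.sqrt(np.diag(s2 * XtX_inv))

# White (HC0) "sandwich" standard errors: the large-sample plaster.
meat = X.T @ (X * (resid**2)[:, None])
se_robust = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))

# The coefficients are identical either way; only the claimed
# precision changes.  Nothing about the model itself has been fixed.
print(beta[1], se_classical[1], se_robust[1])
```

The wound in the knee, in other words: the slope estimate and the misspecified model are exactly as they were before; we've only patched up the standard errors.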

So how is it thus ok to paper over the cracks and not address the more serious underlying problems?

That, for me, is the con in econometrics, and it is being continually perpetuated, especially in the econometric sentiments of Angrist and Pischke.

Monday, March 15, 2010

Huh? Why Bayesian is better I don't understand

So it seems Ziliak and McCloskey are getting a lot of attention at the moment, as they've published a new book.  Tim Harford blogged about them, and now there's an article in Science News about them.

I suppose I should really be pleased that scepticism about statistical testing is getting a more widespread audience.  There's little doubt a lot of dubious stuff is done, especially in economics by people wedded to their theories.

But there's something very off-putting about the virulence of Ziliak and McCloskey.  They are no better than the academics they criticise for taking some argument or rule and pushing it to extremes.  If you have a moment, I'll leave you to their responses to critiques from two of the finest minds in doing statistics properly, Aris Spanos and Kevin Hoover (Google link).  They also don't refrain from taking a pop at Clive Granger because he dares not fully agree with them.  If anyone disagrees with them, it's the other guys who are misunderstood; it could never be that Ziliak or McCloskey need to learn anything new - they've already cornered the entire field, and made sense of all the mistakes the rest of us have ever made.

But anyhow, the Science News article puzzles me.  It spends its whole length talking about how statistics are being manipulated and how this reduces their credibility, then finally says: the way forward is Bayesian!  Why is that puzzling?  Because Bayesians also use priors.  Priors are our prior hypotheses about the thing we're investigating.  We warp the statistical results by our prior belief about what something should look like.
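To be concrete about the worry, here's the simplest conjugate case, sketched in Python with invented numbers: the posterior mean for a normal mean is a precision-weighted average of the prior mean and the sample mean, so the prior quite literally drags the estimate towards what you believed beforehand.

```python
import numpy as np

# Conjugate normal-normal model with known data variance, for simplicity.
sigma2 = 4.0
prior_mean, prior_var = 0.0, 1.0        # prior belief: "effects are small"

data = np.array([3.1, 2.4, 3.8, 2.9, 3.3])   # invented sample
n, xbar = len(data), data.mean()

# Posterior precision = prior precision + data precision;
# posterior mean = precision-weighted average of the two means.
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)

print(f"sample mean {xbar:.2f} -> posterior mean {post_mean:.2f}")
# The estimate has been pulled from the data towards the prior mean of 0.
```

As the sample grows the data precision dominates and the prior washes out - but in the small samples where these arguments matter, the pull is substantial.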

Isn't that manipulation?????