> There are fancier statistical tests than
> correlation that purport to tell whether
> A->B or B->A.
Daniel Davies wrote:
> Doug is here referring to "Granger causation".
> Basically and shorn of statistical language,
> A "Granger-causes" B if and only if:
>
> 1) A precedes B in time
> 2) A provides information about B which is not
> available any other way.
>
> Clive Granger (who won the Nobel Prize for
> Economics for, among other achievements, proving
> that if a linear combination of N nonstationary
> series is stationary, there must be some flow of
> Granger-causation between them) has remarked on
> occasion that he has often asked people what it
> is that is involved in "proper" causation over
> and above "mere" Granger-causation and never got
> a satisfactory response. I'm not saying I agree
> with him, just that the guy who invented the
> Granger causality test puts somewhat more
> philosophical oomph behind it than the majority
> of users.
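For anyone who'd rather poke at the test than argue about it, the test Daniel describes is easy to try. Below is a minimal sketch in Python using statsmodels; the two series are made-up stand-ins for A and B, and the column convention (the second column is tested as the cause of the first) is how I remember the library working, so check its documentation before leaning on the output:

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)

    # Toy data: a drives b with a one-period delay, plus noise.
    n = 500
    a = rng.normal(size=n)
    b = np.empty(n)
    b[0] = rng.normal()
    for t in range(1, n):
        b[t] = 0.6 * a[t - 1] + rng.normal()

    # statsmodels convention (as I recall): test whether the SECOND column
    # Granger-causes the FIRST, up to the given maximum lag.
    data = np.column_stack([b, a])        # does a Granger-cause b?
    results = grangercausalitytests(data, maxlag=2)

    # Each lag maps to (test statistics, fitted models); the F-test on the
    # sum of squared residuals is the classic form of the test.
    for lag, (stats, _) in results.items():
        fstat, pvalue, _, _ = stats["ssr_ftest"]
        print(f"lag {lag}: F = {fstat:.2f}, p = {pvalue:.4f}")

A small p-value here gets read as "a Granger-causes b" in exactly the two-condition sense above: lagged values of a improve the forecast of b beyond what b's own past already provides.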
Granger spoke at Trinity University here in San Antonio, TX, last year. Before going to see him, I looked up his Nobel Lecture, where he said the following about the above:
An earlier concept that I was concerned with was that of causality. As a postdoctoral student in Princeton in 1959–1960, working with Professors John Tukey and Oskar Morgenstern, I was involved with studying something called the “cross-spectrum,” which I will not attempt to explain. Essentially one has a pair of inter-related time series and one would like to know if there are a pair of simple relations, first from the variable X explaining Y and then from the variable Y explaining X. I was having difficulty seeing how to approach this question when I met Dennis Gabor who later won the Nobel Prize in Physics in 1971. He told me to read a paper by the eminent mathematician Norbert Wiener which contained a definition that I might want to consider. It was essentially this definition, somewhat refined and rounded out, that I discussed, together with proposed tests in the mid 1960’s. The statement about causality has just two components:
1. The cause occurs before the effect; and
2. The cause contains information about the effect that is unique, and is in no other variable.
A consequence of these statements is that the causal variable can help forecast the effect variable after other data has first been used. Unfortunately, many users concentrated on this forecasting implication rather than on the original definition.
At that time, I had little idea that so many people had very fixed ideas about causation, but they did agree that my definition was not “true causation” in their eyes, it was only “Granger causation.” I would ask for a definition of true causation, but no one would reply. However, my definition was pragmatic and any applied researcher with two or more time series could apply it, so I got plenty of citations. Of course, many ridiculous papers appeared.
When the idea of cointegration was developed, over a decade later, it became clear immediately that if a pair of series was cointegrated then at least one of them must cause the other. There seems to be no special reason why these two quite different concepts should be related; it is just the way that the mathematics turned out.
<end>
<http://nobelprize.org/economics/laureates/2003/granger-lecture.pdf>
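The "forecasting implication" in the lecture is concrete enough to write out by hand. Here is a rough sketch, again with made-up series, that fits Y on its own lags, then on its own lags plus a lag of X, and compares the two residual sums of squares with an F statistic; as far as I understand it, this is essentially what the packaged test in my earlier snippet is doing:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Made-up series in which x leads y by one period.
    n = 500
    x = rng.normal(size=n)
    y = np.empty(n)
    y[0] = rng.normal()
    for t in range(1, n):
        y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

    p = 1                                 # one lag of each series
    Y = y[p:]
    ones = np.ones(n - p)
    lag_y = y[:-1]
    lag_x = x[:-1]

    # Restricted model: forecast y from its own past only.
    Xr = np.column_stack([ones, lag_y])
    rss_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)

    # Unrestricted model: add lagged x. If x carries unique information
    # about y, it should cut the forecast errors by more than chance allows.
    Xu = np.column_stack([ones, lag_y, lag_x])
    rss_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)

    q = 1                                 # number of restrictions (lags of x dropped)
    df = len(Y) - Xu.shape[1]
    F = ((rss_r - rss_u) / q) / (rss_u / df)
    pvalue = stats.f.sf(F, q, df)
    print(f"F = {F:.2f}, p = {pvalue:.4f}")

Note Granger's own caution, though: the forecasting comparison is a consequence of the definition, not the definition itself. The definition is about the cause carrying unique information, which a regression like this can only approximate.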
I'd love to hear what Justin has to say on this, with his background in philosophy. This reading seems to run directly counter to David Hume's skepticism of inductive reasoning. Hume, as I understand him, read cause and effect as distinct occurrences (due to the limits of our ability to know), with the cause alone providing NO information about the effect. According to Hume, we make the association out of habit, not because of any *observable* necessity. "Granger causality," however, explicitly claims that the cause contains information about the effect.
If I'm way off-base here, any correction would be welcome.
-- Shane