On Wed, 30 Jul 2003, Brian Siano wrote:
> Luke thinks much more of Spearman's g than I do. My athletic-ability
> analogy was designed to illustrate the fallacy of g, actually. Here's a
> bit of history for y'all. When they first started creating IQ tests,
> they found that people didn't test very consistently: people might be
> great with verbal skills and be terrible with math, or vice-versa. A
> person might answer two questions designed to measure the same problem
> with widely varying results. So, Charles Spearman developed the
> statistical technique of factor analysis to reduce all of these
> "factors" (i.e., performance on each question) down to a single, common
> number, which he termed "g" and hypothesized to be the single,
> biologically-based factor that determines intelligence. The _technique_
> is brilliant, and it was a major advance in statistics... but in this
> application, what it does is reduce even unrelated factors into g.
>
> This is where I have to argue with Luke, who felt I should have used
> sports tasks which are closely related. The point, Luke, was that a
> sports-g can be derived from tasks that are _not even remotely_
> related... and that's what IQ and Spearman's g do.
Hey, let's be fair to Spearman and factor analysis here. If the variables included in a factor-analysis model have low correlations with one another, the loadings on the common factor ("g" here) will be low and the "unique" variance will be high. On IQ tests like the Stanford-Binet, the various subscales load heavily on the first factor (g). Factor analysis is not just a statistical parlor trick that combines unrelated variables and makes them all look like indicators of the same thing.
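In case anyone wants to see that property for themselves, here is a quick, purely illustrative sketch in Python (assuming numpy and scikit-learn are available; none of this comes from Brian's post) that fits a one-factor model to simulated "subscales." When the variables share a common factor, the loadings come out high and the unique variances low; when the variables are independent, the loadings collapse toward zero.

# Illustrative sketch only: simulated data, one-factor model via scikit-learn.
# Assumes numpy and scikit-learn; variable names are made up for the example.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 5000

# Case 1: four "subscales" driven by one common factor plus noise.
g = rng.normal(size=n)
related = np.column_stack([0.8 * g + 0.6 * rng.normal(size=n) for _ in range(4)])

# Case 2: four completely independent "subscales".
unrelated = rng.normal(size=(n, 4))

for name, X in [("related", related), ("unrelated", unrelated)]:
    # Standardize so the loadings are on a correlation-like scale.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    fa = FactorAnalysis(n_components=1).fit(X)
    loadings = fa.components_.ravel()
    print(name)
    print("  loadings on the single factor:", np.round(loadings, 2))
    print("  unique variance per variable: ", np.round(fa.noise_variance_, 2))

With this setup it should print loadings around 0.8 (and unique variances well below 1) for the related set, and loadings near 0 with unique variances near 1 for the unrelated set, which is the point about low correlations yielding low loadings rather than a spurious "g".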
That said, I think it's clear that IQ is a proxy for academic achievement and capability, not general intelligence. People often demonstrate practical intelligence in everyday life, and this capacity for practical intelligence has little or no relationship to IQ or SAT scores (see Robert Sternberg for the research on this).
Miles