[lbo-talk] Elite Institutions/SATs

Brian Siano siano at mail.med.upenn.edu
Wed Jul 30 11:53:39 PDT 2003


andie nachgeborenen wrote:


>Oh, just great. You defend the SAT, which measures
>literally _nothing_, because it is crude approximation
>to IQ tests, which measure the mythical "G," utterly
>discredited by all but the true believers (except in
>its original role of identifying or ruling out people
>who might need special help), in the forlorn hope that
>other intellectual capacities that we cannot also
>measure are "correlated" with each other, though how
>we know this if we cannot measure them is anybody's
>guess. Ferchrissake, Luke, you should be able to see
>through this series of fallacies and errors. It's
>embarrassing to see you humiliate yourself in this way.
>
I know I'm reaching my three-message quota with this, but I have to defend Luke a little here. It's one thing to say that the SAT doesn't measure what people _claim_ it measures. One can say that it fails as a measure of future success, or predictive performance in college, or even as a measure of the hypothetical trait of IQ (or Spearman's "g"). But it doesn't measure "literally nothing."

The usefulness of the SAT depends on context. For example, if a college has to sift through thousands of applicants, then having a standardized scale by which the applicants are pre-measured can really streamline the evaluation process. You can say, "okay, we require an SAT score of at least 1000," and you'd be able to cut a lot of applications out of the process very swiftly, and focus on the people who are more likely to do well in your college. (I'm not saying this is fair or right or just. It's sort of like a credit rating.)

I also have to argue with Andie's comment about "the forlorn hope that other intellectual capacities that we cannot also measure are 'correlated' with each other, though how we know this if we cannot measure them is anybody's guess." It's not that forlorn. We're trying to understand what people's brains do. And one way to begin some kind of understanding is to develop some detailed observations about what they do. And we see how these observations relate to one another-- if people do X, do they tend to do Y or Z, and why? From this, we can hypothesize about more detailed workings, check those theories against the facts, and continue. It's not "forlorn." It's science.

Also, there are brain disorders that can be recognized and diagnosed by standardized tests. Certain correlations can indicate speech aphasias, dyslexia, some forms of schizophrenia, and other disorders. And as I mentioned before, one of the ways in which we can measure the health impacts of many pollutants is through their effects on test scores. In other words, there are good uses for these tests. But these tests do not measure an innate, biologically based trait called "intelligence."

Luke's claim that the verbal scores on the SATs are a good proxy for IQ is accurate, technically speaking. That's because the SATs are, in terms of structure, IQ tests-- and such tests are _designed to correlate well_ with other IQ tests. This is partly due to good ol' ideological inertia (established theory is treated as a benchmark, even when it's fallacious), and partly due to utility. A major IQ test could consist of a thousand questions or so. But if you have several thousand people to test, then you'll want a smaller test to use-- and its results have to correlate well with the major test to be worth using.

Luke thinks much more of Spearman's g than I do. My athletic-ability analogy was designed to illustrate the fallacy of g, actually. Here's a bit of history for y'all. When they first started creating IQ tests, they found that people didn't test very consistently: people might be great with verbal skills and terrible with math, or vice-versa. A person might answer two questions designed to measure the same thing with widely varying results. So, Charles Spearman developed the statistical technique of factor analysis to reduce all of these "factors" (i.e., performance on each question) down to a single, common number, which he termed "g," hypothesizing that it was the single, biologically based factor that determines intelligence. The _technique_ is brilliant, and it was a major advance in statistics... but in this application, what it does is reduce even unrelated factors into g.
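The mechanics of that single-factor extraction are easy to see in a toy simulation. This is only an illustrative sketch, not an actual IQ battery: the four-test setup, the sample size, and the noise levels are all my own assumptions. It fakes test scores that share one latent ability, then pulls out the dominant factor the way a principal-factor analysis of the correlation matrix would:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 people taking 4 tests that each partly reflect
# one shared latent ability (the hypothesized "g") plus noise.
# These numbers are invented for illustration.
n = 500
latent = rng.normal(size=n)
scores = np.column_stack(
    [latent + rng.normal(scale=1.0, size=n) for _ in range(4)]
)

# Single-factor extraction: take the leading eigenvector of the
# correlation matrix; its eigenvalue's share of the total is the
# variance "explained" by the common factor.
corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)  # eigenvalues in ascending order
g_share = eigvals[-1] / eigvals.sum()

print(f"share of variance on the first factor: {g_share:.2f}")
```

The catch, as the paragraph above notes, is that the arithmetic runs just as mechanically on any batch of positively correlated measures: the procedure always produces a first factor, whether or not the tests share variance for a single biological reason.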

This is where I have to argue with Luke, who felt I should have used sports tasks which are closely related. The point, Luke, was that a sports-g can be derived from tasks that are _not even remotely_ related... and that's what IQ and Spearman's g do.


