On Fri, 21 Nov 2003, Carrol Cox wrote:
> As we have them now, computers never make mistakes, or if they do make
> mistakes it is because their user fed them garbage (either in the form
> of the instructions from the programmer or data from the user). But if
> computers begin to think for themselves, would they not also begin to
> make mistakes, to disagree wildly with each other?
>
> Would that not in fact be the proper test to apply? Computers are
> thinking if and only if they are making mistakes and disagreeing with
> each other????
>
> Carrol
>
This is getting at the real problem here: how can we distinguish "human" mistakes and disagreements from "computer" mistakes? Like computers, humans often make mistakes because they are "fed" the wrong data or execute a plan/program incorrectly. Like humans, two computers with different "backgrounds" will "disagree" about the solution to a problem (e.g., the calculator in Win95 vs. the open source calculators in Linux; there's a notorious rounding bug in the Win95 calc).
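(To make that concrete, here's a minimal Python sketch, not either calculator's actual code, of how two mathematically equivalent computations can "disagree" purely through floating-point rounding:

# Sum 0.1 ten times in two different ways. Both should equal 1.0,
# but binary floating point can't represent 0.1 exactly, so the
# results "disagree".

total = 0.0
for _ in range(10):
    total += 0.1          # repeated addition accumulates rounding error

print(total)              # 0.9999999999999999
print(10 * 0.1)           # 1.0
print(total == 10 * 0.1)  # False: the two "calculators" disagree

Neither computation is "wrong" by its own lights; each just follows a different procedure, which is all a machine's "background" amounts to.)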
I know Justin doesn't want to get into the philosophical heavy lifting here, but I think Wittgenstein's relevant: we assume that terms such as "disagree" refer to some basic psychological state, rather than analyzing how the word "works" as part of language games in everyday social life.
Could computers "disagree"? Of course: if we include computer output disparities in our use of the word "disagree". If we insist that there is some important ontological distinction between human disagreement and computer disagreement, we're just saying that we don't ordinarily apply the word "disagree" to machines. Is this because our language use is mapping an important ontological distinction, or is it arbitrary? I agree with Wittgenstein on this: what does it matter? (What is defined as real is real in its consequences, so the ontological status of the referent is irrelevant: cf. the practical reality of the Catholic Church, whether or not God really exists.)
If everyone uses the term "thinking" to apply to humans and not machines, then machines will never be able to "think".
In short: this whole debate is predicated on a misconception of how psychological terms like "thinking", "intelligence", and "disagreement" are actually used. The language leads us astray here.
Miles