[lbo-talk] AI

Brian Siano siano at mail.med.upenn.edu
Fri Nov 21 11:20:18 PST 2003


Carrol Cox wrote:


> As we have them now, computers never make mistakes, or if they do make
> mistakes it is because their user fed them garbage (either in the form
> of the instructions from the programmer or data from the user). But if
> computers begin to think for themselves, would they not also begin to
> make mistakes, to disagree wildly with each other?
"This mission is too important for me to allow you to jeopardize it."


> Would that not in fact be the proper test to apply? Computers are
> thinking if and only if they are making mistakes and disagreeing with
> each other????
This reminds me of an important distinction in animal intelligence, namely, the one between dogs and cats. Cat owners are fond of saying that their cats are more intelligent than dogs, but by and large, this isn't true. Dogs are _far_ more intelligent. They have a wider repertoire of behaviors, a wider range of emotions, and can be trained to perform much more complex tasks than cats can.

This is where cat owners remark "That's because cats are too smart to want to be trained," as though it's freshly-minted wit. But the same claim could be made about amoebas. The point is made with a simple question: how many seeing-eye cats are there in the world?

Basically, cats have a repertoire that works well enough for them. Dogs have a bigger repertoire, which means there are more things they can do... which means they have more ability to _fuck up_. (Humans have even greater capabilities for fucking up.) So, a greater capacity to disagree isn't a bad yardstick for intelligence.


