[lbo-talk] AI

Curtiss Leung curtiss_leung at ibi.com
Wed Nov 19 11:29:30 PST 2003


Bang! I wish I could have said it so well.

Searle also omits that computers and the person in the room are not just following rules: they must also maintain *state*. If the man in the room is going to pass the Turing test in a language he doesn't understand, he must keep track of the symbols already processed. That may not seem like a devastating criticism, but given that Searle insists the core of the Chinese Room argument is that minds have semantic content or "meaning", I think he has to explain what the difference is between the state maintained by the person in the room/software and these meanings.
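To put the point concretely, here is a toy sketch in Python (the symbols and rule table are invented for the example): even a pure lookup-table responder has to thread the history of the exchange through each step, and that threaded history is the state at issue.

    # Toy sketch: even a pure rule-follower must carry state between turns.
    # The symbols and rule table are made up for illustration.
    RULES = {
        ("GREETING", None): "NI HAO",         # first symbol, no history yet
        ("QUESTION", "GREETING"): "HEN HAO",  # reply depends on what came before
        ("QUESTION", None): "SHENME?",        # same symbol, empty history
    }

    def respond(symbol, last_symbol):
        # The lookup keys on the incoming symbol *and* the previous one;
        # that second component is the state the man in the room must keep.
        return RULES.get((symbol, last_symbol), "SHENME?")

    last = None
    for incoming in ["GREETING", "QUESTION"]:
        print(incoming, "->", respond(incoming, last))
        last = incoming  # the state being maintained

The same "QUESTION" symbol draws a different reply depending on the stored history; without that state the room could not keep up even this two-turn exchange.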

Curtiss


>
> Searle's Chinese Room analogy seems to rest on a
> fallacious argument. Searle says that the Room is
> not "conscious," because all it's doing is following
> rules. He doesn't mention an important point: it's
> following rules which _we are aware of_. We know the
> rules by which the Chinese Room behaves. But we do not
> know the rules by which an organic brain behaves. This
> enables Searle to imply that the as-yet-unknown rules by
> which organic brains function _are_ consciousness, but the
> known rules are _not_ consciousness.


