[lbo-talk] AI

Brian Siano siano at mail.med.upenn.edu
Wed Nov 19 10:45:37 PST 2003


andie nachgeborenen wrote:


> Or is your worry more the sort of thing that John
> Searle is getting at in his examples about the
> "Chinese Room," that no matter how complicated and
> intricate a system of symbolic operations you create
> -- Searle's is a big room that crunches Chinese
> ideograms, put in some, out come some that are
> appropriate -- it passes the Turing Test -- Searle says
> it won't be _conscious_, it won't be "there," it will
> lack that glow, whatever that is. Here there's no
> answer, there are only intuitions. Mine is that the
> Chinese room can think. Searle's, not.
>
This raises an obvious question: if you're interacting with something, and it _seems_ to be conscious, and _seems_ to respond to you in a manner that implies consciousness, would there be any problem in saying that it _is_ conscious?

Searle's Chinese Room analogy seems to rest on a fallacious argument. Searle says that the Room is not "conscious" because all it's doing is following rules. But he omits an important point: it's following rules that _we are aware of_. We know the rules by which the Chinese Room behaves; we do not know the rules by which an organic brain behaves. This allows Searle to imply that the as-yet-unknown rules by which organic brains function _are_ consciousness, while the known rules are _not_.
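To make that asymmetry concrete, here's a minimal sketch in Python of the kind of rule-following Searle describes: a lookup table mapping incoming ideograms to "appropriate" replies. The table, names, and symbols are all invented for illustration; the only point is that every rule is explicit and inspectable.

# A toy "Chinese Room": every rule is written down and known to us.
# The symbols and rules are invented for illustration only.

RULEBOOK = {
    "你好":     "你好，你好吗？",   # greeting -> greeting plus a question
    "你好吗？": "我很好，谢谢。",   # "how are you?" -> "fine, thanks"
    "谢谢":     "不客气。",         # "thanks" -> "you're welcome"
}

def chinese_room(ideograms: str) -> str:
    # Pure rule-following: no understanding anywhere, just a table lookup.
    return RULEBOOK.get(ideograms, "请再说一遍。")  # fallback: "please say that again"

# Put some in, out come some that are appropriate:
print(chinese_room("你好"))      # 你好，你好吗？
print(chinese_room("谢谢"))      # 不客气。

We can read off every entry in RULEBOOK, which is what lets Searle say "it's just rules." Nobody can yet write out the corresponding table, or algorithm, for an organic brain, and that gap is what lets the unknown rules pass for consciousness while the known ones don't.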


