A bigger problem is that consciousness and intelligence are both poorly defined. We don't even know what these terms mean for humans, so trying to work out whether robots or computers fit them is ridiculous.
Furthermore, a computer which developed consciousness, or intelligence, might be so alien to our sensibilities that we'd never realise it. Our consciousness is derived from our experiences, so given the very different experiences a computer would have, it is unlikely that its consciousness would be much like ours.
Another category mistake is to assume that much of AI work is trying to create an "intelligent" machine. It isn't: it's trying to create an unintelligent machine that can do things which humans do as a byproduct of intelligence. Those machines won't develop consciousness, except perhaps in a very limited way.
Connectionism, however, which essentially mimics the functioning of the brain, is trying to create "intelligent" machines. As an area it is limited by hardware, rather than software.
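To make the connectionist idea concrete for the non-AI people on the list: a network is just layers of simple units, each taking a weighted sum of its inputs and squashing it through a nonlinearity, with the weights adjusted by error feedback. A toy sketch in Python (the two-layer network, the sizes, and the XOR task are purely illustrative, not anybody's actual system):

import numpy as np

rng = np.random.default_rng(0)

# XOR: the classic task a single linear unit cannot solve,
# but a small network of units can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def add_bias(a):
    # Append a constant-1 column so each unit gets a bias weight.
    return np.hstack([a, np.ones((a.shape[0], 1))])

# 2 inputs (+bias) -> 4 hidden units (+bias) -> 1 output.
W1 = rng.normal(scale=1.0, size=(3, 4))
W2 = rng.normal(scale=1.0, size=(5, 1))

for step in range(20000):
    # Forward pass: weighted sums squashed through a nonlinearity.
    h = sigmoid(add_bias(X) @ W1)
    out = sigmoid(add_bias(h) @ W2)

    # Backward pass: plain gradient descent on squared error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2[:-1].T) * h * (1 - h)
    W2 -= 0.5 * add_bias(h).T @ grad_out
    W1 -= 0.5 * add_bias(X).T @ grad_h

print(out.round(2))  # should converge toward [[0], [1], [1], [0]]

Scale that up from a handful of weights to the billions you'd need for anything brain-like, and the hardware limit is obvious: every pass over the data is an enormous pile of multiplications, which is why the bottleneck is hardware rather than software.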
-----Original Message-----
From: lbo-talk-admin at lbo-talk.org [mailto:lbo-talk-admin at lbo-talk.org] On Behalf Of Chris Doss
Sent: Thursday, November 20, 2003 1:03 PM
To: lbo-talk at lbo-talk.org
Subject: RE: [lbo-talk] AI
Searle's analogy is ridiculous - akin to saying that we're not intelligent because neurons don't know what they're doing. Whatever else intelligence is, it's definitely holistic, which is why the reductionist arguments are so ludicrous (and apply to humans as much as they would to a computer). ---
Category mistake: Intelligence and consciousness are not the same thing.