[lbo-talk] AI

Brian Siano siano at mail.med.upenn.edu
Fri Nov 21 11:14:38 PST 2003


Chris Doss wrote:


>
> Furthermore, a computer which developed consciousness, or intelligence,
> might be so alien to our sensibilities that we'd never realise it. Our
> consciousness is derived from our experiences, so given the differing
> experiences of a computer, it is unlikely that it would be much like us.

This is actually an extremely good point. Consider that our minds have developed through thousands of years of human evolution. That's a lot of give-and-take with our surrounding environment, and that development was shaped by the way in which our bodies evolved. In other words, if our bodies were radically different, our brains would not be what they are now. (If our bodies were horses' bodies, our brains would probably be like horses' brains, too.)

A computer sufficiently complex to match the abilities of the human brain would not exist as a human being does. It's possible that a computer could be _designed_ to behave as a human being does. Its means of evaluating the world would be shaped, in part, by its perceptions of the world: for example, if we gave it a single visual input, it might develop extremely complex behaviors to extract more information from that input (say, moving it from side to side to acquire depth perception, or performing extra work on the single image to interpret depth).
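The side-to-side trick imagined above is essentially motion parallax: shifting a single camera sideways turns one input into a stereo pair, and a nearby point shifts more between the two images than a distant one. A minimal sketch of the geometry (the function name and the numbers are illustrative, not from the post):

```python
# Hypothetical sketch: depth from motion parallax with one moving camera.
# Moving the camera sideways by a baseline b gives two views; a point's
# pixel shift (disparity) between them yields its depth via the standard
# pinhole relation: depth = focal_length * baseline / disparity.

def depth_from_parallax(focal_length_px, baseline_m, disparity_px):
    """Depth (metres) of a point from its pixel shift between two views."""
    if disparity_px <= 0:
        raise ValueError("point must shift between views to estimate depth")
    return focal_length_px * baseline_m / disparity_px

# With a 500 px focal length and a 0.1 m sideways shift, a point that
# moves 50 px between views is much nearer than one that moves 5 px.
near = depth_from_parallax(500, 0.1, 50)   # 1.0 m away
far = depth_from_parallax(500, 0.1, 5)     # 10.0 m away
```

The point is only that depth is recoverable from a single sensor plus motion; a system built this way would "see" depth through behavior rather than through binocular anatomy.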

This does lead to some interesting ideas. An artificial intelligence might actually be as foreign to us as that of a whale or dolphin.


