[lbo-talk] RE: AI

Matt lbo2 at beyondzero.net
Fri Nov 21 11:10:09 PST 2003


On Fri, Nov 21, 2003 at 09:17:07AM -0800, joanna bujes wrote:


> If a computer can do anything without a humanly-programmed set of
> instructions, we can start talking. However, a computer without a program
> is like a car without gasoline. It just sits there and does nothing.
> Another way of getting at this is to say that machine "Intelligence" starts
> at the point where a computer can come up with something that lies outside
> the permutations of its instruction set. Problem is, it can't. Some humans,
> on the other hand, can overcome their conditioning.

These analogies are only useful as arguments insofar as they are accurate, and I'm not convinced that gasoline is to cars as programs are to computers, or as conditioning is to humans.

The faith in human uniqueness when it comes to sentience bothers me, because it seems to be premised on some non-physical, non-identifiable component of humanity that can only exist in humans. Religious fundamentalists certainly agree with that position, and they call it the soul. Most of your points seem consistent with that, even without using the word "soul". Whether the "soul" is the thing that gets one into an afterlife or the thing that allows humans to come up with something outside the permutations of our "instruction sets," it is an entity neither proven nor posited by any scientific theory.

The human mind, being a part of the Universe, is bounded by the laws of science, both known and unknown. That makes it an extremely complicated machine, one we do not fully understand, but a machine nonetheless. The fact that we do not currently understand how it operates does not mean we will never be able to create an accurate replica, or something completely different that would meet any criteria of sentience.

It is safe to say that the chunks of silicon on your desk are not going to magically become sentient by adding more memory and a faster processor, but that isn't the type of environment in which "strong AI" is going to exist. To say that a computer - meaning a machine specifically created for solving problems - can never be sentient, however, is a much bolder assertion. Human intelligence is the result of random aberrations in organic systems under unforgiving environments over the course of millennia. Why couldn't a dedicated effort (even one using a faster, less random process similar to evolution) produce something as good as or better than the product of that slow, random process? My hunch is that AI is less likely to look like Data on Star Trek or C-3PO in Star Wars and more likely to be "gooey", like the Replicants in Blade Runner or the android Bishop in Alien[s]. So the problem isn't just one of computer science, but of biochemistry.

Lastly, sentience, or AI, is not some binary state - either something is or it isn't. Dolphins display quite a bit of intelligence, as do other hominids. Some of them even seem to exhibit a degree of consciousness. I've had pets that display more empathy than US politicians.

Matt

--
PGP RSA Key ID: 0x1F6A4471        aim: beyondzero123
PGP DH/DSS Key ID: 0xAFF35DF2     icq: 120941588
http://blogdayafternoon.com       yahoo msg: beyondzero123

Respect my body 'cos that's where you came from.

-DJ Rap
