[lbo-talk] What's at stake?

Dwayne Monroe idoru345 at yahoo.com
Wed Nov 19 08:30:41 PST 2003


Luke Weiger posted:

http://www.informationweek.com/story/IWK20010905S0004

from which -

"In contrast with our intellect, computers double their performance every 18 months," warned the genius physicist in a recent interview with the German newsmagazine Focus. "So the danger is real that they could develop intelligence and take over the world."

Charles Brown asked:

I know Stephen Hawking is more intelligent than I am, but can we discuss this? Isn't there some qualitative difference between artificial and "real" intelligence still?

===============

Efforts to produce artificial intelligence systems which match (let alone exceed) cognitive behaviors humans take for granted have failed completely.

For a long time, AI enthusiasts like Prof. Marvin Minsky of MIT explained persistent failures away as the result of hardware deficiencies that would be solved as performance improved. Of course, computer performance at the hardware level has since increased by orders of magnitude, and yet machine cognition remains elusive.

AI has gone through various iterations of a consistent enthusiasm/defeat cycle. At the core of each failure is a bad (or incomplete) model of human thought which AI researchers tried to mimic in silicon.

Years ago, *machine learning* was the craze. It was assumed that software could *learn* and, therefore, achieve a state of sentience through virtual explorations of virtual environments. The core idea was that children learn via trial and error (I touched the hot stove, was burned, and learned the meaning of hot directly), and so software could, if given the proper instruction set for self-correcting exploration, attain a state of knowledge in a similar way.
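To make the trial-and-error idea concrete, here's a toy sketch of my own (not anything from an actual system of that era; every name in it is made up): an agent that "learns" to avoid the hot stove by nudging a payoff estimate toward the reward it observes.

```python
import random

# Hypothetical illustration of trial-and-error "machine learning":
# the agent repeatedly picks an action at random and adjusts its
# value estimate for that action toward the reward received.

def learn(trials=1000, alpha=0.1, seed=0):
    random.seed(seed)
    value = {"touch_stove": 0.0, "keep_away": 0.0}   # estimated payoff
    reward = {"touch_stove": -1.0, "keep_away": 0.1} # the world's response
    for _ in range(trials):
        action = random.choice(list(value))
        # self-correction: move the estimate a step toward the reward
        value[action] += alpha * (reward[action] - value[action])
    return value

v = learn()
# After enough burns the agent values "touch_stove" far below
# "keep_away" -- self-correction, but plainly not an understanding
# of what "hot" means.
```

The loop does adjust behavior from experience, which is exactly why such demos looked promising; it just as plainly contains no knowledge of stoves, heat, or pain.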

This produced some interesting results but nothing that could honestly be called thinking.

Another, hardware-based, craze was massive parallelism and neural networks. This was based upon the observation that our brain's *processing elements* are interconnected in complex and redundant webs.

The assumption was that a similar wiring scheme for the processing elements of a computer, perhaps running *learning software*, would do the trick and bring about true (or *strong*, as enthusiasts described it) AI. An interesting example of this was a device called the Connection Machine, built in Boston in the 1980s.
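For a sense of what a *processing element* amounts to in software, here's a minimal sketch of my own (a single perceptron learning logical AND; nothing like the Connection Machine's actual hardware, and all names are mine): connection weights adjusted from examples until the unit gives the right answers.

```python
# Hypothetical single "processing element": a perceptron whose
# connection weights are nudged whenever its output is wrong.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # adjust each weight in proportion to its input and the error
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The logical AND function, learned from its four cases:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Wire enough of these together and you get a network that can fit far more complicated functions; the question the enthusiasts never answered is why fitting functions should ever amount to thinking.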

Again, the results were interesting but no thinking could be observed.

...

To sum up, Prof. Hawking is wrong in this case. He has confused improvements in computer processing power with improvements in computer cognition. This is truly a matter of apples and oranges.

Improvements in processing performance mean greater speed in doing the sorts of things computers already excel at: symbol manipulation according to instruction sets.

It is a leap of illogic to assume a coming age of machine cognition from the available evidence.

For anyone interested in a philosophical treatment of the problems with *strong AI*, I recommend the works of Dr. John Searle -

http://www.artsci.wustl.edu/~philos/MindDict/searle.html

DRM

