[lbo-talk] AI

Dwayne Monroe idoru345 at yahoo.com
Wed Nov 19 11:34:08 PST 2003


Having worked with AI-based systems and huge computing clusters for a few years now, I find this faith in strong AI bewildering and disappointing. It is, perhaps, an indication of the deep complexity of the systems we depend upon that many people believe almost any innovation to be possible - as if via magic.

Computers, as symbol manipulators via instruction set execution, are without peer among devices. Someone once called them the *Proteus of machines* and I agree.

But strong AI is a failure and will remain a failure because its underlying assumption - that minds can be built or evolved using canned mimicry of certain (poorly understood) aspects of human cognition - is deeply flawed.

*Learning systems* are cleverly constructed and very useful tools. So are massively parallel neural nets. But these systems do not think now and do not form the foundation of some future thinking apparatus because of fundamental differences between what minds and machines do.
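To make concrete what a *learning system* actually does, here is a toy sketch of my own (not any particular product): a single perceptron that "learns" the logical AND function by adjusting numeric weights. It maps inputs to outputs; at no point is there anything that could be called thinking.

```python
# Toy perceptron: "learns" AND purely by numeric weight adjustment.
# A useful tool, but the mechanism is arithmetic, not cognition.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward the desired output.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Training data for logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Scale this up by many orders of magnitude and you have today's neural nets: vastly more weights, the same kind of blind adjustment.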

Prof. Hawking's error, and the error of all believers in so-called *strong AI*, is to confuse hardware's ever-greater processing potential with a greater potential for cognition. Since brains are wildly complex and capable of *processing* incredible amounts of information in very complex ways, the assumption is that a machine approaching the brain's level of complexity will, sooner or later and with the right instruction set, think.

There are many things wrong with this, starting with the seldom-remarked fact that our hardware far outstrips our software in performance. Or, to put it bluntly, software sucks.

Greater processing power gives computers the ability to manipulate symbols and execute complex instruction sets with greater speed. This does not lead to cognition.
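The point can be illustrated with a Searle-style toy of my own devising (the rulebook contents are invented for the example): a program that "answers" Chinese questions by pure symbol lookup. Faster hardware makes the lookup faster; it never turns the lookup into comprehension.

```python
# Chinese Room in miniature: symbols in, symbols out, via rule lookup.
# The program returns plausible replies without anything that could be
# called understanding of the symbols it shuffles.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "你会思考吗?": "当然会.",      # "Can you think?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # Mechanical lookup; a thousandfold speedup changes nothing
    # about what the operation is.
    return RULEBOOK.get(symbols, "请再说一遍.")  # "Please say that again."
```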

A belief in consciousness in existing machinery is anthropomorphism. Belief that future developments will, somehow, produce strong AI is magical thinking.

There may be hidden potentials in initiatives such as quantum computing but the methods we employ today - and their foreseeable improved successors - are not walking down the road to sentience.

Well phrased and on-point criticisms of Searle's arguments are fine and necessary but do nothing to change the fundamentals: machines do not think.

Searle explains the Chinese Room in an interview -

http://globetrotter.berkeley.edu/people/Searle/searle-con4.html

DRM



