[lbo-talk] AI

Dwayne Monroe idoru345 at yahoo.com
Wed Nov 19 16:11:38 PST 2003


I've been rather naughty today, posting excessively.

Tomorrow, back to the straight and narrow, as it were.

This will be my last contribution to this thread.

...

Curtiss Leung wrote:

Strong AI as I understand it means this: all mental processes can be represented as algorithms/idealized Turing machines. That's not the same as "mimicry of certain aspects of human cognition."

======

I believe Searle coined the term *strong AI*. He was referring to what people like Marvin Minsky had been saying loudly and clearly for quite some time: one day soon, you and I will be able to sit down and have a conversation with an artificial mind like *2001*'s HAL 9000, a thinking being made of silicon or some successor material. The algorithm definition is a retreat position, born of the failure of that effort. It is important to remember that the critiques of AI by people like Searle were reactions to the grandiosity of AI's most prominent proponents.

.....

Brian Siano wrote:

Again, I'd disagree. There has been phenomenal improvement in computer software over the past thirty years. Consider how many software tasks were brand-new, or extremely rare, as recently as ten years ago--digital editing software, OCR, voice-recognition, textual and image analysis, medical imaging, and much, much more. Part of this is due to the greater power of hardware, of course, but it's due just as much to the ever-expanding understanding of software design.

==========

Brian, it appears that we must, as the old saying goes, agree to disagree on this point. I work intimately with complex systems (mostly Linux and Solaris based) every day at the systems engineering level. My experience with the behavior of hardware, as opposed to software, leads me to harbor deep skepticism about claims of tremendous software improvement.

As you said, hardware is developed as instruction sets and can be described as *frozen code*. Lord knows we've seen our share of math coprocessor, FDIV, and assorted other errors with processors.

So they are far from perfect: they carry the programming errors of the engineers who designed them, errors which, as you said, cannot be debugged on running devices.
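(An aside for the curious: the canonical demonstration of the FDIV flaw was a single division whose residue should be zero. A rough sketch in C, from memory; the constants are the widely circulated test pair, and on a flawed 1994 Pentium the residue famously came back as 256 instead of 0:

    #include <stdio.h>

    int main(void)
    {
        /* The widely circulated Pentium FDIV test pair.  In exact
           arithmetic, x - (x / y) * y is zero.  'volatile' keeps the
           compiler from folding the division at compile time, so the
           division actually runs on the FPU being tested. */
        volatile double x = 4195835.0;
        volatile double y = 3145727.0;
        double residue = x - (x / y) * y;

        /* A correct FPU prints roughly 0.000000; the flawed
           Pentium famously printed 256. */
        printf("residue: %f\n", residue);
        return 0;
    }

The point being: that error shipped etched in silicon, and no patch could touch the chips already in the field.)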

Even so, the code written to run on these machines is, I believe, even less perfect and suffers from age-old problems (many of them arising, it seems, from the limits of human organizational techniques) perhaps best described in Frederick Brooks's classic, *The Mythical Man-Month*.

I spend countless hours reverse engineering the poorly behaved code of teams of software engineers. These are not stupid women and men, quite the contrary. But the development process, new techniques and languages notwithstanding, is fraught with problems that keep the code well short of the hardware's potential.

I agree with you that gaming code comes closest (among shrink-wrapped software) to stretching the limits of hardware. Even so, performance analysis of this sharply written software reveals glaring errors.

Yes, rapid debugging is possible, but as anyone who installs Microsoft service packs or errata patches will tell you, the effort to fix bugs often introduces new bugs in a cascade effect.

I strongly recommend you review Jaron Lanier's essay (link re-posted below), which covers these issues quite thoroughly from the point of view of a geek with serious-ass bona fides.

Also...

While it's true, as you say, that AI boosters such as MIT's Minsky have stated that our evolving understanding of human cognition will lead to successful machine intelligence, the foundation has always been that this new knowledge would be actionable on ever more powerful, ever more sophisticated hardware.

So, in that sense, it can be said that their hopes lie entirely upon endless hardware improvement.

One Half of a Manifesto link -

http://www.wired.com/wired/archive/8.12/lanier.html

DRM
