[lbo-talk] AI

Brian Siano siano at mail.med.upenn.edu
Wed Nov 19 12:11:27 PST 2003


Dwayne Monroe wrote:


>Having worked with AI based systems and huge computing
>clusters for a few years now, I find this faith in
>strong AI bewildering and disappointing. It is,
>perhaps, an indication of the deep complexity of the
>systems we depend upon that many people believe almost
>any innovation to be possible - as if via magic.
>
>
I wouldn't call it "faith," actually. It's more like a recognition that there really is no compelling reason to believe that strong AI is _not_ possible, and it's very likely that we'll develop it, eventually.


>But strong AI is a failure and will remain a failure
>because its underlying assumption - that minds can be
>built or evolved using canned mimicry of certain
>(poorly understood) aspects of human cognition - is
>deeply flawed.
>
>
>*Learning systems* are cleverly constructed and very
>useful tools. So are massively parallel neural nets.
>But these systems do not think now and do not form the
>foundation of some future thinking apparatus because
>of fundamental differences between what minds and
>machines do.
>
>
I couldn't agree less, because your comment rests on one big assumption. If you're going to say that there are "fundamental differences" between minds and machines, then this presumes a knowledge of what those differences _are_. Our current knowledge of minds is so limited, and so tentative, that one shouldn't even _presume_ to know what those "fundamental differences" are.

And it should be said that, as we attempt to model the human mind, and as we find that our models fall short, we use this knowledge to revise our understanding. In other words, one of the projects of AI is to help us understand our own minds.

Frankly, statements such as yours strike me as based on little more than "faith."


>Prof. Hawking's error, and the error of all believers
>in so-called *strong AI*, is confusing hardware's ever
>greater processing potential with a greater potential
>for cognition. Since brains are wildly complex and
>capable of *processing* incredible amounts of
>information in very complex ways, the assumption is
>that a machine approaching the brain's level of
>complexity will, sooner or later and with the right
>instruction set, think.
>
>
>There are many things wrong with this, starting with
>the seldom-commented-on fact that our hardware far
>outstrips our software in performance. Or, to put it
>bluntly, software sucks.
>
>
You're basing this claim on a distinction of terminology that applies to machines. Could you please explain how the distinction between "hardware" and "software" applies to humans?


>Greater processing power gives computers the ability
>to manipulate symbols and execute complex instruction
>sets with greater speed. This does not lead to
>cognition.
>
>
It _has_ not led to cognition. Until we understand cognition, we cannot rule out the possibility that it will.


>A belief in consciousness in existing machinery is
>anthropomorphism. Belief that future developments
>will, somehow, produce strong AI is magical thinking.
>
>
In _existing_ technology, certainly. But I fail to see how the belief that strong AI will eventually be developed is "magical thinking." If I said that human beings will eventually travel to Mars, that would be a reasonable claim. It's not currently technically feasible, of course, and there are any number of political issues that prevent it from happening. And personally, I can think of better things to do than send a manned mission to Mars. But it's not an unreasonable claim to make, and there's no incontrovertible reason against it happening. The same would apply to the claim that mankind will perish in a global nuclear war.


>There may be hidden potentials in initiatives such as
>quantum computing, but the methods we employ today -
>and their foreseeable improved successors - are not
>walking down the road to sentience.
>
>
So quantum computing is _not_ a method employed today? Regular readers of _Slashdot_ will recall many announcements of strides toward the development of quantum computing. But still, it's nice to have this concession to the possibility of strong AI.


>Well-phrased and on-point criticisms of Searle's
>arguments are fine and necessary but do nothing to
>change the fundamentals: machines do not think.
>
They do not think _yet_.


