[lbo-talk] What's at stake?/AI again

Brian Siano siano at mail.med.upenn.edu
Wed Nov 19 21:24:23 PST 2003


andie nachgeborenen wrote:


>I and some others have alluded to connectionism and
>neural net theory. Many people in AI think that
>non-algorithmic connectionist models will do better in
>capturing human thought than the Good Old Fashioned AI
>models, which are algorithmic. This is speculative, but
>there were some promising results last time I looked,
>which is now ten years ago or so. For more info, check
>out some recent texts on cognitive science. Btw, all
>thinking, algorithmic or not, _can_ be represented as
>the output of a Turing machine, but it turns out that
>this is not a useful or illuminating fact beyond the
>most abstract level of analysis. jks
>
Yes and no. Turing's theoretical machine also illustrated ways in which machine computation could _not_ match human cognition. For example, if a Turing machine's program entered an endless loop of activity -- computing the infinite decimal expansion of an irrational number, say -- it could spend eternity trapped in that task, and Turing showed there is no general procedure for deciding in advance which programs will halt. (Sort of like that old _Star Trek_ episode where Spock sends a malevolent computer into fits by telling it to calculate _pi_.) And as Gödel demonstrated, any consistent formal system rich enough to express arithmetic is incomplete, in that there are statements it can neither prove nor refute.
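
To make the "trapped for eternity" point concrete, here's a rough Python sketch -- my own illustration, not anything from Turing -- using Gibbons' unbounded spigot algorithm for the digits of pi. The generator's loop has no halting condition, because pi's expansion has no end; a machine instructed to produce "all" the digits never stops.

    from itertools import islice

    def pi_digits():
        # Gibbons' unbounded spigot algorithm: yields the decimal
        # digits of pi one at a time. Note the loop has no stopping
        # condition -- asked for every digit, it runs forever.
        q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
        while True:
            if 4*q + r - t < m*t:
                # The next digit is now certain: emit it and rescale.
                q, r, m = 10*q, 10*(r - m*t), (10*(3*q + r))//t - 10*m
                yield (r and m) or m  # yields m; r retained for next round
            else:
                # Absorb one more term of the series into the state.
                q, r, t, k, m, x = (q*k, (2*q + r)*x, t*x, k + 1,
                                    (q*(7*k + 2) + r*x)//(t*x), x + 2)

    # Safe only because the caller bounds the request:
    print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]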

This could be taken as a proof against AI, but I don't think it is. For one thing, it fails to prove that there aren't analogous limits for human beings, i.e., statements that humans cannot resolve either. For another, it doesn't explain how humans get around the traps the example suggests; until we understand _why_ humans won't spend their lives calculating millions of decimal digits, we can't rule out the possibility that this ability can be modelled mathematically.
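
If giving up is the relevant human trick, one crude way it could be modelled is a resource bound on the computation. The sketch below is purely my own illustration, not anything proposed in the post; `budget` is an invented parameter. It wraps the generator above in a step limit, so the machine abandons the unbounded task instead of being trapped by it.

    def run_with_budget(gen, budget):
        # Consume a possibly non-terminating generator, but abandon
        # it after `budget` steps -- a toy stand-in for a human's
        # refusal to spend a lifetime on an unbounded calculation.
        out = []
        for step, value in enumerate(gen):
            if step >= budget:
                break  # "lose interest" rather than loop forever
            out.append(value)
        return out

    # A machine governed by this rule is no longer trapped by pi:
    digits = run_with_budget(pi_digits(), budget=20)
    print(digits)  # the first 20 digits, then it simply stops

Of course, choosing the bound is the hard part; nothing in this sketch says where a human's "budget" comes from.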


