You wrote of Searle:
> Rather Searle attacks AI because he thinks of it as
> exactly the opposite of a brain-based account -- he
> thinks of it as a "brains are irrelevant" account.
OK, but that's a position he *imputes* to his opponents. Some of 'em hold it, I'm sure, but I'm also sure some of 'em don't. (Now, who does and who doesn't I can't say, but aren't software neural networks an attempt to model actual physical systems in the brain?) Software can be and is being used to model all sorts of physical phenomena, e.g., weather. But would anyone who builds climate models say that the weather is an algorithm? Nope. And so would it be fair to impute to those who do climate modeling the view that they are reducing weather to an algorithm? Nope.
Please note: I'm not arguing against his point that brains must be relevant, but disapproving of the way he treats his opponents. Although Chalmers holds the (to me at least) objectionable view that consciousness is a "nonphysical feature of the world" -- see http://www.u.arizona.edu/~chalmers/book/searle-response.html -- it seems that good chunks of Searle's review of Chalmers's book were invective, plain and simple.
> He thinks the fundamental presupposition of AI is
>
> 1. Mind:Brain::Software:Hardware
> 2. Therefore, brains are irrelevant
> because any hardware in which you could instantiate
> the software is equivalent. And naturally, his view
> is the opposite: minds are features of brains and
> only brains.
I agree with him that minds are features of brains, but I think it's an open question whether they might be features of brains *only*.
"Ah, this terrible gibberish. Where will it end?" thus spake Hunter S. Thompson.
Thanks for your thoughtful reply, -- Curtiss