[lbo-talk] AI

Curtiss Leung curtiss_leung at ibi.com
Thu Nov 20 00:00:46 PST 2003


OK, new day, new post on this. I am: bad.

Replies to Dwayne, Ian, and Michael follow. I want to be conciliatory, even if we still disagree. Apologies if anything that follows raises your hackles.

Dwayne:

> So they are far from perfect and include the
> programming errors of the developing engineers which,
> as you said, cannot be debugged on running devices.
>
> Even so, the code written to run on these machines is,
> I believe, even less perfect and suffers from age-old
> problems (many of them arising, it seems, from the
> limits of human organizational techniques) perhaps
> best described in Frederick Brooks' classic, *The
> Mythical Man-Month*.
>
> I spend countless hours reverse engineering the poorly
> behaved code of teams of software engineers. These
> are not stupid women and men, quite the contrary. But
> the development process, new techniques and languages
> notwithstanding, is fraught with problems which
> prevent performance maximization.

I wouldn't deny any of what you write here. Software development and maintenance are exceedingly difficult, and I agree very much with what Brooks wrote in _The Mythical Man-Month_: many projects fail at the design phase, adding programmers to a project that's late will only make it later, etc., etc., etc. (Add me and the project will never make it to production.)

However, there are some things about automata and algorithms that are true independent of any implementation--the stuff you have to learn in analysis of algorithms or automata theory: you can write a sort routine that does better than n^2 on average, but no comparison sort can beat n log(n); there's a finite state automaton for every regular expression; compilers cannot be implemented as FSAs; and so on. Apologies if I botched any of those results, since I never studied analysis of algorithms or automata theory, but I hope we can both agree that these are statements of formal theories that depend not on any implementation but on proof.

Anyway, that's the status of the question for me: not whether a team could implement a mind (a software engineering question) but whether the mind can, *in principle*, be implemented as an algorithm (a question in philosophy of mind and automata theory). And that's pretty much what Searle wrote when he defined "strong AI" in "Minds, Brains, and Programs":

according to strong AI, the computer
is not merely a tool in the study of the
mind; rather, the appropriately programmed
computer really is a mind, in the sense
that computers given the right programs
can be literally said to understand and
have other cognitive states. (stolen from http://www.ptproject.ilstu.edu/STRONGAI.HTM)

Actually, I'm agnostic about whether this is provable. Just as you can't parse anything more complex than a regular expression with a finite state machine, maybe it can be shown that human cognitions are not algorithmic. And even if it were shown to be the case, I wouldn't put any money into a startup that said it would be the first to implement something based on the result--if I had the guts, I'd probably short everything in the sector. But I am interested in what the *consequences* would be if it were the case that minds are algorithmic.
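
To make that automaton point a bit more concrete, here is a rough Python sketch (my own toy illustration, not anything from Searle or Brooks; the function names and the choice of language are just for the example): a two-state finite automaton recognizing the regular language (ab)*, next to a balanced-parentheses checker that needs an unbounded counter--exactly the kind of nesting no finite automaton can track, and the reason parsers need more than an FSA.

# Two-state FSA recognizing the regular language (ab)* -- the kind of
# pattern a finite automaton can handle.
def matches_ab_star(s):
    state = 0                      # state 0: expecting 'a' (accepting state)
    for ch in s:
        if state == 0 and ch == 'a':
            state = 1              # state 1: expecting 'b'
        elif state == 1 and ch == 'b':
            state = 0
        else:
            return False           # no transition defined: reject
    return state == 0

# Balanced parentheses, by contrast, need unbounded memory (a counter or
# stack), which is why no finite automaton recognizes them and why parsers
# are built on pushdown automata rather than FSAs.
def balanced_parens(s):
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:
                return False       # closing paren with nothing open
        else:
            return False           # only parentheses allowed in this toy
    return depth == 0

print(matches_ab_star("ababab"))   # True
print(matches_ab_star("abba"))     # False
print(balanced_parens("(()())"))   # True
print(balanced_parens("(()"))      # False

Nothing deep--just the textbook distinction between regular and context-free languages in a couple dozen lines, independent of who implements it or how well.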

Ian:

> Why do we even need an object ontology? How
> about a neo-Heraclitean event-relationality
> ontology?

Can the event "having fries" be part of the flux? <grin />

Michael:

> this point has been recognized and
> recognizable for much longer than there have
> been so-called AI computer devices,
> which seem to me to be dragging otherwise open
> and productive minds into twisted fits of commodity
> fetishism.

Commodity fetishism? Exactly how does speculation on whether or not minds are algorithms amount to imputing that exchange value is immanent in commodities?

Look, I basically have a crude materialist ontology (sorry, Ian): whatever is, is matter. (This position is, I know, fraught with problems. I shouldn't really be able to talk about processes, should I? But I do. Someone should sue me, if only to prove the reality of the law to me--an entity which, if I were consistent, I would refuse to admit.) That means human beings are just matter, and the mind is just matter (or a material process, so my crude materialism is already in trouble). I also want to uphold human rights. So I have a problem: how would I justify holding these positions to someone else? Especially to someone whose justification of human rights rests upon human beings having some special attribute or special place in the universe? I can at least show my good faith by saying that anything not human that shares certain traits should also be extended human rights.

You've said that this is an obvious conclusion for you, so we agree. But for some reason, trying to establish a connection between materialism and human rights interests me. Maybe if I encountered more folks like you, it would lose its hold on my interest.

Curtiss


