[lbo-talk] re: AI

jeffrey fisher jfisher at igc.org
Wed Nov 19 14:23:41 PST 2003


please forgive me for jumping in in the middle, here, w/o catching up. i'll take my lumps if i earn them.

On Wednesday, November 19, 2003, at 04:57 PM, andie nachgeborenen wrote:


>
> By the way, I don't think that thinking makes for
> rights. It's sensing and, specifically, suffering --
> not just physical suffering.

very peter singer. ;-)


>
> I don't know beans about how computers work, and don't
> know and don't really care if they ever get smart
> enough that we have to worry about whether we should
> give them rights. The point of the analogy -- this is
> what Michael D doesn't see -- is what it tells us
> about _us_. Joanna thinks it tells us that we are
> degraded to mere mechanism.

i've never understood what's "mere" about "machines" . . . seems to me that presuming machines could someday think is no more ridiculous or politically charged than presuming that machines could never come to think. (and yes, i realize this is joanna's position, not justin's)


> I think it tells us that we are
> fully part of the natural world, just a moderately
> intelligent bit of it. That is a view that people have
> been resisting tooth and claw since the dawn of the
> modern age.

two words: donna haraway. it's always about how super-duper-special human beings are. we're just so *different* that we're *better*! neither ("mere") machine, nor ("mere") animal, nor god, we are Human.

we need to get over ourselves as a species. people who would have no truck with such talk when it comes to nations or classes or races traffic in it regularly when it comes to species . . . and life, for that matter. we base our arguments on quasi-scientific notions of our uniqueness, and then what happens to those arguments if/when it turns out we aren't so unique after all? the obvious analogy here is the copernican revolution . . . we were special because the universe revolved around us, but then, oops! it doesn't! so, uh, now what makes us special? and so on . . .

in philip k dick's _do androids dream of electric sheep?_, the defining element of human-ness/humanity, as opposed to the androids, is taken to be the capacity for empathy rather than "consciousness" (not unlike the tin man wanting a heart). ridley scott's film is true to the novel even while turning it inside-out by making it pretty plain that the android "villains" actually feel. the point is not that they do feel, or even that they someday will, but to make us wonder how we would think of ourselves *if they did*.

all this business about turing tests and mere machines is, imo, a way of avoiding this fundamental question.


>
> jks
>
> --- joanna bujes <jbujes at covad.net> wrote:
>> Dwayne writes:
>>
>> "Well phrased and on-point criticisms of Searle's
>> arguments are fine and necessary but do nothing to
>> change the fundamentals: machines do not think."
>>
>> However, the more that human beings are reduced to
>> the level of objects, the more likely they are to
>> imagine that machines could think. You see,
>> experientially, the gap between man and machine is
>> shrinking.
>>
>> Joanna
>>
>


