Sullivan goes all lefty

kelley kwalker2 at gte.net
Sun Jun 11 14:15:26 PDT 2000



>Peter K quoted Andrew Sullivan:
>
>>A specter, to put it bluntly, is haunting America: the specter of
>>dot-communism.


>Hmm, no mention of Richard Barbrook?
>
><http://www.nettime.org/nettime.w3archive/199909/msg00046.html>
>
>Doug

seriously, though, no one believes this crap, do they? i find Peter V's comments interesting, as well as those of others who've piped up here, but i just don't buy it. sure, don't dismiss geeks/coders/hackers as potential allies, but honestly they seem to me to be folks ripe for the kind of analysis marx gave intellectuals in the 18th Brumaire -- a kind of floating class that could go one way or the other. there is something about the culture that, from my admittedly cursory observation, doesn't sound anything like annalee's.

i don't know if anyone saw it, but phil agre just posted a chapter on Red Rock Eater that i found interesting. it's over 60K, though, so i forwarded it to another list in parts and will just put the intro below. if enough people are interested i can send it offlist, or maybe post it on the web after i get his permission. at any rate, this is phil's intro to the chapter:

"Red Rock Eater News Service" <rre at lists.gseis.ucla.edu>

------------------

[Back in 1997 I published a book, "Computation and Human Experience", that maybe five people have read. It's not about the Internet and it's not about politics. It's about what computers are, in a deep sense, and it proceeds through a relentless study of the dynamics of research in one avant-garde subfield of computer science: artificial intelligence (AI). AI is the subject in which I received my PhD, and I regard it simultaneously as a powerful way of looking at the world and as dangerous nonsense. (Neoclassical economics is the same way.)

**(in my never humble opinion, EVERYONE should view their discipline or field of study this way. no shit! )**

I spent several years figuring out how to resolve this tension within myself and in my relations with AI people, and though most of the AI people no doubt think I'm a jerk, at least I can have a civil chat with some of them. The book reflects this tension, and tries to explain in great depth the precise relationship between the dangerous-nonsense aspects of AI and the powerful-way-of-looking-at-the-world aspects. It's a long book, and with its equal recourse to the theories of Derrida and Heidegger and to the notations and narrative conventions of computer science it is guaranteed to offend just about everyone.

The chapter I've enclosed is an early discussion of what computers are. It is organized around the dialectical relationship in computer science between "implementation" -- that is, the physical realization of computers as objects in the physical world that obey the laws of physics -- and "abstraction" -- the ideas and language that are inscribed in the computer, and that need have no particular relation to the laws of physics. For those who don't know how computers work, it provides an introduction to things like wires. (Things like gates have to wait until Chapter 5.) For those who *do* know how computers work, the chapter is supposed to defamiliarize any such knowledge, so that commonplace technical constructs like variables start to seem strange again. Much of the chapter is taken up recounting various controversies from the history of AI that readers will find unfamiliar. But the deep purpose of the chapter is not to explain or settle those controversies, which are not very important in the long run, but to draw more fundamental conclusions about the nature of computation.

In particular, throughout the book I aim to deconstruct that mistaken conception of people and their lives that I call "mentalism": the idea (as, for example, in Descartes) that we have an internal space called "the mind" that is radically different in nature from the outside world, and yet that (precisely because of its radical difference from the outside world) ends up mirroring the outside world in great detail in the form of "knowledge". This is not to say that people don't have minds, don't have knowledge, don't set themselves apart from the world, etc, much less that people don't have brains, aren't intelligent, etc. Rather, it is to say that we cannot understand people's relationships to their surroundings except against the background of a prior deep embedding and immersion that should blow up received philosophical and technical ideas about the mind, knowledge, and much else. I aim to demonstrate all of this by closely following the internal logic of AI research and showing how it deconstructs itself. The method is thus closely related to Derrida's method of deconstruction as applied to philosophical ideas, but it embraces none of the relativism, idealism, nihilism, quietism, and other philosophical solecisms that Derrida is often (and most often wrongly) accused of. Anyway, I hope that you will find this enclosed chapter useful on its own, and that it will provoke you to buy the book, which despite its serious shortage of commercial viability is still very much in print.]

(it will be curious to see if he actually gets derrida and the PROPER meaning of deconstruction right)



More information about the lbo-talk mailing list