Computation and Human Experience (RRE)

Dace edace at flinthills.com
Mon Jun 12 20:03:11 PDT 2000


Thanks, kelley, for passing this on to me. My comments revolve around a few basic assumptions of cognitive science, some of which Agre rejects and some of which he accepts. On the whole, this chapter was extremely educational for me.


>The chapter I've enclosed is an early discussion of what computers
>are. It is organized around the dialectical relationship in computer
>science between "implementation" -- that is, the physical realization
>of computers as objects in the physical world that obey the laws
>of physics -- and "abstraction" -- the ideas and language that are
>inscribed in the computer, and that need have no particular relation
>to the laws of physics.

There's nothing abstract about a computer. The functions it performs are merely *interpreted* by human beings as involving the implementation of abstract operations. There are no ideas or languages inscribed in a computer. What we find instead is silicon, copper, electricity, feedback loops, etc. To believe we've found anything other than this is to invoke a kind of mysticism.

Since computation itself is strictly a function of the mind, there's really no such thing as a computer. What actually exists in the world is a box full of stuff. We imagine it's a computer only because what happens in that box has the same outcome we produce when we compute.


>In particular, throughout the book I aim to deconstruct that mistaken
>conception of people and their lives that I call "mentalism": the idea
>(as, for example, in Descartes) that we have an internal space called
>"the mind" that is radically different in nature from the outside
>world, and yet that (precisely because of its radical difference from
>the outside world) ends up mirroring the outside world in great detail
>in the form of "knowledge".

There are two mistakes we can make about the mind: One, it's different from the brain; and two, it's the same as the brain. Fortunately, each assertion serves as a corrective for the other.

True, the mind is not an "internal space." But that's only because it's "internal" even to space itself. There's no space in the mind, just time. Mind is reducible, not to brain, but to time. For the brain-- like any material object-- the present is nothing more than right now. For the mind, presence also includes the past. I'm not simply referring to memory, but to all mental functions, such as language, idea, identity, emotion, etc. None of these make any sense except in the context of a living past.

The reason the mind/brain is so naturally perceived as two different things is that we can approach it from two different perspectives, like looking at a coin from the head side or the tail side. Because we have bodies and brains, we can perceive it from the point of view of space, in which case we see it as a brain. Because we have minds, we can also perceive it from the point of view of time, in which case we see it as a mind.

There's no abstraction in the brain any more than in computers. Though you can't see or touch an abstraction, you can indeed see and touch your brain. Just take it out of your head, and poke it all you want. Slice off a piece and put it under an electron microscope, and you can see the neurons and the stuff they're made of. There's nothing general or abstract in there. Everything you find will be quite specific and concrete, including all the relations of the parts.

But brains, unlike computers, are merely the spatial aspect of a larger system, a kind of system that we don't know how to construct. In fact, there cannot ever be a machine with a mind, because to have a mind, you have to have built yourself, and if you built yourself, you're not a machine. In other words, there's no self to a machine. Life is not merely existence but self-existence, and the self is strictly a function of time.


>An important example of this principle is found in the first several
>stages of visual processing (known as "early vision"), according to
>computational neurophysiologists such as Marr (1982). According to
>this theory, the visual cortex is organized, at least in part, as a
>set of modules, each of which computes some function from one version
>of the visual image to another. Although the exact roster of these
>computations is still uncertain, a typical proposal is that a certain
>module takes in a slightly blurred version of the retinal image and
>produces a map of where the edges in the image are located. Another
>module takes in these edge maps for both eyes (i.e., a stereo edge
>map) and produces a map of "depth" (i.e., how far away the physical
>edges actually are).

While it's possible that occipital neurons perform the kind of functions that also occur in a "computer," as far as I know this notion has nothing to recommend it over Karl Pribram's "holographic hypothesis." On that view, the occipital neurons, which happen to be arranged much like the strings in a piano, resonate at particular frequencies corresponding to the frequencies of light entering the retina. Rather than constructing an image in the back of the head, the brain tunes us in to the lightwaves in our eyes, and thus to the images they carry. This accords with our unshakable sense that what we see is the actual world around us, not merely a reproduction somewhere in the recesses of the brain.
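For concreteness, the kind of "module" the quoted passage attributes to Marr-style theories can be sketched as a toy computation. This is a hypothetical illustration, not anything from Agre's or Marr's text: the box blur, the horizontal-only gradient, and the threshold value are all my own simplifications.

```python
# Toy sketch of a Marr-style "early vision" module (hypothetical illustration):
# take a slightly blurred intensity grid and produce a map of where edges are.

def blur(image):
    """Box-blur: average each pixel with its horizontal neighbors."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[y][x]]
            if x > 0:
                vals.append(image[y][x - 1])
            if x < w - 1:
                vals.append(image[y][x + 1])
            out[y][x] = sum(vals) / len(vals)
    return out

def edge_map(image, threshold=0.2):
    """Mark pixels where the horizontal intensity gradient is steep."""
    h, w = len(image), len(image[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w - 1):
            if abs(image[y][x + 1] - image[y][x]) > threshold:
                edges[y][x] = 1
    return edges

# A 3x6 "image": dark on the left, bright on the right.
img = [[0.0, 0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(3)]
print(edge_map(blur(img)))  # edges cluster where dark meets bright
```

Of course, nothing in this sketch settles the philosophical question the post raises: whether the silicon version and the neural version are "the same computation" is exactly what's in dispute.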


>Chomsky, similarly, defined his project in linguistics in terms of
>a distinction between "competence" and "performance." Linguistic
>competence is a mathematical abstraction expressing in ideal terms
>the grammar of a given language. Linguistic performance, by contrast,
>concerns the actual situated employment of language by real people.

Linguistic competence is real and therefore cannot be a mathematical abstraction. It is merely describable-- within human imagination-- in terms of a mathematical abstraction. The underlying grammar of a given language (as well as universal grammar) is mentally real. It's real in the same sense that imagination is real, even if the things we imagine are not. The question is how we inherit our grammar. Chomsky takes the standard view that it's encoded in our genes. But if we accept the reality of mind, then mentality itself could account for inheritance. In other words, maybe the mind is not simply the *brain* over time. Maybe it's the whole body-- including all the organs and cells-- over time. Maybe what we call memory is merely the personal aspect of a species-wide phenomenon. It's not through genes but through species-memory that our evolutionary past is made present for each new body and brain.

The idea of genes as a blueprint for the body is now universally dismissed among molecular biologists. I first learned of the growing skepticism back in '88 from my cell biology professor in college. The idea of a "genetic program" or "instruction manual" got scrapped as hopelessly untenable. Now it's thought that the plan for the organism somehow emerges in the course of the "conversation" between genes and proteins in the developing embryo. The British geneticist Enrico Coen, who presents the new view in his book *The Art of Genes*, frankly admits that researchers don't even have a clue as to how the deep interiors of cells give rise to the outward forms of multicellular organisms. As in the case of vision, there's no particular reason to assume that the going theory will ever provide any kind of real answer.

Ted


