The Moral Life of Geeks

Kendall Clark kclark at ntlug.org
Mon Sep 11 12:00:27 PDT 2000


I wrote this for Monkeyfist.com, and I thought someone on this mailing list might be interested in it. It describes a basic moral dilemma that confronts geeks and other technically-minded folks.

Best, Kendall Clark, The Monkeyfist Collective

Title: The Moral Life of Geeks

Abstract: In a society that is increasingly undemocratic and fascist, what moral dilemmas do technically-minded people face, and how might they resolve them?

    Develop the fundamental theory, algorithms, and software for the design and analysis of robust, high-performance, team-based, multi-agent cooperative control systems operating in dynamic, uncertain, adversarial environments.

That sounds like fun, where do I apply? As it turns out, the [1]Office of Naval Research. It seems that most of the really interesting research -- especially in areas of intersection between computer software, biotech, and nanotech -- is funded today either by the Pentagon System or corporations. But what if you are, like most Monkeyfisters, a geek and a leftist? What if you are a person inclined to do technical work but also inclined to refuse to work for evil institutions, that is, institutions that cause undeserved harm?

The basic dilemma lies between, on the one hand, not developing one's innate and learned capacities, which can be a kind of harm to oneself and to others, and, on the other, developing one's capacities by working for evil institutions.

The dilemma has many forms. For example, most Monkeyfisters are or have been involved in developing [2]free software, often [3]because of moral considerations. And yet there is a moral tension: Richard Stallman wrote the [4]GPL in order to give software people a way to share their efforts freely with neighbors. Yet one of the guiding principles of [5]open source software is that licenses cannot [6]discriminate against fields of endeavor. But what about fields of endeavor that are evil? What about writing software, or doing research, that will, directly or indirectly, be used to cause undeserved harm to others?

I owe myself and others a duty to develop my capacities; and one way I've chosen to do that is to be involved in the development of a free software infrastructure. But I also have an obligation to refuse and resist cooperation with evil institutions. Under all standard free software licenses, anyone may use the fruits of my labor -- including evil institutions like the Pentagon, the US Armed Services, defense contractors like [7]Boeing and [8]United Technologies; online porn merchants; biotech corporations like [9]Monsanto; and agents of globalization like the [10]World Trade Organization. So in developing free software it appears that, far from resisting cooperation with evil institutions, I may be directly or indirectly contributing to them.

I've only used free software development as a representative activity; what I've said so far about it applies to many kinds of technical R&D. Why shouldn't I discriminate against evil fields of endeavor? There are three standard responses:

1. Technology is amoral. The first response is that technology is morally neutral, so as long as I don't put my work to evil ends, I'm not morally blameworthy.

2. Redefine the dilemma. The second response says that just because I can write software or do research that may be used by evil institutions doesn't mean I have to. I could be a waiter or a farmer instead.

3. Applied technology v. basic science. The third response distinguishes between basic research and applied technology; in doing so, it claims that, since it increases human knowledge and is only indirectly, if ever, applied, basic research is morally praiseworthy, or at least not prima facie morally blameworthy, even if evil institutions ultimately use it to achieve evil ends.

The first response is flawed. It's simply not the case that all technology is necessarily amoral. Technology, like any other cultural artifact, doesn't just fall from the sky. It's always already embedded in, and inextricable from, social space, which is always already a political space, which, in turn, is always already an ethically-contested space.

I take this lesson from the work of David Noble and Steven Shapin. Technology, with very few exceptions, gets developed in our late Western capitalist era because its development gets funded by governments and corporations, often in partnership. Failing to take that social and political context into account when evaluating technologies, and the morality of one's participation in their development, is simply to fail to take account of all the relevant facts. While some technologies -- for example, [11]computer-supported collaborative work (CSCW) -- can be used equally well for good or evil ends, technology itself is not necessarily amoral.

The second response is coherent, but problematic if you believe, as I do, that persons have a duty, to themselves and to some others, to develop their innate and learned capacities as a necessary condition of human flourishing. The second response is applicable in what we may call limit situations, in which the only choice one has is either developing one's capacities in association with an evil institution or not developing them directly, if at all. What proponents of the second response fail to recognize is that limit situations are rare. In sum, then, the second response is a useful and valid one, but only in some rare situations.

The third response claims, essentially, that whatever the moral status of particular bits of applied technology, or engineering, basic research is at most second-order problematic. While I agree that we shouldn't abandon all basic research, even when it's reasonable to assume that some of it will be used to achieve evil ends, it's unclear whether most technical people actually do basic research, or whether basic research funded by evil institutions should be done at all. The modern research university is obviously of crucial importance, but an ever-increasing majority of research done in universities is funded by the Pentagon and corporations. In short, that basic research is only second-order morally problematic can at best be ameliorative, not dispositive, of the basic dilemma. (And in the particular case of software geeks, most software development is more like applied tech than basic research, i.e., more like the development of, say, the [12]Apache Web server than what Donald Knuth does, and so the third response isn't very helpful to the geeks.)

So how should technical people respond to this dilemma? I suggest three kinds of response, the first two of which are specific to the development of free software, while the third is generally applicable.

First, we need to reinvigorate moral debate about free software (and, by extension, about technology and intellectual property in general) by talking not only in terms of freedom, which Richard Stallman has done well, but also in terms of responsibility, that is, acknowledgment of one's duty to avoid cooperating with institutions that are evil. One way to do that is to talk about an Ethical Public License, at least as a thought-experiment. What might such a license look like? Is it legally possible to write a binding software license that prohibits its use within particular fields of endeavor or by particular types of institution? What kind of moral claims are involved in such a license? How far should one go to prohibit one's work from being used to cause undeserved harm? Could the resulting license still claim to be free software, that is, a tool for extending personal freedom?

Second, and this applies primarily to those of us who are both leftists and geeks, we need to challenge the wholly unreflective libertarianism of free software and Internet culture. Most geeks, I suspect, would not credit the dilemma I've described, if for no other reason than that most geeks are habituated libertarians who don't think about their technical work in terms of social or institutional analysis.

Finally, we need, especially in America, to reassert democratic control over the kinds of institution that fund technology development and basic research, particularly those that are ostensibly democratic: government and universities. In that way we may be able to reassert control over public corporations as well. What good can come of reasserting democratic control? If governments, universities, and corporations were under democratic control, they could be harnessed to pursue ends that contribute to, rather than impede, human flourishing. Under strong, reinvigorated democratic control, the moral status of basic research becomes much clearer, since it becomes reasonable to assume that the applications of that research will be for good, not for harm. Democratic control of these institutions would make limit situations exceedingly rare, since it would tend to promote the pursuit of good ends over evil ones.

Technology has liberative potential, but only if it's controlled by democratic structures and institutions. And given the sorry state of American democracy at present, it's no wonder that geeks, engineers, and scientists of good will daily face difficult moral dilemmas. The solution to those dilemmas, and the key to harnessing technology for the good, is the reassertion of democracy in the face of its slow, ongoing demise.

References

1. http://www.onr.navy.mil/sci_tech/special/muri2001

2. http://www.fsf.org/philosophy/free-sw.html

3. http://www.fsf.org/philosophy/why-free.html

4. http://www.fsf.org/gpl/

5. http://www.opensource.org/

6. http://www.opensource.org/osd-rationale.html#clause6

7. http://www.boeing.com/

8. http://www.utc.com/

9. http://www.monsanto.com/

10. http://www.wto.org/

11. http://usabilityfirst.com/cscw.html

12. http://www.apache.org/

-- Posted on Monkeyfist at http://monkeyfist.com/articles/651


