digital garden of reflections, hopes and fears


philip agre

Agre was a child math prodigy who became a popular blogger and contributor to Wired. He earned his doctorate at MIT in 1989, the same year the World Wide Web was invented.

“Genuinely worrisome developments can seem ‘not so bad’ simply for lacking the overt horrors of Orwell’s dystopia,” Agre wrote.

Nearly 30 years later, Agre’s paper seems eerily prescient, a startling vision of a future that has come to pass in the form of a data industrial complex that knows no borders and few laws. Data collected by disparate ad networks and mobile apps for myriad purposes is being used to sway elections or, in at least one case, to out a gay priest. But Agre didn’t stop there. He foresaw the authoritarian misuse of facial recognition technology, he predicted our inability to resist well-crafted disinformation and he foretold that artificial intelligence would be put to dark uses if not subjected to moral and philosophical inquiry.

Other critics of digital technology related to Philip Agre:

Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.

Marc Rotenberg, who edited a book with Agre in 1998 on technology and privacy, and is now founder and executive director of the Center for AI and Digital Policy.

Charlotte Lee, who studied under Agre as a graduate student at UCLA, and is now a professor of human-centered design and engineering at the University of Washington.

Agre isn’t available. In 2009, he simply dropped off the face of the earth, abandoning his position at UCLA. When friends reported Agre missing, police located him and confirmed that he was OK, but Agre never returned to the public debate. His closest friends declined to further discuss details of his disappearance, citing respect for Agre’s privacy. Some said that, as of a few years ago, he was living somewhere around Los Angeles.

Christine Borgman, a professor of information studies at UCLA

Agre's landmark 1997 paper, "Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI," is still widely considered a classic, said Geoffrey Bowker, professor emeritus of informatics at the University of California, Irvine. Agre noticed that those building artificial intelligence ignored critiques of the technology from outsiders, but he argued that criticism should be part of the process of building AI. “The conclusion is quite brilliant and has taken us as a field many years to understand. One foot planted in the craftwork in design and the other foot planted in a critique,” Bowker said.

As the field of AI has progressed, it has created problems — ranging from discrimination to filter bubbles to the spread of disinformation — and some academics say that is in part because it suffers from the same lack of self-criticism that Agre identified 30 years ago.

In December, Google’s firing of AI research scientist Timnit Gebru, after she wrote a paper on the ethical issues facing Google’s AI efforts, highlighted the continued tension over the ethics of artificial intelligence and the industry’s aversion to criticism.

“It’s such a homogenous field and people in that field don’t see that maybe what they’re doing could be criticized,” said Sofian Audry, a professor of computational media at the University of Quebec in Montreal.

== In a 1994 paper, published a year before the launches of Yahoo, Amazon and eBay, Agre foresaw that computers could facilitate the mass collection of data on everything in society, and that people would overlook the privacy concerns because, rather than “big brother” collecting data to surveil citizens, it would be many different entities collecting the data for many purposes, some good and some problematic.

Agre wrote in the paper that the mass collection of data would change and simplify human behavior to make it easier to quantify. That has happened on a scale few people could have imagined, as social media and other online networks have corralled human interactions into easily quantifiable metrics, such as being friends or not, liking or not, a follower or someone who is followed. And the data generated by those interactions has been used to further shape behavior, by targeting messages meant to manipulate people psychologically.

Agre brought his work into the mainstream with an Internet mailing list called the Red Rock Eater News Service, named after a joke in Bennett Cerf’s Book of Riddles. It’s considered an early example of what would eventually become blogs.

His final project was what friends and colleagues colloquially called “The Bible of the Internet,” a definitive book that would dissect the foundations of the Internet from the ground up. But he never finished it.

Simon Penny, a professor of fine arts at the University of California, Irvine, who has studied Agre’s work extensively.

John Seberger, a postdoctoral fellow in the Department of Informatics at Indiana University, who has studied Agre’s work extensively.

Phil Agre saw the dark side of the Internet 30 years ago

Agre's mailing list, the archive of which is still at

His (and David Chapman's) famous AAAI paper about the "Pengi" system

The "Surveillance and Capture" paper mentioned by the Post seems to capture an important distinction between two modes of privacy invasion; even 20 years later I see attempts to discuss privacy concerns founder on a failure to reckon with this distinction.

David Chapman

How to help someone use a computer:

Rationalizations for bad design, a posting to RISKS digest:

Layering, from a course on Information Systems and Design:

Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI:

Notes and Recommendations (from RRE Digest):

Red Rock Eater Digest, 1994 – 2004: <….

The Network Observer, 1994 – 1996:

He wrote a book, Computation and Human Experience; here are some extracts and a chapter summary:

Phil Agre's homepage at UCLA is still alive and hosts many of his writings:

Phil Agre's Wired articles -

The 1995 piece "While the Left Sleeps," about the Left underestimating the Republicans, is interesting.

Previous HN posts on Agre:

Phil Agre Missing (Nov 26, 2009)

Missing Internet Pioneer Phil Agre Is Found Alive (Feb 1 2010)…

He's mentioned in several HN comments as well, though surprisingly few:

Recommended writings:

Some past threads (not very large) on his writings:

How to help someone use a computer (1996) - - April 2020 (1 comment)

Find Your Voice: Writing for a Webzine (1999) - - Oct 2018 (8 comments)

Your Face Is Not a Bar Code (2003) - - Sept 2018 (29 comments)

Life After Cyberspace (1999) - - April 2015 (2 comments)

How to help someone use a computer - - Aug 2010 (8 comments)

How to help someone use a computer - - Dec 2008 (5 comments)

How to Be a Leader in Your Field - - Sept 2007 (2 comments)

Also related:

Making AI Philosophical Again: On Philip E. Agre’s Legacy (2014) - - Dec 2019 (15 comments)

Missing Internet Pioneer Phil Agre Is Found Alive - - Feb 2010 (5 comments)

Phil Agre Missing - - Nov 2009 (4 comments)

His PhD thesis is The Dynamic Structure of Everyday Life [1]. I found it worth reading. At that time I also read the paper he wrote with David Chapman, Pengi: An Implementation of a Theory of Activity, which was also interesting. [1]



More profoundly, though, Agre wrote in the paper that the mass collection of data would change and simplify human behavior to make it easier to quantify, which has happened on a scale few people could have imagined. As Hannah Arendt wrote in 1968 (!):

From a philosophical viewpoint, the danger inherent in the new reality of mankind seems to be that this unity, based on the technical means of communication and violence, destroys all national traditions and buries the authentic origins of all human existence. This destructive process can even be considered a necessary prerequisite for ultimate understanding between men of all cultures, civilizations, races, and nations. Its result would be a shallowness that would transform man, as we have known him in five thousand years of recorded history, beyond recognition. It would be more than mere superficiality; it would be as though the whole dimension of depth, without which human thought, even on the mere level of technical invention, could not exist, would simply disappear. This leveling down would be much more radical than the leveling to the lowest common denominator; it would ultimately arrive at a denominator of which we have hardly any notion today.

As long as one conceives of truth as separate and distinct from its expression, as something which by itself is uncommunicative and neither communicates itself to reason nor appeals to "existential" experience, it is almost impossible not to believe that this destructive process will inevitably be triggered off by the sheer automatism of technology which made the world one and, in a sense, united mankind. It looks as though the historical pasts of the nations, in their utter diversity and disparity, in their confusing variety and bewildering strangeness for each other, are nothing but obstacles on the road to a horridly shallow unity. This, of course, is a delusion; if the dimension of depth out of which modern science and technology have developed ever were destroyed, the probability is that the new unity of mankind could not even technically survive. Everything then seems to depend upon the possibility of bringing the national pasts, in their original disparateness, into communication with each other as the only way to catch up with the global system of communication which covers the surface of the earth.

– Hannah Arendt, "Men in Dark Times"

=== "Industrial Society and Its Future" came out 25 years ago and predicted and described lots of dark stuff that came true. Why didn't people listen?!

Jacques Ellul had been writing about all this since 1954, and his book 'Le système technicien' covers all the Unabomber's material (and more) quite clearly.

  1. Make individual people accountable for the decisions that 'AI' systems make.

  2. Foster a culture of critique within AI development and deployment.

== Remember the "KILL YOUR TELEVISION" bumper sticker of the late 80s, early 90s? About 1995 or so, I was driving in Seattle, and at a light, the car in front of me had a similarly-styled "KILL YOUR MODEM" sticker.

Perhaps it was an Agre reader.

Joseph Weizenbaum's similar sentiments from 1985: ===

1980s and earlier here:

That includes Paul Baran, co-inventor of packet-switched networking at RAND, Willis Ware, also at RAND, Shoshana Zuboff, Richard Boeth, and others. Agre is conspicuously absent.

The advocacy voices were numerous — Arthur C. Clarke, Stewart Brand, Howard Rheingold, Kevin Kelly (and much of the rest of the Whole Earth / Wired gang). Adam Curtis's work has focused strongly on this, especially on what he sees as the California / West Coast school of techno-utopianism.

David Bowie

Douglas Adams in the 1970s? His idea of the "Babel fish," which allowed clear communication across languages, did not usher in universal peace (as idealists in the 1990s thought the Internet would); instead it resulted in more warfare and devastation, as aliens could now insult each other clearly.

“Business coalitions are already forming to eviscerate the Securities and Exchange Commission and the Food and Drug Administration, which regulate perhaps the country's most morally hazardous industries.”