I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of “algorithmic culture.” By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.
I’ve been amazed by the reception of these posts, not to mention the extent of their circulation. Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”
I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services. When I get lost driving, I’m happy that Google Maps is there to save the day. Facebook has helped me to reconnect with friends who I thought were lost forever. And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon suggest titles I ought to know about.
In other words, I don’t mean to suggest that life would be better off without algorithmic culture. Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.” The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?
It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble.
I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness. Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works. (I count myself in that group.) Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt. I do mean to suggest, however, that someone needs to begin developing strategies by which to critically interpret both the processes and products of algorithmic culture. That’s what I mean, in a very broad sense, by “algorithmic literacies.”
In this I join two friends and colleagues who’ve made related calls. Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.” Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.
Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world. Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness. Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information. In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.
Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online. Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture. What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows. But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system. Just as literary studies teaches students how to read for tone, those of us invested in algorithmic literacies would teach people how to read for evidence of this type of manipulation.
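To make this concrete, here is a toy sketch in Python of what one such “reading strategy” might look like when rendered as code. It flags a burst of product reviews posted within a short window, a crude and purely illustrative signal of planting; the timestamps, window, and threshold below are all invented for the example, and real detection would be far subtler.

```python
from datetime import datetime, timedelta

def burst_suspicion(timestamps, window=timedelta(hours=24), threshold=3):
    """Return True if `threshold` or more reviews fall within one `window`.

    A deliberately naive heuristic: genuine reviews tend to trickle in,
    while planted ones sometimes arrive in a tight cluster.
    """
    ts = sorted(timestamps)
    for i in range(len(ts)):
        # count reviews landing within `window` of the i-th review
        run = sum(1 for t in ts[i:] if t - ts[i] <= window)
        if run >= threshold:
            return True
    return False

# Three five-star reviews inside three hours: suspicious by this measure.
reviews = [datetime(2011, 6, 1, h) for h in (9, 10, 11)]
print(burst_suspicion(reviews))  # True
```

The point of the sketch is not the heuristic itself but the stance it models: treating a stream of reviews as a text that can be read, skeptically, for signs of manipulation.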
Finally, we need to undertake comparative work in an effort to reverse engineer the proprietary algorithms of Google, Facebook, Amazon, et al. One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results presented to him in different national contexts. A search for the word “Jew,” for example, yields very different outcomes on the US version of Google than it does on Germany’s, where anti-Semitic material is banned. The point of the exercise isn’t to show that Google differs from place to place; the company doesn’t hide that fact at all. The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.
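This comparative exercise can itself be sketched computationally. The Python function below measures how much the top results for the same query overlap across two national versions of a search engine; the result lists here are hypothetical placeholders, not actual Google output, and a real study would of course involve many queries and many locales.

```python
def overlap_at_k(results_a, results_b, k=10):
    """Fraction of the top-k results shared by two ranked result lists.

    1.0 means the two versions of the engine agree completely on the
    top k; values near 0 suggest heavy national filtering or re-ranking.
    """
    top_a, top_b = set(results_a[:k]), set(results_b[:k])
    return len(top_a & top_b) / k

# Hypothetical top results for one query from two national domains.
us_results = ["site1.example", "site2.example", "site3.example", "site4.example"]
de_results = ["site2.example", "site5.example", "site3.example", "site6.example"]

print(overlap_at_k(us_results, de_results, k=4))  # 0.5
```

Low overlap does not by itself tell you *why* the rankings diverge, but it tells you where to look, which is precisely the inferential move Vaidhyanathan makes.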
This is only a start. Weigh in, please. Clearly there’s major work left to do.