I’ve blogged off and on over the past 15 months about “algorithmic culture.” The subject first came to my attention when I learned about the Amazon Kindle’s “popular highlights” feature, which aggregates data about the passages Kindle owners have deemed important enough to underline.
Since then I’ve been doing a fair amount of algorithmic culture spotting, mostly in the form of news articles. I’ve tweeted about a few of them. In one case, I learned that in some institutions college roommate selection is now being determined algorithmically — often, by matching up individuals with similar backgrounds and interests. In another, I discovered a pilot program that recommends college courses based on a student’s “planned major, past academic performance, and data on how similar students fared in that class.” Even scholarly trends are now beginning to be mapped algorithmically in an attempt to identify new academic disciplines and hot-spots.
There’s much to be impressed by in these systems, both functionally and technologically. Yet, as Eli Pariser notes in his highly engaging book The Filter Bubble, a major downside is their tendency to push people in the direction of the already known, reducing the possibility for serendipitous encounters and experiences.
When I began writing about “algorithmic culture,” I used the term mainly to describe how the sorting, classifying, hierarchizing, and curating of people, places, objects, and ideas was beginning to be given over to machine-based information processing systems. The work of culture, I argued, was becoming increasingly algorithmic, at least in some domains of life.
As I continue my research on the topic, I see an even broader definition of algorithmic culture starting to emerge. The preceding examples (and many others I’m happy to share) suggest that some of our most basic habits of thought, conduct, and expression — the substance of what Raymond Williams once called “culture as a whole way of life” — are coming to be affected by algorithms, too. It’s not only that cultural work is becoming algorithmic; cultural life is as well.
The growing prevalence of algorithmic culture raises all sorts of questions. What is the determining power of technology? What understandings of people and culture — what “affordances” — do these systems embody? What are the implications of the tendency, at least at present, to encourage people to inhabit experiential and epistemological enclaves?
But there’s an even more fundamental issue at stake here, too: who speaks for culture?
For the last 150 years or so, the answer was fairly clear. The humanities spoke for culture and did so almost exclusively. Culture was both its subject and object. For all practical purposes the humanities “owned” culture, if for no other reason than that the arts, language, and literature were deemed too touchy-feely to fall within the bailiwick of scientific reason.
Today the tide seems to be shifting. As Siva Vaidhyanathan has pointed out in The Googlization of Everything, engineers — mostly computer scientists — today hold extraordinary sway over what does or doesn’t end up on our cultural radar. To put it differently, amid the din of our public conversations about culture, their voices are the ones that increasingly get heard or are perceived as authoritative. But even this statement isn’t entirely accurate, for we almost never hear directly from these individuals. Their voices manifest themselves in fragments of code and interface so subtle and diffuse that the computer seems to speak, and to do so without bias or predilection.
So who needs the humanities — even the so-called “digital humanities” — when your Kindle can tell you what in your reading you ought to be paying attention to?