Tag Archive for cultural authority

The Internet of Words

A piece I just penned, “The Internet of Words,” is now out in The Chronicle of Higher Education. In part, it’s a review of two wonderful new books about social media: Alice E. Marwick’s Status Update: Celebrity, Publicity, and Branding in the Social Media Age, and danah boyd’s It’s Complicated: The Social Lives of Networked Teens. Both books were published within the last year by Yale University Press.

But the piece is also a meditation on words, taking the occasion of both books to think through the semantics of digital culture. It’s inspired by Raymond Williams’s Keywords: A Vocabulary of Culture and Society (1976; 2nd ed., 1983), looking closely at how language shifts accompany, and sometimes precede, technological change. Here’s a snippet:

Changes in the language are as much a part of the story of technology as innovative new products, high-stakes mergers and acquisitions, and charismatic corporate leaders. They bear witness to the emergence of new technological realities, yet they also help facilitate them. Facebook wouldn’t have a billion-plus users absent some compelling features. It also wouldn’t have them without people like me first coming to terms with the new semantics of friendship.

It was great having an opportunity to connect some dots between my scholarly work on algorithmic culture and the keywords approach I’ve been developing via Williams. The piece is also a public-facing statement of how I approach the digital humanities.

Please share—and I hope you like it.


Late Age On the Radio

Just a quick post linking you to my latest radio interview, with WFHB-Bloomington’s Doug Storm.  Doug is one of the hosts of a great program called “Interchange,” and this past Tuesday I was delighted to share with him a broad-ranging conversation about many of the topics I address in The Late Age of Print—the longevity of books, print (and paper) culture, reading practices, taste hierarchies, and more.  Toward the end, the conversation turned to my latest work, on the politics of algorithmic culture.

The program lasts about an hour.  Enjoy!


Algorithms Are Decision Systems

My latest interview on the topic of algorithmic culture is now available on the 40kBooks blog.  It’s an Italian website, although you can find the interview both in the original English and in Italian translation.

The interview provides something like a summary of my latest thinking on algorithmic culture, a good deal of which was born out of the new research that I blogged about here last time.  Here’s an excerpt from the interview:

Culture has long been about argument and reconciliation: argument in the sense that groups of people have ongoing debates, whether explicit or implicit, about their norms of thought, conduct, and expression; and reconciliation in the sense that virtually all societies have some type of mechanism in place – always political – by which to decide whose arguments ultimately will hold sway. You might think of culture as an ongoing conversation that a society has about how its members ought to comport themselves.

Increasingly today, computational technologies are tasked with the work of reconciliation, and algorithms are a principal means to that end. Algorithms are essentially decision systems—sets of procedures that specify how someone or something ought to proceed given a particular set of circumstances. Their job is to consider, or weigh, the significance of all of the arguments or information floating around online (and even offline) and then to determine which among those arguments is the most important or worthy. Another way of putting this would be to say that algorithms aggregate a conversation about culture that, thanks to technologies like the internet, has become ever more diffuse and disaggregated.
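To make the excerpt’s “decision system” idea a bit more concrete, here is a minimal sketch of an algorithm weighing competing “arguments” — scored signals — and determining which one holds sway. Everything in it, from the signal names to the weights, is invented for illustration; real ranking systems are vastly more elaborate and, of course, proprietary.

```python
# A hypothetical sketch of an algorithm as a decision system: each
# candidate "argument" carries weighted signals, and the procedure
# decides which one ultimately holds sway. Signal names and weights
# are invented for illustration only.

def decide(candidates, weights):
    """Return the candidate whose weighted signals score highest."""
    def score(signals):
        return sum(weights.get(name, 0.0) * value
                   for name, value in signals.items())
    return max(candidates, key=lambda c: score(c["signals"]))

articles = [
    {"title": "A", "signals": {"links_in": 120, "click_rate": 0.30}},
    {"title": "B", "signals": {"links_in": 45,  "click_rate": 0.55}},
]
weights = {"links_in": 0.01, "click_rate": 2.0}  # hypothetical weights

print(decide(articles, weights)["title"])  # "A" (scores 1.8 vs. 1.55)
```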

Something I did not address at any length in the interview is the historical backdrop against which I’ve set the new research: the Second World War, particularly the atrocities that precipitated, occurred during, and concluded it.  My hypothesis is that the desire to offload cultural decision-making onto computer algorithms stems significantly, although not exclusively, from a crisis of faith that emerged in and around World War II.  No longer, it seems, could we human beings be trusted to govern ourselves ethically and responsibly, and so some other means needed to be sought to do the job we’re seemingly incapable of doing.

A bunch of readers have asked me if I’ve published any of my work on algorithmic culture in academic journals.  The answer, as yet, is no, mostly because I’m working on developing and refining the ideas here, in dialogue with all of you, before formalizing my position.  (THANK YOU for the ongoing feedback, by the way!)  Having said that, I’m polishing the piece I blogged about last time, “‘An Infernal Culture Machine’: Intellectual Foundations of Algorithmic Culture,” and plan on submitting it to a scholarly journal fairly soon.  You’re welcome to email me directly if you’d like a copy of the working draft.


P.S. If you haven’t already, check out Tarleton Gillespie’s latest post over on Culture Digitally, about his new essay on “The Relevance of Algorithms.”


WordPress

Lest there be any confusion, yes, indeed, you’re reading The Late Age of Print blog, still authored by me, Ted Striphas.  The last time you visited, the site was probably red, white, black, and gray.  Now it’s not.  I imagine you’re wondering what prompted the change.

The short answer is: a hack.  The longer answer is: algorithmic culture.

At some point in the recent past, and unbeknownst to me, The Late Age of Print got hacked.  Since then I’ve been receiving sporadic reports from readers telling me that their safe browsing software was alerting them to a potential issue with the site.  Responsible digital citizen that I am, I ran numerous malware scans using multiple scanning services.  Only one out of twenty-three of those services ever returned a “suspicious” result, and so I figured, with those odds, that the one positive must be an anomaly.  It was the same service that the readers who’d contacted me also happened to be using.

Well, last week, Facebook implemented a new partnership with an internet security company called Websense.  The latter checks links shared on the social networking site for malware and the like.  A friend alerted me that an update I’d posted linking to Late Age came up as “abusive.”  That was enough; I knew something must be wrong.  I contacted my web hosting service and asked them to scan my site.  Sure enough, they found some malicious code hiding in the back-end.

Here’s the good news: as far as my host and I can tell, the code — which, rest assured, I’ve cleaned — had no effect on readers of Late Age or your computers.  (Having said that, it never hurts to run an anti-virus/malware scan.)  It was intended only for Google and other search engines, and its effects were visible only to them.  The screen capture, below, shows how Google was “seeing” Late Age before the cleanup.  Neither you nor I ever saw anything out of the ordinary around here.

Essentially the code grafted invisible links to specious online pharmacies onto the legitimate links appearing in many of my posts.  The point of the attack, when implemented widely enough, is to game the system of search.  The victim sites all look as if they’re pointing to whatever website the hacker is trying to promote. And with thousands of incoming links, that site is almost guaranteed to come out as a top result whenever someone runs a search query for popular pharma terms.
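For the technically curious, here is a rough sketch — standard-library Python only, and much cruder than real cloaking detection — of one way to scan a page’s markup for the pattern described above: links nested inside elements styled so that human visitors never see them.

```python
# A rough sketch of scanning markup for links human visitors can't see:
# anchors nested inside containers hidden via inline CSS. Real cloaking
# is sneakier (external stylesheets, JavaScript, server-side tricks);
# this catches only the crudest case.

from html.parser import HTMLParser

HIDDEN = ("display:none", "visibility:hidden")

class HiddenLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []      # True for each open tag styled as hidden
        self.suspects = []   # hrefs a human reader would never see

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "").replace(" ", "").lower()
        hidden = any(h in style for h in HIDDEN)
        if tag == "a" and (hidden or any(self.stack)):
            self.suspects.append(dict(attrs).get("href"))
        self.stack.append(hidden)  # simplified: assumes well-nested tags

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

page = ('<p>An ordinary post.</p>'
        '<div style="display:none">'
        '<a href="http://pharma.example/pills">cheap meds</a></div>')

finder = HiddenLinkFinder()
finder.feed(page)
print(finder.suspects)  # ['http://pharma.example/pills']
```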

So, in case you were wondering, I haven’t given up writing and teaching for a career hawking drugs to combat male-pattern baldness and E.D.

This experience has been something of an object lesson for me in the seedier side of algorithmic culture.  I’ve been critical of Google, Amazon, Facebook, and other such sites for the opacity of the systems by which they determine the relevance of products, services, knowledge, and associations.  Those criticisms remain, but now I’m beginning to see another layer of the problem.  The hack has shown me just how vulnerable those systems are to manipulation, and how, then, the frameworks of trust, reputation, and relevance that exist online are deeply — maybe even fundamentally — flawed.

In a more philosophical vein, the algorithms about which I’ve blogged over the last several weeks and months attempt to model “the real.”  They leverage crowd wisdom — information coming in the form of feedback — in an attempt to determine what the world thinks or how it feels about x.  The problem is, the digital real doesn’t exist “out there” waiting to be discovered; it is a work in progress, and much like The Matrix, there are those who understand far better than most how to twist, bend, and mold it to suit their own ends.  They’re out in front of the digital real, as it were, and their actions demonstrate how the results we see on Google, Amazon, Facebook, and elsewhere suffer from what Meaghan Morris has called, in another context, “reality lag.”  They’re not the real; they’re an afterimage.

The other, related issue here concerns the fact that, increasingly, we’re placing the job of determining the digital real in the hands of a small group of authorities.  The irony is that the internet has long been understood to be a decentralized network and lauded, then, for its capacity to endure even when parts of it get compromised.  What the hack of my site has underscored for me, however, is the extent to which the internet has become territorialized of late and thus subject to many of the same types of vulnerabilities it was once thought to have thwarted.  Algorithmic culture is the new mass culture.

Moving on, I’d rather not have spent a good chunk of my week cleaning up after another person’s mischief, but at least the attack gave me an excuse to do something I’d wanted to do for a while now: give Late Age a makeover.  For a while I’ve been feeling as if the site looked dated, and so I’m happy to give it a fresher look.  I’m not yet used to it, admittedly, but of course feeling comfortable in a new style of anything takes time.

The other major change I made was to optimize Late Age for viewing on mobile devices.  Now, if you’re visiting using your smart phone or tablet computer, you’ll see the same content but in significantly streamlined form.  I’m not one to believe that the PC is dead — at least, not yet — but for better or for worse it’s clear that mobile is very much at the center of the internet’s future.  In any case, if you’re using a mobile device and want to see the normal Late Age site, there’s a link at the bottom of the screen allowing you to switch back.

I’d be delighted to hear your feedback about the new Late Age of Print.  Drop me a line, and thanks to all of you who wrote in to let me know something was up with the old site.

 


Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends who I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I count myself in that group.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret, critically, both the processes and the products of algorithmic culture.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online. Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system.  Where literary studies teaches students how to read for tone, so, too, would those of us invested in algorithmic literacies begin teaching how to read for evidence of this type of manipulation.

Finally, we need to undertake comparative work in an effort to reverse engineer the proprietary algorithms of Google, Facebook, Amazon, et al.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.
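As a small illustration of the comparative method, here is a sketch that quantifies how far two ranked result lists diverge.  The lists themselves are invented placeholders; actually collecting results across national versions of a search engine (and doing so responsibly) is the hard work left to the researcher.

```python
# A sketch of the comparative method: take the results the "same" query
# returns in two national contexts and measure how far they diverge.
# The result lists below are invented placeholders.

def overlap(results_a, results_b):
    """Jaccard similarity of two result sets: 1.0 = identical, 0.0 = disjoint."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

us_results = ["site1.example", "site2.example", "site3.example"]
de_results = ["site1.example", "site4.example", "site5.example"]

print(f"overlap: {overlap(us_results, de_results):.2f}")  # overlap: 0.20
```

A low overlap score doesn’t by itself reveal the politics at work, but it tells you where to start looking.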

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.


The Conversation of Culture

Last week I was interviewed on probably the best talk radio program about culture and technology, the CBC’s Spark. The interview grew out of my recent series of blog posts on the topic of algorithmic culture.  You can listen to the complete interview, which lasts about fifteen minutes, by following the link on the Spark website.  If you want to cut right to the chase and download an mp3 file of the complete interview, just click here.

The hallmark of a good interviewer is the ability to draw something out of an interviewee that she or he didn’t quite realize was there.  That’s exactly what the host of Spark, Nora Young, did for me.  She posed a question that got me thinking about the process of feedback as it relates to algorithmic culture — something I’ve been faulted on, rightly, in the conversations I’ve been having about my blog posts and scholarly research on the subject.  She asked something to the effect of, “Hasn’t culture always been a black box?”  The implication was: hasn’t the process of determining what’s culturally worthwhile always been mysterious, and if so, then what’s so new about algorithmic culture?

The answer, I believe, has everything to do with the way in which search engine algorithms, product and friend recommendation systems, personalized news feeds, and so forth incorporate our voices into their determinations of what we’ll be exposed to online.

They rely, first of all, on signals, or what you might call latent feedback.  This idea refers to the information about our online activities that’s recorded in the background, as it were, in a manner akin to eavesdropping.  Take Facebook, for example.  Assuming you’re logged in, Facebook registers not only your activities on its own site but also every movement you make across websites with an embedded “like” button.

Then there’s something you might call direct feedback, which refers to the information we voluntarily give up about ourselves and our preferences.  When Amazon.com asks if a product it’s recommended appeals to you, and you click “no,” you’ve explicitly told the company it got that one wrong.
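A toy sketch may help clarify the distinction.  The weights below are invented, and no actual company’s model is implied; the point is simply that latent signals accumulate quietly in the background, while direct feedback registers as an explicit, user-initiated correction.

```python
# An illustrative sketch (invented weights, not any company's model) of
# how two feedback channels might feed one preference score.

preferences = {}  # item -> running score

def latent_signal(item, kind):
    """Background signals: page views, 'like'-button sightings, etc."""
    weights = {"viewed": 0.1, "liked_elsewhere": 0.5}
    preferences[item] = preferences.get(item, 0.0) + weights[kind]

def direct_feedback(item, interested):
    """Explicit feedback: the user tells the system it got one wrong (or right)."""
    preferences[item] = preferences.get(item, 0.0) + (1.0 if interested else -2.0)

latent_signal("gardening book", "viewed")
latent_signal("gardening book", "liked_elsewhere")
direct_feedback("gardening book", interested=False)  # "no, not for me"

print(preferences)  # {'gardening book': -1.4}
```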

So where’s the problem in that?  Isn’t it the case that these systems are inherently democratic, in that they actively seek and incorporate our feedback?  Well, yes…and no.  The issue here has to do with the way in which they model a conversation about the cultural goods that surround us, and indeed about culture more generally.

The work of culture has long happened inside of a black box, to be sure.  For generations it was chiefly the responsibility of a small circle of white guys who made it their business to determine, in Matthew Arnold’s famous words, “the best that has been thought and said.”

Only the black box wasn’t totally opaque.  The arguments and judgments of these individuals were never beyond question.  They debated fiercely among themselves, often quite publicly; people outside of their circles debated them equally fiercely, if not more so.  That’s why, today, we teach Toni Morrison’s work in our English classes in addition to that of William Shakespeare.

The question I raised near the end of the Spark interview is the one I want to raise here: how do you argue with Google?  Or, to take a related example, what does clicking “not interested” on an Amazon product recommendation actually communicate, beyond the vaguest sense of distaste?  There’s no subtlety or justification there.  You just don’t like it.  Period.  End of story.  This isn’t communication as much as the conveyance of decontextualized information, and it reduces culture from a series of arguments to a series of statements.

Then again, that may not be entirely accurate.  There’s still an argument going on where the algorithmic processing of culture is concerned — it just takes place somewhere deep in the bowels of a server farm, where all of our movements and preferences are aggregated and then filtered.  You can’t argue with Google, Amazon, or Facebook, but it’s not because they’re incapable of argument.  It’s because their systems perform the argument for us, algorithmically.  They obviate the need to justify our preferences to one another, and indeed, before one another.

This is a conversation about culture, yes, but minus its moral obligations.


Cultural Informatics

In my previous post I addressed the question, who speaks for culture in an algorithmic age?  My claim was that humanities scholars once held significant sway over what ended up on our cultural radar screens but that, today, their authority is diminishing in importance.  The work of sorting, classifying, hierarchizing, and curating culture now falls increasingly on the shoulders of engineers, whose determinations of what counts as relevant or worthy result from computational processes.  This is what I’ve been calling “algorithmic culture.”

The question I want to address this week is, what assumptions about culture underlie the latter approach?  How, in other words, do engineers — particularly computer scientists — seem to understand and then operationalize the culture part of algorithmic culture?

My starting point is, as is often the case, the work of cultural studies scholar Raymond Williams.  He famously observed in Keywords (1976) that culture is “one of the two or three most complicated words in the English language.”  The term is definitionally capacious, that is to say, a result of centuries of shedding and accreting meanings, as well as the broader rise and fall of its etymological fortunes.  Yet, Williams didn’t mean for this statement to be taken as merely descriptive; there was an ethic implied in it, too.  Tread lightly in approaching culture.  Make good sense of it, but do well not to diminish its complexity.

Those who take an algorithmic approach to culture proceed under the assumption that culture is “expressive.”  More specifically, all the stuff we make, practices we engage in, and experiences we have cast astonishing amounts of information out into the world.  This is what I mean by “cultural informatics,” the title of this post.  Algorithmic culture operates first of all by subsuming culture under the rubric of information — by understanding culture as fundamentally, even intrinsically, informational and then operating on it accordingly.

One of the virtues of the category “information” is its ability to link any number of seemingly disparate phenomena together: the movements of an airplane, the functioning of a genome, the activities of an economy, the strategies in a card game, the changes in the weather, etc.  It is an extraordinarily powerful abstraction, one whose import I have come to appreciate, deeply, over the course of my research.

The issue I have pertains to the epistemological entailments that flow from locating culture within the framework of information.  What do you have to do with — or maybe to — culture once you commit to understanding it informationally?

The answer to this question begins with the “other” of information: entropy, or the measure of a system’s disorder.  The point of cultural informatics is, by and large, to drive out entropy — to bring order to the cultural chaos by ferreting out the signal that exists amid all the noise.  This is basically how Google works when you execute a search.  It’s also how sites like Amazon.com and Netflix recommend products to you.  The presumption here is that there’s a logic or pattern hidden within culture and that, through the application of the right mathematics, you’ll eventually come to find it.
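For readers who want the formal version: “entropy” here is Shannon’s measure of a source’s disorder, and the filtering I’m describing can be read as an attempt to push that measure toward zero.

```latex
% Shannon entropy: the uncertainty (disorder) of a source X, where
% p(x) is the probability of each possible message x.
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
% H(X) is maximal when all messages are equally likely (pure noise)
% and zero when a single message is certain (pure signal). Ranking,
% filtering, and recommending can all be read as pushing H toward zero.
```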

There’s nothing fundamentally wrong with this understanding of culture.  Something like it has kept anthropologists, sociologists, literary critics, and a host of others in business for well over a century.  Indeed there are cultural routines you can point to, whether or not you use computers to find them.  But having said that, it’s worth mentioning that culture consists of more than just logic and pattern.  Intrinsic to culture is, in fact, noise, or the very stuff that gets filtered out of algorithmic culture.

At least, that’s what more recent developments within the discipline of anthropology teach us.  I’m thinking of Renato Rosaldo’s fantastic book Culture and Truth (1989), and in particular of the chapter, “Putting Culture in Motion.”  There Rosaldo argues for a more elastic understanding of culture, one that refuses to see inconsistency or disorder as something needing to be purged.  “We often improvise, learn by doing, and make things up as we go along,” he states.  He puts it even more bluntly later on: “Do our options really come down to the vexed choice between supporting cultural order or yielding to the chaos of brute idiocy?”

The informatics of culture is oddly paradoxical in that it hinges on a conceptualization of culture that is at once more and less powerful.  It is more powerful because of the way culture can be rendered equivalent, informationally speaking, with all of those phenomena (and many more) I mentioned above.  And yet, it is less powerful because of the way the livingness, the inventiveness — what Eli Pariser describes as the “serendipity” — of culture must be shed in the process of creating that equivalence.

What is culture without noise?  What is culture besides noise?  It is a domain of practice and experience diminished in its complexity.  And it is exactly the type of culture Raymond Williams warned us about, for it is one we presume to know but barely know the half of.


Who Speaks for Culture?

I’ve blogged off and on over the past 15 months about “algorithmic culture.”  The subject first came to my attention when I learned about the Amazon Kindle’s “popular highlights” feature, which aggregates data about the passages Kindle owners have deemed important enough to underline.

Since then I’ve been doing a fair amount of algorithmic culture spotting, mostly in the form of news articles.  I’ve tweeted about a few of them.  In one case, I learned that in some institutions college roommate selection is now being determined algorithmically — often, by matching up individuals with similar backgrounds and interests.  In another, I discovered a pilot program that recommends college courses based on a student’s “planned major, past academic performance, and data on how similar students fared in that class.”  Even scholarly trends are now beginning to be mapped algorithmically in an attempt to identify new academic disciplines and hot-spots.
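One plausible mechanic behind “matching up individuals with similar backgrounds and interests” is a similarity measure over trait vectors — cosine similarity being the textbook choice.  The sketch below is my own illustration, not any institution’s system; the traits and numbers are invented.

```python
# One plausible mechanic behind similarity-based matching: cosine
# similarity over trait vectors. Traits and numbers are invented;
# the real systems are proprietary.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# trait vector: [tidiness, night-owl-ness, sports interest, music interest]
students = {
    "avery":  [0.9, 0.2, 0.8, 0.1],
    "blake":  [0.8, 0.3, 0.7, 0.2],
    "carmen": [0.1, 0.9, 0.2, 0.9],
}

def best_match(name):
    others = (n for n in students if n != name)
    return max(others, key=lambda n: cosine(students[name], students[n]))

print(best_match("avery"))  # blake
```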

There’s much to be impressed by in these systems, both functionally and technologically.  Yet, as Eli Pariser notes in his highly engaging book The Filter Bubble, a major downside is their tendency to push people in the direction of the already known, reducing the possibility for serendipitous encounters and experiences.

When I began writing about “algorithmic culture,” I used the term mainly to describe how the sorting, classifying, hierarchizing, and curating of people, places, objects, and ideas was beginning to be given over to machine-based information processing systems.  The work of culture, I argued, was becoming increasingly algorithmic, at least in some domains of life.

As I continue my research on the topic, I see an even broader definition of algorithmic culture starting to emerge.  The preceding examples (and many others I’m happy to share) suggest that some of our most basic habits of thought, conduct, and expression — the substance of what Raymond Williams once called “culture as a whole way of life” — are coming to be affected by algorithms, too.  It’s not only that cultural work is becoming algorithmic; cultural life is as well.

The growing prevalence of algorithmic culture raises all sorts of questions.  What is the determining power of technology?  What understandings of people and culture — what “affordances” — do these systems embody? What are the implications of the tendency, at least at present, to encourage people to inhabit experiential and epistemological enclaves?

But there’s an even more fundamental issue at stake here, too: who speaks for culture?

For the last 150 years or so, the answer was fairly clear.  The humanities spoke for culture and did so almost exclusively.  Culture was both its subject and object.  For all practical purposes the humanities “owned” culture, if for no other reason than that the arts, language, and literature were deemed too touchy-feely to fall within the bailiwick of scientific reason.

Today the tide seems to be shifting.  As Siva Vaidhyanathan has pointed out in The Googlization of Everything, engineers — mostly computer scientists — today hold extraordinary sway over what does or doesn’t end up on our cultural radar.  To put it differently, amid the din of our public conversations about culture, their voices are the ones that increasingly get heard or are perceived as authoritative.  But even this statement isn’t entirely accurate, for we almost never hear directly from these individuals.  Their voices manifest themselves in fragments of code and interface so subtle and diffuse that the computer seems to speak, and to do so without bias or predilection.

So who needs the humanities — even the so-called “digital humanities” — when your Kindle can tell you what in your reading you ought to be paying attention to?


Algorithmic Culture, Redux

Back in June I blogged here about “Algorithmic Culture,” or the sorting, classifying, and hierarchizing of people, places, objects, and ideas using computational processes.  (Think Google search, Amazon’s product recommendations, who gets featured in your Facebook news feed, etc.)  Well, for the past several months I’ve been developing an essay on the theme, and it’s finally done.  I’ll be debuting it at Vanderbilt University’s “American Cultures in the Digital Age” conference on Friday, March 18th, which I’m keynoting along with Kelly Joyce (College of William & Mary), Cara Finnegan (University of Illinois), and Eszter Hargittai (Northwestern University).  Needless to say, I’m thrilled to be joining such distinguished company at what promises to be, well, an event.

The piece I posted originally on algorithmic culture generated a surprising — and exciting — amount of response.  In fact, nine months later, it’s still receiving pingbacks, I’m pretty sure as a result of its having found its way onto one or more college syllabuses.  So between that and the good results I’m seeing in the essay, I’m seriously considering developing the material on algorithmic culture into my next book.  Originally after Late Age I’d planned on focusing on contemporary religious publishing, but increasingly I feel as if that will have to wait.

Drop by the conference if you’re in or around the Nashville area on Friday, March 18th.  I’m kicking things off starting at 9:30 a.m.  And for those of you who can’t make it there, here’s the title slide from the PowerPoint presentation, along with a little taste of the talk’s conclusion:

This latter definition—culture as authoritative principle—is, I believe, the definition that’s chiefly operative in and around algorithmic culture. Today, however, it isn’t culture per se that is a “principle of authority” but increasingly the algorithms to which is delegated the task of driving out entropy, or in Matthew Arnold’s language, “anarchy.”  You might even say that culture is fast becoming—in domains ranging from retail to rental, search to social networking, and well beyond—the positive remainder of specific information processing tasks, especially as they relate to the informatics of crowds.  And in this sense algorithms have significantly taken on what, at least since Arnold, has been one of culture’s chief responsibilities, namely, the task of “reassembling the social,” as Bruno Latour puts it—here, though, by discovering statistical correlations that would appear to unite an otherwise disparate and dispersed crowd of people.

I expect to post a complete draft of the piece on “Algorithmic Culture” to my project site once I’ve tightened it up a bit. Hopefully it will generate even more comments, questions, and provocations than the blog post that inspired the work initially.

In the meantime, I’d welcome any feedback you may have about the short excerpt appearing above, or on the talk if you’re going to be in Nashville this week.


How to Have Culture in an Algorithmic Age

The subtitle of this post ought to be “apparently,” since I have growing doubts about substituting digital surveillance systems and complex computer programs for the considered — humane — work of culture.

Case in point: about six weeks ago, Galley Cat reported on a new Kindle-related initiative called “popular highlights,” which Amazon.com had just rolled out onto the web for beta testing.  In a nutshell, Amazon is now going public with information about which Kindle books are the most popular, as well as which passages within them have been the most consistently highlighted by readers.

How does Amazon determine this?  Using the 3G connection built into your Kindle, the company automatically uploads your highlights, bookmarks, marginal notes, and more to its server array, or computing cloud.  Amazon calls this service “back up,” but the phrase is something of a misnomer.  Sure, there’s goodwill on Amazon’s part in helping to ensure that your Kindle data never gets deleted or corrupted.  By the same token, it’s becoming abundantly clear that “back up” exists as much for the sake of your convenience as it does for Amazon itself, which mines all of your Kindle-related data.  The Galley Cat story only confirms this.

This isn’t really news.  For months I’ve been writing here and elsewhere about the back up/surveillance issue, and I even have an academic journal article appearing on the topic this fall.  Now, don’t get me wrong — this is an important issue.  But the focus on surveillance has obscured another pressing matter: the way in which Amazon, and indeed other tech companies, are altering the idea of culture through these types of services.  Hence my concern with what I’m calling, following Alex Galloway, “algorithmic culture.”

In the old paradigm of culture — you might call it “elite culture,” although I find the term “elite” to be so overused these days as to be almost meaningless — a small group of well-trained, trusted authorities determined not only what was worth reading, but also what within a given reading selection were the most important aspects to focus on.  The basic principle is similar with algorithmic culture, which is also concerned with sorting, classifying, and hierarchizing cultural artifacts.

Here’s the twist, however, which is apparent from the “About” page on the Amazon Popular Highlights site:

We combine the highlights of all Kindle customers and identify the passages with the most highlights. The resulting Popular Highlights help readers to focus on passages that are meaningful to the greatest number of people.

Using its computing cloud, Amazon aggregates all of the information it’s gathered from its customers’ Kindles to produce a statistical determination of what’s culturally relevant. In other words, significance and meaningfulness are decided by a massive — and massively distributed — group of readers, whose responses to texts are measured, quantified, and processed by Amazon.
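Computationally speaking, what Amazon describes amounts to little more than a frequency count over (book, passage) pairs.  Here’s a toy version — emphatically not Amazon’s actual pipeline — that makes visible what gets kept (counts) and what gets dropped (every reader’s reason for highlighting):

```python
# A toy sketch of "combine the highlights ... and identify the passages
# with the most highlights": a frequency count over (book, passage)
# pairs. Not Amazon's actual pipeline.

from collections import Counter

# each record: (book, passage) -- context like *why* a reader
# highlighted it is never captured
uploads = [
    ("Book A", "passage 1"), ("Book A", "passage 1"),
    ("Book A", "passage 2"), ("Book B", "passage 9"),
    ("Book A", "passage 1"),
]

popular = Counter(uploads).most_common(2)
print(popular)  # [(('Book A', 'passage 1'), 3), (('Book A', 'passage 2'), 1)]
```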

I realize that in raising doubts about this type of cultural work, I’m opening myself to charges of elitism.  So be it.  Anytime you question what used to be called “the popular,” and what is now increasingly referred to as “the crowd,” you open yourself to those types of accusations. Honestly, though, I’m not out to impugn the crowd.

To my mind, the whole elites-versus-crowd debate is little more than a red herring, one that distracts from a much deeper issue: Amazon’s algorithm and the mysterious ways in which it renders culture.

When people read, on a Kindle or elsewhere, there’s context.  For example, I may highlight a passage because I find it to be provocative or insightful.  By the same token, I may find it to be objectionable, or boring, or grammatically troublesome, or confusing, or…you get the point.  When Amazon uploads your passages and begins aggregating them with those of other readers, this sense of context is lost.  What this means is that algorithmic culture, in its obsession with metrics and quantification, exists at least one level of abstraction beyond the acts of reading that first produced the data.

I’m not against the crowd, and let me add that I’m not even against this type of cultural work per se.  I don’t fear the machine.  What I do fear, though, is the black box of algorithmic culture.  We have virtually no idea of how Amazon’s Popular Highlights algorithm works, let alone who made it.  All that information is proprietary, and given Amazon’s penchant for secrecy, the company is unlikely to open up about it anytime soon.

In the old cultural paradigm, you could question authorities about their reasons for selecting particular cultural artifacts as worthy, while dismissing or neglecting others.  Not so with algorithmic culture, which wraps abstraction inside of secrecy and sells it back to you as “the people have spoken.”
