The Conversation of Culture

Last week I was interviewed on probably the best talk radio program about culture and technology, the CBC’s Spark. The interview grew out of my recent series of blog posts on the topic of algorithmic culture.  You can listen to the complete interview, which lasts about fifteen minutes, by following the link on the Spark website.  If you want to cut right to the chase and download an mp3 file of the complete interview, just click here.

The hallmark of a good interviewer is the ability to draw something out of an interviewee that she or he didn’t quite realize was there.  That’s exactly what the host of Spark, Nora Young, did for me.  She posed a question that got me thinking about the process of feedback as it relates to algorithmic culture — something I’ve been faulted on, rightly, in the conversations I’ve been having about my blog posts and scholarly research on the subject.  She asked something to the effect of, “Hasn’t culture always been a black box?”  The implication was: hasn’t the process of determining what’s culturally worthwhile always been mysterious, and if so, then what’s so new about algorithmic culture?

The answer, I believe, has everything to do with the way in which search engine algorithms, product and friend recommendation systems, personalized news feeds, and so forth incorporate our voices into their determinations of what we’ll be exposed to online.

They rely, first of all, on signals, or what you might call latent feedback.  This idea refers to the information about our online activities that’s recorded in the background, as it were, in a manner akin to eavesdropping.  Take Facebook, for example.  Assuming you’re logged in, Facebook registers not only your activities on its own site but also every movement you make across websites with an embedded “like” button.

Then there’s something you might call direct feedback, which refers to the information we voluntarily give up about ourselves and our preferences.  When Amazon.com asks if a product it’s recommended appeals to you, and you click “no,” you’ve explicitly told the company it got that one wrong.
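
To make the distinction concrete, here is a minimal, purely illustrative sketch of how a recommender might fold both kinds of feedback into a single preference score. It is not Facebook’s or Amazon’s actual system; the signal names and weights are all invented for the example.

```python
from collections import defaultdict

# Invented weights: background ("latent") signals count for less than
# explicit ("direct") ones.
LATENT_WEIGHTS = {"page_view": 0.1, "like_button_seen": 0.05, "click": 0.3}
DIRECT_WEIGHTS = {"thumbs_up": 1.0, "thumbs_down": -1.0, "not_interested": -0.8}

def preference_scores(events):
    """Aggregate (item, signal) events for one user into per-item scores."""
    scores = defaultdict(float)
    for item, signal in events:
        scores[item] += LATENT_WEIGHTS.get(signal, 0.0) + DIRECT_WEIGHTS.get(signal, 0.0)
    return {item: round(score, 2) for item, score in scores.items()}

# Latent feedback: recorded in the background as the user browses.
# Direct feedback: the user explicitly rejects a recommendation.
events = [
    ("novel_a", "page_view"),
    ("novel_a", "click"),
    ("novel_b", "page_view"),
    ("novel_b", "not_interested"),  # the lone piece of direct feedback
]

print(preference_scores(events))  # {'novel_a': 0.4, 'novel_b': -0.7}
```

Notice that even the explicit “no” survives only as a single negative number; whatever reasons lay behind it never enter the system.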

So where’s the problem in that?  Isn’t it the case that these systems are inherently democratic, in that they actively seek and incorporate our feedback?  Well, yes…and no.  The issue here has to do with the way in which they model a conversation about the cultural goods that surround us, and indeed about culture more generally.

The work of culture has long happened inside of a black box, to be sure.  For generations it was chiefly the responsibility of a small circle of white guys who made it their business to determine, in Matthew Arnold’s famous words, “the best that has been thought and said.”

Only the black box wasn’t totally opaque.  The arguments and judgments of these individuals were never beyond question.  They debated fiercely among themselves, often quite publicly; people outside of their circles debated them equally fiercely, if not more so.  That’s why, today, we teach Toni Morrison’s work in our English classes in addition to that of William Shakespeare.

The question I raised near the end of the Spark interview is the one I want to raise here: how do you argue with Google?  Or, to take a related example, what does clicking “not interested” on an Amazon product recommendation actually communicate, beyond the vaguest sense of distaste?  There’s no subtlety or justification there.  You just don’t like it.  Period.  End of story.  This isn’t communication so much as the conveyance of decontextualized information, and it reduces culture from a series of arguments to a series of statements.

Then again, that may not be entirely accurate.  There’s still an argument going on where the algorithmic processing of culture is concerned — it just takes place somewhere deep in the bowels of a server farm, where all of our movements and preferences are aggregated and then filtered.  You can’t argue with Google, Amazon, or Facebook, but it’s not because they’re incapable of argument.  It’s because their systems perform the argument for us, algorithmically.  They obviate the need to justify our preferences to one another, and indeed, before one another.
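
The aggregation-and-filtering step can be sketched just as schematically. The toy code below (again invented, not any company’s actual pipeline) sums many users’ preference scores and surfaces only the top few items; everything below the cutoff simply never appears.

```python
from collections import defaultdict

def rank_items(per_user_scores, top_k=3):
    """Aggregate every user's preference scores, then filter to the top_k items.

    per_user_scores: dict mapping user -> {item: score}.
    The "argument" over what matters is settled here by summation and a cutoff.
    """
    totals = defaultdict(float)
    for scores in per_user_scores.values():
        for item, score in scores.items():
            totals[item] += score
    ranked = sorted(totals.items(), key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in ranked[:top_k]]

# Invented data: three users' per-item scores (perhaps produced by something
# like preference_scores in the earlier sketch).
users = {
    "u1": {"novel_a": 0.4, "novel_b": -0.7, "novel_c": 0.2},
    "u2": {"novel_a": 0.1, "novel_c": 0.9},
    "u3": {"novel_b": 0.5, "novel_c": 0.3},
}

print(rank_items(users, top_k=2))  # ['novel_c', 'novel_a']
```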

This is a conversation about culture, yes, but minus its moral obligations.

2 comments

  1. Chris M. says:

    Interesting! I’m coming late to this discussion of algorithmic culture, let me stipulate, so perhaps I’m misunderstanding how you’re defining the term.

    But it seems to me — just to play devil’s advocate — that the sort of algorithms you describe, both direct and indirect, have been at play for a *long* time. The only difference now is the specific technological media through which they manifest themselves.

    For example, any number of economists would be happy to tell you that “culture” is the result of our aggregated preferences for goods and services, as expressed through willingness-to-pay — i.e., through price signals. And just as with the messages we send with mouse clicks, some of those signals are conscious and voluntary, others are unwitting.

    Markets aren’t a perfect way of aggregating preferences, of course, but what is? And that’s just the economic perspective… let’s not even get into anthropologists…

    The contrast with the “small circle of white guys” feels like you’re moving the goalposts. To the extent that was ever true, it applied only to “high” culture. For at least as long as lowbrow and middlebrow culture have existed, the self-appointed gatekeepers have been railing against them — in vain. They keep evolving just as fast as various mass audiences can find ways to signal their preferences.

    And in that regard, Google and Facebook are merely the latest vehicle for those signals… certainly not the last, and probably far from the most revolutionary. If there were ever moral obligations attached, I’d love to know how, where, and when.

    • Ted Striphas says:

      First off, Chris, sorry for the long delay in responding to your thoughtful comment. One word: midterms. Also, your comment warranted more than just a perfunctory response.

      For my part, I’m interested not only in the use of computers to model practices of human coordination and decision-making but also, just as important, in the history of the idea that social and economic life conform to “algorithmic” processes — here understood as spontaneous, more or less self-organizing activities.

      The latter is too flattened-out a theory for my tastes. That in turn makes me wonder how adequately many of the algorithmic processes I’ve described over these last few weeks, which take this understanding as a basic operating principle, can model social, economic, and indeed cultural activities in the “real world.” But even more than that, I’m interested in the effects this particular implementation of a social theory goes on to have in the “real world,” above and beyond any question of its accuracy. I’ve called this, elsewhere, the “algorithmic real.”

      I will say, for whatever it’s worth, that I can’t agree with your assessment of my take on elite cultural authorities of the early 20th century. You can only make the argument you’re making from the vantage point of today, when an abundance of “popular” cultural artifacts generally rules the day — in stores, classrooms, etc. This isn’t to say popular stuff wasn’t, well, popular back in the day; surely it was. But the social status conferred on it was lesser in relation to the canons of cultural material preferred by Oxbridge English professors and the like.

      But here we do seem to agree on the basic point: the meaningfulness of culture has long been arrived at communicatively — through conversation and debate.

      That’s how culture has tended to move. So what happens when we offload major aspects of that communication onto systems that not only aren’t transparent but are also built on top of a flawed social theory?

      For me, that doesn’t bode well for culture, or at any rate not for a democratic one.
