Tag Archive for Amazon.com

Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of  “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends who I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I count myself among the latter group.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret both the processes and products of algorithmic culture, critically.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online. Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system.  Where literary studies teaches students how to read for tone, so, too, would those of us invested in algorithmic literacies begin teaching how to read for evidence of this type of manipulation.

Finally, we need to undertake comparative work in an effort to reverse engineer Google, Facebook, and Amazon, et al.’s proprietary algorithms.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.


The Conversation of Culture

Last week I was interviewed on probably the best talk radio program about culture and technology, the CBC’s Spark. The interview grew out of my recent series of blog posts on the topic of algorithmic culture.  You can listen to the complete interview, which lasts about fifteen minutes, by following the link on the Spark website.  If you want to cut right to the chase and download an mp3 file of the complete interview, just click here.

The hallmark of a good interviewer is the ability to draw something out of an interviewee that she or he didn’t quite realize was there.  That’s exactly what the host of Spark, Nora Young, did for me.  She posed a question that got me thinking about the process of feedback as it relates to algorithmic culture — something I’ve been faulted on, rightly, in the conversations I’ve been having about my blog posts and scholarly research on the subject.  She asked something to the effect of, “Hasn’t culture always been a black box?”  The implication was: hasn’t the process of determining what’s culturally worthwhile always been mysterious, and if so, then what’s so new about algorithmic culture?

The answer, I believe, has everything to do with the way in which search engine algorithms, product and friend recommendation systems, personalized news feeds, and so forth incorporate our voices into their determinations of what we’ll be exposed to online.

They rely, first of all, on signals, or what you might call latent feedback.  This idea refers to the information about our online activities that’s recorded in the background, as it were, in a manner akin to eavesdropping.  Take Facebook, for example.  Assuming you’re logged in, Facebook registers not only your activities on its own site but also every movement you make across websites with an embedded “like” button.

Then there’s something you might call direct feedback, which refers to the information we voluntarily give up about ourselves and our preferences.  When Amazon.com asks if a product it’s recommended appeals to you, and you click “no,” you’ve explicitly told the company it got that one wrong.
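The interplay of these two kinds of feedback can be made concrete with a toy scoring function.  To be clear, this is purely my own illustration, not any actual platform’s formula; the weights are assumptions chosen only to show how passive signals and explicit votes might be folded into a single relevance score.

```python
# Toy illustration (not any real platform's system): combining latent
# feedback (passive signals, e.g. page visits registered by an embedded
# "like" button) with direct feedback (explicit up/down votes).
LATENT_WEIGHT = 0.3   # assumed weight for passive signals
DIRECT_WEIGHT = 1.0   # assumed weight for explicit votes

def relevance(latent_events, direct_votes):
    """latent_events: count of passive signals observed;
    direct_votes: list of explicit +1/-1 judgments."""
    return LATENT_WEIGHT * latent_events + DIRECT_WEIGHT * sum(direct_votes)

# Ten passive signals and a net-negative set of explicit votes still
# yield a positive score: the eavesdropped signals outweigh the votes.
score = relevance(latent_events=10, direct_votes=[+1, -1, -1])
```

Notice what the sketch makes visible: how much of the “conversation” is carried by signals we never consciously sent.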

So where’s the problem in that?  Isn’t it the case that these systems are inherently democratic, in that they actively seek and incorporate our feedback?  Well, yes…and no.  The issue here has to do with the way in which they model a conversation about the cultural goods that surround us, and indeed about culture more generally.

The work of culture has long happened inside of a black box, to be sure.  For generations it was chiefly the responsibility of a small circle of white guys who made it their business to determine, in Matthew Arnold’s famous words, “the best that has been thought and said.”

Only the black box wasn’t totally opaque.  The arguments and judgments of these individuals were never beyond question.  They debated fiercely among themselves, often quite publicly; people outside of their circles debated them equally fiercely, if not more so.  That’s why, today, we teach Toni Morrison’s work in our English classes in addition to that of William Shakespeare.

The question I raised near the end of the Spark interview is the one I want to raise here: how do you argue with Google?  Or, to take a related example, what does clicking “not interested” on an Amazon product recommendation actually communicate, beyond the vaguest sense of distaste?  There’s no subtlety or justification there.  You just don’t like it.  Period.  End of story.  This isn’t communication as much as the conveyance of decontextualized information, and it reduces culture from a series of arguments to a series of statements.

Then again, that may not be entirely accurate.  There’s still an argument going on where the algorithmic processing of culture is concerned — it just takes place somewhere deep in the bowels of a server farm, where all of our movements and preferences are aggregated and then filtered.  You can’t argue with Google, Amazon, or Facebook, but it’s not because they’re incapable of argument.  It’s because their systems perform the argument for us, algorithmically.  They obviate the need to justify our preferences to one another, and indeed, before one another.

This is a conversation about culture, yes, but minus its moral obligations.


Cultural Informatics

In my previous post I addressed the question, who speaks for culture in an algorithmic age?  My claim was that humanities scholars once held significant sway over what ended up on our cultural radar screens but that, today, their authority is diminishing in importance.  The work of sorting, classifying, hierarchizing, and curating culture now falls increasingly on the shoulders of engineers, whose determinations of what counts as relevant or worthy result from computational processes.  This is what I’ve been calling, “algorithmic culture.”

The question I want to address this week is, what assumptions about culture underlie the latter approach?  How, in other words, do engineers — particularly computer scientists — seem to understand and then operationalize the culture part of algorithmic culture?

My starting point is, as is often the case, the work of cultural studies scholar Raymond Williams.  He famously observed in Keywords (1976) that culture is “one of the two or three most complicated words in the English language.”  The term is, in other words, definitionally capacious: a result of centuries of shedding and accreting meanings, as well as of the broader rise and fall of its etymological fortunes.  Yet, Williams didn’t mean for this statement to be taken as merely descriptive; there was an ethic implied in it, too.  Tread lightly in approaching culture.  Make good sense of it, but do well not to diminish its complexity.

Those who take an algorithmic approach to culture proceed under the assumption that culture is “expressive.”  More specifically, all the stuff we make, practices we engage in, and experiences we have cast astonishing amounts of information out into the world.  This is what I mean by “cultural informatics,” the title of this post.  Algorithmic culture operates first of all by subsuming culture under the rubric of information — by understanding culture as fundamentally, even intrinsically, informational and then operating on it accordingly.

One of the virtues of the category “information” is its ability to link any number of seemingly disparate phenomena together: the movements of an airplane, the functioning of a genome, the activities of an economy, the strategies in a card game, the changes in the weather, etc.  It is an extraordinarily powerful abstraction, one whose import I have come to appreciate, deeply, over the course of my research.

The issue I have pertains to the epistemological entailments that flow from locating culture within the framework of information.  What do you have to do with — or maybe to — culture once you commit to understanding it informationally?

The answer to this question begins with the “other” of information: entropy, or the measure of a system’s disorder.  The point of cultural informatics is, by and large, to drive out entropy — to bring order to the cultural chaos by ferreting out the signal that exists amid all the noise.  This is basically how Google works when you execute a search.  It’s also how sites like Amazon.com and Netflix recommend products to you.  The presumption here is that there’s a logic or pattern hidden within culture and that, through the application of the right mathematics, you’ll eventually come to find it.
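Entropy here can be taken in Shannon’s technical sense: a measure of how unpredictable a distribution is.  A quick sketch makes the contrast concrete — this is nothing specific to Google’s or Netflix’s actual systems, just the textbook formula:

```python
import math

# Shannon entropy, the standard measure of a distribution's disorder.
# "Ferreting out the signal amid the noise" can be read as preferring
# low-entropy (patterned, predictable) descriptions over high-entropy ones.
def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = entropy([0.25] * 4)                # maximal noise: 2.0 bits
skewed = entropy([0.97, 0.01, 0.01, 0.01])   # strong pattern: far lower
```

The informatic wager is that cultural behavior looks more like the second distribution than the first, if only you apply the right mathematics.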

There’s nothing fundamentally wrong with this understanding of culture.  Something like it has kept anthropologists, sociologists, literary critics, and a host of others in business for well over a century.  Indeed there are cultural routines you can point to, whether or not you use computers to find them.  But having said that, it’s worth mentioning that culture consists of more than just logic and pattern.  Intrinsic to culture is, in fact, noise, or the very stuff that gets filtered out of algorithmic culture.

At least, that’s what more recent developments within the discipline of anthropology teach us.  I’m thinking of Renato Rosaldo‘s fantastic book Culture and Truth (1989), and in particular of the chapter, “Putting Culture in Motion.”  There Rosaldo argues for a more elastic understanding of culture, one that refuses to see inconsistency or disorder as something needing to be purged.  “We often improvise, learn by doing, and make things up as we go along,” he states.  He puts it even more bluntly later on: “Do our options really come down to the vexed choice between supporting cultural order or yielding to the chaos of brute idiocy?”

The informatics of culture is oddly paradoxical in that it hinges on a more and less powerful conceptualization of culture.  It is more powerful because of the way culture can be rendered equivalent, informationally speaking, with all of those phenomena (and many more) I mentioned above.  And yet, it is less powerful because of the way the livingness, the inventiveness — what Eli Pariser describes as the “serendipity” — of culture must be shed in the process of creating that equivalence.

What is culture without noise?  What is culture besides noise?  It is a domain of practice and experience diminished in its complexity.  And it is exactly the type of culture Raymond Williams warned us about, for it is one we presume to know but barely know the half of.


Who Speaks for Culture?

I’ve blogged off and on over the past 15 months about “algorithmic culture.”  The subject first came to my attention when I learned about the Amazon Kindle’s “popular highlights” feature, which aggregates data about the passages Kindle owners have deemed important enough to underline.

Since then I’ve been doing a fair amount of algorithmic culture spotting, mostly in the form of news articles.  I’ve tweeted about a few of them.  In one case, I learned that in some institutions college roommate selection is now being determined algorithmically — often, by  matching up individuals with similar backgrounds and interests.  In another, I discovered a pilot program that recommends college courses based on a student’s “planned major, past academic performance, and data on how similar students fared in that class.”  Even scholarly trends are now beginning to be mapped algorithmically in an attempt to identify new academic disciplines and hot-spots.

There’s much to be impressed by in these systems, both functionally and technologically.  Yet, as Eli Pariser notes in his highly engaging book The Filter Bubble, a major downside is their tendency to push people in the direction of the already known, reducing the possibility for serendipitous encounters and experiences.

When I began writing about “algorithmic culture,” I used the term mainly to describe how the sorting, classifying, hierarchizing, and curating of people, places, objects, and ideas was beginning to be given over to machine-based information processing systems.  The work of culture, I argued, was becoming increasingly algorithmic, at least in some domains of life.

As I continue my research on the topic, I see an even broader definition of algorithmic culture starting to emerge.  The preceding examples (and many others I’m happy to share) suggest that some of our most basic habits of thought, conduct, and expression — the substance of what Raymond Williams once called “culture as a whole way of life” — are coming to be affected by algorithms, too.  It’s not only that cultural work is becoming algorithmic; cultural life is as well.

The growing prevalence of algorithmic culture raises all sorts of questions.  What is the determining power of technology?  What understandings of people and culture — what “affordances” — do these systems embody? What are the implications of the tendency, at least at present, to encourage people to inhabit experiential and epistemological enclaves?

But there’s an even more fundamental issue at stake here, too: who speaks for culture?

For the last 150 years or so, the answer was fairly clear.  The humanities spoke for culture and did so almost exclusively.  Culture was both its subject and object.  For all practical purposes the humanities “owned” culture, if for no other reason than the arts, language, and literature were deemed too touchy-feely to fall within the bailiwick of scientific reason.

Today the tide seems to be shifting.  As Siva Vaidhyanathan has pointed out in The Googlization of Everything, engineers — mostly computer scientists — today hold extraordinary sway over what does or doesn’t end up on our cultural radar.  To put it differently, amid the din of our public conversations about culture, their voices are the ones that increasingly get heard or are perceived as authoritative.  But even this statement isn’t entirely accurate, for we almost never hear directly from these individuals.  Their voices manifest themselves in fragments of code and interface so subtle and diffuse that the computer seems to speak, and to do so without bias or predilection.

So who needs the humanities — even the so-called “digital humanities” — when your Kindle can tell you what in your reading you ought to be paying attention to?


Rent This Book!

I’ve been struck, at the start of this school year, by the proliferation of textbook rental outfits here in Bloomington, Indiana and elsewhere.  Locally there’s TXTBookRental Bloomington, which deals exclusively in rented course texts, as well as TIS and the IU Bookstore (operated by Barnes & Noble), both of whom sell books in addition to offering rental options.  The latter also just launched a marketing campaign designed to grow the rental market.  Further away there’s Amazon.com, which isn’t only offering “traditional” textbook rentals but also time-limited Kindle books.  These are “pay only for the exact time you need” editions that disappear once the lease expires.

There’s been a good deal of enthusiasm about textbook rentals.  Many see them as a welcome work-around to the problem of over-inflated textbook prices, about which many people, including me, have been complaining for years.  Rentals help to keep the price of textbooks comparatively low by allowing students the option of not having to invest fully, in perpetuity, in the object.  Indeed, the rental option recognizes that students often share an ephemeral relationship with their course texts.  Why bother buying something outright when you need it for maybe three or four months at most?

My question is: are textbook rentals simply a boon for college students, or are there broader economic implications that might complicate — or even undercut — this story?

I want to begin by thinking about what it means to “rent” a textbook, since, arguably, students have been doing so for a long time.  When I was an undergraduate back in the early 1990s, I purchased books at the start of the semester knowing I’d sell many of them back to the bookstore upon completion of the term.  Had I bought these books, or was I renting them?  Legally it was the former, but effectively, I believe, it was the latter.  I’d paid not for a thing per se but for a relationship with a property that returned to the seller/owner once a period of time had elapsed.  That sounds a lot like rental to me.

So let’s assume for the moment that the rental of textbooks isn’t a new phenomenon but rather something that’s been going on for decades.  What’s the difference between then and now?  Buyback.  Under the old rental system you’d get some money for your books if you decided you didn’t want to keep them.  Under the new régime you get absolutely nothing.  Granted, it wasn’t uncommon for bookstores to give you a pittance if you decided to sell back your course texts; more often than not they’d then go on to re-sell the books for a premium, adding insult to injury.  Nevertheless, at least you’d get something like your security deposit back once the lease had expired.  Now the landlord pockets everything.

Some industrious student needs to look into the economics of these new textbook rental schemes.  Is it cheaper to rent a course text for a semester, or do students actually make out better in the long run if they purchase and then sell back?
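The skeleton of that calculation is simple arithmetic once you have the numbers.  Here it is with entirely hypothetical prices — the real figures, gathered from actual bookstores, are the research that still needs doing:

```python
# Back-of-the-envelope rent-vs-buy comparison. All prices are assumed,
# purely for illustration; plug in real bookstore figures to get an answer.
new_price = 120.00   # hypothetical purchase price of a new textbook
buyback = 40.00      # hypothetical buyback offer at term's end
rental = 55.00       # hypothetical one-semester rental fee

net_cost_buy_then_sell = new_price - buyback   # the "old rental system"
net_cost_rent = rental                         # the new régime
cheaper = "rent" if net_cost_rent < net_cost_buy_then_sell else "buy/sell back"
```

With these made-up numbers renting wins, but the outcome flips entirely depending on how low the buyback offer is and how high the rental fee climbs — which is precisely why the bookstores’ enthusiasm deserves scrutiny.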

If I had to speculate, I’d say that booksellers wouldn’t be glomming on to the latest rental trend if it wasn’t first and foremost in their economic self-interest — even if they’re representing it otherwise.

Coming next week: textbook rentals, part II: what happens when books cease being objects that ordinary people own and accumulate?


A Second Age of Incunabula

What a difference a few years can make.  I’m talking about the proliferation of e-reading devices among my Indiana University undergraduates — devices that were virtually non-existent in their lives not so very long ago.  Let me explain.

In 2006, I piloted a course based loosely on The Late Age of Print called “The Cultures of Books and Reading.”  We ended, predictably, with a unit on the future of books in an age of digital media.  We read (among other things) a chapter or two from Sven Birkerts’ Gutenberg Elegies, in addition to Kevin Kelly’s provocative essay from The New York Times Magazine, “Scan This Book!”  The materials provoked some intriguing thoughts and conversation, but it seemed to me as if something was missing; it was as though the future of books and reading wasn’t palpable yet, and so most everything we talked about seemed, well, a little ungrounded.  Remember — this was about a year before the first Kindle landed, three years before the Barnes & Noble Nook, and a full four years before the release of the iPad.  We’re talking ancient history in today’s technological terms.

When I taught the course two years later, things had changed — somewhat.  There was genuine curiosity about e-reading, so much so that a group of students asked me to bring in my Kindle, hoping to take it for a test drive.  I did, but didn’t realize that the battery had died.  The demonstration ended up being a bust, and worse still, it was the last day of class.  In other words, no do-overs.  Still, that didn’t stop some of the students from writing papers about the possibilities e-readers held for them and their peers.  While I appreciated the argument — and indeed, the earnestness — I ended up being a little disappointed by those papers.  On the whole they were flatly celebratory.  The lack of critical perspective was, I believe, a function of their having had little to no actual interaction with e-reading devices.

Now it’s 2011, and I’m teaching the course once again.  Boy, have things changed!  On day one I asked the group of 35 if any of them owned an e-reader.  I expected to see maybe a few hands, since I’m aware of the reports stating that these devices have had more uptake among older users.  Much to my surprise, around half the class raised their hands.  We’re talking mostly 20-year-olds here.  I had to know more.  Some told me they owned a Kindle, others a Nook, and still others said they were iPad people who read using apps.  In a couple of instances they owned more than one of these devices.  They especially liked the convenience of not having to lug around a bag full of heavy books, not to mention the many public domain texts they could download at little or no cost.

There I was, standing in front of a group of students who also happened to be seasoned e-book readers.  Because they’d self-selected into my class, I knew I needed to be mindful about the extent to which their interest in electronic reading could be considered representative of people their age.  Even so, it was clear on day one that our conversations would be very different compared to those I’d had with previous cohorts in “The Cultures of Books and Reading.”

At the end of class a student approached me to ask about which version of Laura Miller’s Reluctant Capitalists: Bookselling and the Culture of Consumption, one of the required texts, he should buy.  Old analog me assumed he was referring to cloth or paper, since I’d brought in my hardback copy but told the group I’d ordered paperbacks through the bookstore.  My assumption was wrong.  He told me that he wanted to purchase the Kindle edition but had some hesitations about doing so.  How would he cite it, he asked?  I said he should go ahead and acquire whichever version most suited him; the citations we could figure out.

A very different conversation indeed — one that I expect will become much more the norm by the time I teach “The Cultures of Books and Reading” the next time around.  For now, though, here go the 36 of us, slouching our way into a moment in which analog and digital books commingle with one another.  It reminds me a little of the first 100 years of printing in the West — the so-called “age of incunabula,” when manuscripts, printed editions, and hybrid forms all co-existed, albeit not so peaceably.  I wonder if, at some point in the future, historians will begin referring to our time as the second age of incunabula.


And…We're Back!

It’s been awfully quiet around here for the past six weeks or so.  I’ve had a busy summer filled with travel, academic writing projects, and quality time with my young son.  Blogging, regretfully, ended up falling by the wayside.

I’m pleased to announce that The Late Age of Print is back after what amounted to an unannounced — and unintended — summer hiatus.  A LOT has gone on in the realm of books and new media culture since the last time I wrote: Apple clamped down on third parties selling e-books through the iPad; Amazon’s ad-supported 3G Kindle debuted; Barnes & Noble continues to elbow into the e-book market with Nook; short-term e-book rentals are on the rise; J. K. Rowling’s Pottermore website went live, leaving some to wonder about the future of publishers and booksellers in an age when authors can sell e-editions of their work directly to consumers; and much, much more.

For now, though, I thought I’d leave you with a little something I happened upon during my summer vacation (I use the term loosely).  Here’s an image of the Borders bookstore at the Indianapolis Airport, which I snapped in early August — not long after the chain entered liquidation:

The store had been completely emptied out by the time I returned.  It was an almost eerie sight — kind of like finding a turtle shell without a turtle inside.  Had I not been in a hurry (my little guy was in tow), I would have snapped an “after” picture to accompany this “before” shot.  Needless to say, it’s been an exciting and depressing summer for books.

Then again, isn’t it always?  More to come…soon, I promise.


The Billion Dollar Book

About a week ago Michael Eisen, who teaches evolutionary biology at UC Berkeley, blogged about a shocking discovery one of his postdocs had made in early April.  The discovery happened not in his lab, but of all places on Amazon.com.

While searching the site for a copy of Peter Lawrence’s book The Making of a Fly (1992), long out of print, the postdoc happened across two merchants selling secondhand editions for — get this — $1.7 million and $2.2 million respectively!  A series of price escalations ensued as Eisen returned to the product page over following days and weeks until one seller’s copy topped out at $23 million.

But that’s not the worst of it.  One of the comments Eisen received on his blog post pointed to a different secondhand book selling on Amazon for $900 million.  It wasn’t an original edition of the Gutenberg Bible from the 1450s, nor was it a one-of-a-kind art book.  What screed was worth almost $1 billion?  Why, a paperback copy of actress Lana Turner’s autobiography, published in 1991, of course!  (I suspect the price may change, so in the event that it does, here’s a screen shot showing the price on Saturday, April 30th.)

Good scientist that he is, Eisen hypothesized that something wasn’t right about the prices on the fly book.  After all, they seemed to be adjusting themselves upward each time he returned to the site, and like two countries engaged in an arms race, they always seemed to do so in relationship to each other.  Eisen crunched some numbers:

On the day we discovered the million dollar prices, the copy offered by bordeebook [one of the sellers] was 1.270589 times the price of the copy offered by profnath [the other seller].  And now the bordeebook copy was 1.270589 times profnath again. So clearly at least one of the sellers was setting their price algorithmically in response to changes in the other’s price. I continued to watch carefully and the full pattern emerged. (emphasis added)

So the culprit behind the extraordinarily high prices wasn’t a couple of greedy (or totally out of touch) booksellers.  It was, instead, the automated systems — the computer algorithms — working behind the scenes in response to perceived market dynamics.
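The loop Eisen describes is easy to reproduce in miniature.  Here’s a toy simulation: the 1.270589 markup is the figure from his analysis, while the slight undercutting factor on the other side is my assumption, included purely to complete the illustration.

```python
# Toy simulation of the two-seller repricing arms race described above.
# The 1.270589 multiplier is the figure Eisen reported; the 0.9983
# undercut factor is an assumption used here only for illustration.
def simulate_arms_race(start_price=40.0, rounds=60):
    profnath = bordeebook = start_price
    history = []
    for _ in range(rounds):
        profnath = 0.9983 * bordeebook     # price just under the rival
        bordeebook = 1.270589 * profnath   # price well above the rival
        history.append((profnath, bordeebook))
    return history

# Because 0.9983 * 1.270589 > 1, each full cycle inflates both prices:
# a $40 book crosses the million-dollar mark within a few dozen rounds.
history = simulate_arms_race()
```

The point of the sketch is that neither seller ever intends a multimillion-dollar price; the runaway is an emergent property of two coupled algorithms, each perfectly sensible on its own.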

I’ve spent the last couple of blog posts talking about algorithmic culture, and I believe what we’re seeing here — algorithmic pricing — may well be an extension of it.

It’s a bizarre development.  It’s bizarre not because computers are involved in setting prices (though in this case they could have been doing a better job of it, clearly).  It is bizarre because of the way in which algorithms are being used to disrupt and ultimately manipulate — albeit not always successfully — the informatics of markets.

Indeed, I’m becoming convinced that algorithms (at least as I’ve been talking about them) are a response to the decentralized forms of social interaction that grew up out of, and against, the centralized forms of culture, politics, and economics that were prevalent in the second and third quarters of the 20th century.  Interestingly, the thinkers who conjured up the idea of decentralized societies often turned to markets — and more specifically, to the price system — in an attempt to understand how individuals distributed far and wide could effectively coordinate their affairs absent governmental and other types of intervention.

That makes me wonder: are the algorithms being used on Amazon and elsewhere an emergent form of “government,” broadly understood?  And if so, what does a billion dollar book say about the prospects for good government in an algorithmic age?


Algorithmic Culture, Redux

Back in June I blogged here about “Algorithmic Culture,” or the sorting, classifying, and hierarchizing of people, places, objects, and ideas using computational processes.  (Think Google search, Amazon’s product recommendations, who gets featured in your Facebook news feed, etc.)  Well, for the past several months I’ve been developing an essay on the theme, and it’s finally done.  I’ll be debuting it at Vanderbilt University’s “American Cultures in the Digital Age” conference on Friday, March 18th, which I’m keynoting along with Kelly Joyce (College of William & Mary), Cara Finnegan (University of Illinois), and Eszter Hargittai (Northwestern University).  Needless to say, I’m thrilled to be joining such distinguished company at what promises to be, well, an event.

The piece I posted originally on algorithmic culture generated a surprising — and exciting — amount of response.  In fact, nine months later, it’s still receiving pingbacks, I’m pretty sure as a result of its having found its way onto one or more college syllabuses.  So between that and the good results I’m seeing in the essay, I’m seriously considering developing the material on algorithmic culture into my next book.  Originally after Late Age I’d planned on focusing on contemporary religious publishing, but increasingly I feel as if that will have to wait.

Drop by the conference if you’re in or around the Nashville area on Friday, March 18th.  I’m kicking things off starting at 9:30 a.m.  And for those of you who can’t make it there, here’s the title slide from the PowerPoint presentation, along with a little taste of the talk’s conclusion:

This latter definition—culture as authoritative principle—is, I believe, the definition that’s chiefly operative in and around algorithmic culture. Today, however, it isn’t culture per se that is a “principle of authority” but increasingly the algorithms to which are delegated the task of driving out entropy, or in Matthew Arnold’s language, “anarchy.”  You might even say that culture is fast becoming—in domains ranging from retail to rental, search to social networking, and well beyond—the positive remainder of specific information processing tasks, especially as they relate to the informatics of crowds.  And in this sense algorithms have significantly taken on what, at least since Arnold, has been one of culture’s chief responsibilities, namely, the task of “reassembling the social,” as Bruno Latour puts it—here, though, by discovering statistical correlations that would appear to unite an otherwise disparate and dispersed crowd of people.

I expect to post a complete draft of the piece on “Algorithmic Culture” to my project site once I’ve tightened it up a bit. Hopefully it will generate even more comments, questions, and provocations than the blog post that inspired the work initially.

In the meantime, I’d welcome any feedback you may have about the short excerpt appearing above, or on the talk if you’re going to be in Nashville this week.


Critical Lede on "The Abuses of Literacy"

My favorite podcast, The Critical Lede, just reviewed my recent piece appearing in Communication and Critical/Cultural Studies, “The Abuses of Literacy: Amazon Kindle and the Right to Read.”  Check out the broadcast here — and thanks to the show’s great hosts, Benjamin Myers and Desiree Rowe of the University of South Carolina Upstate.
