Tag Archive for mass culture

New Material on Algorithmic Culture

A quick announcement about two new pieces from me, both of which relate to my ongoing research on the subject of algorithmic culture.

The first is an interview with Giuseppe Granieri, posted on his Futurists’ Views site over on Medium.  The tagline is: “Culture now has two audiences: people and machines.”  It’s a free-ranging conversation, apparently readable in six minutes, about algorithms, AI, the culture industry, and the etymology of the word culture.

About that word: over on Culture Digitally you’ll find a draft essay of mine examining culture’s shifting definition in relation to digital technology.  The piece is available for open comment and reflection.  It’s the first in a series from Ben Peters’ “Digital Keywords” project, of which I’m delighted to be a part.  Thanks in advance for your feedback—and of course with all of the provisos that accompany draft material.



Books as Christmas Gifts

Did you know that books were among the very first commercial Christmas presents? That’s right—printed books were integral in helping to invent the modern, consumer-oriented Christmas holiday. Before then it was customary to give food or, if you were wealthy, a monetary “tip” to those who were less well off financially. (The latter might come to a rich person’s door and demand the “tip,” in fact.)  The gift of a printed book changed all that, helping to defuse the class antagonism that typically rose to the surface around the winter holidays.

You can read more about the details of this fascinating history in my post from a few years ago on “How the Books Saved Christmas.”  And if you’re interested in a broader history of the role books played in the invention of contemporary consumer culture, then you should check out The Late Age of Print.  At the risk of pointing out the obvious, it makes a great gift.



East Coast Code

There’s lots to like about Lawrence Lessig’s book, Code 2.0—particularly, I find, the distinction he draws between “East Coast Code” (i.e., the law) and “West Coast Code” (i.e., computer hardware and software). He sees both as modes of bringing order to complex systems, albeit through different means. Lessig is also interested in the ways in which West Coast Code has come to be used in ways that strongly resemble, and sometimes even supersede, its East Coast counterpart, as in the case of digital rights management technology. “Code is law,” as he so aptly puts it.

I’ve been playing with something like Lessig’s East Coast-West Coast Code distinction in my ongoing research on algorithmic culture. As I’ve said many times now, “algorithmic culture” refers to the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas, as well as to the habits of thought, conduct, and expression that flow from those processes. Essentially we’re talking about the management of a complex system—culture—by way of server farms and procedural decision-making software. Think Google or Facebook; this is West Coast Code at its finest.

Perhaps better than anyone, Fred Turner has chronicled the conditions out of which West Coast Code emerged. In From Counterculture to Cyberculture, he shows how, in the 1960s, Stewart Brand and his circle of countercultural compadres humanized computers, which were then widely perceived to be instruments of the military-industrial complex. Through the pages of the Whole Earth Catalog, Brand and company suggested that computers were, like shovels, axes, and hoes, tools with which to craft civilization—or rather to craft new-styled, autonomous civilizations that would no longer depend on the state (i.e., East Coast Code) to manage human affairs.

The deeper I delve into my own research, the more I discover just how complicated—and indeed, how East Coast—is the story of algorithmic culture. I don’t mean to diminish the significance of the work that’s been done about the West Coast, by any means. But just as people had to perform creative work to make computers seem personal, even human, so, too, did people need to perform similar work on the word culture to make it make sense within the realm of computation. And this happened mostly back East, in Cambridge, MA.

“Of course,” you’re probably thinking, “at MIT.” It turns out that MIT wasn’t the primary hub of this semantic and conceptual work, although it would be foolish to deny the influence of famed cybernetician Norbert Wiener here. Where the work took place was at that other rinky-dink school in Cambridge, MA: Harvard. Perhaps you’ve heard of it?

A good portion of my research now is focused on Harvard’s Department of Social Relations, an experimental unit combining Sociology, Psychology, and Cultural Anthropology. It had a relatively short existence, lasting only from 1946 to 1970, but in that time it graduated people who went on to become the titans of postwar social theory. Clifford Geertz, Stanley Milgram, and Harold Garfinkel are among its most notable PhDs, although myriad other important figures passed through the program as well. One of the more intriguing people I turned up was Dick Price, who went on to found the Esalen Institute (back to the West Coast) after becoming disillusioned with the Clinical Psychology track in SocRel and later suffering a psychotic episode. Dr. Timothy Leary also taught there, from 1961 to 1963, though he was eventually fired because of his controversial research on the psychological effects of LSD.

I’ve just completed some work focusing on Clifford Geertz and the relationship he shared with Talcott Parsons, his dissertation director and chair of SocRel from 1946 to 1956.  It’s here more than anywhere that I’m discovering how the word culture got inflected by the semantics of computation.  Though Geertz would later move away from the strongly cybernetic conceptualization of culture he’d inherited from Parsons, it nonetheless underpins arguably his most important work, especially the material he published in the 1960s and early 70s.  This includes his famous “Deep Play: Notes on the Balinese Cockfight,” which is included in the volume The Interpretation of Cultures.

My next stop is Stanley Milgram, where I’ll be looking first at his work on crowd behavior and later at his material on the “small world” phenomenon. The former complicates the conclusions of his famous “obedience to authority” experiments in fascinating ways, and, I’d argue, sets the stage for the notion of “crowd wisdom” so prevalent today. Apropos of the latter, I’m intrigued by how Milgram helped to shrink the social on down to size, as it were, just as worries about the scope and anonymizing power of mass society reached a fever pitch. He did for society essentially what Geertz and Parsons did for culture, I believe, particularly in helping to establish conceptual conditions necessary for the algorithmic management of social relations. Oh—and did I mention that Milgram’s Obedience book, published in 1974, is also laden with cybernetic theory?

To be clear, the point of all this East Coast-West Coast business isn’t to create some silly rivalry—among scholars of computation, or among their favorite historical subjects. (Heaven knows, it would never be Biggie and Tupac!) The point, rather, is to draw attention to the semantic and social-theoretical conditions underpinning a host of computational activities that are prevalent today—conditions whose genesis occurred significantly back East. The story of algorithmic culture isn’t only about hippies, hackers, and Silicon Valley. It’s equally a story about squares who taught and studied at maybe the most elite institution of higher education on America’s East Coast.


WordPress

Lest there be any confusion, yes, indeed, you’re reading The Late Age of Print blog, still authored by me, Ted Striphas.  The last time you visited, the site was probably red, white, black, and gray.  Now it’s not.  I imagine you’re wondering what prompted the change.

The short answer is: a hack.  The longer answer is: algorithmic culture.

At some point in the recent past, and unbeknownst to me, The Late Age of Print got hacked.  Since then I’ve been receiving sporadic reports from readers telling me that their safe browsing software was alerting them to a potential issue with the site.  Responsible digital citizen that I am, I ran numerous malware scans using multiple scanning services.  Only one out of twenty-three of those services ever returned a “suspicious” result, and so I figured, with those odds, that the one positive must be an anomaly.  It was the same service that the readers who’d contacted me also happened to be using.

Well, last week, Facebook implemented a new partnership with an internet security company called Websense.  The latter checks links shared on the social networking site for malware and the like.  A friend alerted me that an update I’d posted linking to Late Age came up as “abusive.”  That was enough; I knew something must be wrong.  I contacted my web hosting service and asked them to scan my site.  Sure enough, they found some malicious code hiding in the back-end.

Here’s the good news: as far as my host and I can tell, the code — which, rest assured, I’ve cleaned — had no effect on readers of Late Age or your computers.  (Having said that, it never hurts to run an anti-virus/malware scan.)  It was intended only for Google and other search engines, and its effects were visible only to them.  The screen capture, below, shows how Google was “seeing” Late Age before the cleanup.  Neither you nor I ever saw anything out of the ordinary around here.

Essentially the code grafted invisible links to specious online pharmacies onto the legitimate links appearing in many of my posts.  The point of the attack, when implemented widely enough, is to game the system of search.  The victim sites all look as if they’re pointing to whatever website the hacker is trying to promote. And with thousands of incoming links, that site is almost guaranteed to come out as a top result whenever someone runs a search query for popular pharma terms.
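
To make the mechanics concrete, here is a minimal sketch, in Python, of how one might detect that kind of injection.  The markup is my own invention, not the actual code that was buried in my site, but the signature is the same: anchor tags styled so that human visitors never see them while crawlers still do.

```python
# A minimal detector sketch (my illustration, not the actual exploit):
# flag links that are hidden from human readers but legible to crawlers.
from html.parser import HTMLParser

class HiddenLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        style = (attrs.get("style") or "").replace(" ", "").lower()
        # Styled so that no human visitor will ever see the link.
        if "display:none" in style or "visibility:hidden" in style:
            self.hidden_links.append(attrs.get("href"))

finder = HiddenLinkFinder()
finder.feed('<p>my post <a style="display: none" '
            'href="http://pharma.example">cheap pills</a></p>')
print(finder.hidden_links)  # ['http://pharma.example']
```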

So, in case you were wondering, I haven’t given up writing and teaching for a career hawking drugs to combat male-pattern baldness and E.D.

This experience has been something of an object lesson for me in the seedier side of algorithmic culture.  I’ve been critical of Google, Amazon, Facebook, and other such sites for the opacity of the systems by which they determine the relevance of products, services, knowledge, and associations.  Those criticisms remain, but now I’m beginning to see another layer of the problem.  The hack has shown me just how vulnerable those systems are to manipulation, and how, then, the frameworks of trust, reputation, and relevance that exist online are deeply — maybe even fundamentally — flawed.

In a more philosophical vein, the algorithms about which I’ve blogged over the last several weeks and months attempt to model “the real.”  They leverage crowd wisdom — information coming in the form of feedback — in an attempt to determine what the world thinks or how it feels about x.  The problem is, the digital real doesn’t exist “out there” waiting to be discovered; it is a work in progress, and much like The Matrix, there are those who understand far better than most how to twist, bend, and mold it to suit their own ends.  They’re out in front of the digital real, as it were, and their actions demonstrate how the results we see on Google, Amazon, Facebook, and elsewhere suffer from what Meaghan Morris has called, in another context, “reality lag.”  They’re not the real; they’re an afterimage.

The other, related issue here concerns the fact that, increasingly, we’re placing the job of determining the digital real in the hands of a small group of authorities.  The irony is that the internet has long been understood to be a decentralized network and lauded, then, for its capacity to endure even when parts of it get compromised.  What the hack of my site has underscored for me, however, is the extent to which the internet has become territorialized of late and thus subject to many of the same types of vulnerabilities it was once thought to have thwarted.  Algorithmic culture is the new mass culture.

Moving on, I’d rather not have spent a good chunk of my week cleaning up after another person’s mischief, but at least the attack gave me an excuse to do something I’d wanted to do for a while now: give Late Age a makeover.  For a while I’d been feeling as if the site looked dated, and so I’m happy to give it a fresher look.  I’m not yet used to it, admittedly, but of course feeling comfortable in a new style of anything takes time.

The other major change I made was to optimize Late Age for viewing on mobile devices.  Now, if you’re visiting using your smart phone or tablet computer, you’ll see the same content but in significantly streamlined form.  I’m not one to believe that the PC is dead — at least, not yet — but for better or for worse it’s clear that mobile is very much at the center of the internet’s future.  In any case, if you’re using a mobile device and want to see the normal Late Age site, there’s a link at the bottom of the screen allowing you to switch back.

I’d be delighted to hear your feedback about the new Late Age of Print.  Drop me a line, and thanks to all of you who wrote in to let me know something was up with the old site.



Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends who I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I count myself in that group.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret, critically, both the processes and products of algorithmic culture.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online.  Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people recognize instances in which someone is attempting to game the system.  Just as literary studies teaches students how to read for tone, those of us invested in algorithmic literacies would teach people how to read for evidence of this type of manipulation.

Finally, we need to undertake comparative work in an effort to reverse engineer the proprietary algorithms of Google, Facebook, Amazon, and their ilk.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.
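
For the programmatically inclined, here is a bare-bones sketch of what the comparative step might look like.  The result lists are placeholders (you would have to collect them yourself from the different national versions of a search engine), but a simple measure of overlap is where the inference begins.

```python
# A minimal sketch of the comparative method: quantify how much two
# national versions of a search engine agree for the same query.
# The result lists below are hypothetical placeholders.
def overlap(results_a, results_b):
    """Jaccard similarity between two sets of result URLs."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

us_results = ["url1", "url2", "url3"]  # top results, US version (invented)
de_results = ["url2", "url4", "url5"]  # top results, German version (invented)

print(f"overlap: {overlap(us_results, de_results):.2f}")  # overlap: 0.20
# A low score for the same query is the occasion to start asking what
# each version filters in or out, and why.
```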

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.


The Conversation of Culture

Last week I was interviewed on probably the best talk radio program about culture and technology, the CBC’s Spark. The interview grew out of my recent series of blog posts on the topic of algorithmic culture.  You can listen to the complete interview, which lasts about fifteen minutes, by following the link on the Spark website.  If you want to cut right to the chase and download an mp3 file of the complete interview, just click here.

The hallmark of a good interviewer is the ability to draw something out of an interviewee that she or he didn’t quite realize was there.  That’s exactly what the host of Spark, Nora Young, did for me.  She posed a question that got me thinking about the process of feedback as it relates to algorithmic culture — something I’ve been faulted on, rightly, in the conversations I’ve been having about my blog posts and scholarly research on the subject.  She asked something to the effect of, “Hasn’t culture always been a black box?”  The implication was: hasn’t the process of determining what’s culturally worthwhile always been mysterious, and if so, then what’s so new about algorithmic culture?

The answer, I believe, has everything to do with the way in which search engine algorithms, product and friend recommendation systems, personalized news feeds, and so forth incorporate our voices into their determinations of what we’ll be exposed to online.

They rely, first of all, on signals, or what you might call latent feedback.  This idea refers to the information about our online activities that’s recorded in the background, as it were, in a manner akin to eavesdropping.  Take Facebook, for example.  Assuming you’re logged in, Facebook registers not only your activities on its own site but also every movement you make across websites with an embedded “like” button.

Then there’s something you might call direct feedback, which refers to the information we voluntarily give up about ourselves and our preferences.  When Amazon.com asks if a product it’s recommended appeals to you, and you click “no,” you’ve explicitly told the company it got that one wrong.
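
If it helps to see the distinction schematically, here is a toy sketch of the two channels.  It is my own illustration (the platforms’ actual logging pipelines are proprietary), but it captures the asymmetry: latent feedback accumulates without any deliberate act, while direct feedback requires one.

```python
# Toy model of the two feedback channels (my illustration, not any
# platform's actual pipeline).
profile = {"latent": [], "direct": []}

def record_page_view(page_url):
    # Latent feedback: logged in the background, akin to eavesdropping.
    # The user clicks nothing; the visit is recorded anyway.
    profile["latent"].append(page_url)

def record_recommendation_response(item, liked):
    # Direct feedback: the user explicitly answers yes or no.
    profile["direct"].append((item, liked))

record_page_view("https://example.com/article-with-embedded-widget")
record_page_view("https://another-site.example/recipe")
record_recommendation_response("Some Recommended Product", liked=False)
print(profile)
```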

So where’s the problem in that?  Isn’t it the case that these systems are inherently democratic, in that they actively seek and incorporate our feedback?  Well, yes…and no.  The issue here has to do with the way in which they model a conversation about the cultural goods that surround us, and indeed about culture more generally.

The work of culture has long happened inside of a black box, to be sure.  For generations it was chiefly the responsibility of a small circle of white guys who made it their business to determine, in Matthew Arnold’s famous words, “the best that has been thought and said.”

Only the black box wasn’t totally opaque.  The arguments and judgments of these individuals were never beyond question.  They debated fiercely among themselves, often quite publicly; people outside of their circles debated them equally fiercely, if not more so.  That’s why, today, we teach Toni Morrison’s work in our English classes in addition to that of William Shakespeare.

The question I raised near the end of the Spark interview is the one I want to raise here: how do you argue with Google?  Or, to take a related example, what does clicking “not interested” on an Amazon product recommendation actually communicate, beyond the vaguest sense of distaste?  There’s no subtlety or justification there.  You just don’t like it.  Period.  End of story.  This isn’t communication as much as the conveyance of decontextualized information, and it reduces culture from a series of arguments to a series of statements.

Then again, that may not be entirely accurate.  There’s still an argument going on where the algorithmic processing of culture is concerned — it just takes place somewhere deep in the bowels of a server farm, where all of our movements and preferences are aggregated and then filtered.  You can’t argue with Google, Amazon, or Facebook, but it’s not because they’re incapable of argument.  It’s because their systems perform the argument for us, algorithmically.  They obviate the need to justify our preferences to one another, and indeed, before one another.

This is a conversation about culture, yes, but minus its moral obligations.


Cultural Informatics

In my previous post I addressed the question, who speaks for culture in an algorithmic age?  My claim was that humanities scholars once held significant sway over what ended up on our cultural radar screens but that, today, their authority is diminishing in importance.  The work of sorting, classifying, hierarchizing, and curating culture now falls increasingly on the shoulders of engineers, whose determinations of what counts as relevant or worthy result from computational processes.  This is what I’ve been calling “algorithmic culture.”

The question I want to address this week is, what assumptions about culture underlie the latter approach?  How, in other words, do engineers — particularly computer scientists — seem to understand and then operationalize the culture part of algorithmic culture?

My starting point is, as is often the case, the work of cultural studies scholar Raymond Williams.  He famously observed in Keywords (1976) that culture is “one of the two or three most complicated words in the English language.”  The term is definitionally capacious, that is to say, a result of centuries of shedding and accreting meanings, as well as the broader rise and fall of its etymological fortunes.  Yet Williams didn’t mean for this statement to be taken as merely descriptive; there was an ethic implied in it, too.  Tread lightly in approaching culture.  Make good sense of it, but take care not to diminish its complexity.

Those who take an algorithmic approach to culture proceed under the assumption that culture is “expressive.”  More specifically, all the stuff we make, the practices we engage in, and the experiences we have cast astonishing amounts of information out into the world.  This is what I mean by “cultural informatics,” the title of this post.  Algorithmic culture operates first of all by subsuming culture under the rubric of information — by understanding culture as fundamentally, even intrinsically, informational and then operating on it accordingly.

One of the virtues of the category “information” is its ability to link any number of seemingly disparate phenomena together: the movements of an airplane, the functioning of a genome, the activities of an economy, the strategies in a card game, the changes in the weather, etc.  It is an extraordinarily powerful abstraction, one whose import I have come to appreciate, deeply, over the course of my research.

The issue I have pertains to the epistemological entailments that flow from locating culture within the framework of information.  What do you have to do with — or maybe to — culture once you commit to understanding it informationally?

The answer to this question begins with the “other” of information: entropy, or the measure of a system’s disorder.  The point of cultural informatics is, by and large, to drive out entropy — to bring order to the cultural chaos by ferreting out the signal that exists amid all the noise.  This is basically how Google works when you execute a search.  It’s also how sites like Amazon.com and Netflix recommend products to you.  The presumption here is that there’s a logic or pattern hidden within culture and that, through the application of the right mathematics, you’ll eventually come to find it.
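
For those who like to see the math, here is a tiny sketch of that move at work.  The listed “tastes” are invented, but the calculation is the standard Shannon measure: the more patterned the preferences, the lower the entropy, and the more signal there is for a recommender to seize on.

```python
# Shannon entropy as the measure of disorder described above.
# The preference lists are invented, purely for illustration.
from collections import Counter
from math import log2

def shannon_entropy(events):
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

uniform_tastes = ["jazz", "folk", "metal", "opera"]   # maximal disorder
patterned_tastes = ["jazz", "jazz", "jazz", "folk"]   # a detectable pattern

print(shannon_entropy(uniform_tastes))    # 2.0 bits
print(shannon_entropy(patterned_tastes))  # ~0.81 bits: less noise, more signal
```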

There’s nothing fundamentally wrong with this understanding of culture.  Something like it has kept anthropologists, sociologists, literary critics, and a host of others in business for well over a century.  Indeed there are cultural routines you can point to, whether or not you use computers to find them.  But having said that, it’s worth mentioning that culture consists of more than just logic and pattern.  Intrinsic to culture is, in fact, noise, or the very stuff that gets filtered out of algorithmic culture.

At least, that’s what more recent developments within the discipline of anthropology teach us.  I’m thinking of Renato Rosaldo’s fantastic book Culture and Truth (1989), and in particular of the chapter “Putting Culture in Motion.”  There Rosaldo argues for a more elastic understanding of culture, one that refuses to see inconsistency or disorder as something needing to be purged.  “We often improvise, learn by doing, and make things up as we go along,” he states.  He puts it even more bluntly later on: “Do our options really come down to the vexed choice between supporting cultural order or yielding to the chaos of brute idiocy?”

The informatics of culture is oddly paradoxical in that it hinges on a conceptualization of culture that is at once more and less powerful.  It is more powerful because of the way culture can be rendered equivalent, informationally speaking, with all of those phenomena (and many more) I mentioned above.  And yet it is less powerful because of the way the livingness, the inventiveness — what Eli Pariser describes as the “serendipity” — of culture must be shed in the process of creating that equivalence.

What is culture without noise?  What is culture besides noise?  It is a domain of practice and experience diminished in its complexity.  And it is exactly the type of culture Raymond Williams warned us about, for it is one we presume to know but barely know the half of.


Who Speaks for Culture?

I’ve blogged off and on over the past 15 months about “algorithmic culture.”  The subject first came to my attention when I learned about the Amazon Kindle’s “popular highlights” feature, which aggregates data about the passages Kindle owners have deemed important enough to underline.

Since then I’ve been doing a fair amount of algorithmic culture spotting, mostly in the form of news articles.  I’ve tweeted about a few of them.  In one case, I learned that at some institutions college roommate selection is now being determined algorithmically — often by matching up individuals with similar backgrounds and interests.  In another, I discovered a pilot program that recommends college courses based on a student’s “planned major, past academic performance, and data on how similar students fared in that class.”  Even scholarly trends are now beginning to be mapped algorithmically in an attempt to identify new academic disciplines and hot-spots.
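
By way of illustration, here is a toy sketch of what such matching might look like under the hood.  Everything in it is my own assumption (the real systems are proprietary and surely more elaborate), but the basic move, scoring pairs by shared traits, is enough to make the point.

```python
# Hypothetical roommate matching: reduce each student to a set of
# self-reported interests and pair the most similar profiles.
students = {
    "avery": {"hiking", "jazz", "early riser"},
    "blake": {"gaming", "esports", "night owl"},
    "casey": {"hiking", "jazz", "early riser", "cooking"},
}

def best_match(name):
    others = (other for other in students if other != name)
    # Score each candidate by the number of interests shared with `name`.
    return max(others, key=lambda other: len(students[name] & students[other]))

print(best_match("avery"))  # 'casey': the most similar profile wins,
# which is precisely how such systems steer us toward the already known.
```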

There’s much to be impressed by in these systems, both functionally and technologically.  Yet, as Eli Pariser notes in his highly engaging book The Filter Bubble, a major downside is their tendency to push people in the direction of the already known, reducing the possibility for serendipitous encounters and experiences.

When I began writing about “algorithmic culture,” I used the term mainly to describe how the sorting, classifying, hierarchizing, and curating of people, places, objects, and ideas was beginning to be given over to machine-based information processing systems.  The work of culture, I argued, was becoming increasingly algorithmic, at least in some domains of life.

As I continue my research on the topic, I see an even broader definition of algorithmic culture starting to emerge.  The preceding examples (and many others I’m happy to share) suggest that some of our most basic habits of thought, conduct, and expression — the substance of what Raymond Williams once called “culture as a whole way of life” — are coming to be affected by algorithms, too.  It’s not only that cultural work is becoming algorithmic; cultural life is as well.

The growing prevalence of algorithmic culture raises all sorts of questions.  What is the determining power of technology?  What understandings of people and culture — what “affordances” — do these systems embody? What are the implications of the tendency, at least at present, to encourage people to inhabit experiential and epistemological enclaves?

But there’s an even more fundamental issue at stake here, too: who speaks for culture?

For the last 150 years or so, the answer was fairly clear.  The humanities spoke for culture and did so almost exclusively.  Culture was both its subject and object.  For all practical purposes the humanities “owned” culture, if for no other reason than the arts, language, and literature were deemed too touchy-feely to fall within the bailiwick of scientific reason.

Today the tide seems to be shifting.  As Siva Vaidhyanathan has pointed out in The Googlization of Everything, engineers — mostly computer scientists — today hold extraordinary sway over what does or doesn’t end up on our cultural radar.  To put it differently, amid the din of our public conversations about culture, their voices are the ones that increasingly get heard or are perceived as authoritative.  But even this statement isn’t entirely accurate, for we almost never hear directly from these individuals.  Their voices manifest themselves in fragments of code and interface so subtle and diffuse that the computer seems to speak, and to do so without bias or predilection.

So who needs the humanities — even the so-called “digital humanities” — when your Kindle can tell you what in your reading you ought to be paying attention to?


Book Rentals — A New Road to Serfdom?

Last week I blogged about the proliferation of book rental programs, particularly those focused on college students and their textbooks.  I raised questions about their promises of savings over traditional purchase and buyback, and asked whether most college students ever truly bought their textbooks, anyway.

But there’s more at stake in book renting — beyond the possibility of manipulation by advertising, or even the mutation of a business model.  There are broader social, economic, and attitudinal considerations that arise when people like you and me cease being the owners of books and instead become their lessees.

The last time book renting really caught on was during the Great Depression of the 1930s.  I’ve blogged about this before; it’s how the now-defunct Waldenbooks chain got its start.  What’s interesting to me is the context out of which book rental first emerged: a severe economic crisis — a time when the gap between rich and poor became a chasm, and disposable income all but dried up for ordinary people.  While I don’t believe the present-day renewal of interest in book renting is reducible to the economic meltdown of 2008 (and beyond), I cannot help but be struck by the similarity in the timing.

Indeed, in the United States, we’ve been hearing report after report about how the income of the wealthiest Americans — a tiny minority — has been growing, while that of the majority has been slipping.  Right now the wealthiest 20% of the population controls a whopping 84% of the nation’s wealth.  In crude terms, we’re moving in the direction of a society consisting of “haves” and “have-nots,” or, more to the point, of people who can afford to own property (broadly construed) and those who cannot.

Now, I don’t mean to deny the benefits that come from book renting.  Realistically, most people don’t want to own every book they read, and for good reason.  Not all books are keepers; they’re also heavy and consume valuable space — the paper ones, anyway.  Beyond that, when books become too expensive for people to own outright, it’s good to have some type of affordable option (in addition to libraries) to keep people reading. Rental may be something of a boon from an environmental standpoint, finally, because you can produce fewer goods and consume fewer resources in the process.

But there’s also a major downside.

Renting books, as with rental more broadly, means you no longer get to set the terms of your relationship with these goods.  Can you underline, highlight, or annotate a book you’ve rented?  What about dog-earing important pages?  Legally speaking, can you loan a rented book to a friend?  Can you duplicate any of the pages, assuming they’re for personal use?  In a traditional ownership situation, you’re the one who provides the answers to these questions.  You’re in control.  When you lease, the answers are dictated by the property owner, or rentier, who naturally puts her or his interests ahead of yours.

Renting is, then, a type of power relationship in which the rentier holds all of the cards — or, at least, the really good ones.  And here I’m reminded of a passage from the cultural studies scholar Raymond Williams, who, in his magnificent essay “Culture Is Ordinary” (1958), talks about how the coming of power and consumer goods to the impoverished Welsh countryside transformed people’s senses of themselves.  The ability to own consumer goods, Williams said, heightened the “personal grasp” his friends and family felt over their lives.  The presence of these items, and their ability to use them however they saw fit, made them less beholden to wealthy outside authorities.

Today, the tide seems to be shifting the opposite way.  Economic conditions are such that rental is becoming a more attractive option again — and not only for books.  And with it slips that sense of personal grasp Williams talked about.  Often, signing a lease is an exercise in having to accept terms and conditions someone else has laid out for you.  More disturbingly, doing so over and over again may well reinforce an attitude of deference and resignation among us, the lessees.

With apologies to Hayek, renting books could be a pathway leading us down the road to serfdom.


A World Without Oprah

Most of you probably already know that the final broadcast of The Oprah Winfrey Show aired Wednesday, May 25, 2011.  After 25 years of hosting the popular syndicated talk show, Oprah decided it was time to move on.

Of course, what that also means is the end of Oprah’s Book Club, which some credit with having “changed the way America reads.”  Others go further, suggesting that the Club “changed America” during its 15-year run, from 1996 to 2011.  I offer a more measured view in Chapter 4 of The Late Age of Print, where I focus on the strategies Oprah used to connect novels and some nonfiction works with actual and potential readers.

Whatever way you cut it, we now live in a world without daily appearances by Oprah.  I’m sure that upsets a great many people — individuals who, like my mother, dutifully tuned in most weekday afternoons to watch her show.  For them it’s as if an old friend has moved away.

Others, though, are overjoyed to finally see her go.  I could point you in the direction of any number of books and internet sites that hate on Oprah.  (Mostly they accuse her of having popularized therapy culture in the United States.)  Instead, I thought a little Late Age of Print back-story might provide a different perspective on why certain people aren’t saying “goodbye” to Oprah as much as “good riddance.”

I was fortunate to have had a bunch of generous souls read drafts of my book before Columbia University Press published it in 2009.  The feedback was rich and varied, and it certainly helped to improve the manuscript.  One strange thing kept cropping up, though.  The reviewers either loved or hated Chapter 4.

The two who most disliked it went as far as to recommend that I drop it from the book.  Essentially they were asking me to write about the past and present of popular book culture in the United States as if the Club never existed.  What they wanted was a world without Oprah.

Needless to say, I thought the suggestion was absurd.  The Oprah chapter was and is integral to the “consumerism” part of my “consumerism to control” argument; plus, it sets up, and serves as a foil of sorts to, the next chapter, on Harry Potter protectionism.

What’s telling is that both of the readers who suggested cutting Oprah keyed into my discussion of the Jonathan Franzen and James Frey controversies but completely overlooked the bulk of Chapter 4; mostly I explore how people featured on the Oprah show — the vast majority of whom were women, and many, women of color — read and responded to the Book Club selections.  In the end, I believe the reviewers’ objections to the chapter had less to do with my arguments and analysis and more to do with their lingering disdain for all things Oprah.

Thankfully my a-m-a-z-i-n-g editor at Columbia, Philip Leventhal, had the good sense to let me keep the chapter.  The many positive reviews I’ve since received of the book, and of the Oprah chapter in particular, would seem to confirm that I did manage to say something worthwhile there.

The funny thing is, despite the focus, Chapter 4 isn’t fundamentally about Oprah or her Book Club.  It’s more of an attempt to answer the question, What gets people excited about books and reading today?  That’s something everybody invested in book culture ought to be asking, from authors, publishers, and booksellers to librarians, teachers, parents, and beyond. Whether you like Oprah or not is beside the point.

Still, what made Oprah’s Book Club fascinating for me were the clever ways Winfrey and her producers responded to that question: by making book reading a more social — and sociable — activity; troubling generic distinctions between literature and life; touring viewers around bookstores; strategizing how to squeeze reading time into busy schedules; and varying the degree of difficulty of the selections so as not to alienate anyone.  They came up with these ideas, incidentally, by listening closely to readers and their needs.

Would that our English teachers (or reviewers) listened so well.  Farewell, Oprah, and thank you.  Your talk show may be gone, but you’ll always be a part of my world.
