Archive for Related Work

The Book Industry’s Moneyball

Some folks have asked me how I came to the idea of algorithmic culture, the subject of my next book as well as many of my blog posts of late.  I usually respond by pointing them in the direction of chapter three of The Late Age of Print, which focuses on Amazon.com, product coding, and the rise of digital communications in business.

It occurs to me, though, that Amazon wasn’t exactly what inspired me to begin writing about algorithms, computational processes, and the broader application of principles of scientific reason to the book world.  My real inspiration came from someone you’ve probably never heard of before (unless, of course, you’ve read The Late Age of Print). I’m talking about Orion Howard (O. H.) Cheney, a banker and business consultant whose ideas did more to lay the groundwork for today’s book industry than perhaps anyone’s.

Cheney was born in 1869 in Bloomington, Illinois.  For much of his adult life he lived and worked in New York State, where, from 1909 to 1911, he served as the State Superintendent of Banks and later as a high-level executive in the banking industry.  In 1932 he published what was to be the first comprehensive study of the book business in the United States, the Economic Survey of the Book Industry, 1930-1931.  It almost immediately came to be known as the “Cheney Report” due to the author’s refusal to soft-pedal his criticisms of, well, pretty much anyone who had anything to do with promoting books in the United States — from authors and publishers on down to librarians and school teachers, and everyone else in between.

In essence, Cheney wanted to fundamentally rethink the game of publishing.  His notorious report was the book industry equivalent of Moneyball.

If you haven’t read Michael Lewis’ Moneyball: The Art of Winning an Unfair Game (2003), you should.  It’s about how the Oakland A’s, one of the most poorly financed teams in Major League Baseball, used computer algorithms (so-called “Sabermetrics”) to build a successful franchise by identifying highly skilled yet undervalued players.  The protagonists of Moneyball, A’s General Manager Billy Beane and Assistant GM Paul DePodesta, did everything in their power to purge gut feeling from the game.  Indeed, one of the book’s central claims is that assessments of player performance have long been driven by unexamined assumptions about how ball players ought to look, move, and behave, usually to a team’s detriment.
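For what it’s worth, the A’s revolution turned less on exotic math than on choosing better measurements.  On-base percentage, the statistic at the heart of Lewis’ account, is just a ratio — one that credits walks, which traditional scouting largely ignored.  A minimal sketch (the player’s numbers below are invented for illustration):

```python
def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF).
    Unlike batting average, it rewards the unglamorous walk."""
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# A hypothetical player: modest batting average, but lots of walks.
print(round(on_base_percentage(hits=140, walks=90, hbp=5,
                               at_bats=500, sac_flies=5), 3))  # 0.392
```

The point of the exercise: a player like this looks ordinary by the traditional measures, yet gets on base nearly 40% of the time — exactly the kind of undervalued asset the A’s sought out.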

The A’s method for identifying talent and devising on-field strategy raised the ire of practically all baseball traditionalists.  It yielded insights that were so far afield of the conventional wisdom that its proponents were apt to seem crazy, even after they started winning big.

It’s the same story with The Cheney Report.  Consider this passage, where Cheney faults the book industry for operating on experience and intuition instead of a statistically sound “fact basis”:

Facts are the only basis for management in publishing, as they must be in any field.  In that respect, the book industry is painfully behind many others — both in facts relating to the industry as a whole and in facts of individual [publishing] houses….”Luck”; waiting for a best-seller; intuitive publishing by a “born publisher” — these must give way as the basis for the industry, for the sake of the industry and everybody in it….In too many publishing operations the theory seems to be that learning from experience means learning how to do a thing right by continuing to do it wrong (pp. 167-68).

This, more than 70 years before Moneyball!  And, like Beane and DePodesta, Cheney was raked over the coals by almost everyone in the industry he was criticizing.  They refused to listen to him, despite the fact that, in the throes of the Great Depression, most everything that had worked in the book industry didn’t seem to be working so well anymore.

Well, it’s almost the same story.  Beane and DePodesta have enjoyed excellent careers in Major League Baseball, despite the heresy of their ideas.  They’ve been fortunate to live at a time when algorithms and computational mathematics are familiar enough that at least some people can recognize the value of what they’ve brought to the game.

The Cheney Report, in contrast, had almost no immediate effect on the book industry.  The Report suffered due to its — and Cheney’s own — untimeliness.  The cybernetics revolution was still more than a decade off, and so the idea of imagining the book industry as a complexly communicative ecosystem was all but unimaginable to most.  This was true even with Cheney, who, in his insistence on ascertaining the “facts,” was fumbling around for what would later come to be known as “information.”

Today we live in O. H. Cheney’s vision for the book world, or, at least, some semblance of it.  People wonder why Amazon.com has so shaken up all facets of the industry.  It’s an aggressive competitor, to be sure, but its success is premised more on its having fundamentally rethought the game.  And for this Jeff Bezos owes a serious thank you to a grumpy old banker who, in the 1930s, wrote the first draft of what would go on to become publishing’s new playbook.

Share

What is an Algorithm?

For close to two years now I’ve been blogging about “algorithmic culture” — the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas.  Since I began there’s been something of a blossoming of work on the topic, including a recent special issue of the journal Theory, Culture and Society on codes and codings (see, in particular, the pieces by Amoore on derivatives trading and Cheney-Lippold on algorithmic identity). There’s also some excellent work developing around the idea of “algorithmic literacies,” most notably by Tarleton Gillespie and Cathy Davidson.  Needless to say, I’m pleased to have found some fellow travelers.

One of the things that strikes me about so much of the work on algorithmic culture, however rigorous and inspiring it may be, is the extent to which the word algorithm goes undefined.  It is as if the meaning of the word were plainly apparent: it’s just procedural math, right, mostly statistical in nature and focused on large data sets?  Well, sure it is, but to leave the word algorithm at that is to resign ourselves to living with a mystified abstraction.  I’m not willing to do that. To understand what algorithms do to culture, and the emerging culture of algorithms, it makes sense to spend some time figuring out what an algorithm is.
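By way of a starting point: the textbook illustration of an algorithm — a finite, unambiguous sequence of steps that transforms an input into an output — long predates the computer.  Euclid’s procedure for finding the greatest common divisor, here sketched in Python, is the canonical example:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeat one simple, well-defined step
    until a terminating condition is reached."""
    while b != 0:
        a, b = b, a % b  # replace the pair (a, b) with (b, a mod b)
    return a

print(gcd(1071, 462))  # 21
```

Nothing statistical, nothing about big data — just a mechanical procedure guaranteed to halt.  That the word now conjures something far grander is, in a sense, the whole question.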

Even before getting into semantics, however, it’s worth thinking about just how prevalent the word algorithm is.  Why even bother if it’s just some odd term circulating on the fringes of language?  What’s interesting about algorithm is that, until about 1960 or so, it was exactly that type of word.  Here’s a frame grab from a search I ran recently on the Google Books Ngram Viewer, which allows you to chart the frequency of word usage in the data giant’s books database.

(Yes, I realize the irony in using the tools of algorithmic culture to study algorithmic culture.  We are always hopelessly complicit.)

What does this graph tell us?  First, algorithm remains a fairly specialized word, even to this day.  At its peak (circa 1995 or so) its frequency was just a notch over 0.0024%; compare that to the word the, which accounts for about 5% of all English language words appearing in the Google Books database.  More intriguing to me, though, is the fact that the word algorithm almost doesn’t register at all until about 1900, and that it’s a word whose stock has clearly been on the rise ever since 1960.  Indeed, the sharp pitch of the curve since then is striking, suggesting its circulation well beyond the somewhat insular confines of mathematics.

Should we assume that the word algorithm is new, then?  Not at all.  It is, in fact, a fairly old word, derived from the name of the 9th-century Persian mathematician al-Khwārizmī, who developed some of the first principles of algebra.  Even more intriguing to me, though, is the fact that the word algorithm was not, until about 1960, the only form of the word in use.  Before then one could also speak of an algorism, with an “s” instead of a “th” in the middle.

Based on the numbers, algorism has never achieved particularly wide circulation, although its fortunes did start to rise around 1900.  Interestingly, it reaches its peak usage (as determined by Google) around 1960 — which is to say just as algorithm starts to achieve broader usage.  Here’s what the two terms look like when charted together:

Google Ngram | Algorithm, Algorism
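Incidentally, the Ngram Viewer’s y-axis is nothing more exotic than relative frequency: a word’s occurrences in a given year divided by the total number of words Google counted for that year.  A minimal sketch of the calculation (the counts below are invented placeholders, not Google’s actual data):

```python
# Hypothetical per-year counts; "_total" stands in for all words
# Google counted in the corpus for that year.
yearly_counts = {
    1900: {"algorithm": 50, "algorism": 400, "_total": 1_000_000_000},
    1960: {"algorithm": 9_000, "algorism": 8_000, "_total": 2_000_000_000},
    1995: {"algorithm": 120_000_000, "algorism": 1_000,
           "_total": 5_000_000_000_000},
}

def relative_frequency(word: str, year: int) -> float:
    """Percentage of all corpus words in `year` that are `word`."""
    data = yearly_counts[year]
    return 100.0 * data[word] / data["_total"]

for year in sorted(yearly_counts):
    print(year, f"{relative_frequency('algorithm', year):.6f}%")
```

With these made-up figures, the 1995 value works out to 0.0024% — the same order of magnitude as the real peak mentioned above, which gives a feel for just how specialized even a “booming” word can be.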

Where does all this leave us, then?  Before even attempting to broach the issue of semantics, or the meaning of the word algorithm, we first have to untangle a series of historical knots.

  • Why are there two forms of the “same” word?
  • Why does usage of algorithm take off around 1960?
  • Why does algorism fade after 1960, following a modest but steady 60 year rise?

I have answers to each of these questions, but the history’s so dense that it’s probably not best to share it in short form here on the blog.  (I give talks on the subject, however, and the material all will eventually appear in the book.)  For now, suffice it to say that any consideration of algorithms or algorithmic culture ought to begin not from the myopia of the present day but instead from the vantage point of history.

Indeed, it may be that the question, “what is an algorithm?” is the wrong one to ask — or, at least, the wrong one to ask, first.  Through what historical twists and turns did we arrive at today’s preferred senses of the word algorithm?  That seems to me a more pressing and pertinent question, because it compels us to look into the cultural gymnastics by which a word with virtually no cachet has grown into one whose referents increasingly play a decisive role in our lives.

Share

Performing Scholarly Communication

A short piece I wrote for the journal Text and Performance Quarterly (TPQ) has just been published.  It’s called “Performing Scholarly Communication,” and it’s included in a special section on “The Performative Possibilities of New Media” edited by the wonderful Desireé Rowe and Benjamin Myers.  The section includes contributions by Michael LeVan and Marcyrose Chvasta, Jonathan M. Gray, and Craig Gingrich-Philbrook, along with an introduction by and a formal contribution from Desireé and Ben.  You can peruse the complete contents here.

My essay is a companion to another short piece I published (and blogged about) last year called “The Visible College.”  “The Visible College” focuses on how journal publications hide much of the labor that goes into their production.  It then goes on to make a case for how we might re-engineer academic serials to better account for that work.  “Performing Scholarly Communication” reflects on one specific publishing experiment I’ve run over on my project site, The Differences and Repetitions Wiki, in which I basically opened the door for anyone to co-write an essay with me.  Both pieces also talk about the history of scholarly journal publishing at some length, mostly in an effort to think through where our present-day journal publishing practices, or performances, come from.  One issue I keep coming back to here is scarcity, or rather how scholars, journal editors, and publishers operate today as if the material relations of journal production typical of the 18th and 19th centuries still straightforwardly applied.

I’ve mentioned before that Desireé and Ben host a wonderful weekly podcast called The Critical Lede.  Last week’s show focused on the TPQ forum and gathered together all of the contributors to discuss it.  I draw attention to this not only because I really admire Desireé and Ben’s podcast but also because it fulfills an important scholarly function.  You may not know this, but the publisher of TPQ, Taylor & Francis, routinely “embargoes” work published in this and many other of its journals.  The embargo stipulates that authors are barred from making any version of their work available on a public website for 18 months from the date of publication.  I’d be less concerned about this stipulation if more libraries and institutions had access to TPQ and journals like it, but alas, they do not.  In other words, if you cannot access TPQ, at least you can get a flavor of the research published in the forum by listening to me and my fellow contributors dish about it over on The Critical Lede.

I should add that the Taylor & Francis publication embargo hit close to home for me.  Almost a year and a half ago I posted a draft of “Performing Scholarly Communication” to The Differences and Repetitions Wiki and invited people to comment on it.  The response was amazing, and the work improved significantly as a result of the feedback I received there.  The problem is, I had to “disappear” the draft or pre-print version once my piece was accepted for publication in TPQ.  You can still read the commentary, which T&F does not own, but that’s almost like reading marginalia absent the text to which the notes refer!

Here’s the good news, though: if you’d like a copy of “Performing Scholarly Communication” for professional purposes, you can email me to request a free PDF copy.  And with that let me say that I do indeed appreciate how Taylor & Francis does support this type of limited distribution of one’s work, even as I wish the company would do much better in terms of supporting open access to scholarly research.

Share

Digital Natives? Not So Fast

I’m about to enter the final week of my undergraduate “Cultures of Books and Reading” class here at Indiana University.  I’ll be sad to see it go.  Not only has the group been excellent this semester, but I’ve learned so much about how my students are negotiating this protracted and profound moment of transition in the book world — what I like to call, following J. David Bolter, “the late age of print.”

One of the things that struck me early on in the class was the extent to which my students seemed to have embraced the notion that they’re “digital natives.”  This is the idea that people born after, say, 1985 or so grew up in a world consisting primarily of digital media.  They are, as such, more comfortable and even savvy with it than so-called “digital immigrants” — analog frumps like me who’ve had to wrestle with the transition to digital and who do not, therefore, fundamentally understand it.

It didn’t occur to me until last Wednesday that I hadn’t heard mention of the term “digital natives” in the class for weeks.  What prompted the revelation was a student-led facilitation on Robert Darnton’s 2009 essay from the New York Review of Books, on the Google book scanning project.

We’d spent the previous two classes weighing the merits of Kevin Kelly’s effusions about digital books and Sven Birkerts’ poo-pooings of them.  In Darnton we had a piece not only about the virtues and vices of book digitization, but also one that offered a sobering glimpse into the potential political-economic and cultural fallout had the infamous Google book settlement been approved earlier this year.  It’s a measured piece, in other words, and deeply cognizant of the ways in which books, however defined, move through and inhabit people’s worlds.

In this it seemed to connect with the bookish experiences of my group of purported digital natives, whose remarks confounded any claims that theirs was a generationally specific, or unified, experience with media.

Here’s a sampling from the discussion (and hats off to the facilitation group for prompting such an enlightening one!):

One student mentioned a print-on-paper children’s book her mother had handed down to her.  My student’s mother had inscribed it when she herself was seven or eight years old, and had asked her daughter to add her own inscription when she’d reached the same age.  My student intends to pass the book on one day to her own children so that they, too, may add their own inscriptions.  The heirloom paper book clearly is still alive and well, at least in the eyes of one digital native.

Another student talked about how she purchases paper copies of the e-books she most enjoys reading on her Barnes & Noble Nook.  I didn’t get the chance to ask whether these paper copies were physical trophies or if she actually read them, but in any case it’s intriguing to think about how the digital may feed into the analog, and vice-versa.

Other students complained about the amount of digitized reading their professors assign, stating that they’re less likely to read for class when the material is not on paper.  Others chimed in here, mentioning that they’ll read as much as their prepaid print quotas at the campus computer labs allow, and then after that they’re basically done.  (Incidentally, faculty and students using Indiana University’s computer labs printed about 25 million — yes, million — pages during the 2010-2011 academic year.)

On a related note, a couple of students talked about how they use Google Books to avoid buying expensive course texts.  Interestingly, they noted, 109 pages of one of the books I assign in “The Cultures of Books and Reading” happen to appear there.  The implication was that they’d read what was cheap and convenient to access, but nothing more.  (Grimace.)

Finally, I was intrigued by one of the remarks from my student who, at the beginning of the term, had asked me about the acceptability of purchasing course texts for his Kindle.  He discussed the challenges he’s faced in making the transition from print to digital during his tenure as a college student.  He noted how much work it’s taken him to migrate from one book form (and all the ancillary material it generates) to the other.  Maybe he’s a digital native, maybe he isn’t; the point is, he lives in a world that’s still significantly analog, a world that compels him to engage in sometimes fraught negotiations with whatever media he’s using.

All this in a class of 33 students!  Based on this admittedly limited sample, I feel as if the idea of “digital natives” doesn’t get us very far.  It smooths over too many differences.  It also lets people who embrace the idea off the hook too easily, analytically speaking, for it relieves them of the responsibility of accounting for the extent to which print and other “old” media still affect the daily lives of people, young or old.

Maybe it’ll be different for the next generation.  For now, though, it seems as if we all are, to greater and lesser degrees, digital immigrants.

Share

The Visible College

After having spent the last five weeks blogging about algorithmic culture, I figured both you and I deserved a change of pace.  I’d like to share some new research of mine that was just published in a free, Open Access periodical called The International Journal of Communication.

My piece is called “The Visible College.”  It addresses the many ways in which the form of scholarly publications — especially that of journal articles — obscures the density of the collaboration typical of academic authorship in the humanities.  Here’s the first line: “Authorship may have died at the hands of a French philosopher drunk on Balzac, but it returned a few months later, by accident, when an American social psychologist turned people’s attention skyward.”  Intrigued?

My essay appears as part of a featured section on the politics of academic labor in the discipline of communication.  The forum is edited by my good friend and colleague, Jonathan Sterne.  His introductory essay is a must-read for anyone in the field — and, for that matter, anyone who receives a paycheck for performing academic labor.  (Well, maybe not my colleagues in the Business School….)  Indeed it’s a wonderful, programmatic piece outlining how people in universities can make substantive change there, both individually and collectively.  The section includes contributions from: Thomas A. Discenna; Toby Miller; Michael Griffin; Victor Pickard; Carol Stabile; Fernando P. Delgado; Amy Pason; Kathleen F. McConnell; Sarah Banet-Weiser and Alexandra Juhasz; Ira Wagman and Michael Z. Newman; Mark Hayward; Jayson Harsin; Kembrew McLeod; Joel Saxe; Michelle Rodino-Colocino; and two anonymous authors.  Most of the essays are on the short side, so you can enjoy the forum in tasty, snack-sized chunks.

My own piece presented me with a paradox.  Here I was, writing about how academic journal articles do a lousy job of representing all the labor that goes into them — in the form of an academic journal article!  (At least it’s a Creative Commons-licensed, Open Access one.)  Needless to say, I couldn’t leave it at that.  I decided to create a dossier of materials relating to the production of the essay, which I’ve archived on another of my websites, The Differences and Repetitions Wiki (D&RW).  The dossier includes all of my email exchanges with Jonathan Sterne, along with several early drafts of the piece.  It’s astonishing to see just how much “The Visible College” changed as a result of my dialogue with Jonathan.  It’s also astonishing to see, then, just how much of the story of academic production gets left out of that slim sliver of “thank-yous” we call the acknowledgments.

“The Visible College Dossier” is still a fairly crude instrument, admittedly.  It’s an experiment — one among several others hosted on D&RW in which I try to tinker with the form and content of scholarly writing.  I’d welcome your feedback on this or any other of my experiments, not to mention “The Visible College.”

Enjoy — and happy Halloween!  Speaking of which, if you’re looking for something book related and Halloween-y, check out my blog post from a few years ago on the topic of anthropodermic bibliopegy.

Share

Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends whom I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I count myself in that group.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret both the processes and products of algorithmic culture, critically.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online.  Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system.  Just as literary studies teaches students how to read for tone, so, too, would those of us invested in algorithmic literacies begin teaching how to read for evidence of this type of manipulation.

Finally, we need to undertake comparative work in an effort to reverse engineer Google, Facebook, and Amazon, et al.’s proprietary algorithms.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.

Share

Happy New Year

Since the New Year is always a time for endings and beginnings, I thought I’d share an image I snapped recently at the Monroe County Public Library here in Bloomington, Indiana.  It’s of two old library check-out cards — the type that, when I was young, used to be slipped into the front covers of books and stamped with due dates.

My favorite part has to be the warning about a ten cent penalty in the event the patron loses the check-out slip. It’s also intriguing to see that the latest due date appearing on the top card is from 1982. I wonder if it was from an unpopular book, or if the MCPL began computerizing around then. I should have asked.

If you’re wondering where I found these cards, the answer may come as something of a disappointment. They were in the children’s room, where they were being used as scrap paper for youngsters to practice writing. (At least they hadn’t been thrown away, I suppose.) I’m not much of a nostalgic, yet some part of me still wishes they’d been on display showing visitors — especially those raised in the computer age — the history of libraries and librarianship. It’s interesting to think about how a record keeping device that was once important enough to carry a penalty for loss, however small, is now discarded on purpose. Change isn’t inevitable, but it sure is relentless.

Happy New Year, everyone, and I’ll see you again early in 2011 with some exciting news.

Share

Critical Lede on “The Abuses of Literacy”

My favorite podcast, The Critical Lede, just reviewed my recent piece appearing in Communication and Critical/Cultural Studies, “The Abuses of Literacy: Amazon Kindle and the Right to Read.”  Check out the broadcast here — and thanks to the show’s great hosts, Benjamin Myers and Desiree Rowe of the University of South Carolina Upstate.

Share

The Right to Read

A couple of weeks ago I blogged here about a short essay I’d written, “E-books: No Friends of Free Expression,” and about a longer academic journal article on which it was based called, “The Abuses of Literacy: Amazon Kindle and the Right to Read.”  Well, since then I’ve had a bunch of people writing in asking for copies of the article, and even more asking me about the “right to read.”

Here’s what I know about the latter.

To the best of my knowledge, the idea first appeared in a 1994 law review article by Jessica Litman called “The Exclusive Right to Read.”  It was picked up, extended, and given significant legal grounding by Julie E. Cohen in her 1996 (master)piece, “The Right to Read Anonymously.”  Then, in 1997, free software guru Richard Stallman dramatized the idea in a pithy little parable called — you guessed it — “The Right to Read.”

The American Library Association proposed something like a “right to read” back in 1953, when it issued its first “Freedom to Read Statement.”  (The statement has since been updated, most recently in 2004, although it remains relatively quiet on the subject of 3G- and wifi-enabled e-readers.)  Meanwhile, the Reading Rights Coalition, an advocacy organization, was formed in 2009 after the Authors Guild claimed the Kindle 2’s text-to-speech function violated its members’ audiobook rights — a claim that understandably didn’t sit well with the 30 million Americans with “print disabilities.”  Finally, librarian Alycia Sellie and technologist Matthew Goins developed a “Readers’ Bill of Rights for Digital Books,” which concludes with the important provision that reader information ought to remain private.

I’m sure there’s lots that I’ve missed and would welcome any further information you may have about the right to read.  For now, I hope you’re enjoying National Freedom of Speech Week, and don’t forget that reading is an integral part of the circuitry of free expression.

Share

E-Books: No Friends of Free Expression

I’ve just published a short essay called “E-books — No Friends of Free Expression” in the National Communication Association’s online magazine, Communication Currents. It was commissioned in anticipation of National Freedom of Speech Week, which will be recognized in the United States from October 18th to 24th, 2010. Here’s a short excerpt from the piece, in case you’re interested:

It may seem odd to suggest that reading has something to do with freedom of expression. It’s one thing to read a book, after all, but a different matter to write one. Nevertheless, we shouldn’t lose sight of the fact that reading is an expressive activity in its own right, resulting in notes, dog-eared pages, highlights, and other forms of communicative fallout. Even more to the point, as Georgetown Law Professor Julie E. Cohen observes, “Freedom of speech is an empty guarantee unless one has something—anything—to say…[T]he content of one’s speech is shaped by one’s response to all prior speech, both oral and written, to which one has been exposed.” Reading is an integral part of the circuitry of free expression, because it forms a basis upon which our future communications are built. Anything that impinges upon our ability to read freely is liable to short-circuit this connection.

I then go on to explore the surveillance activities that are quite common among commercially available e-readers; I also question how the erosion of private reading may affect not only what we choose to read but also what we may then choose to say.

The Comm Currents piece is actually a precis of a much longer essay of mine just out in Communication and Critical/Cultural Studies 7(3) (September 2010), pp. 297-317, as part of a special issue on rights. The title is “The Abuses of Literacy: Amazon Kindle and the Right to Read.” Here’s the abstract:

This paper focuses on the Amazon Kindle e-reader’s two-way communications capabilities on the one hand and on its parent company’s recent forays into data services on the other. I argue that however convenient a means Kindle may be for acquiring e-books and other types of digital content, the device nevertheless disposes reading to serve a host of inconvenient—indeed, illiberal—ends. Consequently, the technology underscores the growing importance of a new and fundamental right to counterbalance the illiberal tendencies that it embodies—a “right to read,” which would complement the existing right to free expression.

Keywords: Kindle; Amazon.com; Digital Rights; Reading; Privacy

Feel free to email me if you’d like a copy of “The Abuses of Literacy.” I’d be happy to share one with you.

The title of the journal article, incidentally, pays homage to Richard Hoggart’s famous book The Uses of Literacy, which is widely recognized as one of the founding texts of the field of cultural studies. It’s less well known that he also published a follow-up piece many years later called “The Abuses of Literacy,” which, as it turns out, he’d intended to be the title of Uses before the publisher insisted on a change.

Anyway, I hope you enjoy the work. Feedback is always welcome and appreciated.

Share