Concurring Opinions

I’m guest posting this week over on the legal blog Concurring Opinions, which is holding a symposium on Georgetown law professor Julie E. Cohen’s great new book, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, 2012).  (FYI, it’s available to download for free under a Creative Commons license.)  In other words, even though I don’t have any new material for you here on the Late Age of Print, I hope you’ll follow me on over to Concurring Opinions.

Having said that, I thought it might be interesting to link you to a recent study I saw mentioned in the Washington Post sometime last week.  The author, Craig L. Garthwaite, a professor at the Kellogg School of Management at Northwestern University, argues that Oprah’s Book Club actually hurt book sales overall, this despite the bump that occurred for each one of Winfrey’s selections.  I haven’t yet had a chance to review the piece carefully, especially its methodology, but I have to say that I’m intrigued by its counter-intuitiveness.  I’d welcome any thoughts or feedback you may have on the Garthwaite study; I’ll do my best to chime in as well.

See you next week, and in the meantime, don’t forget to like the new Late Age of Print Facebook page.


E-Reading on a Schwinn

I just wrapped up an interview about Late Age, where my interlocutor asked me about my scholarly relationship to e-books.  It was such an intriguing question, because it forced me to admit to, and to begin working through, a contradiction with which I’ve wrestled privately for quite some time now: the amount I write about e-books is incommensurate with my consumption of them.  Or, to put it more straightforwardly, I haven’t read many e-books, despite the fact that I write about them all the time.

There you have it, then.  The cat’s out of the bag.  Truth be told, I’ve read exactly two e-books “cover to cover” (although we cannot exactly say that about them, can we?) since I began writing about the technology back in 2001: Keith Sawyer’s Group Genius: The Creative Power of Collaboration; and Michael Lewis’ Moneyball: The Art of Winning an Unfair Game.  Currently I’m halfway through the Walter Isaacson biography, Steve Jobs.  That brings the tally up to two-and-a-half, and it may be as high as three, four, or five once you’ve factored in all the sample chapters I’ve downloaded and read.

The question is, why have I kept my distance?  I’m not lazy — of that much I can assure you.  I’ve spent countless hours studying the designs, interfaces, capabilities, terms of use, and any number of other aspects of most major commercially available e-readers.  And I’m not one of those fly-by-night academics who picks up on some trend but has no personal investment in it.  I don’t read a lot of e-books because I can’t read a lot of e-books.  The technology as it currently exists is ill-equipped to handle my particular needs as a scholarly reader.

I’ll show you what I mean.  Below are three photos of a book — Stuart Ewen’s Captains of Consciousness — that my graduate students and I discussed two weeks ago in seminar.


The first shows the inner flyleaf, where I’ve created an index based on key ideas and themes from the text.  The second is the title page, where I’ve jotted down a brainstorm about the text in general.  The third shows another small index consisting of passages, themes, and so forth that I wanted to address specifically in class.

I know what you’re thinking: Kindle, Nook, and iBooks all allow you to take notes on a text, mark passages, and more.  You’re absolutely right.  The difference for me, though, is the way the form of a physical book allows you to organize this information, both spatially and temporally.  You’ll see, for instance, the double lines appearing in my index in the image at left.  That’s a “generational” marker for me, cuing me to notes I took upon rereading (and rereading and rereading…) the text.  This also then signals ideas and themes that were most recently on my mind, ones that I ought to be returning to in my current research.  Ditto the brainstorm page, which allows me to take notes on the text independent of any specific passage.  (Sometimes these pages of notes become quite elaborate for me, in fact.)

It’s an archival issue, I suppose, and as a scholar I have unusually specific archival needs when it comes to reading books.  And with this I realize that however much the Kindle, Nook, and iPad may be devices for readers (that’s the tagline of a marketing campaign for the e-ink Kindle), they’re actually designed for general or nonspecialist readers.

This isn’t really surprising, since to grow market share you want to capture as broad an audience as possible.  But beyond that, most people don’t need to read books like scholars.  In fact, that’s a reason why portable, paperback books became so popular in the late 19th century and again in the mid-to-late 20th century: books can actually be cheap and even disposable things to which readers might not ever return. Very few people want or need to treat them as sacred objects.

So why am I not a prolific e-reader?  I’ll put it this way: would you rather ride the Tour de France on a clunky, off-the-shelf Schwinn or a custom Italian racing bike?

I’m not drawing this analogy to be snooty.  As I’ve said, most people don’t need the expensive Italian racing bike.  It would be a complete waste of money, especially when most of the time you’re just out for a casual ride.  Instead, I’m trying to underscore how the mark of a good technology is that it seems to disappear for the user — something I discovered, incidentally, from reading the Kindle edition of the Steve Jobs biography.  The present generation of e-readers forces me to get caught up in and become frustrated with the technology — this in contrast to the technology of the physical book, which has more of a capacity to disappear for me, or at least work with me.

Maybe I’ll come around in the end, or maybe Amazon, Barnes & Noble, and Apple will continue adding features to their devices so that they become more agreeable to specialist readers like me.  Until then, though, I’m sticking to atoms for serious reading and bits for fun.


P.S.  Please don’t forget to like the Late Age of Print Facebook page that I just launched!


"The Shannon and Weaver Model"

First things first: some housekeeping.  Last week I launched a Facebook page for The Late Age of Print.   Because so many of my readers are presumably Facebook users, I thought it might be nice to create a “one-stop shop” for updates about new blog content, tweets, and anything else related to my work on the relationship between print media and algorithmic culture.  Please check out the page and, if you’re so inclined, give it a like.

Okay…on to matters at hand.

This week I thought it might be fun to open with a little blast from the past.  Below is a picture of the first page of my notebook from my first collegiate communication course.  I was an eighteen-year-old beginning my second semester at the University of New Hampshire, and I had the good fortune of enrolling in Professor W—-’s introductory “Communication and the Social Order” course, CMN 402.  It wouldn’t be an overstatement to call the experience life-changing, since the class essentially started me on my career path.

What interests me (beyond the hilariously grumpy-looking doodle in the margin) is a diagram appearing toward the bottom of the page.  It’s an adaptation of what I would later be told was the “Shannon and Weaver” model of communication, named for the electrical engineer Claude Shannon and the mathematician Warren Weaver.

CMN 402 - UNH Jan. 28, 1992

Note what I jotted down immediately below the diagram: “1.) this model is false (limited) because comm is only one way (linear); 2.) & assumes that sender is active & receiver is passive; & 3.) ignores the fact that sender & receiver interact w/ one another.”  Here’s what the model looks like in its original form, as published in Shannon and Weaver’s Mathematical Theory of Communication (1949, based on a paper Shannon published in 1948).

Shannon & Weaver Model of Communication, 1948/1949
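A quick gloss on the mathematics, for the curious — this summary is mine, not anything from the old notebook.  Two results anchor Shannon’s theory: the entropy of an information source, which measures the average information per message, and the capacity of a noisy channel, which sets a hard ceiling on how much information can pass through it reliably.

```latex
% Entropy of a source X emitting messages with probabilities p_1, ..., p_n:
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i \qquad \text{(bits per message)}

% Capacity of a band-limited channel with bandwidth B, signal power S,
% and Gaussian noise power N (the Shannon--Hartley theorem):
C = B \log_2\!\left(1 + \frac{S}{N}\right) \qquad \text{(bits per second)}
```

Notice that noise sits right inside the second formula: the whole apparatus is about recovering signal in spite of it.  Keep that vocabulary in mind; it returns below.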

Such was the lesson from day one of just about every communication theory course I subsequently took and, later on, taught.  Shannon and Weaver were wrong.  They were scientists who didn’t understand people, much less how we communicate.  They reduced communication to a mere instrument and, in the process, stripped it of its deeply humane, world-building dimensions.  In graduate school I discovered that if you really wanted to pull the rug out from under another communication scholar’s work, you accused them of premising their argument on the Shannon and Weaver model.  It was the ultimate trump card.

So the upshot was, Shannon and Weaver’s view of communication was worth lingering on only long enough to reject it.  Twenty years later, I see something more compelling in it.

A couple of things started me down this path.  First, several years ago I read Tiziana Terranova’s wonderful book Network Culture: Politics for the Information Age (Pluto Press, 2004), which contains an extended reflection on Shannon and Weaver’s work.  Most importantly, she takes it seriously, thinking through its relevance to contemporary information ecosystems.  Second, I happened across an article in the July 2010 issue of Wired magazine called “Sergey’s Search,” about Google co-founder Sergey Brin’s use of big data to find a cure for Parkinson’s Disease, to which he is genetically predisposed.  This passage in particular made me sit up and take notice:

In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”

Here was my aha! moment.  A Google search initiates a process of filtering the web, which, according to Brin, starts out as a thick soup of noisy data.  Its algorithms ferret out the signal amid all this noise, probabilistically, yielding the rank-ordered results you end up seeing on screen.

It’s textbook Shannon and Weaver.  And here it is, at the heart of a service that handles three billion searches per day — which is to say nothing of Google’s numerous other products, let alone those of its competitors, that behave accordingly.
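If you want to see the principle Brin describes in action, here’s a toy sketch — my own illustration, with invented numbers, emphatically not Google’s actual method — of how quantity can compensate for quality.  Each simulated source observes a “true” flu curve through heavy noise; averaging across many sources recovers the pattern:

```python
import numpy as np

rng = np.random.default_rng(42)

# A hypothetical "true" weekly flu curve over one year (invented for illustration).
weeks = np.arange(52)
true_curve = 100 + 80 * np.exp(-((weeks - 10) ** 2) / 20)  # a winter spike

def observed(n_sources):
    """Average n_sources noisy observations of the true curve."""
    noise = rng.normal(0, 60, size=(n_sources, 52))  # noise dwarfs the signal
    return (true_curve + noise).mean(axis=0)

for n in (1, 10, 1000):
    r = np.corrcoef(true_curve, observed(n))[0, 1]
    print(f"{n:>5} noisy sources -> correlation with the true curve: {r:.3f}")
```

A single noisy source barely tracks the underlying curve; a thousand of them recover it almost perfectly.  Signal out of noise, probabilistically — exactly the problem Shannon formalized.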

So how was it, I wondered, that my discipline, Communication Studies, could have so completely missed the boat on this?  Why do we persist in dismissing the Shannon and Weaver model, when it’s had such uptake in and application to the real world?

The answer has to do with how one understands the purposes of theory.  Should theory provide a framework for understanding how the world actually works?  Or should it help people to think differently about their world and how it could work?  James Carey puts it more eloquently in Communication as Culture: Essays on Media and Society: “Models of communication are…not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal” (p. 32).

The genius of Shannon’s original paper from 1948 and its subsequent popularization by Weaver lies in many things, among them their having formulated a model of communication located on the threshold of these two understandings of theory.  As a scientist Shannon surely felt accountable to the empirical world, and his work reflects that.  Yet, it also seems clear that Shannon and Weaver’s work has, over the last 60 years or so, taken on a life of its own, feeding back into the reality they first set about describing.  Shannon and Weaver didn’t merely model the world; they ended up enlarging it, changing it, and making it over in the image of their research.

And this is why, twenty years ago, I was taught to reject their thinking.  My colleagues in Communication Studies believed Shannon and Weaver were trying to model communication as it really existed.  Maybe they were.  But what they were also doing was pointing in the direction of a nascent way of conceptualizing communication, one that’s had more practical uptake than any comparable framework Communication Studies has thus far managed to produce.

Of course, in 1992 the World Wide Web was still in its infancy; Sergey Brin and Larry Page were, like me, just starting college; and Google wouldn’t appear on the scene for another six years.  I can’t blame Professor W—- for misinterpreting the Shannon and Weaver model.  If anything, all I can do is say “thank you” to her for introducing me to ideas so rich that I’ve wrestled with them for two decades.


How Publishers Misunderstand Kindle

Last week, in a post entitled “The Book Industry’s Moneyball,” I blogged about the origins of my interest in algorithmic culture — the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas.  There I discussed a study published in 1932, the so-called “Cheney Report,” which imagined a highly networked book industry whose decisions were driven exclusively by “facts,” or in contemporary terms, “information.”

It occurred to me, in thinking through the matter more this week, that the Cheney Report wasn’t the only way in which I stumbled on to the topic of algorithmic culture.  Something else led me there as well — something more present-day.  I’m talking about the Amazon Kindle, which I wrote about in a scholarly essay published in the journal Communication and Critical/Cultural Studies (CCCS) back in 2010.  The title is “The Abuses of Literacy: Amazon Kindle and the Right to Read.”  (You can read a precis of the piece here.)

The CCCS essay focused on privacy issues related to devices like the Kindle, Nook, and iPad, which quietly relay information about what and how you’ve been reading back to their respective corporate custodians.  Since it appeared, that’s become a fairly widespread concern, and I’d like to think my piece had something to do with nudging the conversation in that direction.

Anyway, in prepping to write the essay, a good friend of mine, M—-, suggested I read Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing (New Riders, 2006).   It’s an astonishingly good book, one I would recommend highly to anyone who writes about digital technologies.

Greenfield - Everyware

I didn’t really know much about algorithms or information when I first read Everyware.  Of course, that didn’t stop me from quoting Greenfield in “The Abuses of Literacy,” where I made a passing reference to what he calls “ambient informatics.”  This refers to the idea that almost every aspect of our world is giving off some type of information.  People interested in ubiquitous computing, or ubicomp, want to figure out ways to detect, process, and in some cases exploit that information.  With any number of mobile technologies, from smart phones to Kindle, ubicomp is fast becoming an everyday part of our reality.

The phrase “ambient informatics” has stuck with me ever since I first quoted it, and on Wednesday of last week it hit me again like a lightning bolt.  A friend and I were talking about Google Voice, which, he reminded me, may look like a telephone service from the perspective of its users, but it’s so much more from the perspective of Google.  Voice gives Google access to hours upon hours of spoken conversation that it can then use to train its natural language processing systems — systems that are essential to improving speech-to-text recognition, voice-based searching, and any number of other vox-based services.  It’s a weird kind of switcheroo, one that most of us don’t even realize is happening.

So what would it mean, I wondered, to think about Kindle not from the vantage point of its users but instead from that of Amazon.com?  As soon as you ask this question, it becomes apparent that Kindle is only nominally an e-reader.  It is, like Google Voice, a means to some other, data-driven end: specifically, the end of apprehending the “ambient informatics” of reading.  In this scenario Kindle books become a hook whose purpose is to get us to tell Amazon.com more about who we are, where we go, and what we do.

Imagine what Amazon must know about people’s reading habits — and who knows what else?!  And imagine how valuable that information could be!

What’s interesting to me, beyond the privacy concerns I’ve addressed elsewhere, is how, with Kindle, book publishers now seem to be confusing means with ends.  It’s understandable, really.  As literary people they’re disposed to think about books as ends in themselves — as items people acquire for purposes of reading.  Indeed, this has long been the “being” of books, especially physical ones. With Kindle, however, books are in the process of getting an existential makeover.  Today they’re becoming prompts for all sorts of personal and ambient information, much of which then goes on to become proprietary to Amazon.com.

I would venture to speculate that, despite the success of the Nook, Barnes & Noble has yet to fully wake up to this fact as well.  For more than a century the company has fancied itself a bookseller — this in contrast to Amazon, which CEO Jeff Bezos once described as “a technology company at its core” (Advertising Age, June 1, 2005).  The one sells books, the other deals in information (which is to say nothing of all the physical stuff Amazon sells).  The difference is fundamental.

Where does all this leave us, then?  First and foremost, publishers need to begin recognizing the dual existence of their Kindle books: that is, as both means and ends.  I suppose they should also press Amazon for some type of “cut” — informational, financial, or otherwise — since Amazon is in a manner of speaking free-riding on the publishers’ products.

This last point I raise with some trepidation, though; the humanist in me feels a compulsion to pull back.  Indeed it’s here that I begin to glimpse the realization of O. H. Cheney’s world, where matters of the heart are anathema and reason, guided by information, dictates virtually all publishing decisions.  I say this in the thick of reading the Kindle edition of Walter Isaacson’s biography of Steve Jobs, where I’ve learned that intuition, even unbridled emotion, guided much of Jobs’ decision making.

Information may be the order of the day, but that’s no reason to overlook what Jobs so successfully grasped.  Technology alone isn’t enough.  It’s best when “married” to the liberal arts and humanities.


The Book Industry's Moneyball

Some folks have asked me how I came to the idea of algorithmic culture, the subject of my next book as well as many of my blog posts of late.  I usually respond by pointing them in the direction of chapter three of The Late Age of Print, which focuses on Amazon.com, product coding, and the rise of digital communications in business.

It occurs to me, though, that Amazon wasn’t exactly what inspired me to begin writing about algorithms, computational processes, and the broader application of principles of scientific reason to the book world.  My real inspiration came from someone you’ve probably never heard of before (unless, of course, you’ve read The Late Age of Print). I’m talking about Orion Howard (O. H.) Cheney, a banker and business consultant whose ideas did more to lay the groundwork for today’s book industry than perhaps anyone’s.

Cheney was born in 1869 in Bloomington, Illinois.  For much of his adult life he lived and worked in New York State, where, from 1909 to 1911, he served as the State Superintendent of Banks and later as a high-level executive in the banking industry.  In 1932 he published what was to be the first comprehensive study of the book business in the United States, the Economic Survey of the Book Industry, 1930-1931.  It almost immediately came to be known as the “Cheney Report” due to the author’s refusal to soft-pedal his criticisms of, well, pretty much anyone who had anything to do with promoting books in the United States — from authors and publishers on down to librarians and school teachers, and everyone else in between.

In essence, Cheney wanted to fundamentally rethink the game of publishing.  His notorious report was the book industry equivalent of Moneyball.

If you haven’t read Michael Lewis’ Moneyball: The Art of Winning an Unfair Game (2003), you should.  It’s about how the Oakland A’s, one of the most poorly financed teams in Major League Baseball, used computer algorithms (so-called “Sabermetrics”) to build a successful franchise by identifying highly skilled yet undervalued players.  The protagonists of Moneyball, A’s General Manager Billy Beane and Assistant GM Paul DePodesta, did everything in their power to purge gut feeling from the game.  Indeed, one of the book’s central claims is that assessments of player performance have long been driven by unexamined assumptions about how ball players ought to look, move, and behave, usually to a team’s detriment.
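For a flavor of the sabermetric style of reasoning, here’s a minimal sketch — invented players and salaries, a toy of my own rather than anything from Lewis’s book — built around on-base percentage (OBP), the undervalued statistic at the heart of Moneyball:

```python
def obp(h, bb, hbp, ab, sf):
    """On-base percentage: (hits + walks + hit-by-pitch) divided by
    (at-bats + walks + hit-by-pitch + sacrifice flies)."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

# Invented players: (name, salary in $ millions, H, BB, HBP, AB, SF).
players = [
    ("Flashy Slugger", 8.0, 160, 30, 2, 580, 5),
    ("Unglamorous Walker", 1.2, 130, 95, 8, 510, 4),
]

# The "undervalued" screen: rank by on-base skill per salary dollar.
for name, salary, h, bb, hbp, ab, sf in players:
    rate = obp(h, bb, hbp, ab, sf)
    print(f"{name}: OBP {rate:.3f}, OBP per $M {rate / salary:.4f}")
```

The flashy hitter wins on looks; the walker wins on the numbers.  That inversion, multiplied across a roster, is the book’s whole argument.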

The A’s method for identifying talent and devising on-field strategy raised the ire of practically all baseball traditionalists.  It yielded insights that were so far afield of the conventional wisdom that its proponents were apt to seem crazy, even after they started winning big.

It’s the same story with The Cheney Report.  Consider this passage, where Cheney faults the book industry for operating on experience and intuition instead of a statistically sound “fact basis”:

Facts are the only basis for management in publishing, as they must be in any field.  In that respect, the book industry is painfully behind many others — both in facts relating to the industry as a whole and in facts of individual [publishing] houses….”Luck”; waiting for a best-seller; intuitive publishing by a “born publisher” — these must give way as the basis for the industry, for the sake of the industry and everybody in it….In too many publishing operations the theory seems to be that learning from experience means learning how to do a thing right by continuing to do it wrong (pp. 167-68).

This, more than 70 years before Moneyball!  And, like Beane and DePodesta, Cheney was raked over the coals by almost everyone in the industry he was criticizing.  They refused to listen to him, despite the fact that, in the throes of the Great Depression, most everything that had worked in the book industry didn’t seem to be working so well anymore.

Well, it’s almost the same story.  Beane and DePodesta have enjoyed excellent careers in Major League Baseball, despite the heresy of their ideas.  They’ve been fortunate to have lived at a time when algorithms and computational mathematics are mainstream enough that at least some can recognize the value of what they’ve brought to the game.

The Cheney Report, in contrast, had almost no immediate effect on the book industry.  The Report suffered due to its — and Cheney’s own — untimeliness.  The cybernetics revolution was still more than a decade off, and so the idea of imagining the book industry as a complexly communicative ecosystem was all but unimaginable to most.  This was true even with Cheney, who, in his insistence on ascertaining the “facts,” was fumbling around for what would later come to be known as “information.”

Today we live in O. H. Cheney’s vision for the book world, or, at least, some semblance of it.  People wonder why Amazon.com has so shaken up all facets of the industry.  It’s an aggressive competitor, to be sure, but its success is premised more on its having fundamentally rethought the game.  And for this Jeff Bezos owes a serious thank you to a grumpy old banker who, in the 1930s, wrote the first draft of what would go on to become publishing’s new playbook.


What is an Algorithm?

For close to two years now I’ve been blogging about “algorithmic culture” — the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas.  Since I began there’s been something of a blossoming of work on the topic, including a recent special issue of the journal Theory, Culture and Society on codes and codings (see, in particular, the pieces by Amoore on derivatives trading and Cheney-Lippold on algorithmic identity). There’s also some excellent work developing around the idea of “algorithmic literacies,” most notably by Tarleton Gillespie and Cathy Davidson.  Needless to say, I’m pleased to have found some fellow travelers.

One of the things that strikes me about so much of the work on algorithmic culture, however rigorous and inspiring it may be, is the extent to which the word algorithm goes undefined.  It is as if the meaning of the word were plainly apparent: it’s just procedural math, right, mostly statistical in nature and focused on large data sets?  Well, sure it is, but to leave the word algorithm at that is to resign ourselves to living with a mystified abstraction.  I’m not willing to do that. To understand what algorithms do to culture, and the emerging culture of algorithms, it makes sense to spend some time figuring out what an algorithm is.
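It helps to have a specimen on the table.  Here, in a few lines of Python, is perhaps the oldest algorithm still in everyday use — Euclid’s procedure for finding the greatest common divisor of two numbers.  The example is mine, a textbook staple rather than anything definitive, but it displays the essentials: a finite sequence of well-defined steps that takes inputs and reliably halts with an output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    The remainder shrinks at every step, so the procedure must halt."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # -> 21
```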

Even before getting into semantics, however, it’s worth thinking about just how prevalent the word algorithm is.  Why even bother if it’s just some odd term circulating on the fringes of language?  What’s interesting about algorithm is that, until about 1960 or so, it was exactly that type of word.  Here’s a frame grab from a search I ran recently on the Google Books Ngram Viewer, which allows you to chart the frequency of word usage in the data giant’s books database.

(Yes, I realize the irony in using the tools of algorithmic culture to study algorithmic culture.  We are always hopelessly complicit.)

What does this graph tell us?  First, algorithm remains a fairly specialized word, even to this day.  At its peak (circa 1995 or so) its frequency was just a notch over 0.0024%; compare that to the word the, which accounts for about 5% of all English language words appearing in the Google Books database.  More intriguing to me, though, is the fact that the word algorithm almost doesn’t register at all until about 1900, and that it’s a word whose stock has clearly been on the rise ever since 1960.  Indeed, the sharp pitch of the curve since then is striking, suggesting its circulation well beyond the somewhat insular confines of mathematics.
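(A note on the arithmetic, since the Ngram Viewer reports only the ratio: a word’s frequency for a given year is its occurrence count divided by the total number of words in the corpus for that year.  The counts below are invented for illustration.)

```python
# Hypothetical counts; only the ratio matters.
algorithm_count_1995 = 24_000
all_words_1995 = 1_000_000_000

print(f"{algorithm_count_1995 / all_words_1995:.4%}")  # -> 0.0024%
```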

Should we assume that the word algorithm is new, then?  Not at all.  It is, in fact, a fairly old word, derived from the name of the 9th-century Persian mathematician al-Khwārizmī, who developed some of the first principles of algebra.  Even more intriguing to me, though, is the fact that the word algorithm was not, until about 1960, the only form of the word in use.  Before then one could also speak of an algorism, with an “s” instead of a “th” in the middle.

Based on the numbers, algorism has never achieved particularly wide circulation, although its fortunes did start to rise around 1900.  Interestingly, it reaches its peak usage (as determined by Google) along about 1960, which is to say right around the same time algorithm starts to achieve broader usage.  Here’s what the two terms look like when charted together:

Google Ngram | Algorithm, Algorism

Where does all this leave us, then?  Before even attempting to broach the issue of semantics, or the meaning of the word algorithm, we first have to untangle a series of historical knots.

  • Why are there two forms of the “same” word?
  • Why does usage of algorithm take off around 1960?
  • Why does algorism fade after 1960, following a modest but steady 60-year rise?

I have answers to each of these questions, but the history’s so dense that it’s probably not best to share it in short form here on the blog.  (I give talks on the subject, however, and the material will all eventually appear in the book.)  For now, suffice it to say that any consideration of algorithms or algorithmic culture ought to begin not from the myopia of the present day but instead from the vantage point of history.

Indeed, it may be that the question, “what is an algorithm?” is the wrong one to ask — or, at least, the wrong one to ask, first.  Through what historical twists and turns did we arrive at today’s preferred senses of the word algorithm?  That seems to me a more pressing and pertinent question, because it compels us to look into the cultural gymnastics by which a word with virtually no cachet has grown into one whose referents increasingly play a decisive role in our lives.


Performing Scholarly Communication

A short piece I wrote for the journal Text and Performance Quarterly (TPQ) has just been published.  It’s called “Performing Scholarly Communication,” and it’s included in a special section on “The Performative Possibilities of New Media” edited by the wonderful Desireé Rowe and Benjamin Myers.  The section includes contributions by Michael LeVan and Marcyrose Chvasta, Jonathan M. Gray, and Craig Gingrich-Philbrook, along with an introduction by and a formal contribution from Desireé and Ben.  You can peruse the complete contents here.

My essay is a companion to another short piece I published (and blogged about) last year called “The Visible College.”  “The Visible College” focuses on how journal publications hide much of the labor that goes into their production.  It then goes on to make a case for how we might re-engineer academic serials to better account for that work.  “Performing Scholarly Communication” reflects on one specific publishing experiment I’ve run over on my project site, The Differences and Repetitions Wiki, in which I basically opened the door for anyone to co-write an essay with me.  Both pieces also talk about the history of scholarly journal publishing at some length, mostly in an effort to think through where our present-day journal publishing practices, or performances, come from.  One issue I keep coming back to here is scarcity, or rather how scholars, journal editors, and publishers operate today as if the material relations of journal production typical of the 18th and 19th centuries still straightforwardly applied.

I’ve mentioned before that Desireé and Ben host a wonderful weekly podcast called The Critical Lede.  Last week’s show focused on the TPQ forum and gathered together all of the contributors to discuss it.  I draw attention to this not only because I really admire Desireé and Ben’s podcast but also because it fulfills an important scholarly function.  You may not know this, but the publisher of TPQ, Taylor & Francis, routinely “embargoes” work published in this and many other of its journals.  The embargo stipulates that authors are barred from making any version of their work available on a public website for 18 months from the date of publication.  I’d be less concerned about this stipulation if more libraries and institutions had access to TPQ and journals like it, but alas, they do not.  In other words, if you cannot access TPQ, at least you can get a flavor of the research published in the forum by listening to me and my fellow contributors dish about it over on The Critical Lede.

I should add that the Taylor & Francis publication embargo hit close to home for me.  Almost a year and a half ago I posted a draft of “Performing Scholarly Communication” to The Differences and Repetitions Wiki and invited people to comment on it.  The response was amazing, and the work improved significantly as a result of the feedback I received there.  The problem is, I had to “disappear” the draft or pre-print version once my piece was accepted for publication in TPQ.  You can still read the commentary, which T&F does not own, but that’s almost like reading marginalia absent the text to which the notes refer!

Here’s the good news, though: if you’d like a copy of “Performing Scholarly Communication” for professional purposes, you can email me to request a free PDF copy.  And with that let me say that I do appreciate that Taylor & Francis supports this type of limited distribution of one’s work, even as I wish the company would do much better in terms of supporting open access to scholarly research.


Soft-Core Book Porn

Most of you reading this blog probably don’t know that I’m Director of Graduate Studies in the Department of Communication and Culture here at Indiana University.  What that means is that I’m knee-deep in graduate admissions files right now; what that also means is that I don’t have quite as much time for blogging as I normally would.  The good news is that I’m rapidly clearing the decks, and that I should be back to regular blogging pretty soon.

Until then, happy 2012 (belatedly), and here’s a little soft-core book porn to tide you over — an amazing stop-motion animation video that was filmed in Toronto’s Type Bookstore.  If you’ve been reading this blog over the years, then you’ll know I’m not a huge fan of the whole “the only real books are paper books” motif (much as I do enjoy paper books).  Even so, you cannot but be impressed by the time, care, and resolve that must have gone into the production of this short.  Clearly it was a labor of love, on several levels.


Happy Holidays!

I’ll be back in 2012, most likely the second week in January.  Until then, happy holidays to all of my readers, and thanks for supporting The Late Age of Print — both the book and the blog.  2011 has been a banner year for Late Age, and with you it promises to get even better.

Until then, here’s a little something for you — a Christmas tree composed entirely of books.  I’m not sure whether to see the sculpture as a cool art piece or a statement about what to do with paper books now that e-readers are becoming ubiquitous.  Either way I guess the image is on theme, at least around this end of the internet.

Best wishes, and see you in 2012.


Digital Natives? Not So Fast

I’m about to enter the final week of my undergraduate “Cultures of Books and Reading” class here at Indiana University.  I’ll be sad to see it go.  Not only has the group been excellent this semester, but I’ve learned so much about how my students are negotiating this protracted and profound moment of transition in the book world — what I like to call, following J. David Bolter, “the late age of print.”

One of the things that struck me early on in the class was the extent to which my students seemed to have embraced the notion that they’re “digital natives.”  This is the idea that people born after, say, 1985 or so grew up in a world consisting primarily of digital media.  They are, as such, more comfortable and even savvy with it than so-called “digital immigrants” — analog frumps like me who’ve had to wrestle with the transition to digital and who do not, therefore, fundamentally understand it.

It didn’t occur to me until last Wednesday that I hadn’t heard mention of the term “digital natives” in the class for weeks.  What prompted the revelation was a student-led facilitation on Robert Darnton’s 2009 essay from the New York Review of Books, on the Google book scanning project.

We’d spent the previous two classes weighing the merits of Kevin Kelly’s effusions about digital books and Sven Birkerts’ pooh-poohings of them.  In Darnton we had a piece not only about the virtues and vices of book digitization, but also one that offered a sobering glimpse into the potential political-economic and cultural fallout had the infamous Google book settlement been approved earlier this year.  It’s a measured piece, in other words, and deeply cognizant of the ways in which books, however defined, move through and inhabit people’s worlds.

In this it seemed to connect with the bookish experiences of my group of purported digital natives, whose remarks confounded any claims that theirs was a generationally specific, or unified, experience with media.

Here’s a sampling from the discussion (and hats off to the facilitation group for prompting such an enlightening one!):

One student mentioned a print-on-paper children’s book her mother had handed down to her.  My student’s mother had inscribed it when she herself was seven or eight years old, and had asked her daughter to add her own inscription when she’d reached the same age.  My student intends to pass the book on one day to her own children so that they, too, may add their own inscriptions.  The heirloom paper book clearly is still alive and well, at least in the eyes of one digital native.

Another student talked about how she purchases paper copies of the e-books she most enjoys reading on her Barnes & Noble Nook.  I didn’t get the chance to ask if these paper copies were physical trophies or if she actually read them, but in any case it’s intriguing to think about how the digital may feed into the analog, and vice-versa.

Other students complained about the amount of digitized reading their professors assign, stating that they’re less likely to read for class when the material is not on paper.  Others chimed in here, mentioning that they’ll read as much as their prepaid print quotas at the campus computer labs allow, and then after that they’re basically done.  (Incidentally, faculty and students using Indiana University’s computer labs printed about 25 million — yes, million — pages during the 2010-2011 academic year.)

On a related note, a couple of students talked about how they use Google Books to avoid buying expensive course texts.  Interestingly, they noted, 109 pages of one of the books I assign in “The Cultures of Books and Reading” happen to appear there.  The implication was that they’d read what was cheap and convenient to access, but nothing more.  (Grimace.)

Finally, I was intrigued by one of the remarks from my student who, at the beginning of the term, had asked me about the acceptability of purchasing course texts for his Kindle.  He discussed the challenges he’s faced in making the transition from print to digital during his tenure as a college student.  He noted how much work it’s taken him to migrate from one book form (and all the ancillary material it generates) to the other.  Maybe he’s a digital native, maybe he isn’t; the point is, he lives in a world that’s still significantly analog, a world that compels him to engage in sometimes fraught negotiations with whatever media he’s using.

All this in a class of 33 students!  Based on this admittedly limited sample, I feel as if the idea of “digital natives” doesn’t get us very far.  It smooths over too many differences.  It also lets people who embrace the idea off the hook too easily, analytically speaking, for it relieves them of the responsibility of accounting for the extent to which print and other “old” media still affect the daily lives of people, young or old.

Maybe it’ll be different for the next generation.  For now, though, it seems as if we all are, to greater and lesser degrees, digital immigrants.
