Archive for Digital Humanities

The Internet of Words

A piece I just penned, “The Internet of Words,” is now out in The Chronicle of Higher Education. In part, it’s a review of two wonderful new books about social media: Alice E. Marwick’s Status Update: Celebrity, Publicity, and Branding in the Social Media Age, and danah boyd’s It’s Complicated: The Social Lives of Networked Teens. Both books were published within the last year by Yale University Press.

But the piece is also a meditation on words, taking the occasion of both books to think through the semantics of digital culture. It’s inspired by Raymond Williams’ Keywords: A Vocabulary of Culture and Society (1976; 2nd ed., 1983), and it looks closely at how language shifts accompany, and sometimes precede, technological change. Here’s a snippet:

Changes in the language are as much a part of the story of technology as innovative new products, high-stakes mergers and acquisitions, and charismatic corporate leaders. They bear witness to the emergence of new technological realities, yet they also help facilitate them. Facebook wouldn’t have a billion-plus users absent some compelling features. It also wouldn’t have them without people like me first coming to terms with the new semantics of friendship.

It was great having an opportunity to connect some dots between my scholarly work on algorithmic culture and the keywords approach I’ve been developing via Williams. The piece is also a public-facing statement of how I approach the digital humanities.

Please share—and I hope you like it.


New Material on Algorithmic Culture

A quick announcement about two new pieces from me, both of which relate to my ongoing research on the subject of algorithmic culture.

The first is an interview with Giuseppe Granieri, posted on his Futurists’ Views site over on Medium.  The tagline is: “Culture now has two audiences: people and machines.”  It’s a free-ranging conversation, apparently readable in six minutes, about algorithms, AI, the culture industry, and the etymology of the word culture.

About that word: over on Culture Digitally you’ll find a draft essay of mine, examining culture’s shifting definition in relation to digital technology.  The piece is available for open comment and reflection.  It’s the first in a series from Ben Peters’ “Digital Keywords” project, of which I’m delighted to be a part.  Thanks in advance for your feedback—and of course with all of the provisos that accompany draft material.

 


Late Age of Print – the Podcast

Welcome back and happy new year!  Okay—so 2013 is more than three weeks old at this point.  What can I say?  The semester started and I needed to hit the ground running.  In any case I’m pleased to be back and glad that you are, too.

My first post of the year is actually something of an old one, or at least it’s about new material that was produced about eighteen months ago.  Back in the summer of 2011 I keynoted the Association for Cultural Studies Summer Institute in Ghent, Belgium.  It was a blast—and not only because I got to talk about algorithmic culture and interact with a host of bright faculty and students.  I also recorded a podcast there with Janneke Adema, a Ph.D. student at Coventry University, UK, whose work on the future of scholarly publishing is excellent and whose blog, Open Reflections, I recommend highly.

Janneke and I sat down in Ghent for the better part of an hour for a fairly wide-ranging conversation, much of it having to do with The Late Age of Print and my experiments in digital publishing.  It was a real treat to revisit Late Age after a couple of years and to discuss some of the choices I made while I was writing it.  I’ve long thought the book was a tad quirky in its approach, and so the podcast gave me a wonderful opportunity to provide some missing explanation and backstory.  It was also great to have a chance to foreground some of the experimental digital publishing tools I’ve created, as I almost never put this aspect of my work on the same level as my written scholarship (though this is changing).

The resulting podcast, “The Late Age of Print and the Future of Cultural Studies,” is part of the journal Culture Machine’s podcast series.  Janneke and I discussed the following:

  • How have digital technologies affected my research and writing practices?
  • What advice would I, as a creator of digital scholarly tools, give to early career scholars seeking to undertake similar work?
  • Why do I experiment with modes of scholarly communication, or seek “to perform scholarly communication differently”?
  • How do I approach the history of books and reading, and how does my approach differ from more ethnographically oriented work?
  • How did I find the story amid the numerous topics I wrestle with in The Late Age of Print?

I hope you like the podcast.  Do feel welcome to share it on Twitter, Facebook, or wherever.  And speaking of social media, don’t forget—if you haven’t already, you can still download a Creative Commons-licensed PDF of The Late Age of Print.  It will only cost a tweet or a post on Facebook.  Yes, really.


New Writing – Working Papers in Cultural Studies

If it wasn’t clear already, I needed a little break from blogging.  This past year has been an amazing one here on The Late Age of Print, with remarkable response to many of my posts — particularly those about my new research on algorithmic culture.  But with the school year wrapping up in early May, I decided I needed a little break; hence, the crickets around here.  I’m back now and will be blogging regularly throughout the summer, although maybe not quite as regularly as I would during the academic year.  Thanks for sticking around.

I suppose it’s not completely accurate to say the school year “wrapped up” for me in early May.  I went right from grading final papers to finishing an essay my friend and colleague Mark Hayward and I had been working on throughout the semester.  (This was also a major reason behind the falloff in my blogging.)  The piece is called “Working Papers in Cultural Studies, or, the Virtues of Gray Literature,” and we’ll be presenting a version of it at the upcoming Crossroads in Cultural Studies conference in Paris.

“Working Papers” is, essentially, a retelling of the origins of British cultural studies from a materialist perspective.  It’s conventional in that it focuses on one of the key institutions where the field first coalesced: the Centre for Contemporary Cultural Studies, which was founded at the University of Birmingham in 1964 under the leadership of Richard Hoggart.  It’s unconventional, however, in that the essay focuses less on the Centre’s key figures and what they had to say in their work.  Instead it looks closely at the form of the Centre’s publications, many of which were produced in-house in a manner that was rough around the edges.

Mark and I were interested in how, physically, these materials seemed to embody an ethic of publication prevalent at the Centre, which stressed the provisionality of the research produced by faculty, students, and affiliates. The essay thus is an attempt to solve a riddle: how did the Centre manage to achieve almost mythical status, in spite of the fact that it wasn’t much in the business of producing definitive statements about the politics of contemporary culture?  Take for instance its best-known publication, Working Papers in Cultural Studies, whose very title indicates that every article appearing in the journal was on some level a draft.

I won’t give away the ending, but I will point you in the direction of the complete essay.  It’s hosted on my site for writing projects, The Differences & Repetitions Wiki (which I may well rename the Late Age of Print Wiki).  Mark and I have created an archive for “Working Papers in Cultural Studies, or, the Virtues of Gray Literature,” where you’ll find not only the latest version of the essay and earlier drafts but also a bunch of materials pertaining to their production.  We wanted to channel some of the lessons we learned from Birmingham, which led us to go public with the process of our work.  (This is in keeping with another essay I published recently, “The Visible College,” a version of which you can also find over on D&RW.)

Our “Working Papers” essay is currently in open beta, which means there’s at least another round of edits to go before we could say it’s release-ready.  That’s where you come in.  We’d welcome your comments on the piece, as we’re about to embark on what will probably be the penultimate revision.  Thank you in advance, and we hope you like what you see.


How Publishers Misunderstand Kindle

Last week, in a post entitled “The Book Industry’s Moneyball,” I blogged about the origins of my interest in algorithmic culture — the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas.  There I discussed a study published in 1932, the so-called “Cheney Report,” which imagined a highly networked book industry whose decisions were driven exclusively by “facts,” or in contemporary terms, “information.”

It occurred to me, in thinking through the matter more this week, that the Cheney Report wasn’t the only way in which I stumbled onto the topic of algorithmic culture.  Something else led me there as well — something more present-day.  I’m talking about the Amazon Kindle, which I wrote about in a scholarly essay published in the journal Communication and Critical/Cultural Studies (CCCS) back in 2010.  The title is “The Abuses of Literacy: Amazon Kindle and the Right to Read.”  (You can read a precis of the piece here.)

The CCCS essay focused on privacy issues related to devices like the Kindle, Nook, and iPad, which quietly relay information about what and how you’ve been reading back to their respective corporate custodians.  Since the essay appeared, that concern has become fairly widespread, and I’d like to think my piece had something to do with nudging the conversation in that direction.

Anyway, in prepping to write the essay, a good friend of mine, M—-, suggested I read Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing (New Riders, 2006).   It’s an astonishingly good book, one I would recommend highly to anyone who writes about digital technologies.


I didn’t really know much about algorithms or information when I first read Everyware.  Of course, that didn’t stop me from quoting Greenfield in “The Abuses of Literacy,” where I made a passing reference to what he calls “ambient informatics.”  This refers to the idea that almost every aspect of our world is giving off some type of information.  People interested in ubiquitous computing, or ubicomp, want to figure out ways to detect, process, and in some cases exploit that information.  With any number of mobile technologies, from smartphones to the Kindle, ubicomp is fast becoming an everyday part of our reality.

The phrase “ambient informatics” has stuck with me ever since I first quoted it, and on Wednesday of last week it hit me again like a lightning bolt.  A friend and I were talking about Google Voice, which, he reminded me, may look like a telephone service from the perspective of its users, but it’s so much more from the perspective of Google.  Voice gives Google access to hours upon hours of spoken conversation that it can then use to train its natural language processing systems — systems that are essential to improving speech-to-text recognition, voice-based searching, and any number of other vox-based services.  It’s a weird kind of switcheroo, one that most of us don’t even realize is happening.

So what would it mean, I wondered, to think about Kindle not from the vantage point of its users but instead from that of Amazon.com?  As soon as you ask this question, it becomes apparent that Kindle is only nominally an e-reader.  It is, like Google Voice, a means to some other, data-driven end: specifically, the end of apprehending the “ambient informatics” of reading.  In this scenario Kindle books become a hook whose purpose is to get us to tell Amazon.com more about who we are, where we go, and what we do.

Imagine what Amazon must know about people’s reading habits — and who knows what else?!  And imagine how valuable that information could be!

What’s interesting to me, beyond the privacy concerns I’ve addressed elsewhere, is how, with Kindle, book publishers now seem to be confusing means with ends.  It’s understandable, really.  As literary people they’re disposed to think about books as ends in themselves — as items people acquire for purposes of reading.  Indeed, this has long been the “being” of books, especially physical ones. With Kindle, however, books are in the process of getting an existential makeover.  Today they’re becoming prompts for all sorts of personal and ambient information, much of which then goes on to become proprietary to Amazon.com.

I would venture to speculate that, despite the success of the Nook, Barnes & Noble has yet to fully wake up to this fact as well.  For more than a century the company has fancied itself a bookseller — this in contrast to Amazon, which CEO Jeff Bezos once described as “a technology company at its core” (Advertising Age, June 1, 2005).  The one sells books, the other trades in information (which is to say nothing of all the physical stuff Amazon sells).  The difference is fundamental.

Where does all this leave us, then?  First and foremost, publishers need to begin recognizing the dual existence of their Kindle books: that is, as both means and ends.  I suppose they should also press Amazon for some type of “cut” — informational, financial, or otherwise — since Amazon is in a manner of speaking free-riding on the publishers’ products.

This last point I raise with some trepidation, though; the humanist in me feels a compulsion to pull back.  Indeed it’s here that I begin to glimpse the realization of O. H. Cheney’s world, where matters of the heart are anathema and reason, guided by information, dictates virtually all publishing decisions.  I say this while in the thick of reading the Kindle edition of Walter Isaacson’s biography of Steve Jobs, where I’ve learned that intuition, even unbridled emotion, guided much of Jobs’ decision making.

Information may be the order of the day, but that’s no reason to overlook what Jobs so successfully grasped.  Technology alone isn’t enough.  It’s best when “married” to the liberal arts and humanities.


Digital Natives? Not So Fast

I’m about to enter the final week of my undergraduate “Cultures of Books and Reading” class here at Indiana University.  I’ll be sad to see it go.  Not only has the group been excellent this semester, but I’ve learned so much about how my students are negotiating this protracted and profound moment of transition in the book world — what I like to call, following J. David Bolter, “the late age of print.”

One of the things that struck me early on in the class was the extent to which my students seemed to have embraced the notion that they’re “digital natives.”  This is the idea that people born after, say, 1985 or so grew up in a world consisting primarily of digital media.  They are, as such, more comfortable and even savvy with it than so-called “digital immigrants” — analog frumps like me who’ve had to wrestle with the transition to digital and who do not, therefore, fundamentally understand it.

It didn’t occur to me until last Wednesday that I hadn’t heard mention of the term “digital natives” in the class for weeks.  What prompted the revelation was a student-led facilitation on Robert Darnton’s 2009 essay from the New York Review of Books, on the Google book scanning project.

We’d spent the previous two classes weighing the merits of Kevin Kelly’s effusions about digital books and Sven Birkerts’ pooh-poohings of them.  In Darnton we had a piece not only about the virtues and vices of book digitization, but also one that offered a sobering glimpse into the potential political-economic and cultural fallout had the infamous Google book settlement been approved earlier this year.  It’s a measured piece, in other words, and deeply cognizant of the ways in which books, however defined, move through and inhabit people’s worlds.

In this it seemed to connect with the bookish experiences of my group of purported digital natives, whose remarks confounded any claims that theirs was a generationally specific, or unified, experience with media.

Here’s a sampling from the discussion (and hats off to the facilitation group for prompting such an enlightening one!):

One student mentioned a print-on-paper children’s book her mother had handed down to her.  My student’s mother had inscribed it when she herself was seven or eight years old, and had asked her daughter to add her own inscription when she’d reached the same age.  My student intends to pass the book on one day to her own children so that they, too, may add their own inscriptions.  The heirloom paper book clearly is still alive and well, at least in the eyes of one digital native.

Another student talked about how she purchases paper copies of the e-books she most enjoys reading on her Barnes & Noble Nook.  I didn’t get the chance to ask if these paper copies were physical trophies or if she actually read them, but in any case it’s intriguing to think about how the digital may feed into the analog, and vice-versa.

Other students complained about the amount of digitized reading their professors assign, stating that they’re less likely to read for class when the material is not on paper.  Others chimed in here, mentioning that they’ll read as much as their prepaid print quotas at the campus computer labs allow, and then after that they’re basically done.  (Incidentally, faculty and students using Indiana University’s computer labs printed about 25 million — yes, million — pages during the 2010-2011 academic year.)

On a related note, a couple of students talked about how they use Google Books to avoid buying expensive course texts.  Interestingly, they noted, 109 pages of one of the books I assign in “Cultures of Books and Reading” happen to appear there.  The implication was that they’d read what was cheap and convenient to access, but nothing more.  (Grimace.)

Finally, I was intrigued by one of the remarks from my student who, at the beginning of the term, had asked me about the acceptability of purchasing course texts for his Kindle.  He discussed the challenges he’s faced in making the transition from print to digital during his tenure as a college student.  He noted how much work it’s taken him to migrate from one book form (and all the ancillary material it generates) to the other.  Maybe he’s a digital native, maybe he isn’t; the point is, he lives in a world that’s still significantly analog, a world that compels him to engage in sometimes fraught negotiations with whatever media he’s using.

All this in a class of 33 students!  Based on this admittedly limited sample, I feel as if the idea of “digital natives” doesn’t get us very far.  It smooths over too many differences.  It also lets people who embrace the idea off the hook too easily, analytically speaking, for it relieves them of the responsibility of accounting for the extent to which print and other “old” media still affect the daily lives of people, young or old.

Maybe it’ll be different for the next generation.  For now, though, it seems as if we all are, to greater and lesser degrees, digital immigrants.


Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends who I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I count myself among those who don’t.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret both the processes and products of algorithmic culture, critically.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online. Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system.  Where literary studies teaches students how to read for tone, so, too, would those of us invested in algorithmic literacies begin teaching how to read for evidence of this type of manipulation.

Finally, we need to undertake comparative work in an effort to reverse engineer the proprietary algorithms of Google, Facebook, Amazon, and their peers.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.
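To make that kind of comparison slightly more concrete, here is a minimal sketch in Python of how one might quantify the divergence between two ranked result lists gathered for the same query in two national contexts.  The lists in the example are invented placeholders (nothing here queries Google or any other engine); the two functions simply measure how much the lists overlap and how far shared results move in rank.

```python
# A toy comparison of two ranked result lists for the same query, say one
# collected by hand from google.com and one from google.de. The lists below
# are invented placeholders, not real search results.

def jaccard_overlap(a, b):
    """Share of results appearing in both lists, ignoring rank."""
    set_a, set_b = set(a), set(b)
    return len(set_a & set_b) / len(set_a | set_b)

def rank_shifts(a, b):
    """For results common to both lists, how many places each one moved."""
    positions_b = {item: i for i, item in enumerate(b)}
    return {item: abs(i - positions_b[item])
            for i, item in enumerate(a) if item in positions_b}

if __name__ == "__main__":
    results_us = ["result-a", "result-b", "result-c", "result-d", "result-e"]
    results_de = ["result-c", "result-a", "result-f", "result-g", "result-e"]

    print("overlap:", round(jaccard_overlap(results_us, results_de), 2))
    for item, shift in sorted(rank_shifts(results_us, results_de).items()):
        print(f"{item} moved {shift} place(s)")
```

The numbers themselves are crude, but they are enough to flag the queries where two national versions of an engine part ways, and that is where the interpretive work of reading for bias begins.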

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.


Culturomics

I learned last month from Wired that something along the lines of what I’ve been calling “algorithmic culture” already has a name — culturomics.

According to Jonathon Keats, author of the magazine’s monthly “Jargon Watch” section, culturomics refers to “the study of memes and cultural trends using high-throughput quantitative analysis of books.”  The term was first noted in another Wired article, published last December, which reported on a study using Google Books to track historical, or “evolutionary,” trends in language.  Interestingly, the study wasn’t published in a humanities journal.  It appeared in Science.

The researchers behind culturomics have also launched a website allowing you to search the Google Books database for keywords and phrases, to “see how [their] usage frequency has been changing throughout the past few centuries.”  They follow up by calling the service “addictive.”

Culturomics weds “culture” to the suffix “-nomos,” the anchor for words like economics, genomics, astronomy, physiognomy, and so forth.  “-Nomos” can refer either to “the distribution of things” or, more specifically, to a “worldview.”  In this sense culturomics refers both to the distribution of language resources (words) in the extant published literature of some period and to the frameworks for understanding that those distributions embody.

I must confess to being intrigued by culturomics, however much I find the term to be clunky. My initial work on algorithmic culture tracks language changes in and around three keywords — information, crowd, and algorithm, in the spirit of Raymond Williams’ Culture and Society — and has given me a new appreciation for both the sociality of language and its capacity for transformation.  Methodologically culturomics seems, well, right, and I’ll be intrigued to see what a search for my keywords on the website might yield.
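As a way of making that underlying operation concrete, here is a minimal sketch in Python of the frequency-over-time counting that culturomics describes, run against a tiny invented corpus rather than Google’s database.  The corpus and its yearly buckets are placeholders; the point is only to show, in computational terms, what it means for a word’s usage frequency to change over time.

```python
# A toy version of the culturomics exercise: track how often a keyword
# appears, relative to all words, in a corpus bucketed by year. The corpus
# below is an invented placeholder standing in for Google's book database.
import re
from collections import Counter

def yearly_frequency(corpus_by_year, keyword):
    """Relative frequency of `keyword` (occurrences per word) for each year."""
    keyword = keyword.lower()
    trend = {}
    for year in sorted(corpus_by_year):
        words = [w for text in corpus_by_year[year]
                 for w in re.findall(r"[a-z']+", text.lower())]
        trend[year] = Counter(words)[keyword] / len(words) if words else 0.0
    return trend

if __name__ == "__main__":
    corpus = {
        1950: ["The crowd gathered in the square to hear the news."],
        1980: ["Information about the crowd moved across the new networks."],
        2010: ["The algorithm sorted the information before the crowd saw it."],
    }
    for kw in ("information", "crowd", "algorithm"):
        print(kw, yearly_frequency(corpus, kw))
```

Swap in a real corpus of dated texts and the same counting logic yields the rough shape of a trend line; everything interesting about culturomics lies in the scale of the corpus and in how one interprets the resulting curves.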

Having said that, I still want to hold onto the idea of algorithmic culture.  I prefer the term because it places the algorithm center-stage rather than allowing it to recede into the background, as does culturomics.  Algorithmic culture encourages us to see computational process not as a window onto the world but as an instrument of order and authoritative decision making.  The point of algorithmic culture, both terminologically and methodologically, is to help us understand the politics of algorithms and thus to approach them and the work they do more circumspectly, even critically.

I should mention, by the way, that this is increasingly how I’ve come to understand the so-called “digital humanities.”  The digital humanities aren’t just about doing traditional humanities work on digital objects, nor are they only about making the shift in humanities publishing from analog to digital platforms.  Instead the digital humanities, if there is such a thing, should focus on the ways in which the work of culture is increasingly delegated to computational process and, more importantly, the political consequences that follow from our doing so.

And this is the major difference, I suppose, between an interest in the distribution of language resources — culturomics — and a concern for the politics of the systems we use to understand those distributions — algorithmic culture.
