Archive for Related Work

The Internet of Words

A piece I just penned, “The Internet of Words,” is now out in The Chronicle of Higher Education. In part, it’s a review of two wonderful new books about social media: Alice E. Marwick’s Status Update: Celebrity, Publicity, and Branding in the Social Media Age, and danah boyd’s It’s Complicated: The Social Lives of Networked Teens. Both books were published within the last year by Yale University Press.

But the piece is also a meditation on words, taking the occasion of both books to think through the semantics of digital culture. It’s inspired by Raymond Williams’s Keywords: A Vocabulary of Culture and Society (1976; 2nd ed., 1983), and it looks closely at how language shifts accompany, and sometimes precede, technological change. Here’s a snippet:

Changes in the language are as much a part of the story of technology as innovative new products, high-stakes mergers and acquisitions, and charismatic corporate leaders. They bear witness to the emergence of new technological realities, yet they also help facilitate them. Facebook wouldn’t have a billion-plus users absent some compelling features. It also wouldn’t have them without people like me first coming to terms with the new semantics of friendship.

It was great having an opportunity to connect some dots between my scholarly work on algorithmic culture and the keywords approach I’ve been developing via Williams. The piece is also a public-facing statement of how I approach the digital humanities.

Please share—and I hope you like it.


Out from Under the Embargo

I’m delighted to report that my essay, “Performing Scholarly Communication,” is once again freely available on the open web.  The piece appeared in the January 2012 issue of the journal Text and Performance Quarterly but hasn’t much seen the light of day since then, subject to the publisher’s 18-month post-publication embargo.  You can now read and respond to the complete piece on my other website, The Differences & Repetitions Wiki, where I host a variety of open source writing projects.

By the way, if you’re interested in scholarly communication, the history of cultural studies, or both, then you might want to check out another piece appearing on D&RW: “Working Papers in Cultural Studies, or, the Virtues of Gray Literature,” which I coauthored with Mark Hayward. It’s set to appear in the next issue of the journal New Formations. A version of the piece has existed on D&RW since March 2012, and in fact you can trace its development all the way through to today, when I posted the nicely formatted, final version that Mark and I submitted for typesetting. As always, comments are welcome and appreciated. If you’d rather cut right to the chase, then you can download the uncorrected page proofs for the “WPCS” piece by clicking here.

Take some time to poke around D&RW, by the way. There are a bunch of other papers and projects there, some, but not all, having to do with the history and politics of scholarly communication.

Lastly, a note of thanks to all of you who tweeted, Facebooked, or otherwise spread the word about the final days of the free Late Age of Print download.  I truly appreciate all of your support.


Free Download and Other News

Sorry, dear readers, for the precipitous falloff in posting. I was on a roll during the first two or three years of The Late Age of Print blog, but since then I’ve been overwhelmed by administrative duties, my ongoing research on algorithmic culture (as well as some other side projects), and helping to raise a preschooler. Blogging has become something of a luxury of late. Not to worry: I’m not hanging up my gloves, though obviously I’m backing off a bit.

I’m writing, first of all, to alert you to my latest interview, appearing on Figure/Ground.  If you’re not familiar with F/G, it’s a fantastic “open source, student-led, para-academic collaboration.”  There you’ll find an outstanding series of interviews with leading figures in media/technology studies—people like Ian Bogost, Jodi Dean, Kathleen Fitzpatrick, Gary Genosko, Katherine Hayles, Henry Jenkins, Douglas Kellner, Robert McChesney, Eric McLuhan, John Durham Peters, Douglas Rushkoff, Peter Zhang, and a host of others.  Needless to say, I’m honored to join such distinguished company.  I thank Justin Dowdall for taking the time to prepare such challenging questions.

I’m also writing to give you some fair warning. Columbia University Press, my publisher, and I have been in talks for a few months about the freely downloadable, Creative Commons-licensed PDF of The Late Age of Print. As you may know, it’s been accessible via this blog for more than four years now. I don’t have an accurate count of the number of times it’s been downloaded, though I can assure you the number would be reasonably impressive. But it’s been four years, and print sales have slowed somewhat. Back in December I implemented a “pay with a Tweet” program, requiring anyone who wanted to download the book without paying to spread the word about it on Twitter or Facebook. That’s helped to jumpstart sales a bit, but in any case my editor at Columbia and I agreed that it’s finally time to pull the plug on the free download. I hope you’ll understand.

I plan on taking the free PDF down at the end of July.  If you still want the book for the cost of a tweet or a Facebook post, this is your last chance (of course, I’d welcome reviews on Amazon.com or additional likes on the book’s Facebook page, too).  After that…well, you know the drill.


East Coast Code

There’s lots to like about Lawrence Lessig’s book Code 2.0—particularly, I find, the distinction he draws between “East Coast Code” (i.e., the law) and “West Coast Code” (i.e., computer hardware and software). He sees both as modes of bringing order to complex systems, albeit through different means. Lessig is also interested in how West Coast Code has come to be used in ways that strongly resemble, and sometimes even supersede, its East Coast counterpart, as in the case of digital rights management technology. “Code is law,” as he so aptly puts it.

I’ve been playing with something like Lessig’s East Coast-West Coast Code distinction in my ongoing research on algorithmic culture. As I’ve said many times now, “algorithmic culture” refers to the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas, as well as to the habits of thought, conduct, and expression that flow from those processes. Essentially we’re talking about the management of a complex system—culture—by way of server farms and procedural decision-making software. Think Google or Facebook; this is West Coast Code at its finest.
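To make that definition a bit more concrete, here’s a deliberately tiny sketch of the sort-classify-hierarchize trio in code. Everything in it, the items, topics, and engagement scores, is invented for illustration; no real platform works from anything remotely this simple.

```python
# A toy illustration of sorting, classifying, and hierarchizing a
# stream of cultural "items." All data and scores are made up.

items = [
    {"title": "cat video", "topic": "pets", "engagement": 0.92},
    {"title": "op-ed",     "topic": "news", "engagement": 0.41},
    {"title": "dog gif",   "topic": "pets", "engagement": 0.77},
    {"title": "recipe",    "topic": "food", "engagement": 0.63},
]

# Classify: group the items by topic.
by_topic = {}
for item in items:
    by_topic.setdefault(item["topic"], []).append(item)

# Sort and hierarchize: within each class, rank by an engagement score,
# so that some items come to matter more than others.
for topic, group in sorted(by_topic.items()):
    group.sort(key=lambda it: it["engagement"], reverse=True)
    print(topic, [it["title"] for it in group])
```

Trivial as it is, the sketch captures the pattern: culture in, hierarchy out.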

Perhaps better than anyone, Fred Turner has chronicled the conditions out of which West Coast Code emerged. In From Counterculture to Cyberculture, he shows how, in the 1960s, Stewart Brand and his circle of countercultural compadres humanized computers, which were then widely perceived to be instruments of the military-industrial complex. Through the pages of the Whole Earth Catalog, Brand and company suggested that computers were, like shovels, axes, and hoes, tools with which to craft civilization—or rather to craft new-styled, autonomous civilizations that would no longer depend on the state (i.e., East Coast Code) to manage human affairs.

The deeper I delve into my own research, the more I discover just how complicated—and indeed, how East Coast—is the story of algorithmic culture. I don’t mean to diminish the significance of the work that’s been done about the West Coast, by any means. But just as people had to perform creative work to make computers seem personal, even human, so, too, did people need to perform similar work on the word culture to make it make sense within the realm of computation. And this happened mostly back East, in Cambridge, MA.

“Of course,” you’re probably thinking, “at MIT.” It turns out that MIT wasn’t the primary hub of this semantic and conceptual work, although it would be foolish to deny the influence of famed cybernetician Norbert Wiener here. The work took place, rather, at that other rinky-dink school in Cambridge, MA: Harvard. Perhaps you’ve heard of it?

A good portion of my research now is focused on Harvard’s Department of Social Relations, an experimental unit combining Sociology, Psychology, and Cultural Anthropology. It had a relatively short existence, lasting only from 1946 to 1970, but in that time it graduated people who went on to become the titans of postwar social theory. Clifford Geertz, Stanley Milgram, and Harold Garfinkel are among its most notable PhDs, although myriad other important figures passed through the program as well. One of the more intriguing people I turned up was Dick Price, who went on to found the Esalen Institute (back to the West Coast) after becoming disillusioned with the Clinical Psychology track in SocRel and later suffering a psychotic episode. Dr. Timothy Leary also taught there, from 1961 to 1963, though he was eventually fired because of his controversial research on the psychological effects of LSD.

I’ve just completed some work focusing on Clifford Geertz and the relationship he shared with Talcott Parsons, his dissertation director and chair of SocRel from 1946 to 1956. It’s here more than anywhere that I’m discovering how the word culture got inflected by the semantics of computation. Though Geertz would later move away from the strongly cybernetic conceptualization of culture he’d inherited from Parsons, it nonetheless underpins arguably his most important work, especially the material he published in the 1960s and early ’70s. This includes his famous “Deep Play: Notes on the Balinese Cockfight,” which is included in the volume The Interpretation of Cultures.

My next stop is Stanley Milgram. I’ll be looking first at his work on crowd behavior and later at his material on the “small world” phenomenon. The former complicates the conclusions of his famous “obedience to authority” experiments in fascinating ways and, I’d argue, sets the stage for the notion of “crowd wisdom” so prevalent today. Apropos of the latter, I’m intrigued by how Milgram helped to shrink the social down to size, as it were, just as worries about the scope and anonymizing power of mass society reached a fever pitch. He did for society essentially what Geertz and Parsons did for culture, I believe, particularly in helping to establish the conceptual conditions necessary for the algorithmic management of social relations. Oh—and did I mention that Milgram’s Obedience book, published in 1974, is also laden with cybernetic theory?

To be clear, the point of all this East Coast-West Coast business isn’t to create some silly rivalry—among scholars of computation, or among their favorite historical subjects. (Heaven knows, it would never be Biggie and Tupac!) The point, rather, is to draw attention to the semantic and social-theoretical conditions underpinning a host of computational activities that are prevalent today—conditions whose genesis occurred significantly back East. The story of algorithmic culture isn’t only about hippies, hackers, and Silicon Valley. It’s equally a story about squares who taught and studied at maybe the most elite institution of higher education on America’s East Coast.


Late Age of Print – the Podcast

Welcome back and happy new year!  Okay—so 2013 is more than three weeks old at this point.  What can I say?  The semester started and I needed to hit the ground running.  In any case I’m pleased to be back and glad that you are, too.

My first post of the year is actually something of an old one, or at least it’s about material that was produced about eighteen months ago. Back in the summer of 2011 I keynoted the Association for Cultural Studies Summer Institute in Ghent, Belgium. It was a blast—and not only because I got to talk about algorithmic culture and interact with a host of bright faculty and students. I also recorded a podcast there with Janneke Adema, a Ph.D. student at Coventry University, UK, whose work on the future of scholarly publishing is excellent and whose blog, Open Reflections, I recommend highly.

Janneke and I sat down in Ghent for the better part of an hour for a fairly wide-ranging conversation, much of it having to do with The Late Age of Print and my experiments in digital publishing. It was a real treat to revisit Late Age after a couple of years and to discuss some of the choices I made while writing it. I’ve long thought the book was a tad quirky in its approach, and so the podcast gave me a wonderful opportunity to provide some missing explanation and backstory. It was also great to have a chance to foreground some of the experimental digital publishing tools I’ve created, as I almost never put this aspect of my work on the same level as my written scholarship (though this is changing).

The resulting podcast, “The Late Age of Print and the Future of Cultural Studies,” is part of the journal Culture Machine’s podcast series.  Janneke and I discussed the following:

  • How have digital technologies affected my research and writing practices?
  • What advice would I, as a creator of digital scholarly tools, give to early career scholars seeking to undertake similar work?
  • Why do I experiment with modes of scholarly communication, or seek “to perform scholarly communication differently”?
  • How do I approach the history of books and reading, and how does my approach differ from more ethnographically oriented work?
  • How did I find the story amid the numerous topics I wrestle with in The Late Age of Print?

I hope you like the podcast.  Do feel welcome to share it on Twitter, Facebook, or wherever.  And speaking of social media, don’t forget—if you haven’t already, you can still download a Creative Commons-licensed PDF of The Late Age of Print.  It will only cost a tweet or a post on Facebook.  Yes, really.


Algorithms Are Decision Systems

My latest interview on the topic of algorithmic culture is now available on the 40kBooks blog. It’s an Italian website, although you can find the interview both in the original English and in Italian translation.

The interview provides something like a summary of my latest thinking on algorithmic culture, a good deal of which was born out of the new research that I blogged about here last time.  Here’s an excerpt from the interview:

Culture has long been about argument and reconciliation: argument in the sense that groups of people have ongoing debates, whether explicit or implicit, about their norms of thought, conduct, and expression; and reconciliation in the sense that virtually all societies have some type of mechanism in place – always political – by which to decide whose arguments ultimately will hold sway. You might think of culture as an ongoing conversation that a society has about how its members ought to comport themselves.

Increasingly today, computational technologies are tasked with the work of reconciliation, and algorithms are a principal means to that end. Algorithms are essentially decision systems—sets of procedures that specify how someone or something ought to proceed given a particular set of circumstances. Their job is to consider, or weigh, the significance of all of the arguments or information floating around online (and even offline) and then to determine which among those arguments is the most important or worthy. Another way of putting this would be to say that algorithms aggregate a conversation about culture that, thanks to technologies like the internet, has become ever more diffuse and disaggregated.
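Since “decision system” can sound abstract, here’s a minimal sketch of what I mean: a set of procedures that weighs the available signals for competing items and returns a verdict on which should hold sway. The items, signals, and weights below are pure invention, vastly simpler than anything a real platform runs.

```python
# A toy "decision system": procedures that weigh each item's signals
# and decide which item is most "worthy." All values are invented.

arguments = {
    "post_a": {"links": 120, "shares": 45, "recency": 0.9},
    "post_b": {"links": 300, "shares": 10, "recency": 0.2},
    "post_c": {"links": 80,  "shares": 90, "recency": 0.7},
}

weights = {"links": 0.5, "shares": 0.3, "recency": 0.2}

def score(signals):
    """Aggregate an item's signals into a single measure of worthiness."""
    return sum(weights[name] * value for name, value in signals.items())

# The reconciliation: a rank-ordered verdict on whose "argument" wins.
ranking = sorted(arguments, key=lambda k: score(arguments[k]), reverse=True)
print(ranking)  # ['post_b', 'post_a', 'post_c']
```

Swap in a billion items and continuously tuned weights, and you’re at least in the neighborhood of what the big platforms do all day.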

Something I did not address at any length in the interview is the historical backdrop against which I’ve set the new research: the Second World War, particularly the atrocities that precipitated, occurred during, and concluded it. My hypothesis is that the desire to offload cultural decision-making onto computer algorithms stems significantly, although not exclusively, from a crisis of faith that emerged in and around World War II. No longer, it seems, could we human beings be trusted to govern ourselves ethically and responsibly, and so other means had to be found to do the job we’re seemingly incapable of doing.

A bunch of readers have asked me if I’ve published any of my work on algorithmic culture in academic journals.  The answer, as yet, is no, mostly because I’m working on developing and refining the ideas here, in dialogue with all of you, before formalizing my position.  (THANK YOU for the ongoing feedback, by the way!)  Having said that, I’m polishing the piece I blogged about last time, “‘An Infernal Culture Machine’: Intellectual Foundations of Algorithmic Culture,” and plan on submitting it to a scholarly journal fairly soon.  You’re welcome to email me directly if you’d like a copy of the working draft.


P.S. If you haven’t already, check out Tarleton Gillespie’s latest post over on Culture Digitally, about his new essay on “The Relevance of Algorithms.”


Cloud Control

Okay, I fibbed.  Almost two months ago I promised I’d be back blogging regularly.  Obviously, that hasn’t been the case — not by a long shot.  My summer got eaten up with writing, travel, the Crossroads in Cultural Studies conference, lots of student obligations, and a bunch of other things.  The blogging never materialized, unfortunately, which seems to be a trend for me in the summertime.  Maybe one of these years I’ll just accept this fact and declare a formal hiatus.

Anyway, I have lots of good material to blog about but not much time to do so — at least, not right now. To tide you over, then, I’m linking you to my latest interview with Future Tense, the great weekly radio show on technology and culture produced by the Australian Broadcasting Corporation. The topic is cloud computing, which is timely and important given the migration of great swaths of information from people’s home computers and laptops to Amazon Web Services, Dropbox, Google Drive, iCloud, Microsoft Cloud Services, and other offsite storage services. Mine is the third interview, following the one with danah boyd, with whom I was pleased to share the stage, as it were. The direct link to the mp3 audio file of the program is here if you want to cut right to the chase.

This is my second interview with Future Tense.  Back in March I recorded a show about algorithms with them, based on my ongoing research on algorithmic culture.  What a blast to have a chance to chat again with FT’s great host, Antony Funnell!

So, more anon.  I can’t tell you when, exactly, though my best guess would be towards the end of the month.  Rest assured — and I really mean this — I’ll be back.  You know I can’t stay away for too long!


New Writing – Working Papers in Cultural Studies

If it wasn’t clear already, I needed a little break from blogging. This past year has been an amazing one here on The Late Age of Print, with remarkable response to many of my posts — particularly those about my new research on algorithmic culture. But with the school year wrapping up in early May, I decided to take that break; hence the crickets around here. I’m back now and will be blogging regularly throughout the summer, although maybe not quite as regularly as I would during the academic year. Thanks for sticking around.

I suppose it’s not completely accurate to say the school year “wrapped up” for me in early May.  I went right from grading final papers to finishing an essay my friend and colleague Mark Hayward and I had been working on throughout the semester.  (This was also a major reason behind the falloff in my blogging.)  The piece is called “Working Papers in Cultural Studies, or, the Virtues of Gray Literature,” and we’ll be presenting a version of it at the upcoming Crossroads in Cultural Studies conference in Paris.

“Working Papers” is, essentially, a retelling of the origins of British cultural studies from a materialist perspective.  It’s conventional in that it focuses on one of the key institutions where the field first coalesced: the Centre for Contemporary Cultural Studies, which was founded at the University of Birmingham in 1964 under the leadership of Richard Hoggart.  It’s unconventional, however, in that the essay focuses less on the Centre’s key figures or on what they had to say in their work.  Instead it looks closely at the form of the Centre’s publications, many of which were produced in-house in a manner that was rough around the edges.

Mark and I were interested in how, physically, these materials seemed to embody an ethic of publication prevalent at the Centre, which stressed the provisionality of the research produced by faculty, students, and affiliates. The essay thus is an attempt to solve a riddle: how did the Centre manage to achieve almost mythical status despite the fact that it wasn’t much in the business of producing definitive statements about the politics of contemporary culture? Take, for instance, its best-known publication, Working Papers in Cultural Studies, whose very title indicates that every article appearing in the journal was on some level a draft.

I won’t give away the ending, but I will point you in the direction of the complete essay.  It’s hosted on my site for writing projects, The Differences & Repetitions Wiki (which I may well rename the Late Age of Print Wiki).  Mark and I have created an archive for “Working Papers in Cultural Studies, or, the Virtues of Gray Literature,” where you’ll find not only the latest version of the essay and earlier drafts but also a bunch of materials pertaining to their production.  We wanted to channel some of the lessons we learned from Birmingham, which led us to go public with the process of our work.  (This is in keeping with another essay I published recently, “The Visible College,” a version of which you can also find over on D&RW.)

Our “Working Papers” essay is currently in open beta, which means there’s at least another round of edits to go before we could say it’s release-ready.  That’s where you come in.  We’d welcome your comments on the piece, as we’re about to embark on what will probably be the penultimate revision.  Thank you in advance, and we hope you like what you see.


Concurring Opinions

I’m guest posting this week over on the legal blog Concurring Opinions, which is holding a symposium on Georgetown law professor Julie E. Cohen’s great new book, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press, 2012).  (FYI, it’s available to download for free under a Creative Commons license.)  In other words, even though I don’t have any new material for you here on the Late Age of Print, I hope you’ll follow me on over to Concurring Opinions.

Having said that, I thought it might be interesting to link you to a recent study I saw mentioned in the Washington Post sometime last week. The author, Craig L. Garthwaite, a professor in the Kellogg School of Management at Northwestern University, argues that Oprah’s Book Club actually hurt book sales overall, this despite the bump that occurred for each of Winfrey’s selections. I haven’t yet had a chance to review the piece carefully, especially its methodology, but I have to say that I’m intrigued by its counter-intuitiveness. I’d welcome any thoughts or feedback you may have on the Garthwaite study; I’ll do my best to chime in as well.

See you next week, and in the meantime, don’t forget to like the new Late Age of Print Facebook page.


"The Shannon and Weaver Model"

First things first: some housekeeping. Last week I launched a Facebook page for The Late Age of Print. Because so many of my readers are presumably Facebook users, I thought it might be nice to create a “one-stop shop” for updates about new blog content, tweets, and anything else related to my work on the relationship between print media and algorithmic culture. Please check out the page and, if you’re so inclined, give it a like.

Okay…on to matters at hand.

This week I thought it might be fun to open with a little blast from the past. Below is a picture of the first page of my notebook from my first collegiate communication course. I was an eighteen-year-old beginning my second semester at the University of New Hampshire, and I had the good fortune of enrolling in Professor W—-’s introductory “Communication and the Social Order” course, CMN 402. It wouldn’t be an overstatement to call the experience life-changing, since the class essentially started me on my career path.

What interests me (beyond the hilariously grumpy-looking doodle in the margin) is a diagram appearing toward the bottom of the page.  It’s an adaptation of what I would later be told was the “Shannon and Weaver” model of communication, named for the electrical engineer Claude Shannon and the mathematician Warren Weaver.

CMN 402 - UNH Jan. 28, 1992

Note what I jotted down immediately below the diagram: “1.) this model is false (limited) because comm is only one way (linear); 2.) & assumes that sender is active & receiver is passive; & 3.) ignores the fact that sender & receiver interact w/ one another.” Here’s what the model looks like in its original form, as published in Shannon and Weaver’s The Mathematical Theory of Communication (1949, based on a paper Shannon published in 1948).

Shannon & Weaver Model of Communication, 1948/1949

Such was the lesson from day one of just about every communication theory course I subsequently took and, later on, taught. Shannon and Weaver were wrong. They were scientists who didn’t understand people, much less how we communicate. They reduced communication to a mere instrument and, in the process, stripped it of its deeply humane, world-building dimensions. In graduate school I discovered that if you really wanted to pull the rug out from under another communication scholar’s work, you accused them of premising their argument on the Shannon and Weaver model. It was the ultimate trump card.

So the upshot was, Shannon and Weaver’s view of communication was worth lingering on only long enough to reject it.  Twenty years later, I see something more compelling in it.

A couple of things started me down this path. Several years ago I read Tiziana Terranova’s wonderful book Network Culture: Politics for the Information Age (Pluto Press, 2004), which contains an extended reflection on Shannon and Weaver’s work. Most importantly, she takes it seriously, thinking through its relevance to contemporary information ecosystems. Second, I happened across an article in the July 2010 issue of Wired magazine called “Sergey’s Search,” about Google co-founder Sergey Brin’s use of big data to find a cure for Parkinson’s disease, to which he is genetically predisposed. This passage in particular made me sit up and take notice:

In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”

Here was my aha! moment.  A Google search initiates a process of filtering the web, which, according to Brin, starts out as a thick soup of noisy data.  Its algorithms ferret out the signal amid all this noise, probabilistically, yielding the rank-ordered results you end up seeing on screen.
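Here’s a fabricated miniature of the principle Brin describes: individual signals too weak to mean much on their own, legible only in aggregate. The queries, numbers, and threshold are all mine, not Google’s.

```python
import random
from collections import Counter

# A toy version of pulling "signal out of very noisy data": no single
# query means much, but in aggregate a pattern emerges. All data invented.

random.seed(0)
background = ["weather", "sports scores", "recipes", "movie times"]

# Simulate a day's query stream: mostly routine noise, plus a weak
# but real uptick in flu-related searching.
queries = ["flu symptoms" if random.random() < 0.08
           else random.choice(background)
           for _ in range(10_000)]

# Assumed "normal" share of the stream for each query (the baseline).
baseline = {"weather": 0.25, "sports scores": 0.25, "recipes": 0.25,
            "movie times": 0.25, "flu symptoms": 0.02}

# The filter: flag queries whose observed share far exceeds baseline.
counts = Counter(queries)
for query, n in counts.most_common():
    share = n / len(queries)
    if share > 2 * baseline[query]:
        print(f"anomalous volume for {query!r}: {share:.1%} of queries")
```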

It’s textbook Shannon and Weaver.  And here it is, at the heart of a service that handles three billion searches per day — which is to say nothing of Google’s numerous other products, let alone those of its competitors, that behave accordingly.

So how was it, I wondered, that my discipline, Communication Studies, could have so completely missed the boat on this?  Why do we persist in dismissing the Shannon and Weaver model, when it’s had such uptake in and application to the real world?

The answer has to do with how one understands the purposes of theory.  Should theory provide a framework for understanding how the world actually works?  Or should it help people to think differently about their world and how it could work?  James Carey puts it more eloquently in Communication as Culture: Essays on Media and Society: “Models of communication are…not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal” (p. 32).

The genius of Shannon’s original paper from 1948, and of its subsequent popularization by Weaver, lies in many things, among them their having formulated a model of communication located on the threshold of these two understandings of theory. As a scientist Shannon surely felt accountable to the empirical world, and his work reflects that. Yet it also seems clear that Shannon and Weaver’s work has, over the last 60 years or so, taken on a life of its own, feeding back into the reality they first set about describing. Shannon and Weaver didn’t merely model the world; they ended up enlarging it, changing it, and making it over in the image of their research.

And this is why, twenty years ago, I was taught to reject their thinking.  My colleagues in Communication Studies believed Shannon and Weaver were trying to model communication as it really existed.  Maybe they were.  But what they were also doing was pointing in the direction of a nascent way of conceptualizing communication, one that’s had more practical uptake than any comparable framework Communication Studies has thus far managed to produce.

Of course, in 1992 the World Wide Web was still in its infancy; Sergey Brin and Larry Page were, like me, just starting college; and Google wouldn’t appear on the scene for another six years.  I can’t blame Professor W—- for misinterpreting the Shannon and Weaver model.  If anything, all I can do is say “thank you” to her for introducing me to ideas so rich that I’ve wrestled with them for two decades.
