Archive for Ted Striphas

Performing Scholarly Communication

A short piece I wrote for the journal Text and Performance Quarterly (TPQ) has just been published.  It’s called “Performing Scholarly Communication,” and it’s included in a special section on “The Performative Possibilities of New Media” edited by the wonderful Desireé Rowe and Benjamin Myers.  The section includes contributions by Michael LeVan and Marcyrose Chvasta, Jonathan M. Gray, and Craig Gingrich-Philbrook, along with an introduction by and a formal contribution from Desireé and Ben.  You can peruse the complete contents here.

My essay is a companion to another short piece I published (and blogged about) last year called “The Visible College.”  “The Visible College” focuses on how journal publications hide much of the labor that goes into their production.  It then goes on to make a case for how we might re-engineer academic serials to better account for that work.  “Performing Scholarly Communication” reflects on one specific publishing experiment I’ve run over on my project site, The Differences and Repetitions Wiki, in which I basically opened the door for anyone to co-write an essay with me.  Both pieces also talk about the history of scholarly journal publishing at some length, mostly in an effort to think through where our present-day journal publishing practices, or performances, come from.  One issue I keep coming back to here is scarcity, or rather how scholars, journal editors, and publishers operate today as if the material relations of journal production typical of the 18th and 19th centuries still straightforwardly applied.

I’ve mentioned before that Desireé and Ben host a wonderful weekly podcast called The Critical Lede.  Last week’s show focused on the TPQ forum and gathered together all of the contributors to discuss it.  I draw attention to this not only because I really admire Desireé and Ben’s podcast but also because it fulfills an important scholarly function.  You may not know this, but the publisher of TPQ, Taylor & Francis, routinely “embargoes” work published in this and many other of its journals.  The embargo stipulates that authors are barred from making any version of their work available on a public website for 18 months from the date of publication.  I’d be less concerned about this stipulation if more libraries and institutions had access to TPQ and journals like it, but alas, they do not.  In other words, if you cannot access TPQ, at least you can get a flavor of the research published in the forum by listening to me and my fellow contributors dish about it over on The Critical Lede.

I should add that the Taylor & Francis publication embargo hit close to home for me.  Almost a year and a half ago I posted a draft of “Performing Scholarly Communication” to The Differences and Repetitions Wiki and invited people to comment on it.  The response was amazing, and the work improved significantly as a result of the feedback I received there.  The problem is, I had to “disappear” the draft or pre-print version once my piece was accepted for publication in TPQ.  You can still read the commentary, which T&F does not own, but that’s almost like reading marginalia absent the text to which the notes refer!

Here’s the good news, though: if you’d like a copy of “Performing Scholarly Communication” for professional purposes, you can email me to request a free PDF copy.  And with that let me say that I do indeed appreciate how Taylor & Francis does support this type of limited distribution of one’s work, even as I wish the company would do much better in terms of supporting open access to scholarly research.


Soft-Core Book Porn

Most of you reading this blog probably don’t know that I’m Director of Graduate Studies in the Department of Communication and Culture here at Indiana University.  What that means is that I’m knee-deep in graduate admissions files right now; what that also means is that I don’t have quite as much time for blogging as I normally would.  The good news is that I’m rapidly clearing the decks, and that I should be back to regular blogging pretty soon.

Until then, happy 2012 (belatedly), and here’s a little soft-core book porn to tide you over — an amazing stop-motion animation video that was filmed in Toronto’s Type Bookstore.  If you’ve been reading this blog over the years, then you’ll know I’m not a huge fan of the whole “the only real books are paper books” motif (much as I do enjoy paper books).  Even so, you cannot but be impressed by the time, care, and resolve that must have gone into the production of this short.  Clearly it was a labor of love, on several levels.


Happy Holidays!

I’ll be back in 2012, most likely the second week in January.  Until then, happy holidays to all of my readers, and thanks for supporting The Late Age of Print — both the book and the blog.  2011 has been a banner year for Late Age, and with your support it promises to get even better.

Until then, here’s a little something for you — a Christmas tree composed entirely of books.  I’m not sure whether to see the sculpture as a cool art piece or a statement about what to do with paper books now that e-readers are becoming ubiquitous.  Either way I guess the image is on theme, at least around this end of the internet.

Photo via Imgur

Best wishes, and see you in 2012.


Digital Natives? Not So Fast

I’m about to enter the final week of my undergraduate “Cultures of Books and Reading” class here at Indiana University.  I’ll be sad to see it go.  Not only has the group been excellent this semester, but I’ve learned so much about how my students are negotiating this protracted and profound moment of transition in the book world — what I like to call, following Jay David Bolter, “the late age of print.”

One of the things that struck me early on in the class was the extent to which my students seemed to have embraced the notion that they’re “digital natives.”  This is the idea that people born after, say, 1985 or so grew up in a world consisting primarily of digital media.  They are, as such, more comfortable and even savvy with it than so-called “digital immigrants” — analog frumps like me who’ve had to wrestle with the transition to digital and who do not, therefore, fundamentally understand it.

It didn’t occur to me until last Wednesday that I hadn’t heard mention of the term “digital natives” in the class for weeks.  What prompted the revelation was a student-led facilitation on Robert Darnton’s 2009 essay from the New York Review of Books, on the Google book scanning project.

We’d spent the previous two classes weighing the merits of Kevin Kelly’s effusions about digital books and Sven Birkerts’ pooh-poohings of them.  In Darnton we had a piece not only about the virtues and vices of book digitization, but also one that offered a sobering glimpse into the potential political-economic and cultural fallout had the infamous Google book settlement been approved earlier this year.  It’s a measured piece, in other words, and deeply cognizant of the ways in which books, however defined, move through and inhabit people’s worlds.

In this it seemed to connect with the bookish experiences of my group of purported digital natives, whose remarks confounded any claims that theirs was a generationally specific, or unified, experience with media.

Here’s a sampling from the discussion (and hats off to the facilitation group for prompting such an enlightening one!):

One student mentioned a print-on-paper children’s book her mother had handed down to her.  My student’s mother had inscribed it when she herself was seven or eight years old, and had asked her daughter to add her own inscription when she’d reached the same age.  My student intends to pass the book on one day to her own children so that they, too, may add their own inscriptions.  The heirloom paper book clearly is still alive and well, at least in the eyes of one digital native.

Another student talked about how she purchases paper copies of the e-books she most enjoys reading on her Barnes & Noble Nook.  I didn’t get the chance to ask if these paper copies were physical trophies or if she actually read them, but in any case it’s intriguing to think about how the digital may feed into the analog, and vice-versa.

Other students complained about the amount of digitized reading their professors assign, stating that they’re less likely to read for class when the material is not on paper.  Others chimed in here, mentioning that they’ll read as much as their prepaid print quotas at the campus computer labs allow, and then after that they’re basically done.  (Incidentally, faculty and students using Indiana University’s computer labs printed about 25 million — yes, million — pages during the 2010-2011 academic year.)

On a related note, a couple of students talked about how they use Google Books to avoid buying expensive course texts.  Interestingly, they noted, 109 pages of one of the books I assign in “The Cultures of Books and Reading” happen to appear there.  The implication was that they’d read what was cheap and convenient to access, but nothing more.  (Grimace.)

Finally, I was intrigued by one of the remarks from my student who, at the beginning of the term, had asked me about the acceptability of purchasing course texts for his Kindle.  He discussed the challenges he’s faced in making the transition from print to digital during his tenure as a college student.  He noted how much work it’s taken him to migrate from one book form (and all the ancillary material it generates) to the other.  Maybe he’s a digital native, maybe he isn’t; the point is, he lives in a world that’s still significantly analog, a world that compels him to engage in sometimes fraught negotiations with whatever media he’s using.

All this in a class of 33 students!  Based on this admittedly limited sample, I feel as if the idea of “digital natives” doesn’t get us very far.  It smooths over too many differences.  It also lets people who embrace the idea off the hook too easily, analytically speaking, for it relieves them of the responsibility of accounting for the extent to which print and other “old” media still affect the daily lives of people, young or old.

Maybe it’ll be different for the next generation.  For now, though, it seems as if we all are, to greater and lesser degrees, digital immigrants.


Define “Future”

First, I hope all of my readers in the United States had a wonderful Thanksgiving.  I really needed a break myself, so I took last week off from blogging in order to recharge.  Second, I want to thank everyone for the amazing response to my previous post, on e-reading and indie bookstores.  I haven’t had a post receive that much attention in a while.  All the feedback just goes to show how urgent the situation is.

On to matters at hand: the release of the fifth edition of the American Heritage Dictionary.  I don’t know if you’ve been following the story, but in case you haven’t, the New York Times ran a solid piece about a month ago on the marketing campaign surrounding the volume’s release.  It’s quite a blitz, and not cheap.  The publisher, Houghton Mifflin Harcourt, shelled out $300,000 to promote AHD5.  The volume retails for US$60, so the publisher will need to sell 5,000 copies just to cover the marketing, and I’d guess at least double that to cover production and distribution costs.

Thatsalottadictionary.

But even more interesting to me than the marketing is Houghton Mifflin Harcourt’s decision to produce both physical and electronic editions of the AHD5.  At a time when we hear over and over again about how the future is digital — and the future is now! — the publisher has decided to take a hybrid approach.  It has released AHD5 in four different formats: a print volume; an e-book; a website; and an app.  The latter three are digital, admittedly, although the disproportion is probably a function of the proliferation of electronic platforms.

The AHD5 e-book is completely overpriced at $60, although I say that not having perused it to see its features, if any.  The app doesn’t come cheap, either, at $24.99, although you get it for free if you buy the print edition.  It’s intriguing to think about how different media can affect the perceived value of language.

The publisher’s decision to offer AHD5 in multiple formats was partly a pragmatic decision, no doubt.  These are transitional times for books and other forms of print media, and no one can say for sure what the future will hold (unless you’re Amazon CEO Jeff Bezos).  But the decision was, from a historico-theoretical standpoint, unusually well thought-out, too.

Protracted periods of change — and the uncertainties that surround them — beget intense forms of partisanship, something that’s all too apparent right now in book culture.  You might call it “format fundamentalism.”  On the one hand, we have those who believe print is the richest, most authentic and enduring medium of human expression.  At the opposite extreme are the digital denizens who see print media as little more than a quaint holdover from late-medieval times.  There are many people who fall in between, of course, if not in theory then most definitely in practice, but in any case the compulsion to pick a side is a strong one.

The problem with format fundamentalism is that print and electronic media both have their strengths and weaknesses.  More to the point, the weaknesses of the one are often compensated for by the strengths of the other, such that we end up with a more robust media sphere when the two are encouraged to co-exist rather than pitted against one another.

So let’s return to the example of AHD5.  Print-on-paper dictionaries are cumbersome — something that’s also true, to greater and lesser degrees, of most such books.  And in this regard, apps and other types of e-editions provide welcome relief when it comes to the challenges of storing dictionaries and other weighty tomes.  And yet, there’s something to be said for the sheer preponderance of physical books, to which their capacity to endure is surely related.  The same cannot quite be said of digital editions, hundreds and even thousands of which can be stuffed into a single Amazon Kindle, Barnes & Noble Nook, or Apple iPad.  The endurance of these books depends significantly on the longevity and goodwill of corporate custodians for whom preservation is a mandate only as long as it remains profitable.

I could go on, but these are issues I address at length in the preface to the paperback edition of Late Age.  The point is, it’s more useful to think about print and electronic media not as contrary but as complementary.  In fact, we need to begin developing policies and legislation to create a media sphere balanced around this principle.

But until then, hats off to Houghton Mifflin Harcourt for providing an excellent model for how to proceed.


The Indies and the E’s

OR, HOW TO SAVE INDEPENDENT BOOKSTORES ONE E-BOOK AT A TIME

Several weeks ago I mentioned the “Cultures of Books and Reading” class I’m teaching this semester at Indiana University.  It’s been a blast so far.  My students have had so many provocative things to say about the present and future of book culture.  More than anything, I’m amazed at the extent to which many of them seem to be book lovers, however book may be defined these days.

Right now I’m midstream in grading their second papers.  I structured the assignment in the form of a debate, asking each student to stake out and defend a position on this statement: “Physical bookstores are neither relevant nor necessary in the age of Amazon.com, and U.S. book culture is better off without them.”  In case you’re wondering, there’s been an almost equal balance between “pro” and “con” thus far.

One recurrent theme I’ve been seeing concerns how independent booksellers have almost no presence in the realm of e-readers and e-reading.  Really, it’s an oligarchy.  Amazon, Barnes & Noble, and to a lesser extent, Apple have an almost exclusive lock on the commercial e-book market in the United States.  And in this sense, my students have reminded me, the handwriting is basically on the wall for the Indies.  Unless they get their act together — soon — they’re liable to end up frozen out of probably the most important book market to have emerged since the paperback revolution of the 1950s and 60s.

Thus far the strategy of the Indies seems to be, ignore e-books, and they’ll go away.  But these booksellers have it backward.  The “e” isn’t apt to disappear in this scenario, but the Indies are.  How, then, can independent booksellers hope to get a toehold in the world of e-reading?

The first thing they need to do is, paradoxically, to cease acting independently.  Years ago the Indies banded together to launch the e-commerce site, IndieBound, which is basically a collective portal through which individual booksellers can market their stock of physical books online.  I can’t say the actual sales model is the best, but the spirit of cooperation is outstanding.  Companies like Amazon, Barnes & Noble, and Apple are too well capitalized for any one independent store to realistically compete.  Together, though, the Indies have a fighting chance.

Second, the Indies need to exploit a vulnerability in the dominant e-book platforms; they then need to build and market a device of their own accordingly.  So listen up, Indies — here’s your exploit, for which I won’t even charge you a consulting fee: Amazon, B&N, and Apple all use proprietary e-book formats.  Every Kindle, Nook, and iBook is basically tethered to its respective corporate custodian, whose long-term survival is a precondition of the continuing existence of one’s e-library.  Were Barnes & Noble ever to go under, for example, then poof! — one’s Nook library essentially vanishes, or at least it ceases to be as functional as it once was due to the discontinuation of software updates, bug fixes, new content, etc.

What the Indies need to do, then, is to create an open e-book system, one that’s feature rich and, more importantly, platform agnostic.  Indeed, one of the great virtues of printed books is their platform agnosticism.  The bound, paper book isn’t tied to any one publisher, printer, or bookseller.  In the event that one or more happens to go under, the format — and thus the content — still endures.  That’s another advantage the Indies have over the e-book oligarchs, by the way: there are many of them.  The survival of any e-book platform they may produce thus wouldn’t depend on the well being of any one independent bookseller but rather on that of the broader institution of independent bookselling.

How do you make it work, financially?  The IndieBound model, whereby shoppers who want to buy printed books are funneled to a local member bookshop, won’t work very well, I suspect.  Local doesn’t make much sense in the world of e-commerce, much less in the world of e-books.  It doesn’t really matter “where” online you buy a digital good, since really it just comes to you from a remote server anyway.  So here’s an alternative: allow independent booksellers to buy shares in, say, IndieRead, or maybe Ind-ē.  Sales of all e-books are centralized and profits get distributed based on the proportion of any given shop’s buy-in.
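To make the arithmetic of that buy-in model concrete, here’s a minimal sketch in Python.  The shop names, buy-in amounts, and profit figure are all invented for illustration; the only thing the sketch asserts is the proportional-payout logic itself.

```python
# Hypothetical sketch of the proposed co-op model: e-book sales are
# pooled centrally, and profits are paid out in proportion to each
# member shop's share of the total buy-in.  All figures are invented.

def distribute_profits(buy_ins, total_profit):
    """Split total_profit according to each shop's fraction of the buy-in pool."""
    pool = sum(buy_ins.values())
    return {shop: total_profit * stake / pool for shop, stake in buy_ins.items()}

if __name__ == "__main__":
    buy_ins = {"Shop A": 50_000, "Shop B": 30_000, "Shop C": 20_000}
    for shop, amount in distribute_profits(buy_ins, total_profit=10_000).items():
        print(f"{shop}: ${amount:,.2f}")
```

A shop that put up half the pool collects half the profits, which keeps the incentive to join proportional to the stake each member can afford.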

There you have it.  Will the Indies run with it?  Or will all of the students enrolled in my next  “Cultures of Books and Reading” class conclude that independent bookselling has become irrelevant indeed?


The Visible College

After having spent the last five weeks blogging about algorithmic culture, I figured both you and I deserved a change of pace.  I’d like to share some new research of mine that was just published in a free, Open Access periodical called The International Journal of Communication.

My piece is called “The Visible College.”  It addresses the many ways in which the form of scholarly publications — especially that of journal articles — obscures the density of the collaboration typical of academic authorship in the humanities.  Here’s the first line: “Authorship may have died at the hands of a French philosopher drunk on Balzac, but it returned a few months later, by accident, when an American social psychologist turned people’s attention skyward.”  Intrigued?

My essay appears as part of a featured section on the politics of academic labor in the discipline of communication.  The forum is edited by my good friend and colleague, Jonathan Sterne.  His introductory essay is a must-read for anyone in the field — and, for that matter, anyone who receives a paycheck for performing academic labor.  (Well, maybe not my colleagues in the Business School….)  Indeed it’s a wonderful, programmatic piece outlining how people in universities can make substantive change there, both individually and collectively.  The section includes contributions from: Thomas A. Discenna; Toby Miller; Michael Griffin; Victor Pickard; Carol Stabile; Fernando P. Delgado; Amy Pason; Kathleen F. McConnell; Sarah Banet-Weiser and Alexandra Juhasz; Ira Wagman and Michael Z. Newman; Mark Hayward; Jayson Harsin; Kembrew McLeod; Joel Saxe; Michelle Rodino-Colocino; and two anonymous authors.  Most of the essays are on the short side, so you can enjoy the forum in tasty, snack-sized chunks.

My own piece presented me with a paradox.  Here I was, writing about how academic journal articles do a lousy job of representing all the labor that goes into them — in the form of an academic journal article!  (At least it’s a Creative Commons-licensed, Open Access one.)  Needless to say, I couldn’t leave it at that.  I decided to create a dossier of materials relating to the production of the essay, which I’ve archived on another of my websites, The Differences and Repetitions Wiki (D&RW).  The dossier includes all of my email exchanges with Jonathan Sterne, along with several early drafts of the piece.  It’s astonishing to see just how much “The Visible College” changed as a result of my dialogue with Jonathan.  It’s also astonishing to see, then, just how much of the story of academic production gets left out of that slim sliver of “thank-yous” we call the acknowledgments.

“The Visible College Dossier” is still a fairly crude instrument, admittedly.  It’s an experiment — one among several others hosted on D&RW in which I try to tinker with the form and content of scholarly writing.  I’d welcome your feedback on this or any other of my experiments, not to mention “The Visible College.”

Enjoy — and happy Halloween!  Speaking of which, if you’re looking for something book related and Halloween-y, check out my blog post from a few years ago on the topic of anthropodermic bibliopegy.


Hacking the Real

Lest there be any confusion, yes, indeed, you’re reading The Late Age of Print blog, still authored by me, Ted Striphas.  The last time you visited, the site was probably red, white, black, and gray.  Now it’s not.  I imagine you’re wondering what prompted the change.

The short answer is: a hack.  The longer answer is: algorithmic culture.

At some point in the recent past, and unbeknownst to me, The Late Age of Print got hacked.  Since then I’ve been receiving sporadic reports from readers telling me that their safe browsing software was alerting them to a potential issue with the site.  Responsible digital citizen that I am, I ran numerous malware scans using multiple scanning services.  Only one out of twenty-three of those services ever returned a “suspicious” result, and so I figured, with those odds, that the one positive must be an anomaly.  It was the same service that the readers who’d contacted me also happened to be using.

Well, last week, Facebook implemented a new partnership with an internet security company called Websense.  The latter checks links shared on the social networking site for malware and the like.  A friend alerted me that an update I’d posted linking to Late Age came up as “abusive.”  That was enough; I knew something must be wrong.  I contacted my web hosting service and asked them to scan my site.  Sure enough, they found some malicious code hiding in the back-end.

Here’s the good news: as far as my host and I can tell, the code — which, rest assured, I’ve cleaned — had no effect on readers of Late Age or your computers.  (Having said that, it never hurts to run an anti-virus/malware scan.)  It was intended only for Google and other search engines, and its effects were visible only to them.  The screen capture, below, shows how Google was “seeing” Late Age before the cleanup.  Neither you nor I ever saw anything out of the ordinary around here.

Essentially the code grafted invisible links to specious online pharmacies onto the legitimate links appearing in many of my posts.  The point of the attack, when implemented widely enough, is to game the system of search.  The victim sites all look as if they’re pointing to whatever website the hacker is trying to promote. And with thousands of incoming links, that site is almost guaranteed to come out as a top result whenever someone runs a search query for popular pharma terms.
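For the curious, here’s an illustrative sketch of what this kind of injection can look like and how crudely it can be detected.  The HTML snippet and pharmacy URL are invented, and real attacks are usually better disguised (often served only to search-engine crawlers, as in my case, so human visitors see nothing); the scanner below just walks the markup with Python’s standard-library parser and flags links buried inside hidden containers.

```python
# Illustrative sketch only: an injected, invisible spam link, and a
# crude stdlib-only scan for it.  The markup and URL are invented.
from html.parser import HTMLParser

INFECTED_HTML = """
<p>Read my post on <a href="/algorithmic-culture">algorithms</a>.</p>
<div style="display:none">
  <a href="http://cheap-pharma.example/pills">buy pills</a>
</div>
"""

class HiddenLinkFinder(HTMLParser):
    """Collects hrefs of <a> tags nested inside display:none containers."""

    def __init__(self):
        super().__init__()
        self.stack = []          # one bool per open tag: does it hide its contents?
        self.hidden_links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        hidden = "display:none" in attrs.get("style", "").replace(" ", "")
        self.stack.append(hidden)
        # A link is invisible if it, or any enclosing tag, is hidden.
        if tag == "a" and (hidden or any(self.stack[:-1])):
            self.hidden_links.append(attrs.get("href"))

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

finder = HiddenLinkFinder()
finder.feed(INFECTED_HTML)
print(finder.hidden_links)  # the pharmacy link no human reader ever sees
```

The legitimate link passes through untouched; only the link a browser would never render gets flagged, which is exactly the asymmetry the attack exploits.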

So, in case you were wondering, I haven’t given up writing and teaching for a career hawking drugs to combat male-pattern baldness and E.D.

This experience has been something of an object lesson for me in the seedier side of algorithmic culture.  I’ve been critical of Google, Amazon, Facebook, and other such sites for the opacity of the systems by which they determine the relevance of products, services, knowledge, and associations.  Those criticisms remain, but now I’m beginning to see another layer of the problem.  The hack has shown me just how vulnerable those systems are to manipulation, and how, then, the frameworks of trust, reputation, and relevance that exist online are deeply — maybe even fundamentally — flawed.

In a more philosophical vein, the algorithms about which I’ve blogged over the last several weeks and months attempt to model “the real.”  They leverage crowd wisdom — information coming in the form of feedback — in an attempt to determine what the world thinks or how it feels about x.  The problem is, the digital real doesn’t exist “out there” waiting to be discovered; it is a work in progress, and much like The Matrix, there are those who understand far better than most how to twist, bend, and mold it to suit their own ends.  They’re out in front of the digital real, as it were, and their actions demonstrate how the results we see on Google, Amazon, Facebook, and elsewhere suffer from what Meaghan Morris has called, in another context, “reality lag.”  They’re not the real; they’re an afterimage.

The other, related issue here concerns the fact that, increasingly, we’re placing the job of determining the digital real in the hands of a small group of authorities.  The irony is that the internet has long been understood to be a decentralized network and lauded, then, for its capacity to endure even when parts of it get compromised.  What the hack of my site has underscored for me, however, is the extent to which the internet has become territorialized of late and thus subject to many of the same types of vulnerabilities it was once thought to have thwarted.  Algorithmic culture is the new mass culture.

Moving on, I’d rather not have spent a good chunk of my week cleaning up after another person’s mischief, but at least the attack gave me an excuse to do something I’d wanted to do for a while now: give Late Age a makeover.  For a while I’ve been feeling as if the site looked dated, and so I’m happy to give it a fresher look.  I’m not yet used to it, admittedly, but of course feeling comfortable in a new style of anything takes time.

The other major change I made was to optimize Late Age for viewing on mobile devices.  Now, if you’re visiting using your smart phone or tablet computer, you’ll see the same content but in significantly streamlined form.  I’m not one to believe that the PC is dead — at least, not yet — but for better or for worse it’s clear that mobile is very much at the center of the internet’s future.  In any case, if you’re using a mobile device and want to see the normal Late Age site, there’s a link at the bottom of the screen allowing you to switch back.

I’d be delighted to hear your feedback about the new Late Age of Print.  Drop me a line, and thanks to all of you who wrote in to let me know something was up with the old site.


Algorithmic Literacies

I’ve spent the last few weeks here auditioning ideas for my next book, on the topic of  “algorithmic culture.”  By this I mean the use of computers and complex mathematical routines to sort, classify, and create hierarchies for our many forms of human expression and association.

I’ve been amazed by the reception of these posts, not to mention the extent of their circulation.  Even more to the point, the feedback I’ve been receiving has already prompted me to address some of the gaps in the argument — among them, the nagging question of “what is to be done?”

I should be clear that however much I may criticize Google, Facebook, Netflix, Amazon, and other leaders in the tech industry, I’m a regular user of their products and services.  When I get lost driving, I’m happy that Google Maps is there to save the day.  Facebook has helped me to reconnect with friends who I thought were lost forever.  And in a city with inadequate bookstores, I’m pleased, for the most part, to have Amazon make suggestions about which titles I ought to know about.

In other words, I don’t mean to suggest that life would be better off without algorithmic culture.  Likewise, I don’t mean to sound as if I’m waxing nostalgic for the “good old days” when small circles of élites got to determine “the best that has been thought and said.”  The question for me is, how might we begin to forge a better algorithmic culture, one that provides for more meaningful participation in the production of our collective life?

It’s this question that’s brought me to the idea of algorithmic literacies, which is something Eli Pariser also talks about in the conclusion of The Filter Bubble. 

I’ve mentioned in previous posts that one of my chief concerns with algorithmic culture has to do with its mysteriousness.  Unless you’re a computer scientist with a Ph.D. in computational mathematics, you probably don’t have a good sense of how algorithmic decision-making actually works.  (I certainly don’t.)  Now, I don’t mean to suggest that everyone needs to study computational mathematics, although some basic understanding of the subject couldn’t hurt.  I do mean to suggest, however, that someone needs to begin developing strategies by which to interpret both the processes and products of algorithmic culture, critically.  That’s what I mean, in a very broad sense, by “algorithmic literacies.”

In this I join two friends and colleagues who’ve made related calls.  Siva Vaidhyanathan has coined the phrase “Critical Information Studies” to describe an emerging “transfield” concerned with (among other things) “the rights and abilities of users (or consumers or citizens) to alter the means and techniques through which cultural texts and information are rendered, displayed, and distributed.”  Similarly, Eszter Hargittai has pointed to the inadequacy of the notion of the “digital divide” and has suggested that people instead talk about the uneven distribution of competencies in digital environments.

Algorithmic literacies would proceed from the assumption that computational processes increasingly influence how we perceive, talk about, and act in the world.  Marxists used to call this type of effect “ideology,” although I’m not convinced of the adequacy of a term that still harbors connotations of false consciousness.  Maybe Fredric Jameson’s notion of “cognitive mapping” is more appropriate, given the many ways in which algorithms help us to get our bearings in a world abuzz with information.  In any case, we need to start developing a vocabulary, one that would provide better theoretical tools with which to make sense of the epistemological, communicative, and practical entailments of algorithmic culture.

Relatedly, algorithmic literacies would be concerned with the ways in which individuals, institutions, and technologies game the system of life online. Search engine optimization, reputation management, planted product reviews, content farms — today there are a host of ways to exploit vulnerabilities in the algorithms charged with sifting through culture.  What we need, first of all, is to identify the actors chiefly responsible for these types of malicious activities, for they often operate in the shadows.  But we also need to develop reading strategies that would help people to recognize instances in which someone is attempting to game the system.  Just as literary studies teaches students how to read for tone, those of us invested in algorithmic literacies would begin teaching how to read for evidence of this type of manipulation.
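To make the idea of “reading for manipulation” concrete, here is a toy heuristic, invented purely for illustration: one crude, machine-readable signal of gaming the system is keyword stuffing, where a page repeats a target phrase far more often than natural prose would.

```python
# Toy heuristic (invented for illustration, not a real detection system):
# measure what fraction of a text's words are a single target keyword.
# Unnaturally high densities are one classic sign of keyword stuffing.

def keyword_density(text, keyword):
    """Fraction of words in `text` that match `keyword` (case-insensitive)."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

natural = "a short review of a blender that worked well for smoothies"
stuffed = "blender best blender cheap blender buy blender deals blender"

print(keyword_density(natural, "blender"))  # low density
print(keyword_density(stuffed, "blender"))  # suspiciously high density
```

Real search engines use far more sophisticated (and secret) signals, of course; the point is only that some manipulation leaves statistical traces a reader, human or machine, can learn to spot.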

Finally, we need to undertake comparative work in an effort to reverse engineer the proprietary algorithms of Google, Facebook, Amazon, and their peers.  One of the many intriguing parts of The Googlization of Everything is the moment where Vaidhyanathan compares and contrasts the Google search results that are presented to him in different national contexts.  A search for the word “Jew,” for example, yields very different outcomes on the US’s version of Google than it does on Germany’s, where anti-Semitic material is banned.  The point of the exercise isn’t to show that Google is different in different places; the company doesn’t hide that fact at all.  The point, rather, is to use the comparisons to draw inferences about the biases — the politics — that are built into the algorithms people routinely use.
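One might even quantify such comparisons.  The sketch below, using invented placeholder results rather than real data, shows one simple way to measure how much two national versions of a search engine diverge on the same query: the overlap between their top-ranked result lists.

```python
# Minimal sketch, with hypothetical data: compare the top-k results
# returned for the same query in two national contexts. A low overlap
# score suggests the two audiences are being shown different cultures.

def overlap_at_k(results_a, results_b, k=10):
    """Jaccard overlap between the top-k entries of two ranked result lists."""
    top_a, top_b = set(results_a[:k]), set(results_b[:k])
    if not (top_a or top_b):
        return 0.0
    return len(top_a & top_b) / len(top_a | top_b)

# Placeholder domains standing in for real search results.
us_results = ["site1.example", "site2.example", "site3.example"]
de_results = ["site1.example", "site4.example", "site5.example"]

print(overlap_at_k(us_results, de_results, k=3))  # 0.2 — one shared result of five
```

Systematic measurements of this kind are how researchers begin to infer, from the outside, what a proprietary algorithm is doing on the inside.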

This is only a start.  Weigh in, please.  Clearly there’s major work left to do.


The Conversation of Culture

Last week I was interviewed on probably the best talk radio program about culture and technology, the CBC’s Spark. The interview grew out of my recent series of blog posts on the topic of algorithmic culture.  You can listen to the complete interview, which lasts about fifteen minutes, by following the link on the Spark website.  If you want to cut right to the chase and download an mp3 file of the complete interview, just click here.

The hallmark of a good interviewer is the ability to draw something out of an interviewee that she or he didn’t quite realize was there.  That’s exactly what the host of Spark, Nora Young, did for me.  She posed a question that got me thinking about the process of feedback as it relates to algorithmic culture — something I’ve been faulted on, rightly, in the conversations I’ve been having about my blog posts and scholarly research on the subject.  She asked something to the effect of, “Hasn’t culture always been a black box?”  The implication was: hasn’t the process of determining what’s culturally worthwhile always been mysterious, and if so, then what’s so new about algorithmic culture?

The answer, I believe, has everything to do with the way in which search engine algorithms, product and friend recommendation systems, personalized news feeds, and so forth incorporate our voices into their determinations of what we’ll be exposed to online.

They rely, first of all, on signals, or what you might call latent feedback.  This idea refers to the information about our online activities that’s recorded in the background, as it were, in a manner akin to eavesdropping.  Take Facebook, for example.  Assuming you’re logged in, Facebook registers not only your activities on its own site but also every movement you make across websites with an embedded “like” button.

Then there’s something you might call direct feedback, which refers to the information we voluntarily give up about ourselves and our preferences.  When Amazon.com asks if a product it’s recommended appeals to you, and you click “no,” you’ve explicitly told the company it got that one wrong.
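The distinction between these two kinds of feedback can be sketched in code.  The event names and weights below are invented for illustration; no real recommender works exactly this way, but many do combine weak background signals with strong explicit ones along these lines.

```python
# Hypothetical sketch of how a recommender might weight the two kinds
# of feedback described above: "latent" signals recorded quietly in the
# background versus "direct" feedback a user volunteers. All event
# names and weights here are invented placeholders.

LATENT_WEIGHTS = {"page_view": 0.1, "like_button_seen": 0.05, "click": 0.3}
DIRECT_WEIGHTS = {"thumbs_up": 1.0, "not_interested": -1.0}

def preference_score(events):
    """Aggregate a user's recorded events into one preference score for an item."""
    score = 0.0
    for event in events:
        score += LATENT_WEIGHTS.get(event, 0.0)
        score += DIRECT_WEIGHTS.get(event, 0.0)
    return score

# A user browses and clicks (latent interest) but then says "no" (direct).
events = ["page_view", "click", "not_interested"]
print(preference_score(events))  # the explicit "no" outweighs the latent signals
```

Notice how much the system flattens: hours of browsing and a considered refusal are reduced to a handful of weighted numbers.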

So where’s the problem in that?  Isn’t it the case that these systems are inherently democratic, in that they actively seek and incorporate our feedback?  Well, yes…and no.  The issue here has to do with the way in which they model a conversation about the cultural goods that surround us, and indeed about culture more generally.

The work of culture has long happened inside of a black box, to be sure.  For generations it was chiefly the responsibility of a small circle of white guys who made it their business to determine, in Matthew Arnold’s famous words, “the best that has been thought and said.”

Only the black box wasn’t totally opaque.  The arguments and judgments of these individuals were never beyond question.  They debated fiercely among themselves, often quite publicly; people outside of their circles debated them equally fiercely, if not more so.  That’s why, today, we teach Toni Morrison’s work in our English classes in addition to that of William Shakespeare.

The question I raised near the end of the Spark interview is the one I want to raise here: how do you argue with Google?  Or, to take a related example, what does clicking “not interested” on an Amazon product recommendation actually communicate, beyond the vaguest sense of distaste?  There’s no subtlety or justification there.  You just don’t like it.  Period.  End of story.  This isn’t communication so much as the conveyance of decontextualized information, and it reduces culture from a series of arguments to a series of statements.

Then again, that may not be entirely accurate.  There’s still an argument going on where the algorithmic processing of culture is concerned — it just takes place somewhere deep in the bowels of a server farm, where all of our movements and preferences are aggregated and then filtered.  You can’t argue with Google, Amazon, or Facebook, but it’s not because they’re incapable of argument.  It’s because their systems perform the argument for us, algorithmically.  They obviate the need to justify our preferences to one another, and indeed, before one another.

This is a conversation about culture, yes, but minus its moral obligations.
