
Call for Papers – EJCS on Data Mining & Analytics

 Call for Papers: The European Journal of Cultural Studies
Special issue on Data Mining/Analytics

Editors: Mark Andrejevic (University of Queensland, Australia); Alison Hearn (University of Western Ontario, Canada); Helen Kennedy (University of Leeds, UK)

The widespread use of social media has given rise to new forms of monitoring, mining and aggregation strategies designed to monetize the huge volumes of data such usage produces. Social media monitoring and analysis industries, experts and consultancies have emerged offering a broad range of social media intelligence and reputation management services. Such services typically involve a range of analytical methods (sentiment analysis, opinion mining, social network analysis, machine learning, natural language processing), often offered in black-boxed proprietary form, in order to gain insights into public opinion, mood, networks and relationships and identify potential word-of-mouth influencers. Ostensibly, these various forms of data mining, analytics and machine learning also are paving the way for the development of a more intelligent or ‘semantic’ Web 3.0, offering a more ‘productive and intuitive’ user experience. As commercial and non-commercial organisations alike seek to monitor, influence, manage and direct social media conversations, and as global usage of social media expands, questions surface that challenge celebratory accounts of the democratizing, participatory possibilities of social media. Remembering that Web 2.0 was always intended as a business manifesto – O’Reilly’s early maxims included, after all, ‘data is the next Intel inside’, ‘users add value’ and ‘collaboration as data collection’ – we need to interrogate social media not only as communication tools, but also as techno-economic constructs with important implications for the management of populations and the formation of subjects. Data mining and analytics are about much more than targeted advertising: they envision new strategies for forecasting, targeting, and decision making in a growing range of social realms (employment, education, health care, policing, urban planning, epidemiology, etc.) 
with the potential to usher in new, unaccountable, and opaque forms of discrimination, sorting, inclusion and exclusion. As Web 3.0 and the ‘big data’ it generates moves inexorably toward predictive analytics and the overt technocratic management of human sociality, urgent questions arise about how such data are gathered, constructed and sold, to what ends they are deployed, who gets access to them, and how their analysis is regulated (boyd and Crawford 2012).

This special issue aims to bring together scholars who interrogate social media intelligence work undertaken in the name of big data, big business and big government. It aims to draw together empirically-grounded and theoretically-informed analyses of the key issues in contemporary forms of data mining and analytics from across disparate fields and methodologies. Contributions are invited that address a range of related issues. Areas for consideration could include, but are not limited to:

  • Political economy of social media platforms
  • Algorithmic culture
  • User perspectives on data mining
  • The politics of data visualisation
  • Big data and the cultural industries
  • Data journalism
  • The social life of big data methods
  • Inequalities and exclusions in data mining
  • Affective prediction and control
  • Data mining and new subjectivities
  • Ethics, regulation and data mining
  • Conceptualising big/data/mining
  • Social media intelligence at work
  • Social media and surveillance
  • Critical histories of data mining, sorting, and surveillance

Prospective contributors should email an abstract of 500-700 words to the issue editors by 9th December 2013 (to h.kennedy@leeds.ac.uk). Full articles should be submitted to Helen Kennedy (h.kennedy@leeds.ac.uk) by 12th May 2014. Manuscripts must be no longer than 7,000 words. Articles should meet The European Journal of Cultural Studies’ aim to promote empirically based, theoretically informed cultural studies; essayist discussion papers are not normally accepted by this journal. All articles will be refereed: an invitation to submit a paper to the special issue in no way guarantees that the paper will be published; this is dependent on the review process.

Details:
Abstract deadline: 9th December 2013 (to h.kennedy@leeds.ac.uk);
Decisions on abstracts communicated by 13th January 2014;
Article submission deadline: 12th May 2014 (to h.kennedy@leeds.ac.uk);
Final submission/review process complete: 13th October 2014;
For publication in 2015.


Cloud Control

Okay, I fibbed.  Almost two months ago I promised I’d be back blogging regularly.  Obviously, that hasn’t been the case — not by a long shot.  My summer got eaten up with writing, travel, the Crossroads in Cultural Studies conference, lots of student obligations, and a bunch of other things.  The blogging never materialized, unfortunately, which seems to be a trend for me in the summertime.  Maybe one of these years I’ll just accept this fact and declare a formal hiatus.

Anyway, I have lots of good material to blog about but not much time to do so — at least, not right now.  To tide you over, then, I’m linking you to my latest interview with Future Tense, the great weekly radio show on technology and culture produced by the Australian Broadcasting Corporation.  The topic is cloud computing, which is timely and important given the migration of great swaths of information from people’s home computers and laptops to Amazon Web Services, Dropbox, Google Drive, iCloud, Microsoft Cloud Services, and other offsite storage services.  Mine is the third interview following the one with danah boyd, with whom I was pleased to share the stage as it were.  The direct link to the mp3 audio file of the program is here if you want to cut right to the chase.

This is my second interview with Future Tense.  Back in March I recorded a show about algorithms with them, based on my ongoing research on algorithmic culture.  What a blast to have a chance to chat again with FT’s great host, Antony Funnell!

So, more anon.  I can’t tell you when, exactly, though my best guess would be towards the end of the month.  Rest assured — and I really mean this — I’ll be back.  You know I can’t stay away for too long!


The Book Industry's Moneyball

Some folks have asked me how I came to the idea of algorithmic culture, the subject of my next book as well as many of my blog posts of late.  I usually respond by pointing them in the direction of chapter three of The Late Age of Print, which focuses on Amazon.com, product coding, and the rise of digital communications in business.

It occurs to me, though, that Amazon wasn’t exactly what inspired me to begin writing about algorithms, computational processes, and the broader application of principles of scientific reason to the book world.  My real inspiration came from someone you’ve probably never heard of before (unless, of course, you’ve read The Late Age of Print). I’m talking about Orion Howard (O. H.) Cheney, a banker and business consultant whose ideas did more to lay the groundwork for today’s book industry than perhaps anyone’s.

Cheney was born in 1869 in Bloomington, Illinois.  For much of his adult life he lived and worked in New York State, where, from 1909 to 1911, he served as the State Superintendent of Banks and later as a high level executive in the banking industry.  In 1932 he published what was to be the first comprehensive study of the book business in the United States, the Economic Survey of the Book Industry, 1930-1931.  It almost immediately came to be known as the “Cheney Report” due to the author’s refusal to soft-pedal his criticisms of, well, pretty much anyone who had anything to do with promoting books in the United States — from authors and publishers on down to librarians and school teachers, and everyone else in between.

In essence, Cheney wanted to fundamentally rethink the game of publishing.  His notorious report was the book industry equivalent of Moneyball.

If you haven’t read Michael Lewis’ Moneyball: The Art of Winning an Unfair Game (2003), you should.  It’s about how the Oakland A’s, one of the most poorly financed teams in Major League Baseball, used computer algorithms (so-called “Sabermetrics”) to build a successful franchise by identifying highly skilled yet undervalued players.  The protagonists of Moneyball, A’s General Manager Billy Beane and Assistant GM Paul DePodesta, did everything in their power to purge gut feeling from the game.  Indeed, one of the book’s central claims is that assessments of player performance have long been driven by unexamined assumptions about how ball players ought to look, move, and behave, usually to a team’s detriment.

The A’s method for identifying talent and devising on-field strategy raised the ire of practically all baseball traditionalists.  It yielded insights that were so far afield of the conventional wisdom that its proponents were apt to seem crazy, even after they started winning big.

It’s the same story with The Cheney Report.  Consider this passage, where Cheney faults the book industry for operating on experience and intuition instead of a statistically sound “fact basis”:

Facts are the only basis for management in publishing, as they must be in any field.  In that respect, the book industry is painfully behind many others — both in facts relating to the industry as a whole and in facts of individual [publishing] houses….”Luck”; waiting for a best-seller; intuitive publishing by a “born publisher” — these must give way as the basis for the industry, for the sake of the industry and everybody in it….In too many publishing operations the theory seems to be that learning from experience means learning how to do a thing right by continuing to do it wrong (pp. 167-68).

This, more than 70 years before Moneyball!  And, like Beane and DePodesta, Cheney was raked over the coals by almost everyone in the industry he was criticizing.  They refused to listen to him, despite the fact that, in the throes of the Great Depression, most everything that had worked in the book industry didn’t seem to be working so well anymore.

Well, it’s almost the same story. Beane and DePodesta have enjoyed excellent careers in Major League Baseball, despite the heresy of their ideas.  They’ve been fortunate to have lived at a time when algorithms and computational mathematics are enough the norm that at least some can recognize the value of what they’ve brought to the game.

The Cheney Report, in contrast, had almost no immediate effect on the book industry.  The Report suffered due to its — and Cheney’s own — untimeliness.  The cybernetics revolution was still more than a decade off, and so the idea of imagining the book industry as a complexly communicative ecosystem was all but unimaginable to most.  This was true even with Cheney, who, in his insistence on ascertaining the “facts,” was fumbling around for what would later come to be known as “information.”

Today we live in O. H. Cheney’s vision for the book world, or, at least, some semblance of it.  People wonder why Amazon.com has so shaken up all facets of the industry.  It’s an aggressive competitor, to be sure, but its success is premised more on its having fundamentally rethought the game.  And for this Jeff Bezos owes a serious thank you to a grumpy old banker who, in the 1930s, wrote the first draft of what would go on to become publishing’s new playbook.


The Visible College

After having spent the last five weeks blogging about algorithmic culture, I figured both you and I deserved a change of pace.  I’d like to share some new research of mine that was just published in a free, Open Access periodical called The International Journal of Communication.

My piece is called “The Visible College.”  It addresses the many ways in which the form of scholarly publications — especially that of journal articles — obscures the density of the collaboration typical of academic authorship in the humanities.  Here’s the first line: “Authorship may have died at the hands of a French philosopher drunk on Balzac, but it returned a few months later, by accident, when an American social psychologist turned people’s attention skyward.”  Intrigued?

My essay appears as part of a featured section on the politics of academic labor in the discipline of communication.  The forum is edited by my good friend and colleague, Jonathan Sterne.  His introductory essay is a must-read for anyone in the field — and, for that matter, anyone who receives a paycheck for performing academic labor.  (Well, maybe not my colleagues in the Business School….)  Indeed it’s a wonderful, programmatic piece outlining how people in universities can make substantive change there, both individually and collectively.  The section includes contributions from: Thomas A. Discenna; Toby Miller; Michael Griffin; Victor Pickard; Carol Stabile; Fernando P. Delgado; Amy Pason; Kathleen F. McConnell; Sarah Banet-Weiser and Alexandra Juhasz; Ira Wagman and Michael Z. Newman; Mark Hayward; Jayson Harsin; Kembrew McLeod; Joel Saxe; Michelle Rodino-Colocino; and two anonymous authors.  Most of the essays are on the short side, so you can enjoy the forum in tasty, snack-sized chunks.

My own piece presented me with a paradox.  Here I was, writing about how academic journal articles do a lousy job of representing all the labor that goes into them — in the form of an academic journal article!  (At least it’s a Creative Commons-licensed, Open Access one.)  Needless to say, I couldn’t leave it at that.  I decided to create a dossier of materials relating to the production of the essay, which I’ve archived on another of my websites, The Differences and Repetitions Wiki (D&RW).  The dossier includes all of my email exchanges with Jonathan Sterne, along with several early drafts of the piece.  It’s astonishing to see just how much “The Visible College” changed as a result of my dialogue with Jonathan.  It’s also astonishing to see, then, just how much of the story of academic production gets left out of that slim sliver of “thank-yous” we call the acknowledgments.

“The Visible College Dossier” is still a fairly crude instrument, admittedly.  It’s an experiment — one among several others hosted on D&RW in which I try to tinker with the form and content of scholarly writing.  I’d welcome your feedback on this or any other of my experiments, not to mention “The Visible College.”

Enjoy — and happy Halloween!  Speaking of which, if you’re looking for something book related and Halloween-y, check out my blog post from a few years ago on the topic of anthropodermic bibliopegy.


WordPress

Lest there be any confusion, yes, indeed, you’re reading The Late Age of Print blog, still authored by me, Ted Striphas.  The last time you visited, the site was probably red, white, black, and gray.  Now it’s not.  I imagine you’re wondering what prompted the change.

The short answer is: a hack.  The longer answer is: algorithmic culture.

At some point in the recent past, and unbeknownst to me, The Late Age of Print got hacked.  Since then I’ve been receiving sporadic reports from readers telling me that their safe browsing software was alerting them to a potential issue with the site.  Responsible digital citizen that I am, I ran numerous malware scans using multiple scanning services.  Only one out of twenty-three of those services ever returned a “suspicious” result, and so I figured, with those odds, that the one positive must be an anomaly.  It was the same service that the readers who’d contacted me also happened to be using.

Well, last week, Facebook implemented a new partnership with an internet security company called Websense.  The latter checks links shared on the social networking site for malware and the like.  A friend alerted me that an update I’d posted linking to Late Age came up as “abusive.”  That was enough; I knew something must be wrong.  I contacted my web hosting service and asked them to scan my site.  Sure enough, they found some malicious code hiding in the back-end.

Here’s the good news: as far as my host and I can tell, the code — which, rest assured, I’ve cleaned — had no effect on readers of Late Age or your computers.  (Having said that, it never hurts to run an anti-virus/malware scan.)  It was intended only for Google and other search engines, and its effects were visible only to them.  The screen capture, below, shows how Google was “seeing” Late Age before the cleanup.  Neither you nor I ever saw anything out of the ordinary around here.

Essentially the code grafted invisible links to specious online pharmacies onto the legitimate links appearing in many of my posts.  The point of the attack, when implemented widely enough, is to game the system of search.  The victim sites all look as if they’re pointing to whatever website the hacker is trying to promote. And with thousands of incoming links, that site is almost guaranteed to come out as a top result whenever someone runs a search query for popular pharma terms.
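The mechanics are easy to see in miniature.  Below is a toy sketch (all site names are invented) of a naive ranker that scores pages by their inbound-link counts — a crude stand-in for the link-based relevance signals this kind of attack inflates:

```python
from collections import Counter

def rank_by_inlinks(link_graph):
    """Rank sites by number of inbound links, most-linked first.

    link_graph is a list of (source, target) pairs.
    """
    scores = Counter(target for _, target in link_graph)
    return [site for site, _ in scores.most_common()]

# A handful of organic links to a legitimate site...
organic = [("blog-a.example", "library.example"),
           ("blog-b.example", "library.example")]

# ...versus hidden links grafted onto a thousand compromised pages.
injected = [(f"victim-{i}.example", "pharma.example") for i in range(1000)]

print(rank_by_inlinks(organic + injected)[0])  # "pharma.example"
```

Real search engines weigh far more than raw in-degree, of course, but the sketch shows why sheer link volume from thousands of victim sites can push a spam target to the top of a naive ranking.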

So, in case you were wondering, I haven’t given up writing and teaching for a career hawking drugs to combat male-pattern baldness and E.D.

This experience has been something of an object lesson for me in the seedier side of algorithmic culture.  I’ve been critical of Google, Amazon, Facebook, and other such sites for the opacity of the systems by which they determine the relevance of products, services, knowledge, and associations.  Those criticisms remain, but now I’m beginning to see another layer of the problem.  The hack has shown me just how vulnerable those systems are to manipulation, and how, then, the frameworks of trust, reputation, and relevance that exist online are deeply — maybe even fundamentally — flawed.

In a more philosophical vein, the algorithms about which I’ve blogged over the last several weeks and months attempt to model “the real.”  They leverage crowd wisdom — information coming in the form of feedback — in an attempt to determine what the world thinks or how it feels about x.  The problem is, the digital real doesn’t exist “out there” waiting to be discovered; it is a work in progress, and much like The Matrix, there are those who understand far better than most how to twist, bend, and mold it to suit their own ends.  They’re out in front of the digital real, as it were, and their actions demonstrate how the results we see on Google, Amazon, Facebook, and elsewhere suffer from what Meaghan Morris has called, in another context, “reality lag.”  They’re not the real; they’re an afterimage.

The other, related issue here concerns the fact that, increasingly, we’re placing the job of determining the digital real in the hands of a small group of authorities.  The irony is that the internet has long been understood to be a decentralized network and lauded, then, for its capacity to endure even when parts of it get compromised.  What the hack of my site has underscored for me, however, is the extent to which the internet has become territorialized of late and thus subject to many of the same types of vulnerabilities it was once thought to have thwarted.  Algorithmic culture is the new mass culture.

Moving on, I’d rather not have spent a good chunk of my week cleaning up after another person’s mischief, but at least the attack gave me an excuse to do something I’d wanted to do for a while now: give Late Age a makeover.  For a while I’ve been feeling as if the site looked dated, and so I’m happy to give it a fresher look.  I’m not yet used to it, admittedly, but of course feeling comfortable in a new style of anything takes time.

The other major change I made was to optimize Late Age for viewing on mobile devices.  Now, if you’re visiting using your smart phone or tablet computer, you’ll see the same content but in significantly streamlined form.  I’m not one to believe that the PC is dead — at least, not yet — but for better or for worse it’s clear that mobile is very much at the center of the internet’s future.  In any case, if you’re using a mobile device and want to see the normal Late Age site, there’s a link at the bottom of the screen allowing you to switch back.

I’d be delighted to hear your feedback about the new Late Age of Print.  Drop me a line, and thanks to all of you who wrote in to let me know something was up with the old site.


Cultural Informatics

In my previous post I addressed the question, who speaks for culture in an algorithmic age?  My claim was that humanities scholars once held significant sway over what ended up on our cultural radar screens but that, today, their authority is diminishing in importance.  The work of sorting, classifying, hierarchizing, and curating culture now falls increasingly on the shoulders of engineers, whose determinations of what counts as relevant or worthy result from computational processes.  This is what I’ve been calling “algorithmic culture.”

The question I want to address this week is, what assumptions about culture underlie the latter approach?  How, in other words, do engineers — particularly computer scientists — seem to understand and then operationalize the culture part of algorithmic culture?

My starting point is, as is often the case, the work of cultural studies scholar Raymond Williams.  He famously observed in Keywords (1976) that culture is “one of the two or three most complicated words in the English language.”  The term is definitionally capacious, that is to say, a result of centuries of shedding and accreting meanings, as well as the broader rise and fall of its etymological fortunes.  Yet, Williams didn’t mean for this statement to be taken as merely descriptive; there was an ethic implied in it, too.  Tread lightly in approaching culture.  Make good sense of it, but do well not to diminish its complexity.

Those who take an algorithmic approach to culture proceed under the assumption that culture is “expressive.”  More specifically, all the stuff we make, practices we engage in, and experiences we have cast astonishing amounts of information out into the world.  This is what I mean by “cultural informatics,” the title of this post.  Algorithmic culture operates first of all by subsuming culture under the rubric of information — by understanding culture as fundamentally, even intrinsically, informational and then operating on it accordingly.

One of the virtues of the category “information” is its ability to link any number of seemingly disparate phenomena together: the movements of an airplane, the functioning of a genome, the activities of an economy, the strategies in a card game, the changes in the weather, etc.  It is an extraordinarily powerful abstraction, one whose import I have come to appreciate, deeply, over the course of my research.

The issue I have pertains to the epistemological entailments that flow from locating culture within the framework of information.  What do you have to do with — or maybe to — culture once you commit to understanding it informationally?

The answer to this question begins with the “other” of information: entropy, or the measure of a system’s disorder.  The point of cultural informatics is, by and large, to drive out entropy — to bring order to the cultural chaos by ferreting out the signal that exists amid all the noise.  This is basically how Google works when you execute a search.  It’s also how sites like Amazon.com and Netflix recommend products to you.  The presumption here is that there’s a logic or pattern hidden within culture and that, through the application of the right mathematics, you’ll eventually come to find it.
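Shannon’s formulation makes the notion of entropy concrete.  A minimal sketch (the distributions are illustrative only): a uniform distribution over outcomes is maximally disordered — all noise — while a heavily skewed one carries a detectable pattern, and correspondingly less entropy:

```python
import math

def shannon_entropy(probs):
    """Entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Four equally likely outcomes: no pattern to ferret out.
uniform = [0.25, 0.25, 0.25, 0.25]

# One dominant outcome: a pattern, hence far less entropy.
skewed = [0.85, 0.05, 0.05, 0.05]

print(shannon_entropy(uniform))  # 2.0 bits
print(shannon_entropy(skewed))   # well under 1 bit
```

In these terms, the project of cultural informatics is to move culture from the first column toward the second: to treat observed behavior as a skewed distribution whose regularities can be extracted.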

There’s nothing fundamentally wrong with this understanding of culture.  Something like it has kept anthropologists, sociologists, literary critics, and a host of others in business for well over a century.  Indeed there are cultural routines you can point to, whether or not you use computers to find them.  But having said that, it’s worth mentioning that culture consists of more than just logic and pattern.  Intrinsic to culture is, in fact, noise, or the very stuff that gets filtered out of algorithmic culture.

At least, that’s what more recent developments within the discipline of anthropology teach us.  I’m thinking of Renato Rosaldo’s fantastic book Culture and Truth (1989), and in particular of the chapter, “Putting Culture in Motion.”  There Rosaldo argues for a more elastic understanding of culture, one that refuses to see inconsistency or disorder as something needing to be purged.  “We often improvise, learn by doing, and make things up as we go along,” he states.  He puts it even more bluntly later on: “Do our options really come down to the vexed choice between supporting cultural order or yielding to the chaos of brute idiocy?”

The informatics of culture is oddly paradoxical in that it hinges on a more and less powerful conceptualization of culture.  It is more powerful because of the way culture can be rendered equivalent, informationally speaking, with all of those phenomena (and many more) I mentioned above.  And yet, it is less powerful because of the way the livingness, the inventiveness — what Eli Pariser describes as the “serendipity” — of culture must be shed in the process of creating that equivalence.

What is culture without noise?  What is culture besides noise?  It is a domain of practice and experience diminished in its complexity.  And it is exactly the type of culture Raymond Williams warned us about, for it is one we presume to know but barely know the half of.


Who Speaks for Culture?

I’ve blogged off and on over the past 15 months about “algorithmic culture.”  The subject first came to my attention when I learned about the Amazon Kindle’s “popular highlights” feature, which aggregates data about the passages Kindle owners have deemed important enough to underline.

Since then I’ve been doing a fair amount of algorithmic culture spotting, mostly in the form of news articles.  I’ve tweeted about a few of them.  In one case, I learned that in some institutions college roommate selection is now being determined algorithmically — often, by matching up individuals with similar backgrounds and interests.  In another, I discovered a pilot program that recommends college courses based on a student’s “planned major, past academic performance, and data on how similar students fared in that class.”  Even scholarly trends are now beginning to be mapped algorithmically in an attempt to identify new academic disciplines and hot-spots.
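At its simplest, that kind of roommate matching reduces to a similarity score computed over profiles.  A hypothetical sketch (names, interests, and the scoring choice are all my own invention) using Jaccard overlap of stated interests:

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity of two interest sets: overlap divided by union."""
    return len(a & b) / len(a | b)

# Invented student profiles.
profiles = {
    "alice": {"jazz", "hiking", "early riser"},
    "bo":    {"gaming", "night owl", "metal"},
    "cam":   {"jazz", "hiking", "night owl"},
}

# Pair the two most similar students.
best = max(combinations(profiles, 2),
           key=lambda pair: jaccard(profiles[pair[0]], profiles[pair[1]]))
print(best)  # ('alice', 'cam')
```

Real systems presumably weigh many more signals, but the structure is the same: profiles in, a similarity function, and a pairing that maximizes it.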

There’s much to be impressed by in these systems, both functionally and technologically.  Yet, as Eli Pariser notes in his highly engaging book The Filter Bubble, a major downside is their tendency to push people in the direction of the already known, reducing the possibility for serendipitous encounters and experiences.

When I began writing about “algorithmic culture,” I used the term mainly to describe how the sorting, classifying, hierarchizing, and curating of people, places, objects, and ideas was beginning to be given over to machine-based information processing systems.  The work of culture, I argued, was becoming increasingly algorithmic, at least in some domains of life.

As I continue my research on the topic, I see an even broader definition of algorithmic culture starting to emerge.  The preceding examples (and many others I’m happy to share) suggest that some of our most basic habits of thought, conduct, and expression — the substance of what Raymond Williams once called “culture as a whole way of life” — are coming to be affected by algorithms, too.  It’s not only that cultural work is becoming algorithmic; cultural life is as well.

The growing prevalence of algorithmic culture raises all sorts of questions.  What is the determining power of technology?  What understandings of people and culture — what “affordances” — do these systems embody? What are the implications of the tendency, at least at present, to encourage people to inhabit experiential and epistemological enclaves?

But there’s an even more fundamental issue at stake here, too: who speaks for culture?

For the last 150 years or so, the answer was fairly clear.  The humanities spoke for culture and did so almost exclusively.  Culture was both its subject and object.  For all practical purposes the humanities “owned” culture, if for no other reason than the arts, language, and literature were deemed too touchy-feely to fall within the bailiwick of scientific reason.

Today the tide seems to be shifting.  As Siva Vaidhyanathan has pointed out in The Googlization of Everything, engineers — mostly computer scientists — today hold extraordinary sway over what does or doesn’t end up on our cultural radar.  To put it differently, amid the din of our public conversations about culture, their voices are the ones that increasingly get heard or are perceived as authoritative.  But even this statement isn’t entirely accurate, for we almost never hear directly from these individuals.  Their voices manifest themselves in fragments of code and interface so subtle and diffuse that the computer seems to speak, and to do so without bias or predilection.

So who needs the humanities — even the so-called “digital humanities” — when your Kindle can tell you what in your reading you ought to be paying attention to?


The Billion Dollar Book

About a week ago Michael Eisen, who teaches evolutionary biology at UC Berkeley, blogged about a shocking discovery one of his postdocs had made in early April.  The discovery happened not in his lab, but of all places on Amazon.com.

While searching the site for a copy of Peter Lawrence’s book The Making of a Fly (1992), long out of print, the postdoc happened across two merchants selling secondhand editions for — get this — $1.7 million and $2.2 million respectively!  A series of price escalations ensued as Eisen returned to the product page over the following days and weeks until one seller’s copy topped out at $23 million.

But that’s not the worst of it.  One of the comments Eisen received on his blog post pointed to a different secondhand book selling on Amazon for $900 million.  It wasn’t an original edition of the Gutenberg Bible from the 1450s, nor was it a one-of-a-kind art book, either.  What screed was worth almost $1 billion?  Why, a paperback copy of actress Lana Turner’s autobiography, published in 1991, of course!  (I suspect the price may change, so in the event that it does, here’s a screen shot showing the price on Saturday, April 30th.)

Good scientist that he is, Eisen hypothesized that something wasn’t right about the prices on the fly book.  After all, they seemed to be adjusting themselves upward each time he returned to the site, and like two countries engaged in an arms race, they always seemed to do so in relationship to each other.  Eisen crunched some numbers:

On the day we discovered the million dollar prices, the copy offered by bordeebook [one of the sellers] was 1.270589 times the price of the copy offered by profnath [the other seller].  And now the bordeebook copy was 1.270589 times profnath again. So clearly at least one of the sellers was setting their price algorithmically in response to changes in the other’s price. I continued to watch carefully and the full pattern emerged. (emphasis added)

So the culprit behind the extraordinarily high prices wasn’t a couple of greedy (or totally out of touch) booksellers.  It was, instead, the automated systems — the computer algorithms — working behind the scenes in response to perceived market dynamics.
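The feedback loop Eisen describes is easy to simulate.  In the sketch below, one bot undercuts its rival slightly while the other applies the 1.270589 markup he reported; the undercut factor is my own invention for illustration, but any pair of factors whose product exceeds 1 produces the same runaway climb:

```python
# Two repricing bots reacting to each other's listed price.
# The 1.270589 markup is the ratio Eisen reported; the 0.9983
# undercut factor is an assumption made here for illustration.
MARKUP = 1.270589
UNDERCUT = 0.9983

def simulate(cycles, start_a=30.0, start_b=35.0):
    """Run the two repricers against each other, one update per cycle."""
    a, b = start_a, start_b
    history = []
    for _ in range(cycles):
        a = UNDERCUT * b   # seller A slightly undercuts seller B
        b = MARKUP * a     # seller B marks up seller A's price
        history.append((round(a, 2), round(b, 2)))
    return history

prices = simulate(45)
print(prices[-1])  # the higher price has passed $1 million
```

Each cycle multiplies the prices by roughly 1.268 (0.9983 × 1.270589), so the escalation is exponential: a $35 listing crosses the million-dollar mark in about a month and a half of daily updates, with no human ever intending it.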

I’ve spent the last couple of blog posts talking about algorithmic culture, and I believe what we’re seeing here — algorithmic pricing — may well be an extension of it.

It’s a bizarre development.  It’s bizarre not because computers are involved in setting prices (though clearly, in this case, they could have done a better job of it).  It is bizarre because of the way in which algorithms are being used to disrupt and ultimately manipulate — albeit not always successfully — the informatics of markets.

Indeed, I’m becoming convinced that algorithms (at least as I’ve been talking about them) are a response to the decentralized forms of social interaction that grew up out of, and against, the centralized forms of culture, politics, and economics that were prevalent in the second and third quarters of the 20th century.  Interestingly, the thinkers who conjured up the idea of decentralized societies often turned to markets — and more specifically, to the price system — in an attempt to understand how individuals distributed far and wide could effectively coordinate their affairs absent governmental and other types of intervention.

That makes me wonder: are the algorithms being used on Amazon and elsewhere an emergent form of “government,” broadly understood?  And if so, what does a billion-dollar book say about the prospects for good government in an algorithmic age?


Culturomics

I learned last month from Wired that something along the lines of what I’ve been calling “algorithmic culture” already has a name — culturomics.

According to Jonathan Keats, author of the magazine’s monthly “Jargon Watch” section, culturomics refers to “the study of memes and cultural trends using high-throughput quantitative analysis of books.”  The term was first noted in another Wired article, published last December, which reported on a study using Google books to track historical, or “evolutionary,” trends in language.  Interestingly, the study wasn’t published in a humanities journal.  It appeared in Science.

The researchers behind culturomics have also launched a website allowing you to search the Google book database for keywords and phrases, to “see how [their] usage frequency has been changing throughout the past few centuries.”  They follow up by calling the service “addictive.”
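The kind of query the site supports, a keyword’s relative frequency of use per year, can be sketched in a few lines.  The toy corpus and function name below are invented for illustration; the actual study computed such frequencies over millions of digitized books.

```python
from collections import Counter

# Toy two-year 'corpus' (invented for illustration); the real dataset
# is the Google Books scan, with millions of volumes per year.
corpus = {
    1950: "the crowd gathered and the crowd dispersed into the night",
    2000: "the algorithm ranked the crowd and the algorithm decided",
}

def usage_frequency(keyword, year):
    """Occurrences of `keyword` as a share of all words printed that year."""
    words = corpus[year].lower().split()
    return Counter(words)[keyword] / len(words)

# usage_frequency("algorithm", 1950) is 0.0 in this toy corpus, while
# the same query for 2000 returns a nonzero share: plotting such shares
# across years is what the culturomics site does at scale.
```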

Culturomics weds “culture” to the suffix “-nomos,” the anchor for words like economics, genomics, astronomy, physiognomy, and so forth.  “-Nomos” can refer either to “the distribution of things” or, more specifically, to a “worldview.”  In this sense culturomics refers to the distribution of language resources (words) in the extant published literature of some period, and to the kinds of worldviews those distributions embody.

I must confess to being intrigued by culturomics, however clunky I find the term.  My initial work on algorithmic culture tracks language changes in and around three keywords — information, crowd, and algorithm — in the spirit of Raymond Williams’ Culture and Society, and it has given me a new appreciation for both the sociality of language and its capacity for transformation.  Methodologically culturomics seems, well, right, and I’ll be intrigued to see what a search for my keywords on the website might yield.

Having said that, I still want to hold onto the idea of algorithmic culture.  I prefer the term because it places the algorithm center-stage rather than allowing it to recede into the background, as does culturomics.  Algorithmic culture encourages us to see computational process not as a window onto the world but as an instrument of order and authoritative decision making.  The point of algorithmic culture, both terminologically and methodologically, is to help us understand the politics of algorithms and thus to approach them and the work they do more circumspectly, even critically.

I should mention, by the way, that this is increasingly how I’ve come to understand the so-called “digital humanities.”  The digital humanities aren’t just about doing traditional humanities work on digital objects, nor are they only about making the shift in humanities publishing from analog to digital platforms.  Instead the digital humanities, if there is such a thing, should focus on the ways in which the work of culture is increasingly delegated to computational process and, more importantly, the political consequences that follow from our doing so.

And this is the major difference, I suppose, between an interest in the distribution of language resources — culturomics — and a concern for the politics of the systems we use to understand those distributions — algorithmic culture.


Algorithmic Culture, Redux

Back in June I blogged here about “Algorithmic Culture,” or the sorting, classifying, and hierarchizing of people, places, objects, and ideas using computational processes.  (Think Google search, Amazon’s product recommendations, who gets featured in your Facebook news feed, etc.)  Well, for the past several months I’ve been developing an essay on the theme, and it’s finally done.  I’ll be debuting it at Vanderbilt University’s “American Cultures in the Digital Age” conference on Friday, March 18th, which I’m keynoting along with Kelly Joyce (College of William & Mary), Cara Finnegan (University of Illinois), and Eszter Hargittai (Northwestern University).  Needless to say, I’m thrilled to be joining such distinguished company at what promises to be, well, an event.

The piece I posted originally on algorithmic culture generated a surprising — and exciting — amount of response.  In fact, nine months later, it’s still receiving pingbacks, most likely because it has found its way onto one or more college syllabuses.  So between that and the good results I’m seeing in the essay, I’m seriously considering developing the material on algorithmic culture into my next book.  Originally, after Late Age, I’d planned on focusing on contemporary religious publishing, but increasingly I feel as if that will have to wait.

Drop by the conference if you’re in or around the Nashville area on Friday, March 18th.  I’m kicking things off starting at 9:30 a.m.  And for those of you who can’t make it there, here’s the title slide from the PowerPoint presentation, along with a little taste of the talk’s conclusion:

This latter definition—culture as authoritative principle—is, I believe, the definition that’s chiefly operative in and around algorithmic culture. Today, however, it isn’t culture per se that is a “principle of authority” but increasingly the algorithms to which is delegated the task of driving out entropy, or in Matthew Arnold’s language, “anarchy.”  You might even say that culture is fast becoming—in domains ranging from retail to rental, search to social networking, and well beyond—the positive remainder of specific information processing tasks, especially as they relate to the informatics of crowds.  And in this sense algorithms have significantly taken on what, at least since Arnold, has been one of culture’s chief responsibilities, namely, the task of “reassembling the social,” as Bruno Latour puts it—here, though, by discovering statistical correlations that would appear to unite an otherwise disparate and dispersed crowd of people.

I expect to post a complete draft of the piece on “Algorithmic Culture” to my project site once I’ve tightened it up a bit. Hopefully it will generate even more comments, questions, and provocations than the blog post that inspired the work initially.

In the meantime, I’d welcome any feedback you may have about the short excerpt appearing above, or on the talk if you’re going to be in Nashville this week.
