Archive for Algorithmic Culture

New Material on Algorithmic Culture

A quick announcement about two new pieces from me, both of which relate to my ongoing research on the subject of algorithmic culture.

The first is an interview with Giuseppe Granieri, posted on his Futurists’ Views site over on Medium.  The tagline is: “Culture now has two audiences: people and machines.”  It’s a free-ranging conversation, apparently readable in six minutes, about algorithms, AI, the culture industry, and the etymology of the word culture.

About that word: over on Culture Digitally you’ll find a draft essay of mine, examining culture’s shifting definition in relation to digital technology.  The piece is available for open comment and reflection.  It’s the first in a series from Ben Peters’ “Digital Keywords” project, of which I’m delighted to be a part.  Thanks in advance for your feedback—and of course with all of the provisos that accompany draft material.



Late Age On the Radio

Just a quick post linking you to my latest radio interview, with WFHB-Bloomington’s Doug Storm.  Doug is one of the hosts of a great program called “Interchange,” and this past Tuesday I was delighted to share with him a broad-ranging conversation about many of the topics I address in The Late Age of Print—the longevity of books, print (and paper) culture, reading practices, taste hierarchies, and more.  Toward the end, the conversation turned to my latest work, on the politics of algorithmic culture.

The program lasts about an hour.  Enjoy!


Call for Papers – EJCS on Data Mining & Analytics

Call for Papers: The European Journal of Cultural Studies
Special issue on Data Mining/Analytics

Editors: Mark Andrejevic (University of Queensland, Australia); Alison Hearn (University of Western Ontario, Canada); Helen Kennedy (University of Leeds, UK)

The widespread use of social media has given rise to new forms of monitoring, mining and aggregation strategies designed to monetize the huge volumes of data such usage produces. Social media monitoring and analysis industries, experts and consultancies have emerged offering a broad range of social media intelligence and reputation management services. Such services typically involve a range of analytical methods (sentiment analysis, opinion mining, social network analysis, machine learning, natural language processing), often offered in black-boxed proprietary form, in order to gain insights into public opinion, mood, networks and relationships and identify potential word-of-mouth influencers. Ostensibly, these various forms of data mining, analytics and machine learning are also paving the way for the development of a more intelligent or ‘semantic’ Web 3.0, offering a more ‘productive and intuitive’ user experience. As commercial and non-commercial organisations alike seek to monitor, influence, manage and direct social media conversations, and as global usage of social media expands, questions surface that challenge celebratory accounts of the democratizing, participatory possibilities of social media. Remembering that Web 2.0 was always intended as a business manifesto – O’Reilly’s early maxims included, after all, ‘data is the next Intel inside’, ‘users add value’ and ‘collaboration as data collection’ – we need to interrogate social media not only as communication tools, but also as techno-economic constructs with important implications for the management of populations and the formation of subjects. Data mining and analytics are about much more than targeted advertising: they envision new strategies for forecasting, targeting, and decision making in a growing range of social realms (employment, education, health care, policing, urban planning, epidemiology, etc.) with the potential to usher in new, unaccountable, and opaque forms of discrimination, sorting, inclusion and exclusion. As Web 3.0 and the ‘big data’ it generates move inexorably toward predictive analytics and the overt technocratic management of human sociality, urgent questions arise about how such data are gathered, constructed and sold, to what ends they are deployed, who gets access to them, and how their analysis is regulated (boyd and Crawford 2012).

This special issue aims to bring together scholars who interrogate social media intelligence work undertaken in the name of big data, big business and big government. It aims to draw together empirically grounded and theoretically informed analyses of the key issues in contemporary forms of data mining and analytics from across disparate fields and methodologies. Contributions are invited that address a range of related issues. Areas for consideration could include, but are not limited to:

  • Political economy of social media platforms
  • Algorithmic culture
  • User perspectives on data mining
  • The politics of data visualisation
  • Big data and the cultural industries
  • Data journalism
  • The social life of big data methods
  • Inequalities and exclusions in data mining
  • Affective prediction and control
  • Data mining and new subjectivities
  • Ethics, regulation and data mining
  • Conceptualising big/data/mining
  • Social media intelligence at work
  • Social media and surveillance
  • Critical histories of data mining, sorting, and surveillance

Prospective contributors should email an abstract of 500-700 words to the issue editors by 9th December 2013. Full articles should be submitted to Helen Kennedy by 12th May 2014. Manuscripts must be no longer than 7,000 words. Articles should meet The European Journal of Cultural Studies’ aim to promote empirically based, theoretically informed cultural studies; essayist discussion papers are not normally accepted by this journal. All articles will be refereed: invitation to submit a paper to the special issue in no way guarantees that the paper will be published; this is dependent on the review process.

Abstract deadline: 9th December 2013;
Decisions on abstracts communicated by 13th January 2014;
Article submission deadline: 12th May 2014;
Final submission/review process complete: 13th October 2014;
For publication in 2015.


Call for Papers – Rhetoric and Computation

If you’re interested in algorithmic culture, etc., then you might want to consider submitting to this special issue of Computational Culture—an excellent, peer-reviewed open access journal.

Call for Papers: Special Issue of Computational Culture on Rhetoric and Computation

Rhetoric has historically been a discipline concerned with the ways that spoken and written language shape human activity. Similarly, emerging work in digital media studies (in areas such as software studies, critical code studies, and platform studies) seeks to describe the ways that computation shapes contemporary life. This special issue of Computational Culture on “Rhetoric and Computation” merges these two modes of inquiry to explore how together they can help us to understand ways that our communication and computational activities are now constituted by both human and computer languages.

Coupling the analysis of rhetoric with computation provokes a number of questions: How is the rhetorical force of computational objects different from or similar to that of language, sound, or image? What new modes of communication open up when we view computation as an expressive medium? How does computation shape or constrain rhetorical action? What new tropes, figures, and strategies emerge in computational environments? How do programmers deploy rhetoric at the level of code and interface? These questions are not exhaustive, and we welcome papers or computational projects that pursue these questions and others like them.
Topics or projects might include:

  • Computational artifacts (such as video games or art installations) designed to make procedural arguments and model systems or phenomena
  • Analysis of multiple choice tests processed by computers as rhetorical artifacts, aimed at both human (citizens, students) and nonhuman (machine) audiences.
  • How computational strategies such as surveillance supersede more traditional spheres of rhetorical deliberation such as written law
  • The ways in which computational data interpellate individuals and define citizenship
  • Strategies of the “quantified self” as a way of shaping human behaviour
  • Rhetorical analysis of computational systems used by governmental, educational, and political entities
  • How computational systems are described for different audiences from groups of expert programmers to the general public
  • The use of software algorithms to simulate and evaluate various activities, such as writing and conversation
  • Rhetorical strategies deployed by communities of programmers and designers in marginal comments, online forums or physical workplaces
  • Analysis of computational machines as rhetors (i.e., understanding the actions of such machines in terms of the tropes, figures, and strategies they deploy)

300 word abstracts are due November 25, 2013. Abstracts will be reviewed by the Computational Culture Editorial Board and the special issue editors. Authors of selected abstracts will be notified by January 31, 2014 and invited to submit full manuscripts by April 1, 2014. These manuscripts are subject to outside peer review according to Computational Culture’s policies. The issue will be published Fall 2014.

Please send abstracts and inquiries to Jim Brown and Annette Vee.
James J. Brown, Jr., Assistant Professor
Department of English and Program in Digital Studies, University of Wisconsin-Madison

Annette Vee, Assistant Professor
Department of English, University of Pittsburgh

Computational Culture is an online open-access peer-reviewed journal of inter-disciplinary enquiry into the nature of cultural computational objects, practices, processes and structures.


East Coast Code

There’s lots to like about Lawrence Lessig’s book, Code 2.0—particularly, I find, the distinction he draws between “East Coast Code” (i.e., the law) and “West Coast Code” (i.e., computer hardware and software). He sees both as modes of bringing order to complex systems, albeit through different means. Lessig is also interested in the ways in which West Coast Code has come to be used in ways that strongly resemble, and sometimes even supersede, its East Coast counterpart, as in the case of digital rights management technology. “Code is law,” as he so aptly puts it.

I’ve been playing with something like Lessig’s East Coast-West Coast Code distinction in my ongoing research on algorithmic culture. As I’ve said many times now, “algorithmic culture” refers to the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas, as well as to the habits of thought, conduct, and expression that flow from those processes. Essentially we’re talking about the management of a complex system—culture—by way of server farms and procedural decision-making software. Think Google or Facebook; this is West Coast Code at its finest.

Perhaps better than anyone, Fred Turner has chronicled the conditions out of which West Coast Code emerged. In From Counterculture to Cyberculture, he shows how, in the 1960s, Stewart Brand and his circle of countercultural compadres humanized computers, which were then widely perceived to be instruments of the military-industrial complex. Through the pages of the Whole Earth Catalog, Brand and company suggested that computers were, like shovels, axes, and hoes, tools with which to craft civilization—or rather to craft new-styled, autonomous civilizations that would no longer depend on the state (i.e., East Coast Code) to manage human affairs.

The deeper I delve into my own research, the more I discover just how complicated—and indeed, how East Coast—is the story of algorithmic culture. I don’t mean to diminish the significance of the work that’s been done about the West Coast, by any means. But just as people had to perform creative work to make computers seem personal, even human, so, too, did people need to perform similar work on the word culture to make it make sense within the realm of computation. And this happened mostly back East, in Cambridge, MA.

“Of course,” you’re probably thinking, “at MIT.” It turns out that MIT wasn’t the primary hub of this semantic and conceptual work, although it would be foolish to deny the influence of famed cybernetician Norbert Wiener here. Where the work took place was at that other rinky-dink school in Cambridge, MA: Harvard. Perhaps you’ve heard of it?

A good portion of my research now is focused on Harvard’s Department of Social Relations, an experimental unit combining Sociology, Psychology, and Cultural Anthropology. It had a relatively short existence, lasting only from 1946-1970, but in that time it graduated people who went on to become the titans of postwar social theory. Clifford Geertz, Stanley Milgram, and Harold Garfinkel are among the most notable PhDs, although myriad other important figures passed through the program as well. One of the more intriguing people I turned up was Dick Price, who went on to found the Esalen Institute (back to the West Coast) after becoming disillusioned by the Clinical Psychology track in SocRel and later suffering a psychotic episode. Dr. Timothy Leary also taught there, from 1961-1963, though he was eventually fired because of his controversial research on the psychological effects of LSD.

I’ve just completed some work focusing on Clifford Geertz and the relationship he shared with Talcott Parsons, his dissertation director and chair of SocRel from 1946-1956. It’s here more than anywhere that I’m discovering how the word culture got inflected by the semantics of computation. Though Geertz would later move away from the strongly cybernetic conceptualization of culture he’d inherited from Parsons, it nonetheless underpins arguably his most important work, especially the material he published in the 1960s and early 70s. This includes his famous “Notes on the Balinese Cockfight,” which is included in the volume The Interpretation of Cultures.

My next stop is Stanley Milgram, where I’ll be looking first at his work on crowd behavior and later at his material on the “small world” phenomenon. The former complicates the conclusions of his famous “obedience to authority” experiments in fascinating ways, and, I’d argue, sets the stage for the notion of “crowd wisdom” so prevalent today. Apropos of the latter, I’m intrigued by how Milgram helped to shrink the social on down to size, as it were, just as worries about the scope and anonymizing power of mass society reached a fever pitch. He did for society essentially what Geertz and Parsons did for culture, I believe, particularly in helping to establish conceptual conditions necessary for the algorithmic management of social relations. Oh—and did I mention that Milgram’s Obedience book, published in 1974, is also laden with cybernetic theory?

To be clear, the point of all this East Coast-West Coast business isn’t to create some silly rivalry—among scholars of computation, or among their favorite historical subjects. (Heaven knows, it would never be Biggie and Tupac!) The point, rather, is to draw attention to the semantic and social-theoretical conditions underpinning a host of computational activities that are prevalent today—conditions whose genesis occurred significantly back East. The story of algorithmic culture isn’t only about hippies, hackers, and Silicon Valley. It’s equally a story about squares who taught and studied at maybe the most elite institution of higher education on America’s East Coast.


Algorithms Are Decision Systems

My latest interview on the topic of algorithmic culture is now available on the 40kBooks blog.  It’s an Italian website, although you can find the interview in both the original English and in Italian translation.

The interview provides something like a summary of my latest thinking on algorithmic culture, a good deal of which was born out of the new research that I blogged about here last time.  Here’s an excerpt from the interview:

Culture has long been about argument and reconciliation: argument in the sense that groups of people have ongoing debates, whether explicit or implicit, about their norms of thought, conduct, and expression; and reconciliation in the sense that virtually all societies have some type of mechanism in place – always political – by which to decide whose arguments ultimately will hold sway. You might think of culture as an ongoing conversation that a society has about how its members ought to comport themselves.

Increasingly today, computational technologies are tasked with the work of reconciliation, and algorithms are a principal means to that end. Algorithms are essentially decision systems—sets of procedures that specify how someone or something ought to proceed given a particular set of circumstances. Their job is to consider, or weigh, the significance of all of the arguments or information floating around online (and even offline) and then to determine which among those arguments is the most important or worthy. Another way of putting this would be to say that algorithms aggregate a conversation about culture that, thanks to technologies like the internet, has become ever more diffuse and disaggregated.
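The “decision system” described in the excerpt can be sketched, very schematically, as a weighted ranking procedure. This is a purely hypothetical toy, not any real platform’s method: every item, feature, and weight below is invented for illustration.

```python
# Toy "decision system": weigh competing items, then hierarchize them.
# All names, features, and weights are hypothetical, for illustration only.

def rank_items(items, weights):
    """Score each item by a weighted sum of its features, then
    return the items ordered from most to least 'worthy'."""
    def score(features):
        return sum(weights[name] * value for name, value in features.items())
    return sorted(items, key=lambda item: score(item["features"]), reverse=True)

# Hypothetical "arguments" competing for attention, each with feature scores.
items = [
    {"name": "post A", "features": {"recency": 0.9, "popularity": 0.2}},
    {"name": "post B", "features": {"recency": 0.4, "popularity": 0.8}},
    {"name": "post C", "features": {"recency": 0.1, "popularity": 0.3}},
]
weights = {"recency": 0.5, "popularity": 0.5}

ranking = [item["name"] for item in rank_items(items, weights)]
# post B scores 0.6, post A scores 0.55, post C scores 0.2,
# so the system "decides" the order: B, A, C.
```

The politics, of course, live in the weights: whoever sets them settles, in advance, whose arguments hold sway.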

Something I did not address at any length in the interview is the historical backdrop against which I’ve set the new research: the Second World War, particularly the atrocities that precipitated, occurred during, and concluded it.  My hypothesis is that the desire to offload cultural decision-making onto computer algorithms stems significantly, although not exclusively, from a crisis of faith that emerged in and around World War II.  No longer, it seems, could we human beings be trusted to govern ourselves ethically and responsibly, and so some other means needed to be sought to do the job we’re seemingly incapable of doing.

A bunch of readers have asked me if I’ve published any of my work on algorithmic culture in academic journals.  The answer, as yet, is no, mostly because I’m working on developing and refining the ideas here, in dialogue with all of you, before formalizing my position.  (THANK YOU for the ongoing feedback, by the way!)  Having said that, I’m polishing the piece I blogged about last time, “‘An Infernal Culture Machine’: Intellectual Foundations of Algorithmic Culture,” and plan on submitting it to a scholarly journal fairly soon.  You’re welcome to email me directly if you’d like a copy of the working draft.

P.S. If you haven’t already, check out Tarleton Gillespie’s latest post over on Culture Digitally, about his new essay on “The Relevance of Algorithms.”


Updates on Algorithmic Culture

You know you haven’t checked in on your blog in a while when there are close to 7,000 comments sitting in your spam filter.  Sigh.  Sorry about that.  The good news is that I’ve been busy producing a bunch of new material on algorithmic culture that I’m excited to share here, finally.

The first is a podcast on “Algorithms and Cultural Production” that you can hear on Culture Digitally.  It’s a conversation between me and the two principals over at C.D., Tarleton Gillespie and Hector Postigo.  You may know Tarleton from his great work on the politics of Twitter trends, which you can read on Salon, among many other notable works.  Hector just published his own book, The Digital Rights Movement: The Role of Technology in Subverting Digital Copyright (MIT Press), and a co-edited volume, Managing Privacy Through Accountability (Palgrave Macmillan); both look excellent and I look forward to reading them.

The other major work is an essay I’ve been pecking away at for the last few months entitled, “An Infernal Culture Machine: Intellectual Foundations of Algorithmic Culture.”  I’ve finally got a finished draft in hand, and I’ll be debuting it on Wednesday, November 7 at the Center for the Humanities (CHAT) Lounge at Temple University in Philadelphia (Gladfelter Hall, 10th floor).  The time is 4:00–5:30 pm.

The essay is prompted by the question, “What is culture today?” which I ask recognizing that our experiences of culture may not entirely square with the standard definitions you’ll find in dictionaries.  I’ll be looking specifically at the emergence of an algorithmic understanding of culture in the third quarter of the twentieth century and its uptake today in systems like Facebook, Amazon, Netflix, and others.  Here’s the abstract, in case you’re interested:

An Infernal Culture Machine: Intellectual Foundations of Algorithmic Culture

The word culture has changed dramatically over the last sixty years, stretching its meaning in ways that people may be able to recognize but not fully articulate.  My talk traces that shift to culture’s encounter with cybernetic theory, a body of research whose central concern is the process of communication and control in complex systems. Its main focus is the prevailing sociological and anthropological literature on culture of postwar America, particularly that of the third quarter of the 20th century. The writings of Talcott Parsons and Clifford Geertz are exemplary in this regard, but an individual lesser known to the human sciences figures prominently here as well: the termite scientist Alfred E. Emerson, whose influence on Parsons’ conceptualization of culture was particularly deep and abiding. I intend to show how, within this constellation of work, we can begin to register the historical rudiments of what, in our own time, has coalesced into the phenomenon of “algorithmic culture,” or the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas.

The essay was a blast to write, taking me into the realm of etymology, entomology, and even Parsons’ FBI file.  It sounds eclectic, but the narrative holds together pretty well, I assure you.

I can’t promise when exactly I’ll be back here again, but I will be back.  You know I love you, readers!



Cloud Control

Okay, I fibbed.  Almost two months ago I promised I’d be back blogging regularly.  Obviously, that hasn’t been the case — not by a long shot.  My summer got eaten up with writing, travel, the Crossroads in Cultural Studies conference, lots of student obligations, and a bunch of other things.  The blogging never materialized, unfortunately, which seems to be a trend for me in the summertime.  Maybe one of these years I’ll just accept this fact and declare a formal hiatus.

Anyway, I have lots of good material to blog about but not much time to do so — at least, not right now.  To tide you over, then, I’m linking you to my latest interview with Future Tense, the great weekly radio show on technology and culture produced by the Australian Broadcasting Corporation.  The topic is cloud computing, which is timely and important given the migration of great swaths of information from people’s home computers and laptops to Amazon Web Services, Dropbox, Google Drive, iCloud, Microsoft Cloud Services, and other offsite storage services.  Mine is the third interview following the one with danah boyd, with whom I was pleased to share the stage as it were.  The direct link to the mp3 audio file of the program is here if you want to cut right to the chase.

This is my second interview with Future Tense.  Back in March I recorded a show about algorithms with them, based on my ongoing research on algorithmic culture.  What a blast to have a chance to chat again with FT’s great host, Antony Funnell!

So, more anon.  I can’t tell you when, exactly, though my best guess would be towards the end of the month.  Rest assured — and I really mean this — I’ll be back.  You know I can’t stay away for too long!


Two Interviews

My blogging got interrupted as a result of my (very welcome) spring break travels, so apologies for not posting any new material last week.  But it wasn’t just travel that kept me from writing.  I’ve also been busy giving interviews about my past and current research projects, which, truth be told, were a real blast to do.  Here’s a bit about them.

The first is a two-part Q & A with the great Henry Jenkins, author of Convergence Culture (NYU Press, 2006) and Textual Poachers (Routledge, 1992), among many other notable books and articles.  The interview with Henry was a great opportunity to sit down and revisit arguments and themes from The Late Age of Print, now three years on.  It also gave me a chance to reflect a bit on what Late Age might have looked like were I writing it today, e.g., in light of Borders’ recent liquidation,’s forays into social media-based e-reading, and more.  Part I of the interview, which focuses mostly on the first half of Late Age, is here;  part II, which focuses largely on material from the second half of the book, is here.

I was also interviewed recently by the good folks at “Future Tense,” a fantastic radio program produced for the Australian Broadcasting Corporation.  For those of you who may be unacquainted with the show, here’s a little information about it: “Future Tense explores the social, cultural, political and economic fault lines arising from rapid change. The weekly half-hour program/podcast takes a critical look at new technologies, new approaches and new ways of thinking. From politics to social media to urban agriculture, nothing is outside our brief.”  Great stuff, needless to say, and so I was thrilled when they approached me to talk about my recent work on algorithmic culture as part of their March 25th program, “The Algorithm.”  You can listen to the complete show here.  Mine is the first voice you’ll hear following host Antony Funnell’s introduction of the program.

Thanks for reading, listening, and commenting.  And while you’re at it,  please don’t forget to like the new Late Age of Print Facebook page.


“The Shannon and Weaver Model”

First things first: some housekeeping.  Last week I launched a Facebook page for The Late Age of Print.   Because so many of my readers are presumably Facebook users, I thought it might be nice to create a “one-stop shop” for updates about new blog content, tweets, and anything else related to my work on the relationship between print media and algorithmic culture.  Please check out the page and, if you’re so inclined, give it a like.

Okay…on to matters at hand.

This week I thought it might be fun to open with a little blast from the past.  Below is a picture of the first page of my notebook from my first collegiate communication course.  I was an eighteen-year-old beginning my second semester at the University of New Hampshire, and I had the good fortune of enrolling in Professor W—-’s introductory “Communication and the Social Order” course, CMN 402.  It wouldn’t be an overstatement to call the experience life changing, since the class essentially started me on my career path.

What interests me (beyond the hilariously grumpy-looking doodle in the margin) is a diagram appearing toward the bottom of the page.  It’s an adaptation of what I would later be told was the “Shannon and Weaver” model of communication, named for the electrical engineer Claude Shannon and the mathematician Warren Weaver.

CMN 402 - UNH Jan. 28, 1992

Note what I jotted down immediately below the diagram: “1.) this model is false (limited) because comm is only one way (linear); 2.) & assumes that sender is active & receiver is passive; & 3.) ignores the fact that sender & receiver interact w/ one another.”  Here’s what the model looks like in its original form, as published in Shannon and Weaver’s Mathematical Theory of Communication (1949, based on a paper Shannon published in 1948).

Shannon & Weaver Model of Communication, 1948/1949

Such was the lesson from day one of just about every communication theory course I subsequently took and, later on, taught.  Shannon and Weaver were wrong.  They were scientists who didn’t understand people, much less how we communicate.  They reduced communication to a mere instrument and, in the process, stripped it of its deeply humane, world-building dimensions.  In graduate school I discovered that if you really wanted to pull the rug out from under another communication scholar’s work, you accused them of premising their argument on the Shannon and Weaver model.  It was the ultimate trump card.

So the upshot was, Shannon and Weaver’s view of communication was worth lingering on only long enough to reject it.  Twenty years later, I see something more compelling in it.

A couple of things started me down this path.  Several years ago I read Tiziana Terranova’s wonderful book Network Culture: Politics for the Information Age (Pluto Press, 2004), which contains an extended reflection on Shannon and Weaver’s work.  Most importantly she takes it seriously, thinking through its relevance to contemporary information ecosystems.  Second, I happened across an article in the July 2010 issue of Wired magazine called “Sergey’s Search,” about Google co-founder Sergey Brin’s use of big data to find a cure for Parkinson’s Disease, for which he is genetically predisposed.  This passage in particular made me sit up and take notice:

In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”

Here was my aha! moment.  A Google search initiates a process of filtering the web, which, according to Brin, starts out as a thick soup of noisy data.  Its algorithms ferret out the signal amid all this noise, probabilistically, yielding the rank-ordered results you end up seeing on screen.
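Brin’s intuition, that sheer quantity of noisy data lets a pattern emerge, can be illustrated with a minimal, purely hypothetical simulation: take repeated noisy observations of a hidden signal and aggregate them by majority vote. Nothing here resembles Google’s actual systems; it is just the signal-and-noise logic in miniature.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

# The hidden "signal": the true message bits.
signal = [1, 0, 1, 1, 0, 0, 1, 0]

def noisy_observation(bits, flip_prob=0.3):
    """One noisy reading of the signal: each bit flips with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def recover(observations):
    """Majority vote per bit position: aggregation filters out the noise."""
    n = len(observations)
    return [1 if sum(obs[i] for obs in observations) > n / 2 else 0
            for i in range(len(observations[0]))]

# Any single observation is unreliable; a thousand, aggregated, are not.
many = [noisy_observation(signal) for _ in range(1001)]
recovered = recover(many)
# With 1001 observations at 30% noise, the majority vote almost surely
# reproduces the original signal exactly.
```

Each individual reading is wrong almost a third of the time, yet the aggregate is essentially error-free, which is Brin’s point: “Even if any given individual’s information is not of that great quality, the quantity can make a big difference.”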

It’s textbook Shannon and Weaver.  And here it is, at the heart of a service that handles three billion searches per day — which is to say nothing of Google’s numerous other products, let alone those of its competitors, that behave accordingly.

So how was it, I wondered, that my discipline, Communication Studies, could have so completely missed the boat on this?  Why do we persist in dismissing the Shannon and Weaver model, when it’s had such uptake in and application to the real world?

The answer has to do with how one understands the purposes of theory.  Should theory provide a framework for understanding how the world actually works?  Or should it help people to think differently about their world and how it could work?  James Carey puts it more eloquently in Communication as Culture: Essays on Media and Society: “Models of communication are…not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal” (p. 32).

The genius of Shannon’s original paper from 1948 and its subsequent popularization by Weaver lies in many things, among them their having formulated a model of communication located on the threshold of these two understandings of theory.  As a scientist Shannon surely felt accountable to the empirical world, and his work reflects that.  Yet, it also seems clear that Shannon and Weaver’s work has, over the last 60 years or so, taken on a life of its own, feeding back into the reality they first set about describing.  Shannon and Weaver didn’t merely model the world; they ended up enlarging it, changing it, and making it over in the image of their research.

And this is why, twenty years ago, I was taught to reject their thinking.  My colleagues in Communication Studies believed Shannon and Weaver were trying to model communication as it really existed.  Maybe they were.  But what they were also doing was pointing in the direction of a nascent way of conceptualizing communication, one that’s had more practical uptake than any comparable framework Communication Studies has thus far managed to produce.

Of course, in 1992 the World Wide Web was still in its infancy; Sergey Brin and Larry Page were, like me, just starting college; and Google wouldn’t appear on the scene for another six years.  I can’t blame Professor W—- for misinterpreting the Shannon and Weaver model.  If anything, all I can do is say “thank you” to her for introducing me to ideas so rich that I’ve wrestled with them for two decades.