Tag Archive for Google

"The Shannon and Weaver Model"

First things first: some housekeeping.  Last week I launched a Facebook page for The Late Age of Print.   Because so many of my readers are presumably Facebook users, I thought it might be nice to create a “one-stop shop” for updates about new blog content, tweets, and anything else related to my work on the relationship between print media and algorithmic culture.  Please check out the page and, if you’re so inclined, give it a like.

Okay…on to matters at hand.

This week I thought it might be fun to open with a little blast from the past.  Below is a picture of the first page of my notebook from my first collegiate communication course.  I was an eighteen-year-old beginning my second semester at the University of New Hampshire, and I had the good fortune of enrolling in Professor W—-’s introductory “Communication and the Social Order” course, CMN 402.  It wouldn’t be an overstatement to call the experience life-changing, since the class essentially started me on my career path.

What interests me (beyond the hilariously grumpy-looking doodle in the margin) is a diagram appearing toward the bottom of the page.  It’s an adaptation of what I would later be told was the “Shannon and Weaver” model of communication, named for the electrical engineer Claude Shannon and the mathematician Warren Weaver.

CMN 402 - UNH Jan. 28, 1992

Note what I jotted down immediately below the diagram: “1.) this model is false (limited) because comm is only one way (linear); 2.) & assumes that sender is active & receiver is passive; & 3.) ignores the fact that sender & receiver interact w/ one another.”  Here’s what the model looks like in its original form, as published in Shannon and Weaver’s Mathematical Theory of Communication (1949, based on a paper Shannon published in 1948).

Shannon & Weaver Model of Communication, 1948/1949

Such was the lesson from day one of just about every communication theory course I subsequently took and, later on, taught.  Shannon and Weaver were wrong.  They were scientists who didn’t understand people, much less how we communicate.  They reduced communication to a mere instrument and, in the process, stripped it of its deeply humane, world-building dimensions.  In graduate school I discovered that if you really wanted to pull the rug out from under another communication scholar’s work, you accused them of premising their argument on the Shannon and Weaver model.  It was the ultimate trump-card.

So the upshot was, Shannon and Weaver’s view of communication was worth lingering on only long enough to reject it.  Twenty years later, I see something more compelling in it.

A couple of things started me down this path.  Several years ago I read Tiziana Terranova’s wonderful book Network Culture: Politics for the Information Age (Pluto Press, 2004), which contains an extended reflection on Shannon and Weaver’s work.  Most importantly, she takes it seriously, thinking through its relevance to contemporary information ecosystems.  Second, I happened across an article in the July 2010 issue of Wired magazine called “Sergey’s Search,” about Google co-founder Sergey Brin’s use of big data to find a cure for Parkinson’s disease, to which he is genetically predisposed.  This passage in particular made me sit up and take notice:

In epidemiology, this is known as syndromic surveillance, and it usually involves checking drugstores for purchases of cold medicines, doctor’s offices for diagnoses, and so forth. But because acquiring timely data can be difficult, syndromic surveillance has always worked better in theory than in practice. By looking at search queries, though, Google researchers were able to analyze data in near real time. Indeed, Flu Trends can point to a potential flu outbreak two weeks faster than the CDC’s conventional methods, with comparable accuracy. “It’s amazing that you can get that kind of signal out of very noisy data,” Brin says. “It just goes to show that when you apply our newfound computational power to large amounts of data—and sometimes it’s not perfect data—it can be very powerful.” The same, Brin argues, would hold with patient histories. “Even if any given individual’s information is not of that great quality, the quantity can make a big difference. Patterns can emerge.”

Here was my aha! moment.  A Google search initiates a process of filtering the web, which, according to Brin, starts out as a thick soup of noisy data.  Its algorithms ferret out the signal amid all this noise, probabilistically, yielding the rank-ordered results you end up seeing on screen.
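
Purely for illustration, here is a toy sketch of the kind of logic Brin is gesturing at: smooth a noisy series of daily query counts, then flag days that run well ahead of the recent baseline.  The numbers, window, and threshold are all invented; this is the general shape of syndromic surveillance, not Google’s actual Flu Trends method.

```python
# A toy illustration of pulling a "signal" out of noisy counts.
# The data and threshold are invented; this is not Google's method.

def moving_average(values, window=7):
    """Smooth a series with a trailing window average."""
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Hypothetical daily counts of flu-related search queries.
daily_queries = [120, 131, 118, 125, 140, 122, 119,
                 135, 128, 190, 240, 310, 295, 330]

baseline = moving_average(daily_queries, window=7)

# Flag days where raw counts run well ahead of the smoothed baseline.
for day, (raw, avg) in enumerate(zip(daily_queries, baseline)):
    if raw > 1.5 * avg:  # arbitrary threshold, for illustration only
        print(f"Day {day}: {raw} queries vs. baseline {avg:.0f} -- possible outbreak signal")
```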

It’s textbook Shannon and Weaver.  And here it is, at the heart of a service that handles three billion searches per day — which is to say nothing of Google’s numerous other products, let alone those of its competitors, that behave accordingly.

So how was it, I wondered, that my discipline, Communication Studies, could have so completely missed the boat on this?  Why do we persist in dismissing the Shannon and Weaver model, when it’s had such uptake in and application to the real world?

The answer has to do with how one understands the purposes of theory.  Should theory provide a framework for understanding how the world actually works?  Or should it help people to think differently about their world and how it could work?  James Carey puts it more eloquently in Communication as Culture: Essays on Media and Society: “Models of communication are…not merely representations of communication but representations for communication: templates that guide, unavailing or not, concrete processes of human interaction, mass and interpersonal” (p. 32).

The genius of Shannon’s original 1948 paper, and of its subsequent popularization by Weaver, lies in many things, among them their having formulated a model of communication located on the threshold of these two understandings of theory.  As a scientist Shannon surely felt accountable to the empirical world, and his work reflects that.  Yet it also seems clear that Shannon and Weaver’s work has, over the last 60 years or so, taken on a life of its own, feeding back into the reality they first set about describing.  Shannon and Weaver didn’t merely model the world; they ended up enlarging it, changing it, and making it over in the image of their research.

And this is why, twenty years ago, I was taught to reject their thinking.  My colleagues in Communication Studies believed Shannon and Weaver were trying to model communication as it really existed.  Maybe they were.  But what they were also doing was pointing in the direction of a nascent way of conceptualizing communication, one that’s had more practical uptake than any comparable framework Communication Studies has thus far managed to produce.

Of course, in 1992 the World Wide Web was still in its infancy; Sergey Brin and Larry Page were, like me, just starting college; and Google wouldn’t appear on the scene for another six years.  I can’t blame Professor W—- for misinterpreting the Shannon and Weaver model.  If anything, all I can do is say “thank you” to her for introducing me to ideas so rich that I’ve wrestled with them for two decades.


WordPress

Lest there be any confusion, yes, indeed, you’re reading The Late Age of Print blog, still authored by me, Ted Striphas.  The last time you visited, the site was probably red, white, black, and gray.  Now it’s not.  I imagine you’re wondering what prompted the change.

The short answer is: a hack.  The longer answer is: algorithmic culture.

At some point in the recent past, and unbeknownst to me, The Late Age of Print got hacked.  Since then I’ve been receiving sporadic reports from readers telling me that their safe browsing software was alerting them to a potential issue with the site.  Responsible digital citizen that I am, I ran numerous malware scans using multiple scanning services.  Only one out of twenty-three of those services ever returned a “suspicious” result, and so I figured, with those odds, that the one positive must be an anomaly.  It was the same service that the readers who’d contacted me also happened to be using.

Well, last week, Facebook implemented a new partnership with an internet security company called Websense.  The latter checks links shared on the social networking site for malware and the like.  A friend alerted me that an update I’d posted linking to Late Age came up as “abusive.”  That was enough; I knew something must be wrong.  I contacted my web hosting service and asked them to scan my site.  Sure enough, they found some malicious code hiding in the back-end.

Here’s the good news: as far as my host and I can tell, the code — which, rest assured, I’ve cleaned — had no effect on readers of Late Age or your computers.  (Having said that, it never hurts to run an anti-virus/malware scan.)  It was intended only for Google and other search engines, and its effects were visible only to them.  The screen capture, below, shows how Google was “seeing” Late Age before the cleanup.  Neither you nor I ever saw anything out of the ordinary around here.

Essentially the code grafted invisible links to specious online pharmacies onto the legitimate links appearing in many of my posts.  The point of the attack, when implemented widely enough, is to game the system of search.  The victim sites all look as if they’re pointing to whatever website the hacker is trying to promote. And with thousands of incoming links, that site is almost guaranteed to come out as a top result whenever someone runs a search query for popular pharma terms.
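
To give a sense of what my host and I were actually hunting for, here is a rough, hypothetical sketch of how one might scan a post’s HTML for that kind of injected link: anchors styled to be invisible, or pointing at domains the post never meant to cite.  The sample markup and the whitelist of domains are made up for illustration; my actual cleanup relied on my hosting company’s malware tools, not this script.

```python
# A rough sketch of flagging hidden or off-list links in a post's HTML.
# The sample markup and the "known good" domain list are invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse

KNOWN_DOMAINS = {"thelateageofprint.org", "nytimes.com", "wired.com"}  # hypothetical whitelist

class HiddenLinkFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        style = (attrs.get("style") or "").replace(" ", "").lower()
        hidden = "display:none" in style or "visibility:hidden" in style
        domain = urlparse(href).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        unknown = bool(domain) and domain not in KNOWN_DOMAINS
        if hidden or unknown:
            print(f"Suspicious link: {href} (hidden={hidden}, unknown_domain={unknown})")

# An invented example of the sort of markup the hack grafted onto legitimate posts.
sample_post = (
    '<p>See <a href="https://www.nytimes.com/some-article">this article</a> and '
    '<a href="http://cheap-pills.example/" style="display:none">buy now</a>.</p>'
)

HiddenLinkFinder().feed(sample_post)
```

A scan like this would only catch the crude version of the attack, of course; code that checks the visitor before serving the links, showing clean HTML to everyone except a search engine’s crawler, is exactly why nothing ever looked amiss to readers here.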

So, in case you were wondering, I haven’t given up writing and teaching for a career hawking drugs to combat male-pattern baldness and E.D.

This experience has been something of an object lesson for me in the seedier side of algorithmic culture.  I’ve been critical of Google, Amazon, Facebook, and other such sites for the opacity of the systems by which they determine the relevance of products, services, knowledge, and associations.  Those criticisms remain, but now I’m beginning to see another layer of the problem.  The hack has shown me just how vulnerable those systems are to manipulation, and how, then, the frameworks of trust, reputation, and relevance that exist online are deeply — maybe even fundamentally — flawed.

In a more philosophical vein, the algorithms about which I’ve blogged over the last several weeks and months attempt to model “the real.”  They leverage crowd wisdom — information coming in the form of feedback — in an attempt to determine what the world thinks or how it feels about x.  The problem is, the digital real doesn’t exist “out there” waiting to be discovered; it is a work in progress, and much like The Matrix, there are those who understand far better than most how to twist, bend, and mold it to suit their own ends.  They’re out in front of the digital real, as it were, and their actions demonstrate how the results we see on Google, Amazon, Facebook, and elsewhere suffer from what Meaghan Morris has called, in another context, “reality lag.”  They’re not the real; they’re an afterimage.

The other, related issue here concerns the fact that, increasingly, we’re placing the job of determining the digital real in the hands of a small group of authorities.  The irony is that the internet has long been understood to be a decentralized network and lauded, then, for its capacity to endure even when parts of it get compromised.  What the hack of my site has underscored for me, however, is the extent to which the internet has become territorialized of late and thus subject to many of the same types of vulnerabilities it was once thought to have thwarted.  Algorithmic culture is the new mass culture.

Moving on, I’d rather not have spent a good chunk of my week cleaning up after another person’s mischief, but at least the attack gave me an excuse to do something I’d wanted to do for a while now: give Late Age a makeover.  For a while I’d been feeling as if the site looked dated, and so I’m happy to give it a fresher look.  I’m not yet used to it, admittedly, but of course feeling comfortable in a new style of anything takes time.

The other major change I made was to optimize Late Age for viewing on mobile devices.  Now, if you’re visiting using your smart phone or tablet computer, you’ll see the same content but in significantly streamlined form.  I’m not one to believe that the PC is dead — at least, not yet — but for better or for worse it’s clear that mobile is very much at the center of the internet’s future.  In any case, if you’re using a mobile device and want to see the normal Late Age site, there’s a link at the bottom of the screen allowing you to switch back.

I’d be delighted to hear your feedback about the new Late Age of Print.  Drop me a line, and thanks to all of you who wrote in to let me know something was up with the old site.

 


Cultural Informatics

In my previous post I addressed the question, who speaks for culture in an algorithmic age?  My claim was that humanities scholars once held significant sway over what ended up on our cultural radar screens but that, today, their authority is diminishing in importance.  The work of sorting, classifying, hierarchizing, and curating culture now falls increasingly on the shoulders of engineers, whose determinations of what counts as relevant or worthy result from computational processes.  This is what I’ve been calling “algorithmic culture.”

The question I want to address this week is, what assumptions about culture underlie the latter approach?  How, in other words, do engineers — particularly computer scientists — seem to understand and then operationalize the culture part of algorithmic culture?

My starting point is, as is often the case, the work of cultural studies scholar Raymond Williams.  He famously observed in Keywords (1976) that culture is “one of the two or three most complicated words in the English language.”  The term is definitionally capacious, that is to say, a result of centuries of shedding and accreting meanings, as well as the broader rise and fall of its etymological fortunes.  Yet, Williams didn’t mean for this statement to be taken as merely descriptive; there was an ethic implied in it, too.  Tread lightly in approaching culture.  Make good sense of it, but do well not to diminish its complexity.

Those who take an algorithmic approach to culture proceed under the assumption that culture is “expressive.”  More specifically, all the stuff we make, the practices we engage in, and the experiences we have cast astonishing amounts of information out into the world.  This is what I mean by “cultural informatics,” the title of this post.  Algorithmic culture operates first of all by subsuming culture under the rubric of information — by understanding culture as fundamentally, even intrinsically, informational and then operating on it accordingly.

One of the virtues of the category “information” is its ability to link any number of seemingly disparate phenomena together: the movements of an airplane, the functioning of a genome, the activities of an economy, the strategies in a card game, the changes in the weather, etc.  It is an extraordinarily powerful abstraction, one whose import I have come to appreciate, deeply, over the course of my research.

The issue I have pertains to the epistemological entailments that flow from locating culture within the framework of information.  What do you have to do with — or maybe to — culture once you commit to understanding it informationally?

The answer to this question begins with the “other” of information: entropy, or the measure of a system’s disorder.  The point of cultural informatics is, by and large, to drive out entropy — to bring order to the cultural chaos by ferreting out the signal that exists amid all the noise.  This is basically how Google works when you execute a search.  It’s also how sites like Amazon.com and Netflix recommend products to you.  The presumption here is that there’s a logic or pattern hidden within culture and that, through the application of the right mathematics, you’ll eventually come to find it.
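
For the sake of concreteness, here is a toy version of the pattern-finding at work in a recommendation engine: represent each reader as a vector of ratings, measure how similar those vectors are, and suggest whatever the nearest neighbor liked.  The readers, titles, and ratings are invented, and the real systems at Amazon or Netflix are vastly more elaborate; the point is simply that “finding the pattern” means running a calculation like this at enormous scale.

```python
# A toy recommender: find the most similar reader and borrow their favorites.
# Names, titles, and ratings are invented for illustration.
from math import sqrt

ratings = {
    "ana":  {"Moby-Dick": 5, "Dracula": 3, "Emma": 4},
    "ben":  {"Moby-Dick": 4, "Dracula": 2, "Walden": 5},
    "cleo": {"Dracula": 5, "Emma": 1, "Walden": 2},
}

def cosine_similarity(a, b):
    """Cosine similarity over the books two readers have both rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[book] * b[book] for book in shared)
    norm_a = sqrt(sum(a[book] ** 2 for book in shared))
    norm_b = sqrt(sum(b[book] ** 2 for book in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    others = [u for u in ratings if u != user]
    # The "pattern": whoever rates books most like you is assumed to predict your taste.
    nearest = max(others, key=lambda u: cosine_similarity(ratings[user], ratings[u]))
    unseen = set(ratings[nearest]) - set(ratings[user])
    return nearest, sorted(unseen, key=lambda b: ratings[nearest][b], reverse=True)

print(recommend("ana"))  # e.g., ('ben', ['Walden'])
```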

There’s nothing fundamentally wrong with this understanding of culture.  Something like it has kept anthropologists, sociologists, literary critics, and a host of others in business for well over a century.  Indeed, there are cultural routines you can point to, whether or not you use computers to find them.  But having said that, it’s worth mentioning that culture consists of more than just logic and pattern.  Intrinsic to culture is, in fact, noise, or the very stuff that gets filtered out of algorithmic culture.

At least, that’s what more recent developments within the discipline of anthropology teach us.  I’m thinking of Renato Rosaldo‘s fantastic book Culture and Truth (1989), and in particular of the chapter, “Putting Culture in Motion.”  There Rosaldo argues for a more elastic understanding of culture, one that refuses to see inconsistency or disorder as something needing to be purged.  “We often improvise, learn by doing, and make things up as we go along,” he states.  He puts it even more bluntly later on: “Do our options really come down to the vexed choice between supporting cultural order or yielding to the chaos of brute idiocy?”

The informatics of culture is oddly paradoxical in that it hinges on a more and less powerful conceptualization of culture.  It is more powerful because of the way culture can be rendered equivalent, informationally speaking, with all of those phenomena (and many more) I mentioned above.  And yet, it is less powerful because of the way the livingness, the inventiveness — what Eli Pariser describes as the “serendipity” — of culture must be shed in the process of creating that equivalence.

What is culture without noise?  What is culture besides noise?  It is a domain of practice and experience diminished in its complexity.  And it is exactly the type of culture Raymond Williams warned us about, for it is one we presume to know but barely know the half of.


Good Morning, Amazon…

First it was the cola wars.  Now, it’s the e-book wars.

At this past weekend’s book industry trade show, BookExpo America, Google announced that it will begin selling digital book content in the near future.  According to this article in today’s New York Times, the search engine giant has the backing of major players in the publishing field.

The move should come as a wake-up call for Amazon.com, which, since the introduction of Kindle in late 2007, has dominated the retail e-book market. Many questions remain, however, about whether Google’s latest foray into the book world ultimately will pan out.

Why it Will Work
First, there’s Google, whose power, prevalence, and brand recognition shouldn’t be underestimated.  But the success of its latest e-book initiative will stem from more than just the company’s sheer Google-ness.  It will result from its growing recognition of itself not merely as a search engine company but as a platform for online businesses.  This is, incidentally, exactly what Amazon.com has been doing of late — refashioning itself, a la Google, from a retailer into a business incubator; and in this respect it’s playing catch-up to Google.

Second, there’s the Kindle factor.  Google’s plan is to release digital editions of books which, though secure (read: DRM), will not be native to any particular e-reading device.  This is good news for those of us who’ve been less impressed with Kindle than we ought to be, especially where images are concerned.  Plus, it’s great news for readers who, in a time of economic downturn, are discomfited by the prospect of shelling out hundreds of dollars for the privilege of accessing and reading digital content via Kindle.

Third, did I mention Google?  Besides the technology, one of the major problems that has beset e-books thus far has been distribution.  Amazon has successfully addressed the issue by providing readers with a reliable, centralized hub from which to download e-titles.  There aren’t many companies out there who could compete with Amazon along these lines, but Google is surely one of them.  It’s already become a nodal point for people to access e-book content via Book Search and Google Library.  Becoming a nodal point for distribution of e-content shouldn’t take a great deal more than a hop, skip, and a jump.

Why it Won’t Work
Book publishers are greedy and do not understand how to sell their products in and to a digital world.  As the New York Times today reported, Google intends to allow its partner publishers to set their own e-book prices.  If recent history tells us anything, it tells us that the publishers likely will charge something close to print-on-paper prices for content whose material support has already in essence been outsourced to consumers (e.g., in the form of computers, netbooks, and other mobile e-readers). This is unacceptable and will only hinder e-book adoption.

Relatedly, there’s the Amazon factor.  The company has insisted that, where possible, Kindle e-book prices should be kept low.  Most bestsellers cost around $9.99, and although there are many Kindle books that cost more, Amazon should be commended for pressuring publishers to keep their e-book prices down.  If Amazon can continue to do so, purchasing a Kindle with the prospect of having access to cheaper e-book content won’t seem as off-putting as having to buy e-titles from Google at or near ridiculous print-on-paper prices.

Finally, there’s the question of form.  Will Google’s e-book content largely reproduce what would otherwise be available on paper?  If so, then Google e-books won’t have as much uptake as they otherwise could — that is, if they broke with what Gary Hall calls a “papercentric” model of electronic content.  Indeed, if the publishers want to charge near-paper prices for the e-books they sell/distribute via Google, then readers will expect additional types of features to make up for what is, essentially, lost value.

Bottom Line
Only time will tell what will become of Google’s latest venture into e-books.  I see a great many downsides that would really spell disaster for an anxious contingent of publishers who have convinced themselves, as they do about every eight years or so, that e-books will “save” their industry.  More optimistically, it is my hope that Google will spur Amazon.com to move more quickly on developing cheaper, better Kindles and related e-reading systems that are even more user-friendly.
