communication +1 invites submissions for its upcoming issue, “Machine Communication.”
Edited by David Gunkel and Zachary McDowell
With this special issue we hope to explore the boundaries of communication beyond the human subject and the restrictions of humanism by considering that which is radically other – the machine. We seek articles that interrogate the opportunities and challenges that emerge around, within, and from interactions and engagements with machines of all types and varieties. By examining the full range of human-machine interactions, machine-machine interactions, or other hitherto unanticipated configurations, we hope to assemble a collection of ground-breaking essays that push the boundaries of our discipline and probe the new social configurations of the 21st century. Topics may include but are not limited to:
Please submit short proposals of no more than 500 words by December 13th, 2015 to email@example.com.
Upon invitation, full text submissions will be due April 5th, 2016, with expected publication in September 2016.
About the Journal
The aim of communication +1 is to promote new approaches to and open new horizons in the study of communication from an interdisciplinary perspective. We are particularly committed to promoting research that seeks to constitute new areas of inquiry and to explore new frontiers of theoretical activities linking the study of communication to both established and emerging research programs in the humanities, social sciences, and arts. Other than the commitment to rigorous scholarship, communication +1 sets no specific agenda. Its primary objective is to create a space for thoughtful experiments and for communicating these experiments.
communication +1 is an open access journal supported by the University of Massachusetts Amherst Libraries and the Department of Communication.
Editor in Chief: Briankle G. Chang, University of Massachusetts Amherst
Managing Editor: Zachary J. McDowell, University of Massachusetts Amherst
Editorial Board:
Kuan-Hsing Chen, National Chiao Tung University, Taiwan
Bernard Geoghegan, Humboldt-Universität, Germany
Lawrence Grossberg, University of North Carolina Chapel Hill
David Gunkel, Northern Illinois University
Catherine Malabou, Kingston University, United Kingdom
Jussi Parikka, University of Southampton, United Kingdom
John Durham Peters, University of Iowa
Jonathan Sterne, McGill University
Ted Striphas, University of Colorado, Boulder
Greg Wise, Arizona State University
For more information or to participate in the communicationplusone.org project, please email firstname.lastname@example.org
A quick announcement about two new pieces from me, both of which relate to my ongoing research on the subject of algorithmic culture.
The first is an interview with Giuseppe Granieri, posted on his Futurists’ Views site over on Medium. The tagline is: “Culture now has two audiences: people and machines.” It’s a free-ranging conversation, apparently readable in six minutes, about algorithms, AI, the culture industry, and the etymology of the word “culture.”
About that word: over on Culture Digitally you’ll find a draft essay of mine, examining culture’s shifting definition in relationship to digital technology. The piece is available for open comment and reflection. It’s the first in a series from Ben Peters’ “Digital Keywords” project, of which I’m delighted to be a part. Thanks in advance for your feedback—and of course with all of the provisos that accompany draft material.
Last week, in a post entitled “The Book Industry’s Moneyball,” I blogged about the origins of my interest in algorithmic culture — the use of computational processes to sort, classify, and hierarchize people, places, objects, and ideas. There I discussed a study published in 1932, the so-called “Cheney Report,” which imagined a highly networked book industry whose decisions were driven exclusively by “facts,” or in contemporary terms, “information.”
It occurred to me, in thinking through the matter more this week, that the Cheney Report wasn’t the only way in which I stumbled onto the topic of algorithmic culture. Something else led me there as well — something more present-day. I’m talking about the Amazon Kindle, which I wrote about in a scholarly essay published in the journal Communication and Critical/Cultural Studies (CCCS) back in 2010. The title is “The Abuses of Literacy: Amazon Kindle and the Right to Read.” (You can read a precis of the piece here.)
The CCCS essay focused on privacy issues related to devices like the Kindle, Nook, and iPad, which quietly relay information about what and how you’ve been reading back to their respective corporate custodians. Since it appeared, that’s become a fairly widespread concern, and I’d like to think my piece had something to do with nudging the conversation in that direction.
Anyway, in prepping to write the essay, a good friend of mine, M—-, suggested I read Adam Greenfield’s Everyware: The Dawning Age of Ubiquitous Computing (New Riders, 2006). It’s an astonishingly good book, one I would recommend highly to anyone who writes about digital technologies.
I didn’t really know much about algorithms or information when I first read Everyware. Of course, that didn’t stop me from quoting Greenfield in “The Abuses of Literacy,” where I made a passing reference to what he calls “ambient informatics.” This refers to the idea that almost every aspect of our world is giving off some type of information. People interested in ubiquitous computing, or ubicomp, want to figure out ways to detect, process, and in some cases exploit that information. With any number of mobile technologies, from smartphones to the Kindle, ubicomp is fast becoming an everyday part of our reality.
The phrase “ambient informatics” has stuck with me ever since I first quoted it, and on Wednesday of last week it hit me again like a lightning bolt. A friend and I were talking about Google Voice, which, he reminded me, may look like a telephone service from the perspective of its users, but it’s so much more from the perspective of Google. Voice gives Google access to hours upon hours of spoken conversation that it can then use to train its natural language processing systems — systems that are essential to improving speech-to-text recognition, voice-based searching, and any number of other vox-based services. It’s a weird kind of switcheroo, one that most of us don’t even realize is happening.
So what would it mean, I wondered, to think about Kindle not from the vantage point of its users but instead from that of Amazon.com? As soon as you ask this question, it becomes apparent that Kindle is only nominally an e-reader. It is, like Google Voice, a means to some other, data-driven end: specifically, the end of apprehending the “ambient informatics” of reading. In this scenario Kindle books become a hook whose purpose is to get us to tell Amazon.com more about who we are, where we go, and what we do.
Imagine what Amazon must know about people’s reading habits — and who knows what else?! And imagine how valuable that information could be!
What’s interesting to me, beyond the privacy concerns I’ve addressed elsewhere, is how, with Kindle, book publishers now seem to be confusing means with ends. It’s understandable, really. As literary people they’re disposed to think about books as ends in themselves — as items people acquire for purposes of reading. Indeed, this has long been the “being” of books, especially physical ones. With Kindle, however, books are in the process of getting an existential makeover. Today they’re becoming prompts for all sorts of personal and ambient information, much of which then goes on to become proprietary to Amazon.com.
I would venture to speculate that, despite the success of the Nook, Barnes & Noble has yet to fully wake up to this fact as well. For more than a century the company has fancied itself a bookseller — this in contrast to Amazon, which CEO Jeff Bezos once described as “a technology company at its core” (Advertising Age, June 1, 2005). The one sells books; the other trades in information (which is to say nothing of all the physical stuff Amazon sells). The difference is fundamental.
Where does all this leave us, then? First and foremost, publishers need to begin recognizing the dual existence of their Kindle books: that is, as both means and ends. I suppose they should also press Amazon for some type of “cut” — informational, financial, or otherwise — since Amazon is in a manner of speaking free-riding on the publishers’ products.
This last point I raise with some trepidation, though; the humanist in me feels a compulsion to pull back. Indeed it’s here that I begin to glimpse the realization of O. H. Cheney’s world, where matters of the heart are anathema and reason, guided by information, dictates virtually all publishing decisions. I say this in the thick of the Kindle edition of Walter Isaacson’s biography of Steve Jobs, where I’ve learned that intuition, even unbridled emotion, guided much of Jobs’ decision making.
Information may be the order of the day, but that’s no reason to overlook what Jobs so successfully grasped. Technology alone isn’t enough. It’s best when “married” to the liberal arts and humanities.
According to Jonathon Keats, author of the magazine’s monthly “Jargon Watch” section, culturomics refers to “the study of memes and cultural trends using high-throughput quantitative analysis of books.” The term was first noted in another Wired article, published last December, which reported on a study using Google Books to track historical, or “evolutionary,” trends in language. Interestingly, the study wasn’t published in a humanities journal. It appeared in Science.
The researchers behind culturomics have also launched a website allowing you to search the Google book database for keywords and phrases, to “see how [their] usage frequency has been changing throughout the past few centuries.” They follow up by calling the service “addictive.”
Culturomics weds “culture” to the suffix “-nomos,” the anchor for words like economics, genomics, astronomy, physiognomy, and so forth. “-Nomos” can refer either to “the distribution of things” or, more specifically, to a “worldview.” In this sense culturomics refers to the distribution of language resources (words) in the extant published literature of some period, and to the types of frameworks for understanding that those distributions embody.
I must confess to being intrigued by culturomics, however clunky I find the term. My initial work on algorithmic culture tracks language changes in and around three keywords — information, crowd, and algorithm, in the spirit of Raymond Williams’ Culture and Society — and has given me a new appreciation for both the sociality of language and its capacity for transformation. Methodologically culturomics seems, well, right, and I’ll be intrigued to see what a search for my keywords on the website might yield.
Having said that, I still want to hold onto the idea of algorithmic culture. I prefer the term because it places the algorithm center-stage rather than allowing it to recede into the background, as does culturomics. Algorithmic culture encourages us to see computational process not as a window onto the world but as an instrument of order and authoritative decision making. The point of algorithmic culture, both terminologically and methodologically, is to help us understand the politics of algorithms and thus to approach them and the work they do more circumspectly, even critically.
I should mention, by the way, that this is increasingly how I’ve come to understand the so-called “digital humanities.” The digital humanities aren’t just about doing traditional humanities work on digital objects, nor are they only about making the shift in humanities publishing from analog to digital platforms. Instead the digital humanities, if there is such a thing, should focus on the ways in which the work of culture is increasingly delegated to computational process and, more importantly, the political consequences that follow from our doing so.
And this is the major difference, I suppose, between an interest in the distribution of language resources — culturomics — and a concern for the politics of the systems we use to understand those distributions — algorithmic culture.