Sometimes, I wonder what historians of the future are going to be writing about when they examine the early twenty-first century. No doubt, the term “digital revolution” is going to creep into more than one future monograph about our present-day times. Cultural historians (if cultural history is still in vogue) might also, I think, take some delight in tracing the ways in which Google has entered into modern consciousness. Perhaps they’ll trace the moment when Google ceased to be only a proper noun, when the phrase “Let’s google it!” first appeared, and then flourished, in popular discourse. Or maybe they’ll explore the ways in which Google has become a part of popular culture and everyday life, to the point of inspiring satirical responses expressed in, you guessed it, digital ways.

Here are some anecdotes to help that future cultural historian.

* * * * *

A while ago, a friend told me an amusing story about how the father of one of her friends was confused about the nature of the Internet. He had never used it before (yes, there are still such folks), and he didn’t quite know what it was all about. So, one day, he asked his son to explain, framing his question according to the only term that he was familiar with – or had heard often enough: “Is Google,” he asked innocently, “the Internet?” The son choked back a gasp of unholy laughter, and proceeded to explain the phenomenon of the Internet to his father. However, if he had simplified his response, if he had said that Google was, in a way, the Internet, he might not have been all that wrong.

* * * * *

During Christmas dinner with my family this past winter, Google (of all topics) entered into our conversation. I don’t remember how exactly. All I recall is that my mom, who (yes, it’s true) had never heard of Google before, perked up when she heard the term at the dinner table, probably because of its odd sound. “Google?” she said, brows furrowed. “What is Google?” To that, my dad, without missing a beat, responded (in Chinese) that Google “is the big brother of the Internet.” Now, “big brother” (or “dai lo”) in Cantonese, when used in a figurative sense, simply means someone who is to be respected, some important or dominant figure or force. But I couldn’t help laughing at the Orwellian overtones that my father’s comment had unwittingly carried. He had meant big brother; I, of course, had heard Big Brother, Chinese-style.

* * * * *

Back in September, Dr. Don Spanner, my archival sciences professor, showed the class a video clip called Epic 2015. Its opening lines were captivatingly ambiguous: “It is the best of times,” said the solemn narrator, “It is the worst of times.” We were entranced by the video’s fictitious yet somewhat chilling projection of the world in 2015, which involved nothing less than the merger of two powerful companies (Google and Amazon) into Googlezon, an entity whose information-making and dissemination power had diminished even the might of the New York Times. At the end of the clip, Don joked that the first time he watched it, he just wanted to sit in a corner and stare at paper for a long, long time. We all laughed – and, perhaps, shivered inside a bit too.

Subsequently, I mentioned the clip to a friend, remarking how interesting it was to see just how big Google had become, as evidenced by the fact that it was inspiring such responses as Epic 2015, with its subtle questioning of the Google empire and its cultural hegemony. My friend in turn enlightened me further about other similar responses. He asked if I had ever heard of “The Googling.” I hadn’t. So he emailed me links to several clips on YouTube, which explore Google’s services (such as its mapping tools) in a new – and, of course, hilariously sinister – way. To view them…simply google “The Googling.” 🙂 (There are five parts.)

* * * * *

To the cultural historian of the future:

It was true. Google was (is?) ubiquitous, to the point that it entered into dinner table conversations and was mistaken for (or correctly identified as?) the Internet – even to the point of inspiring satirical YouTube clips and prophetic visions of a Google-ized world. That is, of course, when you know something is big: when it becomes the subject of cultural humour and unease, negotiated and even resisted in satirical ways.

So, we embraced Google even while scrutinizing it at arm’s length. We questioned Google even while googling. It’s what we did in the early twenty-first century.

Personalized Utopia or Orwellian Dystopia?

“Search engines will become more pervasive than they already are, but paradoxically less visible, if you allow them to personalise increasingly your own search experience.” [1]

“If you’re really concerned about technology, however, remember that it has the most potential to be dangerous when you stop seeing it.” [2]

* * * * *

In my last Digital History class, I (very casually) tossed the term “Orwellian dystopia” into the pool of ideas and concepts for potential discussion. I threw it out a bit in jest (because I have a declared weakness for polysyllabic words), but mostly in earnest (because, as indicated in my last post, I have had cause to think about Orwell and 1984 lately). The term isn’t mine of course, but comes out of one of the readings for the week: Phil Bradley’s “Search Engines: Where We Were, Are Now, and Will Ever Be.”

As its title clearly suggests, Bradley’s article traces the evolution of search engines, from their rather crude beginnings, when web design wasn’t yet a consideration, to their present-day, sophisticated forms, which promise to make our searches more personally relevant than ever before. Ruminating on the potential for search engines to get to know us individually – to the point of recommending events that we (as in you specifically, or me specifically) might wish to attend when visiting a new city, or telling us whether the supermarket down the road has the same item we want, only cheaper – Bradley makes the point about the pervasiveness and increasing invisibility of search engines that forms the first of the two opening quotes above. He then wonders whether the ways in which users are sitting back, letting the alerting services of search engines bring custom-made information to them – “since the engines can monitor what you do and where you go” – will lead to an “Orwellian dystopia” of sorts. Bradley’s advice for avoiding such a dystopia? “Users will need to consider very carefully,” he writes, “exactly to what extent they let search engines into their lives.”

Bradley’s point about the expansive, yet increasingly invisible, nature of search engines fits nicely with some of the ideas articulated in a blog post by my Digital History prof, Dr. William Turkel (or Bill, as he would wish us to call him). In this post, entitled “Luddism is a Luxury You Can’t Afford” (which, I might add, graciously avoids lambasting latter-day Luddites and seeks instead to understand them), Bill considers what exactly neo-Luddites are objecting to when they consider technology. Drawing on Heidegger’s distinction between ready-to-hand and present-at-hand objects, Bill points out that it is the second group that poses problems for those uncomfortable with technology. This is simply because these objects are always visible and often intrusive – “something you have to deal with.”

Meanwhile, ready-to-hand things are invisible, unnoticed, and therefore accepted as a natural – rather than a technological – part of existence (the coffee cup, electricity, even the iPod). However, “these invisible and pervasive technologies,” Bill notes, in the same vein as Bradley, “are exactly the ones that humanists should be thinking about…because they have the deepest implications for who and what we are.” The post ends with the words I quoted above, about invisibility being the most potentially “dangerous” aspect of technology.

* * * * *

I have been ruminating lately on the idea of transparency on the Web, and it seems to me that a strange sort of tension exists.

On the one hand, as Kevin Kelly’s talk (discussed in the previous post) has shown, users are required to be transparent in cyberspace if they wish to have any sort of personalization. The Web can’t make recommendations if it doesn’t know you – if your searching patterns, socializing patterns, buying patterns, and browsing patterns are not visible.

On the other hand, the Web’s very collection of this information, and the ways in which it presents you with a set of personalized results, are becoming less visible, as Bradley has argued. One might actually put this another way and say that the Web, search engines included, is becoming more transparent – not in making itself visible, but in making itself invisible. It is becoming so transparent that, like spotlessly clear glass, it keeps users from seeing that something is mediating between them and the information world out there, and so they might be tempted to conclude that the search results they’ve gotten are natural and obvious, showing the most important and self-evident links.

In our last Digital History class, it was Rob, I believe, who brought up the issue of Google search results and the problem that they can be very limiting – without the user realizing that they are limiting. Since, as Dan Cohen has said, Google is the first resource that most students go to for research (at least, the graduate students he polled do), [3] the results it presents may very well define how a subject is understood. The danger, then, is that users won’t be critical of their search results, and of how those results may be tailored or skewed based on numerous factors, because they don’t even realize that such mediation is taking place. Thus, invisibility, as Bill has noted, is a definite problem. And, I have to wonder, in terms of research, if personalization is too. Will one’s understanding of World War One, say, or of John A. Macdonald, or Joseph Stalin, or Mary Wollstonecraft be influenced by one’s buying patterns? Music interests? Socializing habits? If it’s to be accepted that students start their research using Google, and they are signing into Google’s services when conducting such research, what implications does this have for their understanding of history? For the writing of history?
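To make that invisible mediation a little more concrete, here is a toy sketch in Python – entirely hypothetical, and far simpler than anything a real engine does – of how personalization might quietly reorder the same results for two different users:

```python
# A toy sketch (not any real engine's algorithm) of how personalization
# can invisibly reorder the same results for different users.

def personalized_rank(results, profile):
    """Re-rank results, boosting pages that match a user's inferred interests.

    results: list of (url, base_relevance, topics) tuples
    profile: set of topic strings inferred from the user's history
    """
    def score(result):
        url, base_relevance, topics = result
        # Each overlap between page topics and the user's interests
        # nudges the page up the list.
        return base_relevance + 0.2 * len(topics & profile)

    return sorted(results, key=score, reverse=True)

# Invented data: three results for the same query on the First World War.
results = [
    ("example.org/wwi-overview", 0.90, {"history"}),
    ("example.org/wwi-games",    0.85, {"shopping", "gaming"}),
    ("example.org/wwi-songs",    0.80, {"music", "history"}),
]

# Same query, two different users, two different "first pages":
print(personalized_rank(results, {"shopping", "gaming"}))
print(personalized_rank(results, {"history", "music"}))
```

Neither user sees the boost happen; each ordering simply looks like “the” results.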

So, to tie together the strands of these last two posts: it seems that the transparency of the Web and its search engines is rooted in their invisibility – such engines are a window onto the information world that is easily mistaken for an unmediated glimpse of the real world itself – while the transparency of users means their utter visibility and machine-readability. I agree with Bradley and Bill that we shouldn’t take invisible technologies for granted – they need to be explored, critiqued, and discussed; made visible, in other words – so that users can decide for themselves how far they want to go in the age of personalization.

* * * * *

P.S.

I feel compelled to add that to approach these issues from a questioning stance is not to reject the benefits of search engines, or of personalization, or of the Web, or of technology at all. (I, for one, recently ordered a book from Amazon based on its recommendation – or, more precisely, because its very first recommendation was a book that a friend had already suggested I read; that Amazon was in line with someone who knows me well amazed me enough to purchase the book!) The crucial issue is never simply, I think, technology in and of itself, but the uses of technology: how new capabilities, especially in the digital age, are going to be employed, and what their development and effects might mean (and have already meant) for individuals and groups in society.

To conclude, I think Marshall McLuhan’s classic metaphor about the dual nature of technology – how it both “extends” and “amputates” – is still relevant and instructive today. It seems that in most discussions about technological innovation we nearly always hear about “extensions” (Kelly did the same thing, though interestingly he went so far as to reverse McLuhan’s idea, calling humans the extension of the machine) but rarely about “amputations.” Perhaps a balanced approach – one that keeps visible both the advantages and disadvantages of new technologies, that considers their impact in a broad sense, neither blindly fearing them because they are new nor unreservedly embracing them for the same reason – is the way to ensure that we remain critical in the digital age.

_______________________________

[1] Phil Bradley, “Search Engines: Where We Were, Are Now, and Will Ever Be,” Ariadne Magazine 47 (2006), http://www.ariadne.ac.uk/issue47/search-engines/.

[2] William J. Turkel, “Luddism is a Luxury You Can’t Afford,” Digital History Hacks, http://digitalhistoryhacks.blogspot.com/2007/04/luddism-is-luxury-you-cant-afford.html.

[3] Daniel J. Cohen, “The Single Box Humanities Search,” Dan Cohen’s Digital Humanities Blog, http://www.dancohen.org/2006/04/17/the-single-box-humanities-search/.

“Predicting the Next 5000 Days of the Web”

Recently, a friend of mine sent me a link to a video on TED.com. In this video (which, if you have 20 minutes to spare, is highly worth checking out), the presenter, Kevin Kelly, traces the Web’s development over the last 5000 days of its existence. Calling it humanity’s most reliable machine, Kelly compares the Web, in its size and complexity, to the human brain – with the notable difference being, of course, that the former, and not the latter, is doubling in power every two years. As he sees it, this machine is going to “exceed humanity in processing power” by 2040.

Not only does Kelly look back on the last 5000 days, he also projects forward, and considers what the next 5000 days will bring in the Web’s evolution. What he envisions is that the Web will become a single global construct – “the One” he calls it, for lack of a better term – and that all devices – cell phones, iPods, computers, etc. – will look into the One, will be portals into this single Machine.

The Web, Kelly states (pretty calmly, I might add), will own everything; no bits will reside outside of it. Everything will be connected to it because everything will have some aspect of the digital built into it that will allow it to be “machine-readable.” “Every item and artifact,” he envisions, “will have embedded in it some little sliver of webness and connection.” So a pair of shoes, for instance, might be thought of as “a chip with heels” and a car as “a chip with wheels.” No longer a virtual environment of linked pages, the Web, in Kelly’s conception, will become a space in which actual items, physical things, can be linked to and will find their representation on the Web. He calls this entity the Internet of Things.

We’re not, of course, quite at that stage yet, but we are, according to Kelly, entering the era of linked data, where not only web pages are being linked, but specific information, ideas, words, even nouns, are being connected. One example is the social networking site, which allows a person to construct an elaborate social network online. From Kelly’s perspective, all this data – the social connections and relationships that we each have – should not have to be re-entered from one site to the next; you should only have to convey it once. The Web, he says, should know you and who all your friends are – “that’s what you want,” he states (again, quite matter-of-factly) – and that is where he sees things moving: the Web should know and remember all this data about each of us, at a personal level.
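For a sense of what “linked data” might look like under the hood, here is a minimal sketch. FOAF (“Friend of a Friend”) is a real vocabulary for describing social connections, but the URIs, names, and storage scheme below are invented for illustration:

```python
# A minimal sketch of the linked-data idea: social connections stored
# once as subject-predicate-object triples, so that any site could read
# the same graph instead of asking you to re-enter your friends.
# All URIs and names here are made up for illustration.

triples = [
    ("http://example.org/people/alice", "foaf:knows", "http://example.org/people/bob"),
    ("http://example.org/people/alice", "foaf:knows", "http://example.org/people/carol"),
    ("http://example.org/people/bob",   "foaf:knows", "http://example.org/people/carol"),
]

def friends_of(person_uri, graph):
    """Return everyone the given person 'knows' in the graph."""
    return [obj for subj, pred, obj in graph
            if subj == person_uri and pred == "foaf:knows"]

print(friends_of("http://example.org/people/alice", triples))
```

The design point is simply that the data lives in one shared, machine-readable graph rather than in each site’s private silo – which is exactly what makes it powerful, and exactly what makes it worth scrutinizing.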

In this new world of shared data and linked things, where (as I have pondered in a previous post) the line between the virtual and the physical is no longer an identifiable line, Kelly sees one important implication: co-dependency.

The Web, he says, will be ubiquitous, and, in fact, for him, “the closer it is, the better.” Of course, in knowing us personally, in anticipating our needs, the Web exacts a price: “Total personalization in this new world,” Kelly concedes, “will require total transparency.” But this is not a hefty price for Kelly, who, it seems, would prefer to ask Google to tell him vital information (like his phone number for instance) rather than try to remember such things himself.

This sheer dependency on the Web is not a frightening prospect for Kelly; he compares it to our dependency on the alphabet, something we cannot imagine ourselves without. In the same way, he argues, we won’t be able to imagine ourselves without this Machine – a machine that will at once be smarter than any one of us and yet (somehow) a reflection of all of us.

Kelly ends his talk with a “to do” list, which, for its tone alone, needs to be repeated:

There is only One machine.
The Web is its OS.
All screens look into the One.
No bits will live outside the web.
To share is to gain.
Let the One read it.
The One is us.

I will confess that when I finished watching the video, I shuddered a little. I found Kelly’s predictions to be fascinating but also pretty unsettling. (And I’ll admit: it made me think of The Matrix and 1984 at various points.) My reaction, I suppose, stems from a discomfort with anything that smacks of…centralization, of one big global entity, so the concept of One Machine that owns everything and connects everyone, One Machine which is both us, and yet a bigger and smarter and better us, is simply disconcerting.

I can admit my own biases. As a History student, I’ve examined enough regimes over the years to be wary of certain rhetoric, and I have noticed how, at times, things framed in terms of unity and community and sharing (in this case, of data, networks, knowledge, etc.) can devolve into a cultural hegemony of sorts or, worse, a system of pervasive surveillance. (Kelly did, after all, mention shoes as “a chip with heels” and cars as “a chip with wheels” – developments which can certainly help individuals stay connected to one another, but which can also help the State stay connected to individuals.)

The embedding of the digital into everything in the material world, including people, so that it can all be read by one machine, is an unsettling notion. I guess I’d prefer not to have everything personalized, not to have a single, networked, global machine that reads me and knows everything and everyone connected to me. It may be convenient – and it seems that convenience and speed are the two guiding principles behind technological development – but I’d rather not be so transparent in cyberspace, so utterly “Machine-readable.”

Putting aside my own personal reaction to Kelly’s predictions, what are the implications of his thoughts concerning the Web’s evolution for the discipline of history? If realized, the new world of linked things will certainly demand, as some digital historians have already noted, a re-thinking of how history is researched and presented. The possibilities for new connections to be made – not only between data and ideas, but between actual things, artifacts – are, I’m sure, going to change and open up the ways in which history is understood, communicated, and taught, especially by and to the public.

I can imagine already, down the road, that someone interested, say, in the history and development of the camera might be able to pull up, in a split-second search, every single camera made in the 20th century that is owned and catalogued by a museum that has taken the trouble to embed digital info into these objects, making them linkable, “machine-friendly.” Artifacts made in the 21st century that already contain that “sliver of webness and connection” will be even easier to search for and pull up (for the 22nd-century historian, let’s say), existing, as it were, already as digital-physical hybrids. The ease with which actual things can be connected and thus compared, regardless of their geographical location, is going to make for some interesting comparative histories.
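For illustration only, here is a speculative sketch of that split-second search. The record fields, objects, and museum names are all invented; the point is just that once artifacts are machine-readable, one query can cut across every participating collection at once:

```python
# A speculative sketch of a cross-museum artifact search. All records
# here are invented for illustration.

artifacts = [
    {"object": "Kodak Brownie",   "type": "camera",     "year": 1900, "museum": "Museum A"},
    {"object": "Leica I",         "type": "camera",     "year": 1925, "museum": "Museum B"},
    {"object": "Polaroid SX-70",  "type": "camera",     "year": 1972, "museum": "Museum C"},
    {"object": "Underwood No. 5", "type": "typewriter", "year": 1901, "museum": "Museum A"},
]

def find_artifacts(records, artifact_type, start_year, end_year):
    """Pull every matching artifact, regardless of which museum holds it."""
    return [r for r in records
            if r["type"] == artifact_type and start_year <= r["year"] <= end_year]

# "Every camera made in the 20th century," across all participating museums:
for record in find_artifacts(artifacts, "camera", 1900, 1999):
    print(record["object"], "-", record["museum"])
```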

So, the possibility for new engagements with history is there (as it ever is, I think, with the emergence of new technologies). I only wonder how one is going to possibly keep up with all the changes, with the Web and its continual evolution.