October 2008


Personalized Utopia or Orwellian Dystopia?

“Search engines will become more pervasive than they already are, but paradoxically less visible, if you allow them to personalise increasingly your own search experience.” [1]

“If you’re really concerned about technology, however, remember that it has the most potential to be dangerous when you stop seeing it.” [2]

* * * * *

In my last Digital History class, I (very casually) threw out the term “Orwellian dystopia” into the pool of ideas and concepts for potential discussion. I threw it out a bit in jest (because I have a declared weakness for polysyllabic words), but mostly in earnest (because, as indicated in my last post, I have had cause to think about Orwell and 1984 lately). The term isn’t mine of course, but comes out of one of the readings for the week: Phil Bradley’s “Search Engines: Where We Were, Are Now, and Will Ever Be.”

As its title clearly suggests, Bradley’s article traces the evolution of search engines, from their rather crude beginnings when web design wasn’t yet a consideration to their present-day, sophisticated forms, which promise to make our searches more personally relevant than ever before. Ruminating on the potential for search engines to get to know us individually – to the point of recommending events that we (as in you specifically, or me specifically) might wish to attend when visiting a new city, or telling us whether the supermarket down the road has the same item you or I want, only cheaper – Bradley makes the point about the pervasiveness and increasing invisibility of search engines that forms the first of the two opening quotes above. He then wonders if the ways in which users are sitting back, letting the alerting services of search engines bring custom-made information to them – “since the engines can monitor what you do and where you go” – will lead to an “Orwellian dystopia” of sorts. Bradley’s advice for avoiding such a dystopia? “Users will need to consider very carefully,” he writes, “exactly to what extent they let search engines into their lives.”

Bradley’s point about the expansive, yet increasingly invisible, nature of search engines fits nicely with some of the ideas articulated in a blog post by my Digital History prof, Dr. William Turkel (or Bill, as he would wish us to call him). In this post, entitled “Luddism is a Luxury You Can’t Afford” (which, I might add, graciously avoids lambasting latter-day Luddites and seeks instead to understand them), Bill considers what it is exactly that neo-Luddites are objecting to when they consider technology. Drawing on Heidegger’s distinction between ready-to-hand and present-at-hand objects, Bill points out that it is the second group that poses problems for those uncomfortable with technology. This is simply because these objects are always visible and mostly intrusive – “something you have to deal with.”

Meanwhile, ready-to-hand things are invisible, unnoticed, and therefore accepted as a natural – rather than a technological – part of existence (the coffee cup, electricity, even the iPod). However, “these invisible and pervasive technologies,” Bill notes, in the same vein as Bradley, “are exactly the ones that humanists should be thinking about…because they have the deepest implications for who and what we are.” The post ends with the words I quoted above, about invisibility being the most potentially “dangerous” aspect of technology.

* * * * *

I have been ruminating lately on the idea of transparency on the Web, and it seems to me that there is a strange sort of tension that exists.

On the one hand, as Kevin Kelly’s talk has shown (discussed in the previous post), users are required to be transparent in cyberspace, if they wish to have any sort of personalization. The Web can’t make recommendations if it doesn’t know you, if your searching patterns, socializing patterns, buying patterns, browsing patterns are not visible.

On the other hand, the Web’s very collection of this information, and the ways in which it presents you with a set of personalized results, are becoming less visible, as Bradley has argued. One might actually put this another way, and say that the Web, search engines included, is becoming more transparent – not in making itself visible, but in making itself invisible. It is becoming so transparent that, like spotlessly clear glass, users cannot see that there is something mediating between them and the information world out there, and so they might be tempted to conclude that the search results they’ve gotten are natural and obvious, showing the most naturally important and self-evident links.

In our last Digital History class, it was Rob, I believe, who brought up the issue of Google search results and the problem that they can be very limiting – without the user realizing that they are limiting. Since, as Dan Cohen has said, Google is the first resource that most students go to for research (at least, the graduate students he polled do), [3] the results it presents may very well define how a subject is understood. The danger, then, is that users won’t be critical of their search results, and of how they may be tailored or skewed based on numerous factors, because they don’t even realize that such mediation is taking place. Thus, invisibility, as Bill has noted, is a definite problem. And, I have to wonder as well, in terms of research, if personalization is too. Will one’s understanding of World War One, say, or of John A. Macdonald, or Joseph Stalin, or Mary Wollstonecraft be influenced by one’s buying patterns? Music interests? Socializing habits? If it’s to be accepted that students start their research using Google, and they are signing into Google’s services when conducting such research, what implications does this have for their understanding of history? For the writing of history?

So to try to tie together the strands of these last two posts: it seems that transparency on the Web, in the search engines that exist, is rooted in their invisibility – such engines are a window to the information world that’s easily mistaken for an unmediated glimpse of the real world itself – while transparency of users means their utter visibility and machine-readability. I agree with Bradley and Bill that we shouldn’t take invisible technologies for granted – they need to be explored, critiqued, and discussed; made visible, in other words – so that users can decide for themselves how far they want to go in the age of personalization.

* * * * *

P.S.

I feel compelled to add that to approach these issues from a questioning stance is not to reject the benefits of search engines, or of personalization, or of the Web, or of technology at all. (I, for one, recently ordered a book from Amazon based on its recommendation – or, more precisely, because its very first recommendation was a book that a friend had already suggested I read; that Amazon was in line with someone who knows me well amazed me enough to purchase the book!) The issue is never simply, I think, technology in and of itself. The crucial issue is the uses of technology: how new capabilities, especially in the digital age, are going to be employed, and what their development and effects might mean (and have already meant) for individuals and groups in society.

To conclude, I think Marshall McLuhan’s classic metaphor about the dual nature of technology – how it both “extends” and “amputates” – is still relevant and instructive today. It seems that in most discussions about technological innovation, we nearly always hear about “extensions” (Kelly did the same thing; though interestingly, he went so far as to reverse McLuhan’s idea, calling humans the extension of the machine) but we rarely hear about “amputations.” Perhaps a balanced approach – one that keeps visible both the advantages and disadvantages of new technologies, that considers their impact in a broad sense, neither blindly fearing them because they are new nor unreservedly embracing them for the same reason – is the way to ensure that we remain critical in the digital age.

_______________________________

[1] Phil Bradley, “Search Engines: Where We Were, Are Now, and Will Ever Be,” Ariadne Magazine 47 (2006), http://www.ariadne.ac.uk/issue47/search-engines/.

[2] William J. Turkel, “Luddism is a Luxury You Can’t Afford,” Digital History Hacks, http://digitalhistoryhacks.blogspot.com/2007/04/luddism-is-luxury-you-cant-afford.html.

[3] Daniel J. Cohen, “The Single Box Humanities Search,” Dan Cohen’s Digital Humanities Blog, http://www.dancohen.org/2006/04/17/the-single-box-humanities-search/.

“Predicting the Next 5000 Days of the Web”

Recently, a friend of mine sent me a link to a video on TED.com. In this video (which, if you have 20 minutes to spare, is highly worth checking out), the presenter, Kevin Kelly, traces the Web’s development over the last 5000 days of its existence. Calling it humanity’s most reliable machine, Kelly compares the Web, in its size and complexity, to the human brain – with the notable difference being, of course, that the former, and not the latter, is doubling in power every two years. As he sees it, this machine is going to “exceed humanity in processing power” by 2040.

Not only does Kelly look back on the last 5000 days, he also projects forward, and considers what the next 5000 days will bring in the Web’s evolution. What he envisions is that the Web will become a single global construct – “the One” he calls it, for lack of a better term – and that all devices – cell phones, iPods, computers, etc. – will look into the One, will be portals into this single Machine.

The Web, Kelly states (pretty calmly, I might add), will own everything; no bits will reside outside of it. Everything will be connected to it because everything will have some aspect of the digital built into it that will allow it to be “machine-readable.” “Every item and artifact,” he envisions, “will have embedded in it some little sliver of webness and connection.” So a pair of shoes, for instance, might be thought of as “a chip with heels” and a car as “a chip with wheels.” No longer a virtual environment of linked pages, the Web, in Kelly’s conception, will become a space in which actual items, physical things, can be linked to and will find their representation on the Web. He calls this entity the Internet of Things.

We’re not, of course, quite at that stage yet, but we are, according to Kelly, entering into the era of linked data, where not only web pages are being linked, but specific information, ideas, words, nouns even, are being connected. One example is the social networking site, which allows a person to construct an elaborate social network online. From Kelly’s perspective, all this data – the social connections and relationships that we each have – should not have to be re-entered from one site to the next; you should just have to convey it once. The Web, he says, should know you and who all your friends are – “that’s what you want” he states (again, quite matter-of-factly) – and that is where he sees things moving: that the Web should know and remember all this data about each of us, at a personal level.

In this new world of shared data and linked things, where (as I have pondered in a previous post) the line between the virtual and the physical is no longer an identifiable line, Kelly sees one important implication: co-dependency.

The Web, he says, will be ubiquitous, and, in fact, for him, “the closer it is, the better.” Of course, in knowing us personally, in anticipating our needs, the Web exacts a price: “Total personalization in this new world,” Kelly concedes, “will require total transparency.” But this is not a hefty price for Kelly, who, it seems, would prefer to ask Google to tell him vital information (like his phone number for instance) rather than try to remember such things himself.

This sheer dependency on the Web is not a frightening prospect for Kelly; he compares it to our dependency on the alphabet, something we cannot imagine ourselves without. In the same way, he argues, we won’t be able to imagine ourselves without this Machine – a machine that is at once going to be smarter than any one of us and yet (somehow) a reflection of all of us.

Kelly ends his talk with a “to do” list, which, for its tone alone, needs to be repeated:

There is only One machine.
The Web is its OS.
All screens look into the One.
No bits will live outside the web.
To share is to gain.
Let the One read it.
The One is us.

I will confess that when I finished watching the video, I shuddered a little. I found Kelly’s predictions to be fascinating but also pretty unsettling. (And I’ll admit: it made me think of The Matrix and 1984 at various points.) My reaction, I suppose, stems from a discomfort with anything that smacks of…centralization, of one big global entity, so the concept of One Machine that owns everything and connects everyone, One Machine which is both us, and yet a bigger and smarter and better us, is simply disconcerting.

I can admit my own biases. As a History student, I’ve examined enough regimes over the years to be wary of certain rhetoric and have noticed how, at times, things framed in terms of unity and community and sharing (in this case, of data, networks, knowledge, etc.) can devolve into something that becomes a cultural hegemony of sorts, or worse, a system of pervasive surveillance. (Kelly did, after all, mention shoes as “a chip with heels” and cars as a “chip with wheels,” developments which can certainly aid individuals in staying connected to one another, for instance, but can also aid the State in staying connected to individuals.)

The embedding of the digital into everything in the material world, including people, so that it can be read by one machine, is an unsettling notion. I guess I’d rather not have everything personalized, not have a single, networked, global machine that reads me and knows everything and everyone connected to me. It may be convenient – and it seems that convenience and speed are the two guiding principles behind technological developments – but I’d rather not be so transparent in cyberspace, to be so utterly “Machine-readable.”

Putting aside my own personal reaction to Kelly’s predictions, what are the implications of his thoughts about the Web’s evolution for the discipline of history? If realized, the new world of linked things will certainly demand, as some digital historians have already noted, a re-thinking of how history is researched and presented. The possibilities for new connections to be made – not only between data and ideas, but between actual things, artifacts – are, I’m sure, going to change and open up the ways in which history is understood, communicated, and taught, especially by and to the public.

I can imagine already, down the road, that someone interested, say, in the history and development of the camera, might be able to pull up, in a split-second search, every single camera made in the 20th century that is owned and catalogued by all the museums that have taken the trouble to embed digital info into these objects, which then makes them linkable, “machine-friendly”. Artifacts made in the 21st century that already contain that “sliver of webness and connection” will be even easier to search for and pull up (for the 22nd-century historian, let’s say), existing, as it were, already as digital-physical hybrids. The ease with which actual things can be connected and thus compared, regardless of their geographical location, is going to make for some interesting comparative histories.

So, the possibility for new engagements with history is there (as it ever is, I think, with the emergence of new technologies). I only wonder how one is going to possibly keep up with all the changes, with the Web and its continual evolution.

Courtyard of Glendon College

During the last weekend in September, I had the good fortune to attend my first – I hesitate to call it this, for reasons that should become apparent – academic conference, hosted at the Glendon College campus of York University in Toronto.

It’s true that academics organized this conference – several energetic PhD students from York University and the University of Toronto spent over a year putting it together. It’s also true that a good number of academics attended it – there were scholars from universities across Canada and even the States. And, in terms of organization, scholarship, presentation, and professionalism, I am sure the conference rivalled any others that are set within the academy. But to call this particular gathering an academic conference is to undercut somewhat its very reason for being, which was to consider history – and how history is done – beyond the walls of the university, at the level of community.

The organizers named this conference “Active History: History for the Future” and their welcome statement in the conference booklet summarizes well its non-academic spirit: “The conference themes…address the ways in which historians and other scholars must do more than produce knowledge for peer-reviewed journals and academic monographs, must do more than present at academic conferences, must do more than require oral interviewees to sign ethics forms and read over transcripts.”

Having read, and lamented with my peers, about the gaping divide between public history and academic history, having wondered myself whether the history that I might participate in producing as a public historian will ever be, or be considered, as “valid” as the histories generated by those within academia, attending this conference felt a little bit like coming home.

Public history is not, of course, exactly identical to active history – the latter, as I understand it, is an approach to history that self-consciously attempts to understand the past in order to change the present and shape the future. But if the field of public history itself does not seem to me to be quite as socially and politically driven in its usual incarnations (which is not to say that it can’t be), in many ways, these two kinds of approaches to doing history overlap. I noticed this just in the vocabulary of the conference, in the key words and concepts that were articulated again and again, words like:

community; stories; narrative; engagement; accessibility; dialogue; communication; digitization; interactivity; teaching; multimedia; creativity; audience; collaboration; negotiation; inclusivity; participatory; partnerships; networks; reflexivity; and material culture – just to name some.

Most of these are not words that typically describe academic history, but they’re words that I get excited about. And it was heartening to see that there is a large network of researchers, both university-based and community-based, just as excited too.

So, what did I take home from the conference? The following were some of my observations, in no particular order.

Creativity counts.

Actually, it doesn’t only count; it seems crucial in any project geared towards presenting the past to the public. The good news is, there seem to be countless ways to be creative.

One engaging way is through food. Karim Tiro, from Xavier University, spoke about an exhibit on the history of sugar that he’s planning. The twist? It’s going to be set in a public food market – a civic space, he said, where the community gathers and makes itself visible – instead of within a traditional museum setting with its oftentimes authoritative curatorial voice, which can be distancing. Such markets, he said, are great spaces to share history because people are naturally interested in food. His project strikes me as an innovative way to approach important historical issues – like slavery, like politics – through something people are intimately familiar with. And Karim is turning that on its head too. His goal: to make the familiar unfamiliar, and thus to hopefully engage.

Outside the box is the place to be.

Conference attendees were keen to think beyond the boundaries of traditional history, whether it was ivory tower history, glass-encased history (i.e. in museums), or mainstream history. The desire is to move away from only producing manuscripts sprawling with footnotes, or only accessing traditional archives that are silent when it comes to the histories of those who didn’t leave written records, to recognizing the importance of oral histories, personal stories, and other ways of understanding the past, especially as it relates to marginalized groups. This desire was interestingly expressed in the very methods of some of the presenters themselves.

Eva Marie Garroutte of Boston College illustrated how one could craft a research methodology based on a particular cultural practice within a community, thereby including the research subjects in the history-making process. This meant that we had a chance to learn about the Cherokee Stomp Dance and to hear about how methods of research could incorporate structural elements from this cultural practice. Mary Breen of the Second Wave Archival Project presented on feminist history and allotted some of her presentation time to reading directly from excerpts of oral history transcripts. The result? We got to hear stories in the voices, cadences, tones of the female participants themselves (including the humorous story of one woman who, throughout her marriage, always kept some “running away” money on her – just in case).

Community research = respectful research.

Lillian Petroff of the Multicultural Historical Society, who conducts oral histories of members from various communities, expressed the stakes so well: “When people agree to be interviewed,” she said, “they are putting the meaning of their lives in your hands.” So she’s careful to approach her interviewees with respect, always as subjects, never as objects, and the result is that she often ends up forming lifelong friendships. Her goal is to build relationships and engage in dialogue. Lillian made an interesting point that because oral history has often attempted to mirror written history, it has often not been about conversation. And I won’t readily forget her provocative admonition not to “pimp” as a researcher: using your subjects, getting what you need, and then exiting.

Audience matters.

Indeed it does. Speakers and participants talked spiritedly about making history accessible, interactive, and engaging. Creative ways for drawing an audience, especially one that might not be interested in history at the outset, were discussed, from holding meetings in museum spaces (which is far less intimidating than being asked to go visit a museum) to bringing history out onto the streets (using posters, for example) to hosting community events that spark interest in the histories of one’s own neighbours (like holding “antique roadshows” where members can bring items for “show and tell” and, possibly, donate them!).

Two is better than one.

Active history, public history, is not isolated history. Collaboration – not only with academics, but with community members, community-based researchers, members of historical societies and of other relevant organizations – is crucial. Heather George, a fellow UWO Public History student, (bravely) stated the need for us to realize that academic historians have one way of approaching history, that community members have another, equally valid, way, and that we must work at incorporating both in any historical narrative. Lisa Helps, one of the organizers from U of T, articulated this as the need for collaborative methodologies. We left thinking about the importance of developing networks and partnerships with diverse people and groups, and of the need to share resources, knowledge, and expertise. The resounding idea is that good history in the public realm will always be collaborative – and transformative too, for both participants and researchers, as Lisa expressed.

Technology is your friend.

Not surprisingly, digitization was mentioned over the course of the conference. So were websites, GIS mapping, and even YouTube. Perhaps the only key word of the digital age I expected to hear but didn’t was Wikipedia!

Lorraine O’Donnell, an independent historian, put it nicely when she referred to the web as a “repository for personal and community memory and history,” and stressed it as a resource that we should all work towards using. And James Cullingham, owner of Tamarack Productions, in his “how-to-make-a-living” advice to us Public History students in particular, threw out the word “multiplatform”: can the project, he asked, be conceived as something that can be watched on a cell phone, on the web, or on television? (The answer should be yes.)

Reflexivity is always important.

Craig Heron of York University emphasized the need for us to think more about how people learn. Beyond just slapping text on an exhibit panel or on a website, he said that we need to consider how information is created and what message people leave with. Being aware of one’s practice, of how one communicates, even of power imbalances, were important themes that resurfaced throughout the conference.

And, finally, not to forget that history in the public realm can be contentious:

Politics happens.

Rhonda Hinther from the Canadian Museum of Civilization talked about the challenges of producing history in museums. Certain histories are seen as just too controversial or too political for museum settings. Or they’re simply not the kinds of history that attract a crowd. Thus, “doing history in a federally-funded setting,” she said, “can be uncomfortable.” One has to be pretty creative to slip in “other” histories, and to be prepared for clashes with administration. But it can also be very rewarding. For Rhonda, I think, part of the reward is to be called a “subversive curator” at the end of the day.