Map of Tenderness.  Dreaming Objects.  First Letter.  Whisper.

What images do these words and phrases evoke for you?  And why have I chosen them, seemingly out of the blue?

They’re the titles of some of the pieces on display in the Museum of Vancouver’s Art of Craft exhibit, which launched last week, and I had the good fortune to attend its opening party last Wednesday night with a friend.

Even though I often bemoan the fact that I can’t quite grasp modern art – especially pieces that reflect elusive minimalism (read: geometric shapes on large, white canvases) – I’m always in awe of the creative spirit that seeps through artistic works of the imagination.

The MOV’s Art of Craft exhibit was no exception: it inspired wonder, reflection, amazement, and, yes, perplexity.  (Modern art seems, at times, to be a very unsettling question mark – a beautiful one, of course, but nevertheless a question mark.)

Elegantly laid out, the exhibition leads the viewer through three different galleries.  The first is entitled Unity & Diversity.  It showcases a wide range of Canadian handiworks recently on display at the Cheongju International Craft Biennale in Korea.  The second is called By Hand.  It features pieces from BC and the Yukon, and explores the artists’ creative processes.  The third represents Craft from the Republic of Korea, and provides the visitor with a glimpse into Korea’s tradition of craftmaking and the directions it is taking.

Although all of the galleries contained unique and thought-provoking pieces, I think I lingered the longest in the first one on Unity & Diversity, perhaps because I was most intrigued by the ideas presented in its opening panel.  Printed in large font to the left of the smaller, introductory text were the words of art critic and curator, Rachel Gotlieb.  They read as follows:

The absence of a national style for craft may be a good thing for current makers because there is no domineering aesthetic or style to overcome.

The introductory text to Unity & Diversity, into which the above quote is also incorporated, expresses the idea that it is the diverse imaginations and talents of our artists that define Canadian craftmaking, that there is “no such thing as a specifically ‘Canadian’ type of craft.”  This seems to be an upbeat, modern approach to a more general – and age-old – lament: that Canada has no national culture to speak of that is uniquely her own.

Rather than bemoan the lack of a dominant aesthetic, the curators of Unity & Diversity stress the strength – and, ironically, unity – that results from the “rich layers of difference” from which Canadian craftmaking is woven.  And Gotlieb’s words are a reminder of a positive side that I had not considered before: that the lack of a national style frees up the artist to explore what is on his or her heart – free from the constraint of having to conform to a dominant tradition and free from the impulse to purposely react against it because it is the status quo.

This introductory panel was beautifully written – and the rest of the text throughout the different galleries was too.  I’m certain the authors revised countless drafts to get it just right – a balance of scholarly research and audience-friendly language.

It’s just a pity that very few people, at least the ones I observed, stopped long enough to read it.  Many simply threw a perfunctory glance in its direction before walking on eagerly to see the actual pieces.

Granted, the design and layout might have had something to do with this.  Rendered in white on a beige background, the text tastefully – but also unfortunately – blended into the neutral backdrop.  Moreover, the panel was placed in a rather narrow part of the entrance to the first gallery, with most of the craft pieces visible only when you turned a corner past the panel.  Both the narrow space (not so conducive to reflective pausing & reading on a crowded evening) and the enticing corner (very conducive to suspense: what lies around it?) likely sealed the fate of the opening text panel, however well written…

All of this reminds me of what my Digital History professor had emphasized when we were designing our virtual exhibit on William Harvey last year: that text panels should be the last option when communicating a message because, well, people don’t really like to read in an exhibit setting…they’d much rather interact with the things on display; at the very least, they’d much rather observe them directly without text getting in the way.

I think I’m going to experience museums quite differently now that I’ve completed my degree…

UWO

Where to begin?

About a year and a half ago, I moved to London.  No, not that London.  The other one.  In Ontario, about a two-hour drive southwest of Toronto.  I was just beginning the Public History MA Program at the University of Western Ontario, and anticipating that I would have to field a lot of inquiries there as I had in Vancouver, where acquaintances, relatives, co-workers, and generally well-meaning people interested in my career goals had asked what exactly “public history” was.

My answers were usually prefaced by a nervous, apologetic chuckle; they were also often riddled with ellipses:

“Public history…well…hehe…you see…it’s, uh, history…for the public! Ha ha…ahem.”

Of course, I learned to articulate more sophisticated answers before I left — such as “I’m going to study how history has been communicated to the public as well as participate in the communication of history to the public” — but they never deterred the practical listener from asking the typical, and typically dreaded, question:

“Oh.  Well – what are you going to do with that?”

If I felt the listener was at all capable of secret, impractical dreams, I’d share about my interest in history and communication and design, about how I hoped to develop exhibit content one day because I enjoyed research and writing and uncovering the compelling story part of history – or herstory! – to share with a wide audience.

But if I was tired – and dubious of my listener’s sympathies – I’d simply say “museums.”  Understanding would dawn in my interlocutor’s eyes, followed by a shadow of pity – for the narrow field I’d chosen.

“Vancouver doesn’t have a lot of, uh, museums,” they’d say, after a significant pause.

You can imagine my surprise then, when three of the first handful of people I met in London, outside of my peer group, did not look at me with confusion, pity, or disbelief when they heard about what I was studying and learned that I had flown all the way from Vancouver to study it.

Although these listeners all came from different backgrounds – one was a Master’s student in the Department of Mathematics at Western; another a PhD candidate in Physics, also at Western; and the third a congenial middle-aged employee from Loblaws – their responses were unified in their recognition of the relevance of such a program – or, rather, the relevance of such a program’s approach.  Both UWO students in fact drew analogies between Public History and — wonder of wonders! — other scientific fields, based on the common ground that they were all about communicating specialized knowledge to a general audience in a comprehensible way.

So, I heard, for instance, about how a friend of the Mathematics student, studying Geography, was enrolled in a course geared towards presenting information to an uninitiated audience about natural disasters and how to prepare for them.  I also learned that the PhD candidate was involved in explaining developments in biophysical engineering to a non-scientific crowd.

As for the friendly Loblaws employee, with whom I had begun a conversation while we were both waiting for the bus, she was eager to hear about the potential of the Web for making history engaging and accessible.  Moreover, she was excited about its educational prospects in a Digital Age and could understand why I had chosen to pursue this field.

“How exciting!” she had repeated, again and again, during the course of our conversation.  (“How odd,” I had thought, again and again, that she could understand my enthusiasm and imagine the work I’d be doing better than some intellectual professionals I had encountered.)

I’ve never forgotten these three Londoners; they gave me hope that perhaps the field I’d entered was not so curious – or impractical – after all.

And hope, I remember, was something I greatly needed at the time as a newcomer to London who, being a generally risk-averse, “let’s-weigh-all-the-pros-and-cons-of-a-decision” type of person, had just thrown caution to the wind to move east.  I had left a comfortable position at a local university – you know, one with regular pay and benefits; the kind aspiring Public History students dare to dream about at night – where I had been working for a solid three years, in order to move to (the other) London in pursuit of what seemed a vague dream at best.  To find out that that dream was not so nebulous, not so incomprehensible as feared, when I described it, haltingly, to three strangers, was a thrilling relief.

Since that time, and especially after a most rewarding internship at the fabulous City of Vancouver Archives this past fall (and if you’ve never associated the word “fabulous” with “archives” before, please be forewarned: I fully intend on convincing you of this association in future posts), I’ve learned that the reaction of those three Londoners was not a strange fluke of a sympathetic universe.

The idea that had resonated with them, what I had learned to articulate better by then – the idea of taking specialized knowledge and making it accessible to an uninitiated audience – is one that is not so very impractical at all.  Many, many professionals do it: from doctors and pharmacists who have to communicate important and complex information to those for whom medical language would be gibberish, to programmers and developers who have to work closely with non-technical clients whose vision of a particular application’s functions may not be, let us say, technically sound or practicable.

The ability to translate esoteric knowledge into palatable information – what Public Historians in training must learn to do and do well – is applicable to those fields that deal with the general public.  And, at last count, there are 2,933 fields that do this.  Alright, I just made up that number, but you know what I mean.  Walk into any bookstore, for instance, and you cannot miss the countless books by subject experts written purposely for the layperson.

What perhaps sets a Public Historian’s training apart from, say, that of a doctor or a pharmacist (other than, you know, the fact that we don’t have to deal with cadavers or chemical substances) is that we also learn how to make the knowledge we’re sharing compelling and engaging, not just informative.

Yes, that might mean that we play the entertainer and not just the educator a lot of the time, but we try to be responsible, ethical entertainers.  And also, I think it’s safe to say, (almost) everyone likes to hear a good story – so why not tell one if you can?  History is certainly full of them, just waiting to be told.  How it is told to a general audience – using what words, what images, what methods, what technologies – lies in the province of Public History.

After completing the program at Western and having a chance to translate theory into practice at a local archival institution, I can only say this: I feel privileged to have had the opportunity to venture into the study of Public History as well as to have had the support of family and friends who, if they didn’t altogether understand why I had chosen this particular career path, still cheered me on from a distance – and continue to cheer me on now, as I begin an exciting position as part-time Archivist at the City of Vancouver Archives, responsible for outreach efforts.

And that nervous chuckle that once was a knee-jerk reaction whenever anyone asked me what I was studying?  Gone.  In its place is an enthusiastic determination to get the word out about the value of history to society.  And by the way, I’m fully intent on making history and archives sexy.  But that is for another post altogether.  🙂


“I hope somehow to be able to speak what Manan Ahmed calls ‘future-ese,’ to be able to learn (some of) the language of the programmer over the course of this year so that I can begin to ‘re-imagine’, as Ahmed has exhorted, the old in new ways. I’m excited, if duly daunted, by the prospects.” ~ Quoted from my first blog post, 10 September 2008.

* * * * *
If I ever met Manan Ahmed, whose Polyglot Manifestos I and II were two of the very first assigned readings for our Digital History class, I would let him know that, like any effective manifesto, his inspired me to take a certain course of action this year – to sign up for the role of Programmer for the digital exhibit that the class would be preparing on the work of Dr. William Harvey.

Incidentally, if I ever did meet Manan Ahmed, I would also casually let him know that I hold him entirely responsible for the sleepless nights I had, agonizing over the code for the program I was attempting to write for an interactive exhibit on Harvey.

I might add here that I knew as much about programming as I did about APIs and mashups prior to this year, which is to say, nada.

(Accusatory) jesting aside, I’ve been reflecting on what it has been like to learn programming from scratch over the course of this school year. I was inspired, as mentioned, by Ahmed’s call for the historian to be more than simply a scholar submerged in past-ese without regard for how their studies might be made relevant to a modern audience (i.e. in present-ese) or how they might be re-imagined in the age of mass-digitization (i.e. in future-ese). How compelling was his call for historians to be “socially-engaged scholar[s],” how apt his challenge for us to become polyglots – master “togglers,” if you will, between past-ese, present-ese, and future-ese – apt especially to those of us with public history ambitions, who had entered the program interested in communicating the past to a general audience in new (i.e. digital) ways. [1]

“All that is required,” Ahmed wrote simply (alas, too simply), as a directive for historians willing to venture into the programmer’s world, “is to expand our reading a bit.” [2]

After my eight-month foray into programming, the words “all” and “a bit” in Ahmed’s above statement strike me as just a tad bit understated. I agree that reading was certainly a major part of my process to learn how to program this year: I pored over, highlighted, marked up, and even wrote conversational notes to the authors of my text (such as the occasional “not clear!”). But I think Ahmed might have also mentioned that not only reading but practicing, experimenting, fumbling, failing, and, yes, even agonizing, are all part of the process of learning how to speak some of the programmer’s language.

Like immersion into any new language, programming has its own set of daunting rules to absorb; break any one of them and you won’t be understood – at all. The program simply won’t run. (I don’t know how many error messages I gnashed my teeth at.) As well, like any language, there is always more than one way to say the same thing – and some of them are more “logical,” “eloquent,” or just plain clearer than others; concision and verbosity, I’ve learned, apply as much in the programmer’s world as they do in the writer’s. (I’ve also observed that my tendency to be wordy applies in the world of code too. In fact, I was delighted to learn about the concept of iteration, where lines of repetitive code could be magically – well, okay, mathematically – reduced to a few simple lines, using variables and a certain formula. If only paring down written text were so easy!)
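For the curious, here is roughly what I mean – a minimal sketch only, written here in TypeScript (our actual exhibit code was different, and the function and names below are invented purely for illustration):

```typescript
// A stand-in for whatever repetitive task the program performs;
// hypothetical function, used here only to show the pattern.
function drawLabel(text: string, y: number): void {
  console.log(`Drawing "${text}" at y = ${y}`);
}

// The wordy way – one line per panel, spelled out ten times:
//   drawLabel("Panel 1", 0);
//   drawLabel("Panel 2", 40);
//   ...and so on, eight more times.

// The iterative way – a variable and a simple formula replace them all:
for (let i = 1; i <= 10; i++) {
  drawLabel(`Panel ${i}`, (i - 1) * 40); // position computed from the index
}
```

Ten near-identical lines collapse into two, and changing the spacing means editing one number rather than ten lines.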

Needless to say, I found the immersion into the programmer’s language very challenging. It was challenging (and, I will admit, even harrowing at times) because not only was I trying to accumulate basic knowledge of the new language, I was also brainstorming ideas for an interactive exhibit on Harvey at the same time. In some ways, it felt like I was trying to devise a Shakespearean sonnet in Chinese or with the vocabulary of a second grader (which is pretty much the extent of my vocabulary in Chinese). All I could envision was something rudimentary at best.

It was challenging to design an exhibit as I was learning the new language simply because I did not know if the ideas that I or others had were actually possible, or, more precisely, would be actually possible for me to learn how to do within the time limit. (I also discovered a humorous difference between the kinds of ideas thrown out by those in Programming versus those in non-Programming roles; the “anything is possible” optimism that technology seems to inspire was not so readily exhibited by those of us who had confronted, and would still have to confront, the befuddling intricacies of code.)

Despite all the challenges, uncertainties, and yes, even secret fears that the particular interactive exhibit I was working on might not come to fruition, things worked out. We hosted our Digital Exhibit on Harvey in early April; all programs functioned; no computers crashed (thank goodness). Looking back to September and my reasons for deciding to learn how to program, I think I am glad, after all, that Ahmed had made it sound so simple. With just a bit of reading, he had written coaxingly, the socially-conscious scholar would be well on his or her way to programming, to filling that gap between the public and the past, and between computer scientists and the future of history. If he had spelled out all the emotions one was apt to go through when learning how to program, I’d probably not have taken it on and thus have missed out on learning to speak a new language, on learning to speak in code.

____________________________

[1] Manan Ahmed, “The Polyglot Manifesto I,” Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_i.html.

[2] Manan Ahmed, “The Polyglot Manifesto II,” Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_ii.html.

This week’s Public History readings examine the relationship between history and the environment. Both Rebecca Conard’s and David Glassberg’s articles mention a key idea that environmental historians take for granted: that there is nothing natural about “nature”, nothing inevitable about the way that physical landscapes have evolved over time. The presupposed dichotomy between the urban and the “wild”, between human beings, on the one hand, and the “natural” environment, on the other, is not so clear-cut at all. Rather, as Glassberg and Conard show, individuals, communities, organizations, and governments have played an important (if at times unnoticed or unemphasized) role in shaping the physical landscape. [1]

Both authors point out how the environment has often reflected the heavy hand of human agency in order to make it conform to certain ideas about desirable landscapes. Their discussions of national parks, in particular, suggest that what a landscape does not show is just as important – or even more so – than what it does show. Speaking of national parks in the western United States, Glassberg writes that “the landscapes tourists encountered in these parts, seemingly inhabited only by elk and buffalo, would not have existed if the native peoples had not first been defeated and removed to reservations, and the wildlife populations carefully managed to encourage picturesque megafauna and discourage pesky wolves.” [2] Similarly, Conard mentions how the desire of the US National Park Service to present parks as “pristine” and “uninhabited” spaces was influenced by ideas about the “romantic wilderness”; such an approach to national parks meant that visitors would not see that “these landscapes were ‘uninhabited’ only because U.S. Indian removal policies either had killed the former inhabitants or had relocated them to reservations” [3].

What’s missing from the physical landscape, then, is as instructive as what is apparent to the naked eye. How to convey a landscape’s significance and complexity to a general (and often uninformed) audience, in terms of its cultivated image as well as the absence or removal of elements of its historical development, remains an important task for the public historian. It’s a task that, as Conard strongly suggests, would benefit from discussion and collaboration among those who are intimately involved in preserving and presenting the history of the environment: historic preservationists, environmentalists, and land managers. [4]

In essence, Glassberg’s and Conard’s articles remind me that the landscape is also a source of historical information. It can be “read” as a historical text for insights into the changing values of a community, region, or nation over time. “Landscapes,” as Glassberg writes, “are not simply an arrangement of natural features, they are a language through which humans communicate with one another.” [5] Of course, as the author shows, this language is a complex one, reflecting conflicting interpretations and understandings of the environment. These conflicts also raise important questions about how one conception of the landscape comes to dominate others (and thus to shape its preservation and development in specific ways), requiring us to ask, as Glassberg does, “whose side won out and why?” [6]

___________________________

[1] Rebecca Conard, “Spading Common Ground,” in Public History and the Environment, ed. Martin V. Melosi and Philip V. Scarpino (Florida: Krieger, 2004), 3-22; David Glassberg, “Interpreting Landscapes,” in ibid., 23-36.

[2] Glassberg, 25.

[3] Conard, 6.

[4] Ibid., 4-5, 8.

[5] Glassberg, 29.

[6] Ibid.

When I was studying history as an undergraduate student, I was particularly fascinated by discussions about historiography. Perhaps it was the influence of my English Lit background, but I tended to do close readings of historical accounts, approaching them almost as literary texts that reflected much about the assumptions and attitudes, biases and values of the writer. It was therefore interesting to be asked in certain history classes to analyse the works of historians not primarily for what they revealed about the past, but for what insights they provided about the particular way of doing history that was “in vogue” at the time.

Over the course of this year, I’ve seen how the idea of the present’s imposition on the past is as applicable to public history as it is to traditional, scholarly history. History in the public realm is certainly as much (or perhaps even more) about the present — that is, the “present” of whoever is, or was, writing the history, composing the plaque text, or curating the exhibit, for instance — as it is about the past.

Museums, for example, as Helen Knibb’s article, “‘Present but not Visible’: Searching for Women’s History in Museum Collections,” suggests, do not necessarily present information, in the context of women’s history, about the actual lives and experiences of women from a particular time period. Instead, the artifacts may reveal more about the preoccupations and personal tastes of curators, or about the collecting or donating impulses of those whose items are on display. With regards to the latter, Knibb suggests that women may have simply donated items they thought were important from the standpoint of the museum or of society, rather than in relation to their own experiences. She raises the interesting question of whether “museum collections tell us more about how women collect than how they lived their lives.” [1] Knibb’s article reminds me that museums themselves are constructed sites that are very much influenced by contemporary concerns.

The idea that public history is as much about the time period of the people presenting the history as it is about the history being presented is, I’m sure, hardly startling. But it does remind me of the need which underlies the rationale for these blogs – the need for self-reflexivity. As history students, my peers and I have been trained to read historical accounts critically, with an eye open to their constructed nature, to the ways in which an account reflects the biases of the historian and the preoccupations of his or her time. As public history practitioners, we will have to direct that critical gaze inwards, to assess how our own assumptions and biases are shaping the histories we will help to produce. Moreover, we will also have to negotiate our way through the assumptions and biases of others, who, in the collaborative realm of public history, will also have a stake – sometimes a very substantial one – in the history-making process. Given how contentious history in the public realm can be, not only the need for critical self-reflection but also the ability to practice what Rebecca Conard has called the “art of mediation” [2] are crucial requirements for the practicing public historian.

__________________________

[1] Helen Knibb, “‘Present But Not Visible’: Searching For Women’s History in Museum Collections,” Gender & History 6 (1994): 355, 361-362. The quote is from page 362.

[2] Rebecca Conard, “Facepaint History in the Season of Introspection,” The Public Historian 25, no. 4 (2003): 16. JSTOR, http://www.jstor.org/.

“Would a complete chronicle of everything that ever happened eliminate the need to write history?” — St Andrews final exam question in mediaeval history, 1981

“To give an accurate and exhaustive account of that period would need a far less brilliant pen than mine” — Max Beerbohm

* * * * *

About a year ago, I took a short creative non-fiction course on the topic of writing historical narratives for a general audience. The instructor, Dr. Richard Mackie, emailed the class the above quotes, to stimulate thoughtful reflection about the nature of history and historical writing. (The first quote was actually a question that Dr. Mackie himself encountered as a History student at St. Andrews in the 80s.) These quotes have come to mind lately as I’ve been ruminating about the implications of doing history in a digital age.

The era of the Internet has, I think, made the idea of a “complete chronicle” of our current times more conceivable than ever before. The Web has certainly made it possible for virtually anyone, irrespective of gender, class, ethnicity, etc., to share their thoughts, ideas, photos, videos, even “statuses” (i.e. what one is doing at a precise moment in time) continuously. Provided that all of this electronic data is adequately preserved, there is going to be a vast abundance of information available for anyone a generation or two (or more) down the road who is curious about the interests, opinions, tastes, preoccupations, etc. of ordinary people in our time.

Yet, such information, no matter how detailed, is not the same as history. The chronicling of people’s lives, even on as minute a level as that expressed in an article about “lifelogging” by New Yorker writer, Alec Wilkinson, [1] results only in the production of information. It is the interpretation of that information, the piecing together of disparate parts into a coherent and (hopefully) elegant narrative which pulls out (or, more accurately, constructs) themes and patterns, that transforms it into history, into a meaningful story about the past.

What’s interesting, of course, is that although no historian (I think) would ever claim to write the history on any subject, discussions about the potential of history in the digital age have sometimes suggested that history can be more complete than ever before. The idea of hypertextual history, for instance, where readers of a historical account can click on links leading them to pertinent primary source documents on the topic, say, or to other similar or divergent viewpoints about the particular subject they’re examining, has almost a decentring impact at the same time that it provides more information. It can be easy for readers, I think, to be overwhelmed by the profusion of hyperlinks within a text, and perhaps to never finish reading the actual article to learn the historian’s particular approach to the past.

The beauty of history, I think, is not that it claims to be a complete, exhaustive chronicle that leaves no stone unturned in its examination, but that it presents one angle on the past, a new way of understanding something that is extraordinarily complex and, for that reason, is open to — and I’d even say requires — multiple interpretations. History is, after all, a story as opposed to a record book, a narrative as opposed to mere facts.
______________________________

[1] Wilkinson’s interesting article recounts how computer guru Gordon Bell has been involved in a “lifelogging” experiment, in which he wears a Microsoft-developed device called a SenseCam around his neck that takes continual pictures of his day-to-day experience and allows him to record his thoughts at any given point in time if he so wishes. According to Wilkinson, Bell “collects the daily minutiae of his life so emphatically that he owns the most extensive and unwieldy personal archive of its kind in the world.” Alec Wilkinson, “Remember This? A Project to Record Everything We Do in Life,” The New Yorker.com, May 28, 2007, http://www.newyorker.com/reporting/2007/05/28/070528fa_fact_wilkinson.

Sometimes, I wonder what historians of the future are going to be writing about, when they examine the early twenty-first century. No doubt, the term “digital revolution” is going to creep into more than one monograph of the future about our present-day times. Cultural historians (if cultural history is still in vogue) might also, I think, take some delight in tracing the ways in which Google has entered into modern consciousness. Perhaps they’ll trace the moment when Google ceased to be only a proper noun, when the phrase “Let’s google it!” first appeared, and then flourished, in popular discourse. Or maybe they’ll explore the ways in which Google has become a part of popular culture and everyday life, to the point of inspiring satirical responses expressed in, you guessed it, digital ways.

Here are some anecdotes to help that future cultural historian.

* * * * *

A while ago, a friend told me an amusing story about how the father of one of her friends was confused about the nature of the Internet. He had never used it before (yes, there are still such folks), and he didn’t quite know what it was all about. So, one day, he asked his son to explain, framing his question according to the only term that he was familiar with – or had heard often enough: “Is Google,” he asked innocently, “the Internet?” The son choked back a gasp of unholy laughter, and proceeded to explain the phenomenon of the Internet to his father. However, if he had simplified his response, if he had said that Google was, in a way, the Internet, he may not have been all that wrong.

* * * * *

During Christmas dinner with my family this past winter, Google (of all topics) entered into our conversation. I don’t remember how exactly. All I recall is that my mom, who (yes, it’s true) had never heard of Google before, perked up when she heard the term at the dinner table, probably because of its odd sound. “Google?” she said, brows furrowed, “what is Google?” To that, my dad, without missing a beat, responded (in Chinese) that Google “is the big brother of the Internet.” Now, “big brother” (or “dai lo”) in Cantonese, when used in a figurative sense, simply means someone who is to be respected, some important or dominant figure or force. But I couldn’t help laughing at the Orwellian overtones that my father’s comment had unwittingly implied. He had meant big brother; I, of course, had heard Big Brother, Chinese-style.

* * * * *

Back in September, Dr. Don Spanner, my archival sciences professor, showed the class a video clip called Epic 2015. Its opening lines were captivatingly ambiguous: “It is the best of times,” said the solemn narrator, “It is the worst of times.” We were entranced by the video’s fictitious yet somewhat chilling projection of the world in 2015, which involved no less than the merging of two powerful companies (Google and Amazon) to become Googlezon, an entity whose information-making and dissemination power had reduced even the might of the New York Times. At the end of the clip, Don joked that the first time he watched it, he just wanted to sit in a corner and stare at paper for a long, long time. We all laughed – and, perhaps, shivered inside a bit too.

Subsequently, I mentioned the clip to a friend, remarking how it was so interesting to see just how big Google had become, as evidenced by the fact that it was inspiring such responses as Epic 2015 with its subtle questioning of the Google empire and its cultural hegemony. My friend in turn enlightened me further about other similar responses. He asked if I had ever heard of “The Googling.” I hadn’t. So he emailed me links to several clips on YouTube, which explore Google’s services (such as their mapping devices) in a new – and, of course, hilariously sinister – way. To view them…simply google “The Googling.” 🙂 (There are five parts.)

* * * * *

To the cultural historian of the future:

It was true. Google was (is?) ubiquitous, to the point that it entered into dinner table conversations and was mistaken (or correctly identified?) for the Internet. Even to the point of inspiring satirical YouTube clips and prophetic visions of a Google-ized world. That is, of course, when you know something is big – when it becomes the subject of cultural humour and unease, negotiated and even resisted in satirical ways.

So, we embraced Google even while scrutinizing it at arm’s length. We questioned Google even while googling. It’s what we did in the early twenty-first century.

One of my best friends and I have a tendency to reminisce about our shared experiences. During these (sometimes admittedly nostalgic) moments of looking back, I am always amazed at the different things that have stood out for each of us – a telling word, gesture, expression that I or she would not have ever recalled without the presence of the other.

In a way, then, my friend and I help make each other’s history more complete by remembering details that the other has forgotten. In a way too, it means that the past – or that particular version being remembered in bits and pieces – becomes quite spontaneous for us, entirely dependent on the course of the conversation, on the ebb and flow of memory on that particular day. Reminiscing about the same experience with my friend years later, I find that other aspects surface; the past is, one might say, renewed and re-created in each instance of remembrance, a mental landscape that is both familiar and yet full of surprising colour too.

I think one of the interesting aspects of conducting oral history interviews – which I had the privilege of doing recently with one of the former staff members at a local health care institution – is observing that very organic and spontaneous process of memory in play. While I, of course, did not share in any experiences of my interviewee, bringing only my knowledge of certain aspects of the history of the institution to the table, it was interesting to see how certain memories surfaced for her based on the flow of the conversation.

My understanding of this institution’s history informed the questions that I prepared. Yet the interview was by no means confined to these questions. They became starting points, triggering memories of other aspects of my interviewee’s experience – ones that I had not thought in advance to ask about and perhaps ones that she had not revisited until that moment in time. Another day, another interviewer, would undoubtedly bring other memories to the surface, revealing new pieces of a multifaceted history that can be tapped and reconfigured in so many ways.

And speaking about fragments of the past, I left the interview with an unexpected piece of history – literally. My interviewee was excited and eager to give me a brick that she had kept from the first building of her former work place, constructed in the late 19th century. Embedded with the shape of an animal, it now sits at the foot of my desk, a tangible piece of the past that stands in contrast to the transience and spontaneity of memory.

[Photo: “dream” – Steveston, British Columbia]

Photography is one of those activities that I can lose myself completely in. The hours I spend on it are time freely given (and hardly felt). Although I’ve been lucky enough to capture a few photographs that I’m pleased with (including the one above, which was modestly granted an honorable mention in the Geography Department’s fundraising contest for United Way), I’ve always considered myself just a tinkerer of sorts. A dabbler, if you will, whose yearning to be “artistic” has been mostly helped by technology. (I credit my Nikon camera completely for any good shots.)

* * * * *

As it turns out, the digital age is apparently very amenable to those with tinkering and dabbling tendencies.

That, at least, was the (hopeful) sense that I got from reading Jeff Howe’s article on “The Rise of Crowdsourcing.” In it, Howe traces the ways in which companies are tapping into “the latent talent of the crowd.” He brings up the example of iStockphoto, a company that sells images shot by amateur photographers – those who do not mind (who, in fact, I’m guessing, would be thrilled about) making a little bit of money doing what they already do in their spare time: take pictures.

According to Howe, the increasing affordability of professional-grade cameras and the assistance of powerful editing software like Photoshop means that the line between professional and amateur work is no longer so clear-cut. Add to that the sharing mechanisms of the Internet, and the fact that photographs taken by amateurs sell for a much lower price than those of professionals, and it seems inevitable that some ingenious person would have thought up a way to apply crowdsourcing to stock photography sooner or later.

Howe provides an even more striking example of how the expertise of the crowd is being plumbed these days. Corporations like Procter and Gamble are turning to science-minded hobbyists and tinkerers to help them solve problems that are stumping their R&D departments. Howe mentions the website InnoCentive as one example of the ways in which companies with a problem and potential problem-solvers are finding each other on the web: the former post their most perplexing scientific hurdles on the site and anyone who is part of the network can then take a stab at solving the problem. If they do, they are handsomely compensated. And a good number, in fact, do. According to InnoCentive’s chief scientific officer, Jill Panetta, 30% of all problems posted on their website have been solved. That is, to quote Panetta’s words, “30 percent more than would have been solved using a traditional, in-house approach.”

What’s intriguing about all of this is the fact that the solvers, as Howe says, “are not who you might expect.” They may not necessarily have formal training in the particular field in which the problem arises; their specialty may lie in another area altogether. Yet, it is this very diversity of expertise within the crowd of hobbyists that contributes to the success of such networks as InnoCentive. As Howe puts it, “the most efficient networks are those that link to the broadest range of information, knowledge, and experience.” The more disparate the crowd, in other words, the stronger the network. [1] I love the ironies of the digital age.

* * * * *

I’ve been wondering lately about whether history could benefit at all from the diverse knowledge and background of the crowd, whether crowdsourcing – posting a problem or request out in the virtual world in the hopes that someone might have the expertise to be able to fulfill it – could apply to a non-scientific discipline.

In other words, would a History version of InnoCentive work? A network where historical researchers could poll the crowd for information or materials or insight to help fill research gaps…where they could tap into the memories, artifacts, anecdotes, records, ephemera, (and even the ways people understand the past) of a diverse group and thereby possibly access information that might have never made it into the archives for formal preservation? How would the writing and construction of history change if, instead of primarily drawing upon the 5 to 10% of all records that ever make their way into an archives, researchers could tap into the personal archives of a disparate crowd made up of the “broadest range of information, knowledge, and experience”? (Let us put aside, for the moment, the issues of the integrity of the record and its provenance when we talk about “personal archives.” I realize that the shoebox in the attic is not nearly as reassuring a sight as the Hollinger box of the archives.) It seems probable to me that some of the 90 to 95% of records that never make their way into an archival institution are still extant, and that there could be valuable research material in them that could very well change one’s argument about the past. Would crowdsourcing be one way to get at that material?

* * * * *

P.S. Of course, I just realized that I’m raising the above questions without considering a crucial aspect in all the examples of crowdsourcing that Howe mentioned: money. Those who answered the call – whether the amateur photographer or the scientific tinkerer – were paid for their services (ranging from a dollar all the way to an impressive $25,000). To pay someone for a piece of history raises a whole other set of questions…

__________________________

[1] Jeff Howe, “The Rise of Crowdsourcing,” Wired, http://www.wired.com/wired/archive/14.06/crowds.html.

(Or: One Disgruntled User’s Frustrations with Printers)

From the moment that I laid eyes on it, I should have known that I was in for trouble. After all, as the old saying goes, if it seems too good to be true, it probably is.

* * * * *

Here is how the story goes.

Back in September, I decided to purchase a printer. I’m the kind of person who finds reading on the computer for extended periods of time difficult. Despite numerous attempts to read on-screen, I still prefer the physicality of text. My eyes find pleasure and relief in the printed word. (You’ll not, in other words, find me curling up with a Kindle any time soon.)

Anyways, back in September, after a great deal of difficulty that involved no less than lugging a large demo Epson printer on the bus and accidentally denting it a few times during my trip home (it was box-less, carried in two plastic bags, and ridiculously heavy), I managed to transport and set up this beast of a printer in my apartment. It was an all-in-one contraption, able to scan, fax, print, and copy in colour. And it was, amazingly, only $34.99 at Best Buy. A veritable deal.

Within a few weeks, I had used up the complimentary black ink and had to purchase a new cartridge, which ran out in a ridiculously short amount of time (even though I bought the “heavy usage,” i.e. more costly, one). In mid-November, the black ink had run out again. After dishing out yet another $25 and installing the new cartridge, I discovered that the complimentary colour ink – which I had used to print, perhaps, five documents all term – had somehow run out as well. That’s when I realized that the Age of the New Printers means that everything shuts down when the colour ink is deemed to be empty. The machine’s printing capabilities simply cease to function. All the black ink in the world will not get it to print a single page.

As it was near the end of the term, I simply decided to print all documents at school rather than deal with the fuss – and cost – of getting new ink. In hindsight, a perspective which all historians will always have at their disposal, that was a mistake.

* * * * *

About a week ago, I finally had a chance to pick up some colour ink cartridges (the kind for “moderate usage” only), installed them eagerly into my dusted-off printer, and looked forward to the convenience that modern technology would again afford. (I would not have to go to the computer lab at all hours now just to print readings.)

That’s when I realized that modern technology does not always work in one’s favour. The document I printed, which was simply set in black, came out starkly, aggravatingly, white. The black ink cartridge must have dried out over Christmas break. So, it appeared that I had just shelled out $35 for colour ink in order to be able to access black ink that was no longer operative.

This evening, however, I tried to print a document again, in black, just to see if something miraculous might ensue. And it did. The machine managed to cough out a document in black ink. The printout was extremely irregular in quality – numerous lines were missing – but at least the page was no longer blank. Eager (and slightly foolish), I printed the document several more times, thinking perhaps that persistence would pay off. Although the quality did improve, the document was still quite spotty in areas. That’s when (again, eager and foolish) I decided to clean the print heads, despite the warning that this function would consume ink. I then printed a sample document. The result was much better, but still not perfect. However, I discovered with shock that running the cleaning function had used up half of the new colour ink cartridges, according to the meter.

I now had two choices:

1) Run another print head cleaning (which would probably use up the rest of the colour ink – which I had not, in any real sense, even used), or

2) Give up on the black ink cartridge completely (which was also unused, except for the test printouts), and shell out more money for black ink.

(Why I didn’t decide to unplug the printer then and there, affix a sign to it that said “Free – completely, utterly free”, and put it out in the hallway of my apartment is a question I still have not answered satisfactorily.)

Instead, I had a flash of inspiration. History came to my rescue: I remembered that I had owned an Epson printer before. It had run out of colour ink before. And I had “fooled” it before – by going through all the on-screen steps of installing a new colour cartridge and “charging” the ink, without actually doing so. With the colour ink meter subsequently indicating that it was “full”, I could then finish troubleshooting the problem that I was dealing with at the time, which required, finicky machine that it was, colour ink.

Cheered by this memory of the not-quite-so-smart-nor-sharp machine, I decided to run another print head cleaning tonight. Sure enough, this nearly “used” up all the remaining colour ink. I printed a few more test documents in black; their quality was improved somewhat, but certain lines were still blanking out. I then tried to run one more cleaning function, but the printer told me that I couldn’t – there was not enough colour ink to support it. In fact, the warning of low ink was now replaced with the message that there was no more colour ink. (Apparently, even attempting to run a print head cleaning uses up the ink, by the printer’s standards.)

Confident, however, that I could fool the machine, I then proceeded to go through the cartridge-replacement steps, clicking the “Finish” button at the end with a flourish. The printer responded by humming and telling me that “ink charging” was occurring. I smiled – and then, I frowned. A box had popped up indicating that the replaced cartridges were, in techno-lingo, “expended.”

In other words – the machine knew. It was telling me that I could not fool it. It could detect that I had not actually fed it anything new, despite my subsequent actions of physically removing and re-inserting the colour ink cartridges, which, I have to add again, were not really used, but were indelibly (and one might even say ingeniously) branded so by the machine. Such crafty labelling had made the colour ink inoperative.

Welcome, friends, to the Age of the (Aggravatingly) Smart Machine.

* * * * *

P.S. Sixty dollars invested in printer cartridges since mid-November, and I have still not been able to print a single document for any useful purpose.

If it were still in vogue to be a Marxist historian, I would seriously point to the ridiculously profitable economics underlying printer design as the source of all our (or at least my) present-day ills!