“I hope somehow to be able to speak what Manan Ahmed calls “future-ese,” to be able to learn (some of) the language of the programmer over the course of this year so that I can begin to “re-imagine”, as Ahmed has exhorted, the old in new ways. I’m excited, if duly daunted, by the prospects.” ~ Quoted from my first blog post, 10 September 2008.

* * * * *
If I ever met Manan Ahmed, whose Polyglot Manifestos I and II were two of the very first assigned readings for our Digital History class, I would let him know that, like any effective manifesto, his inspired me to take a certain course of action this year – to sign up for the role of Programmer for the digital exhibit that the class would be preparing on the work of Dr. William Harvey.

Incidentally, if I ever did meet Manan Ahmed, I would also casually let him know that I hold him entirely responsible for the sleepless nights I had, agonizing over the code for the program I was attempting to write for an interactive exhibit on Harvey.

I might add here that I knew as much about programming as I did about APIs and mashups prior to this year, which is to say, nada.

(Accusatory) jesting aside, I’ve been reflecting on what it has been like to learn programming from scratch over the course of this school year. I was inspired, as mentioned, by Ahmed’s call for the historian to be more than simply a scholar submerged in past-ese, without regard for how their studies might be made relevant to a modern audience (i.e. in present-ese) or how they might be re-imagined in the age of mass digitization (i.e. in future-ese). How compelling was his call for historians to be “socially-engaged scholar[s],” and how apt his challenge for us to become polyglots – master “togglers,” if you will, between past-ese, present-ese, and future-ese – especially for those of us with public history ambitions, who had entered the program interested in communicating the past to a general audience in new (i.e. digital) ways. [1]

“All that is required,” Ahmed wrote simply (alas, too simply), as a directive for historians willing to venture into the programmer’s world, “is to expand our reading a bit.” [2]

After my eight-month foray into programming, the words “all” and “a bit” in Ahmed’s statement strike me as just a tad understated. I agree that reading was certainly a major part of my process of learning to program this year: I pored over, highlighted, marked up, and even wrote conversational notes to the authors of my text (such as the occasional “not clear!”). But I think Ahmed might also have mentioned that not only reading but practicing, experimenting, fumbling, failing, and, yes, even agonizing are all part of the process of learning how to speak some of the programmer’s language.

Like immersion in any new language, programming has its own set of daunting rules to absorb; break any one of them and you won’t be understood – at all. The program simply won’t run. (I don’t know how many error messages I gnashed my teeth at.) As well, like any language, there is always more than one way to say the same thing – and some of them are more “logical,” “eloquent,” or just plain clearer than others; concision and verbosity, I’ve learned, apply as much in the programmer’s world as they do in the writer’s. (I’ve also observed that my tendency to be wordy applies in the world of code too. In fact, I was delighted to learn about the concept of iteration, where lines of repetitive code could be magically – well, okay, mathematically – reduced to a few simple lines, using variables and a certain formula. If only paring down written text were so easy!)
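To make the idea of iteration concrete for any fellow beginners, here is a minimal sketch. The post doesn’t say which language the Harvey exhibit was written in, so Python stands in purely for illustration; the point is only that a loop and a variable replace the copied-and-pasted lines.

```python
# Hypothetical illustration only: the exhibit's actual language and code are
# not described in the post. Python is used here as a stand-in.

# The wordy way: one nearly identical line per panel.
print("Loading Harvey exhibit panel 1")
print("Loading Harvey exhibit panel 2")
print("Loading Harvey exhibit panel 3")
print("Loading Harvey exhibit panel 4")

# The iterative way: a variable and a loop do the repeating.
for panel in range(1, 5):
    print(f"Loading Harvey exhibit panel {panel}")
```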

Needless to say, I found the immersion in the programmer’s language very challenging. It was challenging (and, I will admit, even harrowing at times) because not only was I trying to accumulate basic knowledge of the new language, but I was also brainstorming ideas for an interactive exhibit on Harvey at the same time. In some ways, it felt like trying to compose a Shakespearean sonnet in Chinese with the vocabulary of a second grader (which is pretty much the extent of my vocabulary in Chinese). All I could envision was something rudimentary at best.

It was challenging to design an exhibit as I was learning the new language simply because I did not know if the ideas that I or others had were actually possible, or, more precisely, would actually be possible for me to learn how to do within the time limit. (I also discovered a humorous difference between the kinds of ideas thrown out by those in Programming versus those in non-Programming roles; the “anything is possible” optimism that technology seems to inspire was not so readily exhibited by those of us who had confronted, and would still have to confront, the befuddling intricacies of code.)

Despite all the challenges, uncertainties, and yes, even secret fears that the particular interactive exhibit I was working on might not come to fruition, things worked out. We hosted our Digital Exhibit on Harvey in early April; all programs functioned; no computers crashed (thank goodness). Looking back to September and my reasons for deciding to learn how to program, I think I am glad, after all, that Ahmed had made it sound so simple. With just a bit of reading, he had written coaxingly, the socially-conscious scholar will be well on his or her way to programming, to filling that gap between the public and the past, and between computer scientists and the future of history. If he had spelled out all the emotions one was apt to go through when learning how to program, I’d probably not have taken it on and thus have missed out on learning to speak a new language, on learning to speak in code.

____________________________

[1] Manan Ahmed, “The Polyglot Manifesto I,” Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_i.html.

[2] Manan Ahmed, “The Polyglot Manifesto II,” Chapati Mystery, http://www.chapatimystery.com/archives/univercity/the_polyglot_manifesto_ii.html.

dream

Steveston, British Columbia

Photography is one of those activities that I can lose myself completely in. The hours I spend on it are freely given (and hardly felt). Although I’ve been lucky enough to capture a few photographs that I’m pleased with (including the one above, which was modestly granted an honorable mention in the Geography Department’s fundraising contest for United Way), I’ve always considered myself just a tinkerer of sorts. A dabbler, if you will, whose yearning to be “artistic” has been mostly helped by technology. (I credit my Nikon camera completely for any good shots.)

* * * * *

As it turns out, the digital age is apparently very amenable to those with tinkering and dabbling tendencies.

That, at least, was the (hopeful) sense that I got from reading Jeff Howe’s article on “The Rise of Crowdsourcing.” In it, Howe traces the ways in which companies are tapping into “the latent talent of the crowd.” He brings up the example of iStockphoto, a company that sells images shot by amateur photographers – those who do not mind (who, in fact, I’m guessing, would be thrilled about) making a little bit of money doing what they already do in their spare time: take pictures.

According to Howe, the increasing affordability of professional-grade cameras and the assistance of powerful editing software like Photoshop mean that the line between professional and amateur work is no longer so clear-cut. Add to that the sharing mechanisms of the Internet, and the fact that photographs taken by amateurs sell for a much lower price than those of professionals, and it seems inevitable that some ingenious person would have thought up a way to apply crowdsourcing to stock photography sooner or later.

Howe provides an even more striking example of how the expertise of the crowd is being plumbed these days. Corporations like Procter and Gamble are turning to science-minded hobbyists and tinkerers to help them solve problems that are stumping their R&D departments. Howe mentions the website InnoCentive as one example of the ways in which companies with a problem and potential problem-solvers are finding each other on the web: the former post their most perplexing scientific hurdles on the site and anyone who is part of the network can then take a stab at solving the problem. If they do, they are handsomely compensated. And a good number, in fact, do. According to InnoCentive’s chief scientific officer, Jill Panetta, 30% of all problems posted on their website have been solved. That is, to quote Panetta’s words, “30 percent more than would have been solved using a traditional, in-house approach.”

What’s intriguing about all of this is the fact that the solvers, as Howe says, “are not who you might expect.” They may not necessarily have formal training in the particular field in which the problem arises; their specialty may lie in another area altogether. Yet, it is this very diversity of expertise within the crowd of hobbyists that contributes to the success of such networks as InnoCentive. As Howe puts it, “the most efficient networks are those that link to the broadest range of information, knowledge, and experience.” The more disparate the crowd, in other words, the stronger the network. [1] I love the ironies of the digital age.

* * * * *

I’ve been wondering lately about whether history could benefit at all from the diverse knowledge and background of the crowd, whether crowdsourcing – posting a problem or request out in the virtual world in the hopes that someone might have the expertise to be able to fulfill it – could apply to a non-scientific discipline.

In other words, would a History version of InnoCentive work? A network where historical researchers could poll the crowd for information or materials or insight to help fill research gaps…where they could tap into the memories, artifacts, anecdotes, records, ephemera, (and even the ways people understand the past) of a diverse group and thereby possibly access information that might have never made it into the archives for formal preservation? How would the writing and construction of history change if, instead of primarily drawing upon the 5 to 10% of all records that ever make their way into an archives, researchers could tap into the personal archives of a disparate crowd made up of the “broadest range of information, knowledge, and experience”? (Let us put aside, for the moment, the issues of the integrity of the record and its provenance when we talk about “personal archives.” I realize that the shoebox in the attic is not nearly as reassuring a sight as the Hollinger box of the archives.) It seems probable to me that some of the 90 to 95% of records that never make their way into an archival institution are still extant, and that there could be valuable research material in them that could very well change one’s argument about the past. Would crowdsourcing be one way to get at that material?
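Just to make the daydream a little more concrete: here is a minimal sketch, in Python, of what a single request on such a hypothetical “History InnoCentive” might look like as a data structure. Every name and field below is invented for illustration – nothing in Howe’s article or in this post specifies such a platform – but even a toy model has to face the provenance question raised above.

```python
# A toy data model for a hypothetical "History InnoCentive". All names are
# invented for illustration; no such platform is described in the source.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contribution:
    contributor: str    # a member of the crowd
    description: str    # letter, photograph, anecdote, ephemera...
    provenance: str     # the "shoebox in the attic" question: where did it come from?

@dataclass
class ResearchRequest:
    researcher: str     # who is asking
    topic: str          # e.g. "daily life in a 1920s cannery town"
    gap: str            # what the formal archives could not supply
    responses: List[Contribution] = field(default_factory=list)

request = ResearchRequest(
    researcher="a public historian",
    topic="daily life in a 1920s cannery town",
    gap="no photographs of workers' housing survive in the institutional archives",
)
request.responses.append(
    Contribution(
        contributor="a retired resident",
        description="family snapshots of company housing, c. 1925",
        provenance="inherited photo album, uncatalogued",
    )
)
print(len(request.responses), "contribution(s) so far")
```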

* * * * *

P.S. Of course, I just realized that I’m raising the above questions without considering a crucial aspect in all the examples of crowdsourcing that Howe mentioned: money. Those who answered the call – whether the amateur photographer or the scientific tinkerer – were paid for their services (ranging from a dollar all the way to an impressive $25,000). To pay someone for a piece of history raises a whole other set of questions…

__________________________

[1] Jeff Howe, “The Rise of Crowdsourcing,” Wired, http://www.wired.com/wired/archive/14.06/crowds.html.

(Or: One Disgruntled User’s Frustrations with Printers)

From the moment that I laid eyes on it, I should have known that I was in for trouble. After all, as the old saying goes, if it seems too good to be true, it probably is.

* * * * *

Here is how the story goes.

Back in September, I decided to purchase a printer. I’m the kind of person who finds reading on the computer for extended periods of time difficult. Despite numerous attempts to read on-screen, I still prefer the physicality of text. My eyes find pleasure and relief in the printed word. (You’ll not, in other words, find me curling up with a Kindle any time soon.)

Anyways, back in September, after a great deal of difficulty that involved no less than lugging a large demo Epson printer on the bus and accidentally denting it a few times during my trip home (it was box-less, carried in two plastic bags, and ridiculously heavy), I managed to transport and set up this beast of a printer in my apartment. It was an all-in-one contraption, able to scan, fax, print, and copy in colour. And it was, amazingly, only $34.99 at Best Buy. A veritable deal.

Within a few weeks, I had used up the complimentary black ink and had to purchase a new cartridge, which ran out in a ridiculously short amount of time (even though I bought the “heavy usage,” i.e. more costly, one). In mid-November, the black ink had run out again. After dishing out yet another $25 and installing the new cartridge, I discovered that the complimentary colour ink – which I had used to print perhaps five documents all term – had somehow run out as well. That’s when I realized that the Age of the New Printers means that everything shuts down when the colour ink is deemed to be empty. The machine’s printing capabilities simply cease to function. All the black ink in the world will not get it to print a single page.

As it was near the end of the term, I simply decided to print all documents at school rather than deal with the fuss – and cost – of getting new ink. In hindsight, a perspective which all historians will always have at their disposal, that was a mistake.

* * * * *

About a week ago, I finally had a chance to pick up some colour ink cartridges (the kind for “moderate usage” only), installed them eagerly into my dusted-off printer, and looked forward to the convenience that modern technology would again afford. (I would not have to go to the computer lab at all hours now just to print readings.)

That’s when I realized that modern technology does not always work in one’s favour. The document I printed, which was simply set in black, came out starkly, aggravatingly, white. The black ink cartridge must have dried out over Christmas break. So, it appeared that I had just shelled out $35 for colour ink in order to be able to access black ink that was no longer operative.

This evening, however, I tried to print a document again, in black, just to see if something miraculous might ensue. And it did. The machine managed to cough out a document in black ink. The printout was extremely irregular in quality – numerous lines were missing – but at least the page was no longer blank. Eager (and slightly foolish), I printed the document several more times, thinking perhaps that persistence would pay off. Although the quality did improve, the document was still quite spotty in areas. That’s when (again, eager and foolish) I decided to clean the print heads, despite the warning that this function would consume ink. I then printed a sample document. The result was much better, but still not perfect. However, I discovered with shock that running the cleaning function had used up half of the ink in the new colour cartridges, according to the meter.

I now had two choices:

1) Run another print head cleaning (which would probably use up the rest of the colour ink – which I had not, in any real sense, actually used), or

2) Give up on the black ink cartridge completely (which was also unused, except for the test printouts), and shell out more money for black ink.

(Why I didn’t decide to unplug the printer then and there, affix a sign on it that said “Free – completely, utterly free”, and put it out in the hallway of my apartment is something I still have not answered satisfactorily.)

Instead, I had a flash of inspiration. History came to my rescue: I remembered that I had owned an Epson printer before. It had run out of colour ink before. And I had “fooled” it before – by going through all the on-screen steps of installing a new colour cartridge and “charging” the ink, without actually doing so. With the colour ink meter subsequently indicating that it was “full”, I could then finish troubleshooting the problem that I was dealing with at the time, which required, finicky machine that it was, colour ink.

Cheered by this memory of the not-quite-so-smart-nor-sharp machine, I decided to run another print head cleaning tonight. Sure enough, this nearly “used” up all the remaining colour ink. I printed a few more test documents in black; their quality improved somewhat, but they were still blanking out at certain lines. I then tried to run one more cleaning function, but the printer told me that I couldn’t – there was not enough colour ink to support it. In fact, the low-ink warning was now replaced with the message that there was no more colour ink. (Apparently, even just attempting to run a print head cleaning uses up ink, by the printer’s standards.)

Confident, however, that I could fool the machine, I then proceeded to go through the cartridge-replacement steps, clicking the “Finish” button at the end with a flourish. The printer responded by humming and telling me that “ink charging” was occurring. I smiled – and then, I frowned. A box had popped up indicating that the replaced cartridges were, in techno-lingo, “expended.”

In other words – the machine knew. It was telling me that I could not fool it. It could detect that I had not actually fed it anything new, despite my subsequent actions of physically removing and re-inserting the colour ink cartridges, which, I have to add again, were not really used, but were indelibly (and one might even say ingeniously) branded so by the machine. Such crafty labelling had made the colour ink inoperative.

Welcome, friends, to the Age of the (Aggravatingly) Smart Machine.

* * * * *

P.S. Sixty dollars invested in printer cartridges since mid-November, and I have still not been able to print a single document for any useful purpose.

If it were still in vogue to be a Marxist historian, I would seriously point to the ridiculously profitable economics underlying printer design as the source of all our (or at least my) present-day ills!

Personalized Utopia or Orwellian Dystopia?

“Search engines will become more pervasive than they already are, but paradoxically less visible, if you allow them to personalise increasingly your own search experience.” [1]

“If you’re really concerned about technology, however, remember that it has the most potential to be dangerous when you stop seeing it.” [2]

* * * * *

In my last Digital History class, I (very casually) threw out the term “Orwellian dystopia” into the pool of ideas and concepts for potential discussion. I threw it out a bit in jest (because I have a declared weakness for polysyllabic words), but mostly in earnest (because, as indicated in my last post, I have had cause to think about Orwell and 1984 lately). The term isn’t mine of course, but comes out of one of the readings for the week: Phil Bradley’s “Search Engines: Where We Were, Are Now, and Will Ever Be.”

As its title clearly suggests, Bradley’s article traces the evolution of search engines, from their rather crude beginnings, when web design wasn’t yet a consideration, to their present-day, sophisticated forms, which promise to make our searches more personally relevant than ever before. Ruminating on the potential for search engines to get to know us individually – to the point of recommending events that we (as in you specifically, or me specifically) might wish to attend when visiting a new city, or telling us whether the supermarket down the road has the same item we want, only cheaper – Bradley makes the point about the pervasiveness and increasing invisibility of search engines that forms the first of the two opening quotes above. He then wonders if the ways in which users are sitting back, letting the alerting services of search engines bring custom-made information to them – “since the engines can monitor what you do and where you go” – will lead to an “Orwellian dystopia” of sorts. Bradley’s advice for avoiding such a dystopia? “Users will need to consider very carefully,” he writes, “exactly to what extent they let search engines into their lives.”

Bradley’s point about the expansive, yet increasingly invisible, nature of search engines fits nicely with some of the ideas articulated in a blog post by my Digital History prof, Dr. William Turkel (or Bill, as he would wish us to call him). In this post, entitled “Luddism is a Luxury You Can’t Afford” (which, I might add, graciously avoids lambasting latter-day Luddites but seeks instead to understand them), Bill considers what it is exactly that neo-Luddites are objecting to when they consider technology. Drawing on Heidegger’s distinction between ready-to-hand and present-at-hand objects, Bill points out that it is the second group that poses problems for those uncomfortable with technology. This is simply because these objects are always visible and mostly intrusive – “something you have to deal with.”

Meanwhile, ready-to-hand things are invisible, unnoticed, and therefore accepted as a natural – rather than a technological – part of existence (the coffee cup, electricity, even the iPod). However, “these invisible and pervasive technologies,” Bill notes, in the same vein as Bradley, “are exactly the ones that humanists should be thinking about…because they have the deepest implications for who and what we are.” The post ends with the words I quoted above, about invisibility being the most potentially “dangerous” aspect of technology.

* * * * *

I have been ruminating lately on the idea of transparency on the Web, and it seems to me that there is a strange sort of tension that exists.

On the one hand, as Kevin Kelly’s talk has shown (discussed in the previous post), users are required to be transparent in cyberspace, if they wish to have any sort of personalization. The Web can’t make recommendations if it doesn’t know you, if your searching patterns, socializing patterns, buying patterns, browsing patterns are not visible.

On the other hand, the Web’s very collection of this information, and the ways in which it presents you with a set of personalized results, are becoming less visible, as Bradley has argued. One might actually put this another way and say that the Web, search engines included, is becoming more transparent – not in making itself visible, but in making itself invisible. It is becoming so transparent that, like spotlessly clear glass, users cannot see that there is something mediating between them and the information world out there, and so they might be tempted to conclude that the search results they’ve gotten are natural and obvious results showing the most naturally important and self-evident links.

In our last Digital History class, it was Rob, I believe, who brought up the issue of Google search results and the problem that they can be very limiting – without the user realizing that they are limiting. Since, as Dan Cohen has said, Google is the first resource that most students go to for research (at least, the graduate students he polled do), [3] the results it presents may very well define how a subject is understood. The danger, then, is that users won’t be critical of their search results, and of how they may be tailored or skewed based on numerous factors, because they don’t even realize that such mediation is taking place. Thus, invisibility, as Bill has noted, is a definite problem. And, I have to wonder as well, in terms of research, if personalization is too. Will one’s understanding of World War One, say, or of John A. Macdonald, or Joseph Stalin, or Mary Wollstonecraft be influenced by one’s buying patterns? Music interests? Socializing habits? If it’s to be accepted that students start their research using Google, and they are signing into Google’s services when conducting such research, what implications does this have for their understanding of history? For the writing of history?
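To picture what that kind of invisible mediation might look like – and this is only a toy sketch, not a claim about how Google or any real search engine actually ranks results – imagine results with similar base relevance being quietly reordered by a profile built from a user’s visible behaviour:

```python
# A toy sketch of personalized re-ranking. This is NOT how any real search
# engine works; the scoring rule and the numbers are invented for illustration.

def personalized_rank(results, profile):
    """Order results by base relevance plus a small boost for profile interests."""
    def score(result):
        boost = sum(0.1 for interest in profile if interest in result["topics"])
        return result["relevance"] + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Wollstonecraft and political philosophy", "relevance": 0.80, "topics": ["philosophy"]},
    {"title": "Wollstonecraft biopic and its soundtrack", "relevance": 0.78, "topics": ["film", "music"]},
]

# Two users, two browsing histories, two different "obvious" top results
# for the very same query.
print(personalized_rank(results, {"philosophy"})[0]["title"])
print(personalized_rank(results, {"film", "music"})[0]["title"])
```

In each case the user sees only a tidy list; the profile doing the reordering never appears on the screen.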

So to try to tie together the strands of these last two posts: it seems that transparency on the Web, in the search engines that exist, is rooted in their invisibility – such engines are a window to the information world that’s easily mistaken for an unmediated glimpse of the real world itself – while transparency of users means their utter visibility and machine-readability. I agree with Bradley and Bill that we shouldn’t take invisible technologies for granted – they need to be explored, critiqued, and discussed; made visible, in other words – so that users can decide for themselves how far they want to go in the age of personalization.

* * * * *

P.S.

I feel compelled to add that to approach these issues from a questioning stance is not to reject the benefits of search engines, or of personalization, or of the Web, or of technology at all. (I, for one, recently ordered a book from Amazon based on its recommendation – or, more precisely, because its very first recommendation was a book that a friend had already suggested I read; that Amazon was in line with someone who knows me well amazed me enough to purchase the book!) The issue is never simply, I think, technology in and of itself. It is the uses of technology – how new capabilities, especially in the digital age, are going to be employed, and what their development and effects might mean (and have already meant) for individuals and groups in society – that is the crucial issue at hand.

To conclude, I think Marshall McLuhan’s classic metaphor about the dual nature of technology – how it both “extends” and “amputates” – is still relevant and instructive today. It seems that in most discussions about technological innovation, we nearly always hear about “extensions” (Kelly did the same thing; though interestingly, he went so far as to reverse McLuhan’s idea, calling humans the extension of the machine) but we rarely hear about “amputations.” Perhaps a balanced approach – one that keeps visible both the advantages and disadvantages of new technologies, that considers their impact in a broad sense, neither blindly fearing them because they are new nor unreservedly embracing them for the same reason – is the way to ensure that we remain critical in the digital age.

_______________________________

[1] Phil Bradley, “Search Engines: Where We Were, Are Now, and Will Ever Be,” Ariadne Magazine 47 (2006), http://www.ariadne.ac.uk/issue47/search-engines/.

[2] William J. Turkel, “Luddism is a Luxury You Can’t Afford,” Digital History Hacks, http://digitalhistoryhacks.blogspot.com/2007/04/luddism-is-luxury-you-cant-afford.html.

[3] Daniel J. Cohen, “The Single Box Humanities Search,” Dan Cohen’s Digital Humanities Blog, http://www.dancohen.org/2006/04/17/the-single-box-humanities-search/.

Courtyard of Glendon College

During the last weekend in September, I had the good fortune to be able to attend my first – I hesitate to call it this, for reasons that should become apparent – academic conference hosted at the Glendon College campus of York University in Toronto.

It’s true that academics organized this conference – several energetic PhD students from York and the University of Toronto spent over a year putting it together. It’s also true that a good number of academics attended it – there were scholars from universities across Canada and even from the States. And, in terms of organization, scholarship, presentation, and professionalism, I am sure the conference rivalled any set within the academy. But to call this particular gathering an academic conference is to undercut somewhat its very reason for being, which was to consider history – and how history is done – beyond the walls of the university, at the level of community.

The organizers named this conference “Active History: History for the Future” and their welcome statement in the conference booklet summarizes well its non-academic spirit: “The conference themes…address the ways in which historians and other scholars must do more than produce knowledge for peer-reviewed journals and academic monographs, must do more than present at academic conferences, must do more than require oral interviewees to sign ethics forms and read over transcripts.”

Having read about, and lamented with my peers over, the gaping divide between public history and academic history, and having wondered myself whether the history that I might participate in producing as a public historian will ever be, or be considered, as “valid” as the histories generated by those within academia, I found that attending this conference felt a little bit like coming home.

Public history is not, of course, exactly identical to active history – the latter, as I understand it, is an approach to history that self-consciously attempts to understand the past in order to change the present and shape the future. But even if public history in its usual incarnations does not seem to me quite as socially and politically driven (which is not to say that it can’t be), the two approaches to doing history overlap in many ways. I noticed this just in the vocabulary of the conference, in the key words and concepts that were articulated again and again, words like:

community; stories; narrative; engagement; accessibility; dialogue; communication; digitization; interactivity; teaching; multimedia; creativity; audience; collaboration; negotiation; inclusivity; participatory; partnerships; networks; reflexivity; and material culture – just to name some.

Most of these are not words that typically describe academic history, but they’re words that I get excited about. And it was heartening to see that there is a large network of researchers, both university-based and community-based, just as excited too.

So, what did I take home from the conference? The following were some of my observations, in no particular order.

Creativity counts.

Actually, it doesn’t only count; it seems crucial in any project geared towards presenting the past to the public. The good news is, there seems to be countless ways to be creative.

One engaging way is through food. Karim Tiro, from Xavier University, talked about an exhibit on the history of sugar that he’s planning. The twist? It’s going to be set in a public food market – a civic space, he said, where the community gathers and makes itself visible – instead of within a traditional museum setting with its oftentimes authoritative curatorial voice, which can be distancing. Such markets, he said, are great spaces to share history because people are naturally interested in food. His project strikes me as an innovative way to approach important historical issues – like slavery, like politics – through something people are intimately familiar with. And Karim is turning that on its head too. His goal: to make the familiar unfamiliar, and thus, hopefully, to engage.

Outside the box is the place to be.

Conference attendees were keen to think beyond the boundaries of traditional history, whether it was ivory tower history, glass-encased history (i.e. in museums), or mainstream history. The desire is to move away from only producing manuscripts sprawling with footnotes, or only accessing traditional archives that are silent when it comes to the histories of those who didn’t leave written records, and towards recognizing the importance of oral histories, personal stories, and other ways of understanding the past, especially as they relate to marginalized groups. This desire was interestingly expressed in the very methods of some of the presenters themselves.

Eva Marie Garroutte of Boston College illustrated how one could craft a research methodology based on a particular cultural practice within a community and, in so doing, include the research subjects in the history-making process. This meant that we had a chance to learn about the Cherokee Stomp Dance and to hear how methods of research could incorporate structural elements from this cultural practice. Mary Breen of the Second Wave Archival Project presented on feminist history and allotted some of her presentation time to reading directly from excerpts of oral history transcripts. The result? We got to hear stories in the voices, cadences, and tones of the female participants themselves (including the humorous story of one woman who, throughout her marriage, always kept some “running away” money on her – just in case).

Community research = respectful research.

Lillian Petroff of the Multicultural Historical Society, who conducts oral histories with members of various communities, expressed the stakes so well: “When people agree to be interviewed,” she said, “they are putting the meaning of their lives in your hands.” So she’s careful to approach her interviewees with respect, always as subjects, never as objects, and the result is that she often ends up forming lifelong friendships. Her goal is to build relationships and engage in dialogue. Lillian made the interesting point that because oral history has so often attempted to mirror written history, it has frequently not been about conversation. And I won’t readily forget her provocative admonition not to “pimp” as a researcher: using your subjects, getting what you need, and then exiting.

Audience matters.

Indeed it does. Speakers and participants talked spiritedly about making history accessible, interactive, and engaging. Creative ways for drawing an audience, especially one that might not be interested in history at the outset, were discussed, from holding meetings in museum spaces (which is far less intimidating than being asked to go visit a museum) to bringing history out onto the streets (using posters, for example) to hosting community events that spark interest in the histories of one’s own neighbours (like holding “antique roadshows” where members can bring items for “show and tell” and, possibly, donate them!).

Two is better than one.

Active history, public history, is not isolated history. Collaboration – not only with academics, but with community members, community-based researchers, members of historical societies and of other relevant organizations – is crucial. Heather George, a fellow UWO Public History student, (bravely) stated the need for us to realize that academic historians have one way of approaching history, that community members have another, equally valid, way, and that we must work at incorporating both in any historical narrative. Lisa Helps, one of the organizers from U of T, articulated this as the need for collaborative methodologies. We left thinking about the importance of developing networks and partnerships with diverse people and groups, and of the need to share resources, knowledge, and expertise. The resounding idea is that good history in the public realm will always be collaborative – and transformative too, for both participants and researchers, as Lisa expressed.

Technology is your friend.

Not surprisingly, digitization was mentioned over the course of the conference. So were websites, GIS mapping, and even YouTube. Perhaps the only key word of the digital age I expected to hear but didn’t was Wikipedia!

Lorraine O’Donnell, an independent historian, put it nicely when she referred to the web as a “repository for personal and community memory and history,” and stressed it as a resource that we should all work towards using. And James Cullingham, owner of Tamarack Productions, in his “how-to-make-a-living” advice to us Public History students in particular, threw out the word “multiplatform”: can the project, he asked, be conceived as something that can be watched on a cell phone, on the web, or on television? (The answer should be yes.)

Reflexivity is always important.

Craig Heron of York University emphasized the need for us to think more about how people learn. Beyond just slapping text on an exhibit panel or on a website, he said that we need to consider how information is created and what message people leave with. Being aware of one’s practice, of how one communicates, even of power imbalances – these were important themes that resurfaced throughout the conference.

And, finally, not to forget that history in the public realm can be contentious:

Politics happens.

Rhonda Hinther from the Canadian Museum of Civilization talked about the challenges of producing history in museums. Certain histories are seen as just too controversial or too political for museum settings. Or they’re simply not the kind of history that attracts a crowd. Thus, “doing history in a federally-funded setting,” she said, “can be uncomfortable.” One has to be pretty creative to slip in “other” histories – and to be prepared for clashes with administration. But it can also be very rewarding. For Rhonda, I think, part of the reward is to be called a “subversive curator” at the end of the day.