(Or: One Disgruntled User’s Frustrations with Printers)

From the moment that I laid eyes on it, I should have known that I was in for trouble. After all, as the old saying goes, if it seems too good to be true, it probably is.

* * * * *

Here is how the story goes.

Back in September, I decided to purchase a printer. I’m the kind of person who finds reading on the computer for extended periods of time difficult. Despite numerous attempts to read on-screen, I still prefer the physicality of text. My eyes find pleasure and relief in the printed word. (You’ll not, in other words, find me curling up with a Kindle any time soon.)

Anyway, back in September, after a great deal of difficulty that involved no less than lugging a large demo Epson printer on the bus and accidentally denting it a few times during my trip home (it was box-less, carried in two plastic bags, and ridiculously heavy), I managed to transport and set up this beast of a printer in my apartment. It was an all-in-one contraption, able to scan, fax, print, and copy in colour. And it was, amazingly, only $34.99 at Best Buy. A veritable deal.

Within a few weeks, I had used up the complimentary black ink and had to purchase a new cartridge, which ran out in a ridiculously short amount of time (even though I bought the “heavy usage,” i.e. more costly, one). In mid-November, the black ink had run out again. After dishing out yet another $25 and installing the new cartridge, I discovered that the complimentary colour ink – which I had used to print, perhaps, five documents all term – had somehow run out as well. That’s when I realized that the Age of the New Printers means that everything shuts down when the colour ink is deemed to be empty. The machine’s printing capabilities simply cease to function. All the black ink in the world will not get it to print a single page.

As it was near the end of the term, I simply decided to print all documents at school rather than deal with the fuss – and cost – of getting new ink. In hindsight, a perspective which all historians will always have at their disposal, that was a mistake.

* * * * *

About a week ago, I finally had a chance to pick up some colour ink cartridges (the kind for “moderate usage” only), installed them eagerly into my dusted-off printer, and looked forward to the convenience that modern technology would again afford. (I would not have to go to the computer lab at all hours now just to print readings.)

That’s when I realized that modern technology does not always work in one’s favour. The document I printed, which was simply set in black, came out starkly, aggravatingly, white. The black ink cartridge must have dried out over Christmas break. So, it appeared that I had just shelled out $35 for colour ink in order to be able to access black ink that was no longer operative.

This evening, however, I tried to print a document again, in black, just to see if something miraculous might ensue. And it did. The machine managed to cough out a document in black ink. The printout was extremely irregular in quality – numerous lines were missing – but at least the page was no longer blank. Eager (and slightly foolish), I printed the document several more times, thinking perhaps that persistence would pay off. Although the quality did improve, the document was still quite spotty in areas. That’s when (again, eager and foolish) I decided to clean the print heads, despite the warning that this function would consume ink. I then printed a sample document. The result was much better, but still not perfect. However, I discovered with shock that running the cleaning function had used up half of the new colour ink cartridges, according to the meter.

I now had two choices:

1) Run another print head cleaning (which would probably use up the rest of the colour ink – which I had not, in any real sense, actually used), or

2) Give up on the black ink cartridge completely (which was also unused, except for the test printouts), and shell out more money for black ink.

(Why I didn’t decide to unplug the printer then and there, affix a sign on it that said “Free – completely, utterly free”, and put it out in the hallway of my apartment is a question I still have not answered satisfactorily.)

Instead, I had a flash of inspiration. History came to my rescue: I remembered that I had owned an Epson printer before. It had run out of colour ink before. And I had “fooled” it before – by going through all the on-screen steps of installing a new colour cartridge and “charging” the ink, without actually doing so. With the colour ink meter subsequently indicating that it was “full”, I could then finish troubleshooting the problem that I was dealing with at the time, which required, finicky machine that it was, colour ink.

Cheered by this memory of the not-quite-so-smart-nor-sharp machine, I decided to run another print head cleaning tonight. Sure enough, this nearly “used” up all the remaining colour ink. I printed a few more test documents in black; their quality improved somewhat, but the printouts were still blanking out at certain lines. I then tried to run one more cleaning function, but the printer told me that I couldn’t – there was not enough colour ink to support it. In fact, the warning of low ink was now replaced with the message that there was no more colour ink. (Apparently, even just attempting to run a print head cleaning uses up the ink, by the printer’s standards.)

Confident, however, that I could fool the machine, I then proceeded to go through the cartridge-replacement steps, clicking the “Finish” button at the end with a flourish. The printer responded by humming and telling me that “ink charging” was occurring. I smiled – and then, I frowned. A box had popped up indicating that the replaced cartridges were, in techno-lingo, “expended.”

In other words – the machine knew. It was telling me that I could not fool it. It could detect that I had not actually fed it anything new, despite my subsequent actions of physically removing and re-inserting the colour ink cartridges, which, I have to add again, were not really used, but were indelibly (and one might even say ingeniously) branded so by the machine. Such crafty labelling had made the colour ink inoperative.

Welcome, friends, to the Age of the (Aggravatingly) Smart Machine.

* * * * *

P.S. Sixty dollars invested in printer cartridges since mid-November, and I have still not been able to print a single document for any useful purpose.

If it were still in vogue to be a Marxist historian, I would seriously point to the ridiculously profitable economics underlying printer design as the source of all our (or at least my) present-day ills!


I like words. I like their abundance. Their variety. The different nuances that they contain. I like how a word like “melancholy,” for example, has a slightly different flavour than “forlorn,” or how the word “myth” conveys something deeper – more emotional, more enduring, more fluid – than its neutral variant, “story.”

Now, most of my Public History peers would probably say that I don’t just like words – I like polysyllabic words. I’ll be the first to admit that long-ish and rather atypical words have a tendency to come to me, unbidden, and that I have an equal tendency to utter them aloud, without thinking. This penchant for the polysyllabic has sometimes even gotten me into trouble and won me unintended notoriety among my peers. 😉

As someone studying in the field of Public History, I realize that I have to think especially carefully about the words that I use, not only because using the “wrong” word can alienate a general audience but also because choosing the “right” word involves weighing various needs, such as those of the client and of the audience, as well as my own for precise language and “good” history. So, while I might immediately prefer the word “auspicious” over “lucky” or feel that “appropriated” explains a situation more clearly than “took,” I’m learning to pause and reconsider the effects of my instinctive word choices, and I’m learning to negotiate the sometimes conflicting needs and desires that exist at the micro-level of diction.

What I didn’t expect was to have to consider the machine as an audience as well. And yet that is what Dan Cohen’s blog post has drawn to my attention. In “When Machines are the Audience,” Cohen suggests that the age of machine-readable text means that we’ll need to write a little differently – or at least with the awareness that what we write can be more, or less, visible in a digital environment depending on the words that we use. The implication is that because text can be read by machines and mined using keyword searches, it is better to use precise and even unique terms or tags in related documents, so that the writing can be more easily searched, grouped, or retrieved. Cohen mentions, for example, how coming up with a unique string of words to identify related history websites can facilitate the process of narrowing searches to these sites only, so that relevant research information can be found. He also cites the example of a legitimate, history-related email being marked as spam because of its unfortunate use of certain words that are high on the list of favourites for spammers. [1]
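Cohen’s point about unique strings is easy to picture in code. Here is a toy sketch (entirely my own illustration, not from Cohen’s post; the tag and the documents are invented) of why a deliberately unique shared tag narrows a search in a way that a generic term like “history” cannot:

```python
# A toy illustration (not from Cohen's post): three hypothetical web pages,
# two of which share a deliberately unique, invented tag string.
documents = {
    "site_a": "An essay on public history and memory. tag: histweb-cluster-2009",
    "site_b": "Railway history in photographs, 1890-1910. tag: histweb-cluster-2009",
    "site_c": "A recipe blog that mentions the history of pie.",
}

unique_tag = "histweb-cluster-2009"  # hypothetical string shared only by related sites

# Searching on the unique tag retrieves only the related sites...
print([name for name, text in documents.items() if unique_tag in text])
# -> ['site_a', 'site_b']

# ...while a generic keyword sweeps in the unrelated page as well.
print([name for name, text in documents.items() if "history" in text])
# -> ['site_a', 'site_b', 'site_c']
```

The machine, in other words, can only be as discriminating as the words we feed it.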

Looking over the words I used in a recent email to a friend, I’ll confess that terms ranging from “usurp” and “misnomer” to “vignette” and “quadratic” (yes, as in the quadratic equation) made their way in. (You are probably feeling some pity for my friend right now.) However, I’m consoled, and slightly amused, by the fact that what stands out as atypical word usage is precisely what spam filters ignore. At least, in this area, my penchant for the polysyllabic – for what’s seen as the atypical – has some redemptive purpose. 🙂
________________________________

[1] Daniel J. Cohen, “When Machines are the Audience,” Dan Cohen’s Digital Humanities Blog, http://www.dancohen.org/blog/posts/when_machines_are_the_audience.

“Predicting the Next 5000 Days of the Web”

Recently, a friend of mine sent me a link to a video on TED.com. In this video (which, if you have 20 minutes to spare, is highly worth checking out), the presenter, Kevin Kelly, traces the Web’s development over the last 5000 days of its existence. Calling it humanity’s most reliable machine, Kelly compares the Web, in its size and complexity, to the human brain – with the notable difference being, of course, that the former, and not the latter, is doubling in power every two years. As he sees it, this machine is going to “exceed humanity in processing power” by 2040.

Not only does Kelly look back on the last 5000 days, he also projects forward, and considers what the next 5000 days will bring in the Web’s evolution. What he envisions is that the Web will become a single global construct – “the One” he calls it, for lack of a better term – and that all devices – cell phones, iPods, computers, etc. – will look into the One, will be portals into this single Machine.

The Web, Kelly states (pretty calmly, I might add), will own everything; no bits will reside outside of it. Everything will be connected to it because everything will have some aspect of the digital built into it that will allow it to be “machine-readable.” “Every item and artifact,” he envisions, “will have embedded in it some little sliver of webness and connection.” So a pair of shoes, for instance, might be thought of as “a chip with heels” and a car as “a chip with wheels.” No longer a virtual environment of linked pages, the Web, in Kelly’s conception, will become a space in which actual items, physical things, can be linked to and will find their representation on the Web. He calls this entity the Internet of Things.

We’re not, of course, quite at that stage yet, but we are, according to Kelly, entering into the era of linked data, where not only web pages are being linked, but specific information, ideas, words, nouns even, are being connected. One example is the social networking site, which allows a person to construct an elaborate social network online. From Kelly’s perspective, all this data – the social connections and relationships that we each have – should not have to be re-entered from one site to the next; you should just have to convey it once. The Web, he says, should know you and who all your friends are – “that’s what you want” he states (again, quite matter-of-factly) – and that is where he sees things moving: that the Web should know and remember all this data about each of us, at a personal level.

In this new world of shared data and linked things, where (as I have pondered in a previous post) the line between the virtual and the physical is no longer an identifiable line, Kelly sees one important implication: co-dependency.

The Web, he says, will be ubiquitous, and, in fact, for him, “the closer it is, the better.” Of course, in knowing us personally, in anticipating our needs, the Web exacts a price: “Total personalization in this new world,” Kelly concedes, “will require total transparency.” But this is not a hefty price for Kelly, who, it seems, would prefer to ask Google to tell him vital information (like his phone number for instance) rather than try to remember such things himself.

This sheer dependency on the Web is not a frightening prospect for Kelly; he compares it to our dependency on the alphabet, something we cannot imagine ourselves without. In the same way, he argues, we won’t be able to imagine ourselves without this Machine, a machine which is at once smarter than any one of us and yet (somehow) a reflection of all of us.

Kelly ends his talk with a “to do” list, which, for its tone alone, needs to be repeated:

There is only One machine.
The Web is its OS.
All screens look into the One.
No bits will live outside the web.
To share is to gain.
Let the One read it.
The One is us.

I will confess that when I finished watching the video, I shuddered a little. I found Kelly’s predictions to be fascinating but also pretty unsettling. (And I’ll admit: it made me think of The Matrix and 1984 at various points.) My reaction, I suppose, stems from a discomfort with anything that smacks of…centralization, of one big global entity, so the concept of One Machine that owns everything and connects everyone, One Machine which is both us, and yet a bigger and smarter and better us, is simply disconcerting.

I can admit my own biases. As a History student, I’ve examined enough regimes over the years to be wary of certain rhetoric and have noticed how, at times, things framed in terms of unity and community and sharing (in this case, of data, networks, knowledge, etc.) can devolve into something that becomes a cultural hegemony of sorts, or worse, a system of pervasive surveillance. (Kelly did, after all, mention shoes as “a chip with heels” and cars as a “chip with wheels,” developments which can certainly aid individuals in staying connected to one another, for instance, but can also aid the State in staying connected to individuals.)

The embedding of the digital into everything in the material world, including people, so that it can be read by one machine, is an unsettling notion. I guess I would prefer not to have everything personalized, not to have a single, networked, global machine that reads me and knows everything and everyone connected to me. It may be convenient – and it seems that convenience and speed are the two guiding principles behind technological developments – but I’d rather not be so transparent in cyberspace, so utterly “Machine-readable.”

Putting aside my own personal reaction to Kelly’s predictions, what are the implications of his vision of the Web’s evolution for the discipline of history? If realized, the new world of linked things will certainly demand, as some digital historians have already noted, a re-thinking of how history is researched and presented. The possibilities for new connections to be made – not only between data and ideas, but between actual things, artifacts – are, I’m sure, going to change and open up the ways in which history is understood, communicated, and taught, especially by and to the public.

I can imagine already, down the road, that someone interested, say, in the history and development of the camera might be able to pull up, in a split-second search, every single camera made in the 20th century that is owned and catalogued by all the museums that have taken the trouble to embed digital info into these objects, which then makes them linkable, “machine-friendly”. Artifacts made in the 21st century that already contain that “sliver of webness and connection” will be even easier to search for and pull up (for the 22nd-century historian, let’s say), existing, as it were, already as digital-physical hybrids. The ease with which actual things can be connected and thus compared, regardless of their geographical location, is going to make for some interesting comparative histories.

So, the possibility for new engagements with history is there (as it ever is, I think, with the emergence of new technologies). I only wonder how one is going to possibly keep up with all the changes, with the Web and its continual evolution.