Passing on Passwords

I have often mused that, in this so-called Information Age, it would not be difficult to come up with a good opening sentence for a novel. It could be this: “I remember a world without passwords.” While this might not rank up there with the opening sentences of Melville’s Moby Dick or Dickens’ A Tale of Two Cities, it does capture the essence of this new era we live in. Moreover, my young graduate students can hardly conceive of a world that is not connected and populated with passwords. Even as we live in a world where we are threatened with identity theft and credit card breaches, and despite smug promises of secure networks, we seem to have lost our names to passwords. If information is, indeed, power, it is not our power, but that of corporations and governments, that is at play.

Just reflect on what you, assuming you are equipped with a modicum of computer devices, have to go through on a regular basis. You have to create and set passwords of varying complexities. You have to remember them, which usually means keeping a record book of them. You have to keep a duplicate copy of that password master book, and then protect both from being lost or stolen. You are often required to reset passwords and then to make sure you record the changes. For people who worry about losing their keys or wallets on a daily basis, keeping track of all this can get nightmarish. We may have access to greater quantities of information than any generation has ever had, but getting at it is not as easy as high-tech service providers make it out to be.
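For readers who want a concrete sense of what “creating passwords of varying complexities” amounts to in practice, here is a minimal sketch in Python (standard library only; the length, alphabet, and account names are arbitrary illustrations, not recommendations from any particular provider):

```python
import secrets
import string

def make_password(length=16, use_symbols=True):
    """Generate a random password of the requested length and complexity."""
    alphabet = string.ascii_letters + string.digits
    if use_symbols:
        alphabet += "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    # One password per account: generating them is the easy part; the
    # remembering, recording, and periodic resetting described above
    # is the real burden.
    for account in ["bank", "utility", "rewards"]:
        print(account, make_password())
```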

It gets complicated the more we reflect on this. A generation ago, there was a lot of discussion and handwringing about “information haves and have-nots.” For some reason that conversation seems to have diminished, even though it is unlikely the problem is any less serious. While the costs of mobile and other computing devices have declined relative to their power and capacity, the financial burdens remain considerable. Smart phones are not inexpensive to maintain, and those costs are not likely to decrease. Are we better for these costs? And, more importantly, can everyone afford them? What happens to those who cannot? Can they compete? Can they function? It seems unlikely that they can.

Passwords seem to be symbolic of greater issues. They reflect, of course, a form of individual control and privilege. With them, you have confidence in your ability to purchase goods, pay your bills, access and use reward points, and invest your resources. You also heighten your sense of security and privacy, but we regularly read and hear reports about how fragile such assurances can be. It seems like every day a news story breaks about the hacking of credit records at major chain stores or the breaching of financial records at our major banking institutions. What’s the point of our taking so much care to manage our online presence if our records are compromised anyway?

The notions of personal privacy and individual security have been either redefined or obliterated, depending on your perspective, just within the past generation. Every week, it is likely you will receive a statement about privacy from one of your credit card, financial, or utility companies. They are wordy, complicated, and in small print, and it is unlikely you will take much time to read any one of them (just as you will probably not read most of those software user agreements that pop up when you install software). It would be good if you read one of these occasionally. However, this personal approach isn’t even the most important thing when we consider privacy or security. In this age of terror and 24/7 news coverage, the government has become the big issue.

As we hear of terrorist attacks, random shootings, riots, epidemics, and other such events, we often revert to expectations that government will step in to protect us. And government will often revert to using approaches that usurp our privacy and curtail our personal freedoms. Every action we take, especially online, can be followed and logged. Those who participate in online communities, such as Facebook, give away much of their privacy, sometimes unwittingly but usually with the full knowledge that this information can be tracked by others. All of this provides extra means by which to maneuver around passwords and other barriers we think we have erected for our protection. Government is too often not our protector but what we ought to fear. Whereas our Bill of Rights protects the records found in our homes, so many of our records are now scattered across cyberspace. Government sees all of us as potential dangers, as potential terrorists. Passwords are of little use here. Nor are privacy statements.

What can be done about any of this? Perhaps the very concept of privacy is fundamentally changing, and we must acknowledge that we have little left that is worth calling privacy. The availability of the late reclusive writer Susan Sontag’s entire digital archive at the UCLA special collections raises all sorts of questions about the nature of privacy, not the least of which is why she wanted such access. Whether this adds anything to our understanding of Sontag is debatable, but it certainly gives us a new kind of archival voyeurism. And maybe that is all that really matters now (see Jeremy Schmidt and Jacquelyn Ardam, “On Excess: Susan Sontag’s Born-Digital Archives,” Los Angeles Review of Books, 26 October 2014, http://lareviewofbooks.org/essay/excess-susan-sontags-born-digital-archive/). How much of value will we learn about Sontag? Probably not that much, given all the trouble.

In the meantime, we need to learn to manage our passwords and our public presence online and elsewhere. Yet, there is no question we have lots to learn. We need to reach deep inside ourselves, to our personal faiths and senses of morality. Craig Detweiler, from a Christian perspective, writes, “We practice an ancient faith committed to renewal. To the loud, we can counter with quiet. To the fast, we can offer slow. And to the superficial, we can go deeper” (Craig Detweiler, iGods: How Technology Shapes Our Spiritual and Social Lives [Grand Rapids, MI: Brazos Press, 2013], p. 207). From whatever perspective, these are admirable goals.

So, yes, I remember a world without passwords. But many do not. This is why a historical perspective is mandatory. Taking the longer view is essential if we are to see through the claims of our shiny new age and truly comprehend whether our lives have improved or not.

Look Again

Old photographs have long fascinated archivists, historians, other researchers, and the public. The same applies to me. When my interest in history was initially awakened in the late 1950s, photographs were involved. My first visits to Civil War battlefields included connections to images (there is a photograph of me, circa 1960, at Antietam), such as the one of the dead Confederate sniper in Devil’s Den at Gettysburg; I subsequently learned that the body had been moved and staged by the photographer, a not uncommon practice among these early photographers.

The use of images has grown more sophisticated. They have moved from being devices to spice up histories to being recognized as historical sources worthy of careful research in their own right. I was fortunate to have as my advisor in my history master’s program at the University of Maryland Walter Rundell, Jr., who used photographs as sources in his history of the early Texas oil industry and whose 1978 Society of American Archivists presidential address concerned photographs as historical evidence (“Photographs as Historical Evidence: Early Texas Oil,” American Archivist 41 [October 1978]: 373-398). Since those days of the rediscovery of archival images, we have had a succession of interesting histories, manuals, and theoretical treatises on photography and its uses and challenges.

A new book – J. Matthew Gallman and Gary W. Gallagher, eds., Lens of War: Exploring Iconic Photographs of the Civil War (Athens: University of Georgia Press, 2015) – suggests the continuing interest in images. The purpose of the book is described as follows: “This is a book about photographs, and about historians. The project began with a very simple observation. People who study the Civil War era spend an enormous amount of energy thinking about and talking about photographs. Yet, we seldom take the photograph as our subject, and we almost never share personal reflections that stray beyond our normal academic writing.” The editors asked an array of scholars “to select one photograph taken during the Civil War and write about it” (p. 2). The result is an interesting collection of essays that individuals interested in historical evidence and its use will find informative.

The volume includes twenty-six essays grouped in five parts – Leaders, Soldiers, Civilians, Victims, Places – with interpretations based on personal observations and archival research. Caroline E. Janney’s essay about a photograph of a family in a military camp provides an example of what these essays contribute: “I am drawn to this photograph time and time again because it offers an endless source of contemplation and emotional appeal. I cannot help but be fascinated by the myriad details, from the ceramic pitchers that seem out of place in a military encampment to the hole in the woman’s sweater that suggests her humble origins. But it also leaves me with more questions than it answers. Was it viewed during the war, or did it become popular only after Appomattox? Did the young woman and the children move with the regiment when it headed to the Virginia peninsula? Did they manage to avoid the devastating camp diseases? If the children survived, how did their wartime experiences shape their adult lives? We will never know. Even so, their image serves as a poignant reminder of how far-reaching the war was even from the outset, how intimately it affected families – even families ostensibly far from the front lines – and how families in turn affected the Civil War” (p. 119).

The book includes a brief bibliographic essay about Civil War photography and photography in general. What would have been useful is an essay drawing together the various themes and approaches to using such images. The editors might also have included some references to writings about photographs by archivists, such as Joan Schwartz, concerning the theoretical perspective on images as sources; there is a rich literature in that area that historians need to know about.

Curate Something

Nowadays we see or hear the word “curate” in television, magazine, and other advertisements. Whereas the word was originally used to describe work associated with art and other museums and with personal collecting and connoisseurship, it has now been appropriated by a variety of other professions, from information science to various humanities fields now engaged in digital work, to a wide array of hobbies and avocations such as genealogy and collecting of various sorts. Why have we seen the emergence of this broader use of the term? Hans Ulrich Obrist describes it this way: “The current vogue for the idea of curating stems from a feature of modern life that is impossible to ignore: the proliferation and reproduction of ideas, raw data, processed information, images, disciplinary knowledge and material products that we are witnessing today” (Hans Ulrich Obrist, with Asad Raza, Ways of Curating [New York: Faber and Faber, Inc., 2014], p. 23). The purpose of this post is not to debate the appropriateness of the term but to consider the utility of what it implies in our digital era.

The average American is awash in data, most of it in digital form. A lot of this data is deliberately ephemeral, but a lot of it is of permanent value to individuals and society. Family photographs and movies, letters and e-mail, diaries and blogs, scrapbooks and social media such as Facebook, diplomas and award certificates, and other documents are all valuable for marking one’s place in the world and useful for a myriad of reasons. Given the accelerating pace of our world, an increasing number of people have taken to personal archiving projects, to collecting antiques and printed ephemera, and to historical re-enactment as ways of connecting to the past. This has become especially enticing since so much of the digital world is constantly endangered. Backing up one’s life on the “cloud” seems like an uncertain hedge.

This commentator on curation has identified four functions associated with it — preservation, selection, contributing to scholarship (in this assessment, art history), and displaying and arranging. This provides a workable framework for further discussion. Preservation in the digital era may seem like something of a contradiction. Being connected digitally seems to imply always being on the move, always staying current, and always being in the present. Working digitally suggests that our texts, from e-mail to essays and reports, are always subject to change. For a while, professionals involved in preservation management viewed anything digital as not preserved and anything analog as permanent (that is, if subjected to the right precautions); in other words, to preserve a digital text meant that you had to move it into an analog form (print it out and store it in a secure, environmentally sound way). Now we recognize that there are elements of digitally born objects where we need to maintain their digital characteristics, both in order to understand their original intent and to make effective use of their content. We are shifting more and more attention to something called digital curation or stewardship, perhaps the inevitable result of living in a time when virtually everything is born digital. The task now seems to be influencing those who build our hardware and design our software to recognize the importance of a preservation mentality that is not merely antiquarian (collecting old and odd stuff) or some form of elitist connoisseurship (maintaining material objects because of some sort of rarified aesthetic principles).
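To make the idea of maintaining the “digital characteristics” of born-digital objects a little more concrete, here is a minimal sketch of one routine digital-curation practice, fixity checking: recording a checksum for each file so that later copies can be verified as unchanged. This is an illustrative Python example, not a description of any particular repository system; the folder and manifest names are placeholders.

```python
import hashlib
import json
from pathlib import Path

def checksum(path, algorithm="sha256"):
    """Return a hex digest of the file's contents."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_fixity(folder, manifest="fixity.json"):
    """Write a manifest of checksums for every file under the folder."""
    folder = Path(folder)
    entries = {str(p.relative_to(folder)): checksum(p)
               for p in folder.rglob("*") if p.is_file()}
    Path(manifest).write_text(json.dumps(entries, indent=2))
    return entries

if __name__ == "__main__":
    # "personal_archive" is a placeholder; point it at any folder of
    # born-digital material. Re-running this and comparing manifests
    # reveals whether anything has silently changed or been lost.
    record_fixity("personal_archive")
```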

The latter aspects of preservation bring us face to face with the nature of selection. One of the deceptive ideas of our digital era is that everything can be saved. In fact, we must learn to select carefully from among both our digital and analog sources. While the notion of Big Data has captured a lot of our attention as a panacea for all of our business, government, and personal information challenges, we must also focus on the most important information as a safeguard in documenting our activities and being accountable. For individuals, this means learning to identify what is essential for family memory and economic accountability. Family archives, just like organizational ones, can be built initially from the financial records maintained for tax and related purposes. The memory of the family requires the maintenance of letters, diaries, photographs, and personal mementos of trips, events (such as weddings and baptisms), and favorite activities and pastimes. As an increasing quantity of this material is born digital, extra care needs to be taken with it; this is especially true for photographs, which have in the past two decades become almost exclusively digital. Many individuals and families place more and more of these kinds of materials on websites, blogs, Facebook, and Twitter — and we need to guard our passwords in order to keep those materials secure.

Most people will not assume that anything they do to maintain personal and family papers will be of interest to scholars and other researchers. And for many that is a valid conclusion. However, historical, sociological, anthropological, and other scholarly studies often make extensive use of personal and family papers. Shifting research trends have often enthusiastically embraced the discovery and mining of documents representing the full range of social classes. The creation of countless blogs and constantly changing forms of social media has opened up new possibilities for scholars and others to gain a more in-depth knowledge of political, social, economic, religious, and other aspects of our culture. Previously, scholars were reliant on archives for what those institutions possessed, and, until the mid-twentieth century, these institutions were largely focused on what social and economic elites or governmental and business institutions created. The move to personal archiving can be understood both for its value to personal and family memory and for its potential for generating new kinds of archives that can be used by researchers of all sorts and with quite varied aims.

Displaying and arranging objects may be the most readily understood and appreciated aspect of personal curating. Upon entering someone’s house, it is not uncommon to find a wall of family photographs, diplomas, and travel souvenirs, all mnemonic devices intended to fix their owners in time, place, and space. In doctors’ and faculty offices we often discover diplomas and awards, generally with no legal or other value but that of establishing someone’s authority to practice their livelihood. Many restaurants are decorated with historical photographs, sometimes of readily identifiable people and places, but just as often display images that are not identifiable yet convey a sense of the past and connect the restaurant to its natural and historical surroundings. There is also a considerable amount of historical fabrication, where people buy images of other people’s families in order to create a set of surrogate ancestors; this often goes hand-in-hand with the purchasing of antiques and reproductions in order to create a sense of the past. All of this requires a distinct kind of curation.

Getting involved in all this is seen by some as being a check against the uncertain, tumultuous times of the digital era, where change is rapid and texts and objects ephemeral. But we also have to shift our attention and resources to curating digitally born materials. This can encompass careful back-ups of files, but also purposeful selection of certain documents to be created in both digital and analog formats or, in rare instances, to be created only in analog formats. Given the ubiquitous nature of digital systems and our reliance on them for our normal activities, it is unlikely that we will turn our backs completely on their use, even if we wanted to do so. However, we must more deliberately think of how we maintain records and information critical to our well-being.
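As one small illustration of what “careful back-ups of files” and purposeful selection might look like at the personal level, here is a minimal Python sketch that copies a deliberately chosen set of documents into a dated backup folder. The file names and destination are hypothetical, and a real routine would of course add verification and off-site or analog copies.

```python
import shutil
from datetime import date
from pathlib import Path

def back_up(selected_files, destination="backups"):
    """Copy a deliberately selected set of files into a dated folder."""
    target = Path(destination) / date.today().isoformat()
    target.mkdir(parents=True, exist_ok=True)
    for name in selected_files:
        source = Path(name)
        if source.is_file():
            shutil.copy2(source, target / source.name)  # preserves timestamps
        else:
            print(f"Skipping {name}: not found")
    return target

if __name__ == "__main__":
    # Placeholder selections: tax records, a family letter, a scanned photograph.
    back_up(["taxes_2015.pdf", "grandmother_letter.docx", "wedding_photo.tiff"])
```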

 

The Persistence of Paper

When most people think about revolutionary information technologies, they usually are reflecting on the latest digital systems. This often occurs because the advertising accompanying the new hardware or software promises it to be revolutionary, that, if purchased, it will change your life for the better. It is safe to say, however, that almost all information technology shifts, old as well as new, have been transformative. A recent spate of books about the development and use of paper, for example, has demonstrated this, even if some of these publications have been more nostalgic or romantic than scholarly.

Two relatively recent books have celebrated the wonders of paper. Irish writer Ian Sansom’s Paper: An Elegy (New York: William Morrow, 2012) is “an attempt to show how and why humans became attached to paper and became engrafted and sutured onto and into it, so that our very being might be described as papery” (p. xix). Sansom covers a wide range of topics, from the making of paper to its applications in maps, books, money, advertisements, architecture, art, toys and games, and containers. Amply illustrated and organized around stories, this is an entertaining book that underscores the importance of paper in society. Likewise, Nicholas A. Basbanes, the prolific commentator on books and book collecting, has covered the same range of topics in his On Paper: The Everything of Its Two-Thousand-Year History (New York: Alfred A. Knopf, 2013). Also amply illustrated, but based on a greater number of personal trips and interviews, Basbanes consistently demonstrates why paper has been so important to us for a very long time. Some of his comments are particularly relevant to current events, such as, “Using documents to establish identity ushered in an entirely new standard, and paper was the ideal medium with which to achieve consistency. The material was cheap, light, produced in abundance, and, because of its flexibility, portable – meaning it could be folded and carried about with ease” (pp. 153-154). The next time you are in a conversation about important information technologies, these two books might help you make the case for paper. Of course, you could just point to one of the ubiquitous printers, dependent on the use of paper, attached to the computer network.

For a scholarly introduction to the importance of paper, Lothar Müller’s White Magic: The Age of Paper, trans. Jessica Spengler (Cambridge: Polity Press, 2014), is the place to start: “This book explores the cultural techniques, infrastructure, and routines in which paper functions as a medium for storing and circulating words, images, and numbers” (p. xii). Müller considers the spread of the technology for producing paper, the rise of paper mills and the factors leading to them (such as universities, government bureaucracies, merchants, and publishing), the influence of the postal system and the growth of letter writing, allusions by writers to paper and its uses, the emergence of the popular press, the establishment of archives, autograph collecting, and other developments related to paper’s place as a communication medium. As Müller states, “It can also be helpful to look back on the history of paper because paper was never on its own; it always sought a symbiosis with other media” (p. 261). The author concludes, in the last sentence of his book, that the “Paper Age is not yet finished” (p. 263).

We should read such volumes because they remind us of the reasons why those of us in the information professions should possess a broad vision when it comes to understanding the use of media in our society. Bringing together multiple perspectives to study information is one of the primary purposes of an information school, although sometimes it seems to be a struggle to achieve this mission. Wider reading, more interdisciplinary research, and a diverse curriculum all will help to achieve this end.

Think Again

Stanley Fish, a prolific literary theorist and legal scholar, is one of my favorite authors. I like his work because he tries to engage with the public in interesting and useful ways, something not many academics do. I also appreciate his work because he often takes on unpopular causes and minces few words about his opinions. I just as often disagree with him as not, but I always come away better informed and more thoughtful from reading him.

Before I mention his interesting recent book, I wanted to give a couple of examples of his earlier work. He has commented extensively on the nature of academic writing. His How To Write A Sentence and How To Read One (New York: Harper, 2011) reminds us that writing, of any variety, is a form of artistic expression. It requires practice and experimentation: “To be sure, your eventual goal is to be able to write forcefully about issues that matter to you, but if you begin with those issues uppermost in your mind, you will never get to the point where you can do verbal justice to them. It may sound paradoxical, but verbal fluency is the product of hours spent writing about nothing, just as musical fluency is the product of hours spent repeating scales” (p. 26). Writing requires learning to be selective and strategic: “Sentence writers are not copyists; they are selectors. It is impossible not to select when you are making an assertion. The goal is not to be comprehensive, to say everything that could possibly be said to the extent that no one could say anything else; if that were the goal, no sentence could ever be finished. The goal is to communicate forcefully whatever perspective or emphasis or hierarchy of concerns attaches to your present purposes” (p. 38). Writing is a creative act: “What we know of the world comes to us through words, or, to look at it from the other direction, when we write a sentence, we create a world, which is not the world, but the world as it appears within a dimension of assessment” (p. 39). Fish’s advice is first-rate, and his own writing practices what he preaches.

Fish has also weighed in on the testy territory of academic freedom. Another recent book, Versions of Academic Freedom: From Professionalism to Revolution (Chicago: University of Chicago Press, 2014), is a good place for anyone interested in this topic to start. Fish examines five schools of thought about academic freedom, considering particular academic debates and case law, and drawing distinctions between First Amendment rights and academic freedom. He has many things to say that have implications for what academics publish and how they publish, although he does not directly consider publishing. For example, “It is perfectly reasonable for academics, like any other group of workers, to desire working conditions that afford maximum freedom of action, but the realization of that desire is a matter of constraint or disciplinary convention, not of law or constitutional right” (p. 86). Fish also draws distinctions between what a professor says in the classroom and what he or she does in research: “The assistant professor has different responsibilities depending on whether he is in the classroom (and on what kind of class it is, a survey or a seminar) or in the archive, and that his different roles have attached to them different degrees of freedom and constraint and different degrees of protection” (p. 91). In discussing academic freedom, the concept of the purpose of academic work emerges: “The academy is the place where knowledge is advanced, where the truth about matters physical, conceptual, and social is sought. That’s the job, and that’s also the aspirational norm. The advancement of knowledge and discovering truth are not extrinsic to academic activity, they constitute it” (pp. 131-132). In considering the controversies over hiring or having Holocaust deniers speak in public venues, Fish exclaims, “In fact (a phrase I do not shrink from), the most vigorous debates about history are not about how to interpret the facts, but about what the facts are” (pp. 145-146). Good stuff to ponder.

Fish’s latest book is Think Again: Contrarian Reflections on Life, Culture, Politics, Religion, Law, and Education (Princeton: Princeton University Press, 2015), essays culled from three hundred columns written for the New York Times between 1995 and 2013. That is quite a body of work. The subtitle captures the vast range of topics Fish expounds on, and readers will be able to select their own favorites (mine are his various essays on motion pictures, for long ago I dreamed of being a movie critic, like so many others I suppose). But there are two things I want to comment on. One is that this collection is a fine resource for learning about the art of the newspaper op-ed – timely, personal, clear essays – and I might have my own students read it in the future when I ask them to try their hand at such writing (an assignment intended to get them to think about how to reach the public and advocate for their professions). Two, Fish describes early in the volume that his educational preparation to become an academic was anything but a foretaste of success. That Stanley Fish has been so successful makes me pause when I lament students’ work, focus, and other attributes; undoubtedly, some of these individuals will go on to be successful. Good for them. I only hope my pushing and prodding might have had something to do with what later comes.

Read a Book

Mid Coast Books, Damariscotta, Maine

The view from the front of the classroom is consistent across American colleges and universities. The students are armed with laptops, iPads or one of their competitors, smart phones, smart watches, earbuds (unusual, but the most distracting), and just about every other electronic device imaginable. Some of this gear has even been given to the students as one of the perks of having been admitted and matriculated into a university program. The sense among many, including the students themselves, is that these are the most well-informed and connected young people, indeed the most informed generation, our country has ever witnessed. And this perspective is not limited to students — we hold similar attitudes about our own use of the same equipment in other sectors of our society. In other words, the students believe what the advertisers and the faculty have been telling them.

This perspective is valid only if we believe that information equals knowledge, and that, of course, is not true. It seems right because we are surrounded by advertisements — on billboards, in magazines, on television, and even in movie theaters — declaring it to be so. The latest buzzword, in both the corporate sector and academe, is Big Data. We have moved from worrying about being overwhelmed by information to believing that in all this data lies the secret to solving all of the world’s problems (as I discussed in my previous post). That is, we will think this until the next trendy, fundable concept comes along in a few years, maybe sooner. Big Data, no! Big Knowledge, now that would be something.

The weak foundation of all this becomes apparent when we start interacting with students. Mention a current news item that you think is relevant to the topic of the day, and you may be met with blank stares. How, you might ask, with all this mobile technology, is it that they often seem unaware of important news events, even complacent about their lack of knowledge of such events? They are confident they are connected and aware of what is going on in our world because of all their social media; posting on Facebook, reading tweets, and browsing blogs seems to sustain their confidence, although a little probing reveals significant gaps in their knowledge (after all, they are students). I once mentioned an event directly relevant to a class, one reported in newspapers, on television news, and on news blogs in the previous week, only to discover that few had heard anything about it. Of course, one of the pleasures of being a teacher is that you can learn from the students as well; their perspective on the world will almost certainly differ from yours.

Assigning students a book or multiple books for a course often raises their ire. In fact, a very small portion of any class seems to have read closely the majority of assigned books; some even look at books as if they are something new. They struggle to identify the thesis or to describe any specifics of the assigned books when asked. They often have no idea who the author is, and they skip over footnotes or endnotes without understanding how these represent the authority for the author’s statements; these are merely diversions, in their minds. They search for information instead. This is also, perhaps, a reflection of problems being discussed about the changing nature of undergraduate education, where some have lamented the transition away from knowledge and the making of informed citizens toward much narrower vocational goals focused on skills for employment, largely in STEM (Science, Technology, Engineering, and Mathematics) areas. Graduate students often lack the holistic knowledge enabling them to tackle challenging ethical, historical, policy, societal, and other problems.

It also seems to me that many students are not book readers. They are readers – but of Web sites, blogs, Twitter, graphic novels, and an interesting assortment of other such materials. But we can ask what that kind of “reading” does for their preparation to be graduate students or to go out into the world to deal with challenging social, political, and economic issues. It is also worth wondering whether what they seem inclined to read amounts to more than browsing, sampling, and, even, just looking for entertainment. How is this reading educating and informing people to be better citizens and professionals? But the problem of reading is evident in the university in many ways apart from that of students.

Is anybody else worried that reading – reading journals, books, poetry, and magazines like Harper’s or Atlantic – is on the decline in the university? How can that be, one asks, with all those professors and courses and libraries? And, yet, there is evidence all around those ivy-covered walls that that is exactly what is happening. Students seem uneasy, or hostile, when assigned a book, in toto, as required reading for a course. Faculty sit huddled over their laptops and smart phones during meetings, trying to act as if what they are staring at is relevant to the meeting (sometimes it is). Every field now swears that its main contributions will come through brief research articles, packaged as if they were the findings of experiments; article quantity and/or citation counts are the chief measures of one’s worth. Short-term projects become the norm, and long-term projects sit on the sidelines. The expectation of reading longer, more detailed or complicated texts has been diminishing. What has caused this? More importantly, what are the implications for education, knowledge creation, and society?

There is something even more substantive about reading and its (potential) importance. We are immersed in an age when we are shifting from an analog to a digital world. I hear, regularly, at my school and elsewhere, that the printed book is dead, libraries are obsolete, and librarians are dinosaurs. Really? We are seeing increased numbers of e-books, but we are also seeing growing numbers of print books. In fact, the universe of books, print and digital, has long represented the real Big Data. The book is far from antiquated, but remains a significant purveyor of knowledge (note, not just information) for scholars, citizens, policymakers, community groups, and others.

We can reflect back on the role books have played in the past. Think of Thomas Paine’s Common Sense, Rachel Carson’s Silent Spring, the Bible and other seminal religious texts, and, well, you will have your own examples and favorites. What recent books can we add to this list? Drew Gilpin Faust’s This Republic of Suffering, a study of the identity and handling of those killed in the Civil War, is one of my suggestions, as it wisely connects to our current world of never-ending war and its mounting human and economic costs. Likewise, Susan Sontag’s Regarding the Pain of Others, her last book published before her death in 2004, would be another good addition, as she astutely considers the impact of our incessant viewing of war every day on television, on computer screens, in the cinema, and via our computer games. Books are also essential stepping-stones in the development of our own memories. Roger Grenier notes, “If our books aren’t destined for immortality, at least they can become the enduring passwords, the precious relics in lovers’ memories” (Roger Grenier, Palace of Books, trans. Alice Kaplan [Chicago: University of Chicago Press, 2014], p. 83).

Anyway, go read a book and see what you are missing. And, yes, I am a book person. I believe in their potential utility, and I even teach a course about books in an iSchool. I think of that course as a small oasis in the vast wilderness of information technologies. Michael Dirda provides an explanation for the importance of books: “Books don’t just furnish a room. A personal library is a reflection of who you are and who you want to be, of what you value and what you desire, of how much you know and how much you’d like to know. . . . Digital texts are all well and good, but books on shelves are a presence in your life. As such, they become a part of your day-to-day existence, reminding you, chastising you, calling to you. Plus, book collecting is, hands down, the greatest pastime in the world” (Browsings: A Year of Reading, Collecting, and Living with Books [New York: Pegasus Books, 2015], p. 233). To that, I say amen.

Big Data, Big Deal

We live in the age of information overload. One commentator puts it this way: “besides the need for accidental connections, there’s the fact that some things, clearly, are beyond the wisdom of crowds — sometimes speed and volume should bend to make way for theory and meaning. Sometimes we do still need to quiet down the rancor of mass opinion and ask a few select voices to speak up. And doing so in past generations has never been such a problem as it is for us. They never dealt with such a glut of information or such a horde of folk eager to misrepresent it” (Michael Harris, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection [New York: Current, 2014], pp. 86-87). In my opinion, this overload is something we experience everywhere we go, watch, or listen – in television advertisements, conferences, classrooms, and faculty meetings (to name a few) – where we hear about the importance and promise of Big Data, as if it were the answer to all of our problems.

I am not a believer in Big Data. It is not that I think it has no value or that there are no instances when we need to preserve it in some usable form. However, I am concerned about how often it is discussed without much regard for privacy or surveillance issues, or pursued simply because corporate entities are interested in its use or because there are grants that might support research. To a certain extent, Big Data is just the latest digital era buzzword to capture our attention, and, like others, it will pass away (or it should). Big Data seems to be part of an evolving society where machines will be expected to make all of our decisions, from what we eat to what we read or what we are expected to like in fashion and art. To me, this seems both boring and dangerous.

I am not alone. Sven Birkerts, a well-known commentator on reading and technology issues in our time, provides a few words of caution in his latest book, Changing the Subject: Art and Attention in the Internet Age (Minneapolis, MN: Graywolf Press, 2015). In this collection of recently published essays, covering a wide range of subjects, Birkerts tackles the focus on data, noting that “though we live in the so-called information age, very little of what now impinges on us is really information. It is data” (p. 7). And data has become a kind of scripture in a new religious system: “We believe we are held inside a data field, cradled there as if in God’s hand. We trust the machine to give us the answer, the result, the path – for that is why we have invented it” (p. 56). If we can’t measure something or capture it digitally, it can’t be important. And there seems to be little room for skeptics about any of this: “One starts to detect a feeling of data triumphalism in the air, as if it has been formally established that only the quantifiable need apply” (p. 85). It amounts to a worship of the machine, and skeptics are not often welcome.

It is difficult to discuss such matters when one teaches in an Information School, even though an iSchool by design is intended to accommodate all perspectives about information and its study (it doesn’t always seem that way). However, from my vantage, I believe it is essential that we educate future information professionals, from programmers and information systems managers to librarians and archivists, to understand the full range of implications – ethical, social, economic, political – of what we now glibly call Big Data. When we see new books on the wonders of Big Data in every conceivable section of bookstores, it is time to ask questions. Fortunately, there are individuals like Sven Birkerts who are doing this.

Slow Down

Many have remarked about how fast things are moving in our society. This is most often associated with the rapidly changing pace of computer technologies. You buy a computer and worry that within a year (or less) your machine is obsolete or not powerful enough. You must commit a good portion of your life to downloading new software in order to keep things humming along. Those within the high-tech sector often see fast change as a positive feature of their industry, believing that most of this change is for the better, that the next version of computer hardware or software will be an improvement (that it is not always so can be seen in the complaints about glitches and other problems with new releases). Every addition of memory, of powerful processing enhancements, can be interpreted as a positive change, as good an example of technological determinism as anyone can imagine. The old days of having electrical and other devices repaired, usually by a local person, are over; now we rip the device out and buy a new one. Stephen Bertman suggests that the computer’s nature works against our ability to reflect: “reflection and meditation are functions inconsistent with the computer’s nature, its mandate from society, and society’s own pace. Though a computer may save us time, its very quickness can condition us to disdain slower, more peculiarly human, modes of operation” (Stephen Bertman, Hyperculture: The Human Cost of Speed [Westport, Conn.: Praeger, 1998], p. 23). David Levy argues that “Thinking is by its very nature a slow-time activity” (David M. Levy, “No Time to Think: Reflections on Information Technology and Contemplative Scholarship,” Ethics and Information Technology 9 [December 2007]: 244).

Yet, we do not have to stick with our computers to see how the pace of life has quickened. Let’s take the most obvious example, the emergence of 24/7 news. When I was growing up, news was reported via print newspapers appearing daily (in some instances several times a day) and the evening local and national newscasts. News stories unfolded slowly, the most important ones developing over a succession of days. All this has changed. Now we get news instantaneously, so fast that at first we are mostly hearing rumors rather than substantiated news. In fact, often the news industry becomes the news itself, firing up rumors and making predictions before all the facts have emerged. I once experienced a panic started by local newscasters reporting an impending shortage of Christmas trees, based on soft or no evidence but leading to a run on these trees; we bought our tree late for a fraction of the cost, since the lots were overflowing with surplus trees just before the holiday. This is a humorous example, but the reporting on Ferguson, Missouri, following the shooting of an unarmed black teenager is far less humorous, while showing some of the same characteristics of instilling unease and panic. In one sense, television news has become indistinguishable from reality shows, especially as the newscasts are laced with personal exchanges and more personal opinion than factual reporting. Analysts, individuals supposedly possessing some relevant expertise, offer opinions, and they usually outnumber the journalists.

We can take some solace, I suppose, in remembering that commentators from earlier eras also lamented the rapidity of the change they were experiencing. Shifts to more durable writing materials and the invention of movable type printing must have brought such concerns, as did the emergence of new communications systems such as the telegraph and telephone. It would be silly, however, to deny that our digital era is moving a whole lot faster, driven by technologies forcing us always to be at work, never disconnected, and never off the clock. As essays about slow reading or slow communication recognize, resisting this takes a lot of individual will power and an ability to buck the trends. Just look around yourself in a coffee shop and try to find someone who is not checking e-mail, browsing Facebook, or sending a tweet. Or listen to a discussion about academic research, where quantity of production often seems to outshine significance or impact. Faculty members are urged to produce faster, and long-term research and other projects are often squeezed out.

In recent years there has been a lot of research about multi-tasking, almost always built on the premise that this is an approach necessary, or desirable, for coping in our present society. I write about this with some trepidation since my natural proclivity is to function as a multi-tasker. However, this is different from believing that the only way to cope is by multi-tasking, as fast and as efficiently as possible. I enjoy working on multiple projects at the same time. I am always reading several books at once, usually divergent in topic and style, just for the change of pace. I am always writing several essays at once, again usually on extremely different topics. Every once in a while, I finish several projects at nearly the same time, providing a kind of intellectual euphoria. This is what I enjoy, not what I have to do. While I have written a lot of books and articles, my goal is not to beat out others in sheer numbers. My pleasure in reading, reflecting, and writing about interesting issues and concerns is what drives me. I have had fallow periods, where I produced little, but those times were not devoid of thinking about future projects; the only thing that makes me pause a bit is that as I get older I realize I have less time to finish my work.

Why should we want to slow down? This is a good question, since the speed of information access and dissemination is heralded as such a great positive attribute of our present era. While there has been consideration of how the use of the Internet and other networked communication systems might be negatively affecting our cognitive abilities, there has been less attention paid to what all this might be doing to our physical selves. The most obvious problem is the fact that all the predictions of just a generation ago that we would have shorter workweeks have been obliterated. Now, many of us work longer and are rarely, if ever, cut off from work; we stay connected to our work on vacation, while we travel, while we relax at coffee shops, and even when we visit friends and relatives. This has to be having health implications. Most of us feel tense when we are away from work or, at least, away from our online connections, and it sometimes takes a while before we can truly relax. This seems extremely unnatural. When we wake up in the middle of the night and can’t fall back to sleep, our first inclination is to check our email or to surf the Web; our second inclination, to turn on the television, is little better (and these days, it is hard to distinguish between our computers and televisions) — either way, we seldom do something that helps us fall back to sleep. We keep on running, figuratively, toward our work.

What do we need to do to slow down? There is an increasing literature on doing this, with some simply saying shut off the computer, disconnect every once in a while, and take a break. Mark Taylor states, “Contrary to expectation, the technologies that were supposed to liberate us now enslave us, networks that were supposed to unite us now divide us, and technologies that were supposed to save time leave us no time for ourselves” (Mark C. Taylor, “Speed Kills: Fast is Never Fast Enough,” The Chronicle Review, October 24, 2014, p. B7). So, what other choice do we have but to shut the machines down and take a break? That this is important can also be seen in the increasing testimony about the positive benefits of occasionally taking a break from our smart phones, tablets, and other digital devices (such as Jay Bookman, Caught in the Current: Searching for Simplicity in the Technological Age [New York: St. Martin’s Press, 2004]).

Forging Ahead: Academic Forgery Today

There are lots of challenges facing faculty today when it comes to evaluating students’ written work. Plagiarism has long been one such challenge, in terms of both identifying and dealing with work not actually written by the students who submit it (see Richard A. Posner, The Little Book of Plagiarism [New York: Pantheon Books, 2007] and Susan D. Blum, My Word! Plagiarism and College Culture [Ithaca: Cornell University Press, 2009]). Now, with Jeffrey Alfred Ruth’s insider account of the business of selling forged papers, Papers for Pay: Confessions of an Academic Forger (Jefferson, North Carolina: McFarland and Co., Inc., 2015), we have something else to worry about. Ruth, who spent six months in 2010 forging papers for a living, provides a disturbing account of how students can purchase papers, written by others, that satisfy academic requirements.

Ruth, who had been an English major and seems to have been an unsuccessful academic, provides some startling information about the scale of this forgery enterprise. He estimates that nearly two million fraudulent essays are turned in each semester in American colleges, from undergraduate papers even to dissertations. And he adds that this is a growing business, suggesting that all of us have probably dealt, unknowingly, with forged work. He also adds another amazing insight about the length of time it takes to produce these fraudulent papers, stating that good forgers “can take a topic that they have only passing interest in, create a thesis, research, analyze, synthesize, and produce an order in under four hours” (p. 139).

Why are we duped by these papers? Our normal ways of detecting plagiarism, such as using Turnitin, aren’t suited to identifying such work. The papers Ruth describes are not plagiarized but original works, just works not done by the students in question. “Most academic forgery papers contain no overt plagiarism beyond the fact that the paper is fraudulently turned in by the customer” (p. 65). And our conventional means of detecting papers not written by the students in question are far too limited: “Existing methods of looking for forgeries can’t detect original scholarly works of fraud, and more detailed investigations take much too much time for the overworked college faculty” (p. 100). Checkmate.

This is a tough book to read. Ruth often suggests that the nature of American higher education has created conditions leading to the growth of faked papers and the businesses creating them, as if this justifies what is happening now. He is also unrepentant about his own involvement in the forgery of papers. Personally, I feel a little guilty about buying the book and contributing to the author’s royalties. Nevertheless, any faculty member concerned with fairly evaluating student work would be advised to take a look at it. I am not sure what I can do to deal with this growing problem, but I am convinced that any students who turn in papers written by others are cheating themselves, imperiling their future careers, and jeopardizing others. The ethical issues raised by the businesses that have grown to supply papers to students ought to be the focus of how we approach this aspect of the academy; ironically, the attention to ethical matters by this author relates to the measures taken by the companies to provide quality products.

Solitude (or Quiet)

Recently, I sat in a small meeting where the statement was made that the only way now to conduct serious research is in collaborative mode. This statement is being made all over the academy, at scholarly conferences, and in scholarly and professional journals, reflecting growing interest in interdisciplinary work and recognition of the large scale of the issues we face in any research study. On the one hand it seems to make perfect sense, especially in our networked world where collaboration and communication are so much easier. The more people gathered together and the more disciplinary and methodological perspectives focused on a particular problem or research question, the reasoning goes, the higher the likelihood of success. This is the crowd theory of scholarship, and while it may be popular at the moment I am not sure it will stand the test of time, nor am I sure it should. Moreover, such collaborative projects are supported, in most cases, by large grants funding graduate and other research assistants. How can such projects not be successful? Besides, the individual who made this statement, though involved in a collaborative project, seemed uninterested in including others who had expressed an interest in being part of it. And I am not sure what this says about the nature of such research methodology or its validity.

On the other hand, there are many fields, such as history and literary studies, where solitary research not only continues but is the preferred approach (or for some at least the most interesting). Indeed, there have been some well-documented instances where collaborative projects in those fields have made contributions, but not necessarily to any degree greater than what a single person sitting in an archives or research library might have accomplished. Is creativity more possible when an individual labors alone than in the group-think that can occur with large collaborative projects? Arguments will rage on about this, of course, but the main point is that creative insights will still emanate from individual scholars and other thinkers even as the university and industry embrace large-scale projects now made possible by the Internet. There will always be both opportunities and needs for individuals to look at the world’s problems and develop creative solutions. Creativity is not the sole domain of collaborative, interdisciplinary work. Never has been, never will be. Besides, more brains do not necessarily mean better or clearer thinking (anyone visiting a faculty meeting over a span of a couple of hours can attest to this, especially as they wonder what hit them).

Solitude, or another way of stating it, quiet, is becoming harder to find and to justify in our era. Connectivity is the norm, and you can see evidence of this everywhere. People are constantly checking their e-mail, making calls on their cell phones, reading Facebook, or tweeting wherever they are – airports, libraries, malls, on the beach, and in church. Most talks, classroom presentations, and sermons now start off with the reminder to turn off your cell phones; that this advice is not heeded is amply demonstrated by the random and varied ring tones periodically demanding the attention of their owners. Focused attention on any one task seems impossible now, and our younger generation has grown up functioning in this fashion. The question is, of course, just what we may be losing in this climate.

One particular loss might be a degree of civility. It is difficult these days to have a conversation with someone for very long before their cell phone rings or they pull it out to check their messages and e-mail. I have heard stories of individuals interviewing for jobs who pause in the middle of an interview to answer their phone. Phones ring in the classroom on a regular basis, and, along with other sounds emanating from our portable electronic devices, we are constantly reminded that there is no escape from the outside world. Sales clerks, in the middle of dealing with a customer, stop to answer their phone. In the middle of conversations, people stop to check their email. Everyone has their favorite stories of such breaches of polite conduct they have witnessed, and the point is clear – we are afraid of falling behind or of being alone. In the interest of communication, we have become desensitized to what communication really means or how it works.

Some might argue that this connectivity enhances solitude. Perhaps. In a volume on writing creative non-fiction, we find one possibility: “When you look at our tendency these days to interface with technology rather than one another, perhaps the surprise is not that memoirs are flourishing but that anyone questions the trend. Neuropsychologists are discovering that the impulse for story is likely hard-wired into our brains. The less we talk to one another, the more our personal narratives – our confessions, our dark sides, our recitations of the things we do in secret – will seek other ways to emerge, finding voice in the genre of memoir” (Lee Gutkind and Hattie Fletcher, eds., Keep It Real: Everything You Need to Know About Researching and Writing Creative Nonfiction [New York: W. W. Norton and Co., 2008], pp. 98-99). In other words, we will spend our days and nights facing a computer screen, alone, in new forms of virtual communities. We decrease the amount of healthy personal interaction, and we do not substitute time for reflection; our attention is on browsing or surfing the web or losing ourselves in social media.

What kind of solitude, then, should we seek? We need to have time when it is quiet, when we can read, relax, and reflect. We generally associate such solitude with a quiet place to which we can withdraw. Rebecca Solnit, for example, notes that “libraries are sanctuaries from the world and command centers into it. . . “ (Rebecca Solnit, The Faraway Nearby [New York: Viking, 2013], p. 63). Museums, parks, cemeteries, woodland trails, and so forth can serve the same purpose. Take a chance. Leave all your portable electronic devices behind, and go for a walk. You might be surprised what you think about.