When I explain what I do and study as a historian, I always discuss digital history last. I do this because explaining digital history is not like articulating my study of American history or nineteenth-century Mormonism. Those topics are more concrete, their frameworks are generally understood, and they fit easily into the traditional historian paradigm. When discussing them, I usually get bombarded with questions about historical figures, events, or themes. Yet when I finally do bring up digital history, the response is always, “Digital history…what is that?” Some days, I want to respond, “That, my friend, is the $64,000 question.”
Defining digital history, or the larger digital humanities, is not an easy task. If it were, sites such as Jason Heppler’s What is Digital Humanities would not exist. Everyone seems to have their own variation of a working definition. For myself, I have settled (for the time being) on “Digital history is the use of information technology to research, present, and teach history.” Yet even my own definition comes across as clunky, and possibly so broad as to not be useful. Most important in defining digital history is classifying it within the larger historical framework. In “Interchange: The Promise of Digital History,” William G. Thomas defines digital history as an “approach to examining and representing the past.” At other points in the article, digital history is classified as a field, a method, even a genre. Each classification comes with its own questions, concerns, and problems. Labeling digital history a field, for instance, raises for some the worry that it stands insular within the larger field of History. Of the multiple classifications, methodology seems paramount. Tom Scheinfeldt, in his 2008 blog post “Sunset of Ideology, Sunrise of Methodology?,” sets the methodological orientation of digital history against the roughly 75 years of ideological focus that dominated twentieth-century historical scholarship. He posits that perhaps the field of history is reorienting itself back to “organizing activities, both in terms of organizing knowledge and organizing ourselves and our work.” This rebirth of methodology in digital history raises an important question: what is the end goal of digital history?
Amy Murrell Taylor, in addressing the question of teaching graduate students digital history, highlights a shortcoming in the teaching process. “Mastering technology,” she says, “becomes the ends rather than the means to a bigger end of producing innovative history.” I reflected on my last year in the PhD program at George Mason University and wondered if I, too, fell victim to mastering technologies rather than producing innovative history. Both Clio Wired courses provide a great platform for introductory and experimental learning in digital history, but, speaking for myself, my focus often centered on the tech. Cameron Blevins takes this a step further: in “The Perpetual Sunrise of Methodology” (a play on Tom Scheinfeldt’s blog post), he states that digital history has “over-promised and under-delivered.” Specifically, it has over-promised and under-delivered on making scholarly claims about history. Digital historians are fascinated with methodology, and rightfully so: digital history forces a discussion, even a transparency, of methodology that other historical fields either gloss over or don’t address at all. However, does this infatuation with methodology draw our attention from the bigger end of innovative history?
As a possible tangent, I want to dwell on the topic of educating students and historians in digital history. At the RRCHNM twentieth anniversary conference, Spencer Roberts articulated what I feel is an important part of achieving the “bigger end” that Amy Murrell Taylor is addressing. He said, “Failure is productive if you value learning, it isn’t if you value the end product.” At the time, I had been a graduate student for a meager two or three months and my engagement in the field of digital history was limited, yet I agreed wholeheartedly with Spencer’s comment. It could be that the focus on mastering technology is the product of a fear of failure and the lack of a framework that rewards the process rather than the end product. For those reading this post, what do you think?