Beginning in 2012, RRCHNM started offering a Digital History Fellowship to incoming PhD students. A cohort of three students is selected each year to work on active projects from across all three of the Center’s divisions: Research, Public Projects, and Education. The fellows are able to immerse themselves in the day-to-day activities of digital humanists and experience the field at large. Upon my admission to George Mason in the fall of 2014, I was one of the recipients of the Digital History Fellowship.
Digital Campus Podcast
The Digital Campus Podcast discusses “how digital media and technology are affecting learning, teaching, and scholarship at colleges, universities, libraries, and museums.” It is hosted by the Roy Rosenzweig Center for History and New Media at George Mason University and features Dan Cohen, Tom Scheinfeldt, Amanda French, Stephen Robertson, and Mills Kelly as the podcasters. One of the responsibilities of a Digital History Fellow is to act as a producer: organizing discussion material, coordinating with the podcasters to schedule a recording time, and writing a summary of that episode’s discussion. I produced two podcasts in collaboration with fellow Digital History Fellow Jannelle Legg:
- Episode 111: The Next Big Thing
- Episode 108: Things that Go Bump in the Night: Copyright, Interviews, and other scary things
RRCHNM 20th Conference
RRCHNM celebrated its 20th anniversary in November of 2014. In celebration of this milestone, the Center hosted a conference discussing both the history of the Center and the state of the field of digital humanities. As a fellow, I had the opportunity to welcome guests as they arrived, serve as a scribe in the various breakout sessions, and help the conference organizers in any way possible. The conference schedule is online with links to the majority of material produced at the conference. Using Storify, I aggregated my tweets from the first day of the conference.
RRCHNM 20 Histories: The September 11 Digital Archive
As part of a seminar with the Director of the Center, Stephen Robertson, my cohort of fellows helped aggregate the materials coming out of the RRCHNM 20th conference. In connection with the anniversary, each fellow was to research and develop a project pertaining to some aspect of the Center’s history. I chose to research and outline the context surrounding the creation of The September 11 Digital Archive, attempting to place it within the larger history of the Center and the various projects that contributed to the archive’s development.
Digital Humanities Now
For the second half of the fellowship, I was placed in the Research Division at the Center. One of the ongoing projects within the Division is the twice-weekly publication of Digital Humanities Now. Developed in collaboration with another project within the Research Division, PressForward, DHNow is a publication that “highlights and distributes informally published digital humanities scholarship and resources from the open web.” During my tenure in the Research Division, I was able to act as the Editor-in-Chief multiple times. The Editor-in-Chief reviews material that has been nominated for publication by a body of Editors-at-Large and then makes the final decision on what will be published. Below is a link to the archive of publications I oversaw as Editor-in-Chief.
DH Support Space
The DH Support Space was not an official part of the fellowship’s curriculum. It was started by a previous cohort of fellows as a place where students working on digital projects could get help. While never inundated with students, the Support Space provided a unique opportunity to work as a group to solve problems and answer questions. When I was unable to fix a problem myself, we could draw on the knowledge of the other fellows to arrive at an answer.
Part of the curriculum for Digital History Fellows is to contribute to a Fellows blog outlining and detailing the work the Fellows were doing within the Center. Each semester, I was tasked with contributing a handful of blog posts recounting the work I had accomplished on various projects and tasks. As Fellows, we were also introduced to the use of Twitter for scholarly communication. Each semester we were assigned to “tweet a day of work” and then write a reflection post on the use of Twitter by scholars. I have aggregated these posts here but have also included links to the original posts on the Fellows blog.
In a week’s time, the semester, and by extension the DH Fellowship, will come to an end. As such, it is time for the end-of-semester blog post. In the time since my last blog post, I have divided my time between two projects associated with Digital Humanities Now. The first project (web scraping) focused on the content published by DHNow, while the second (web mapping) focused on DHNow’s Editors-at-Large base.
Over the years, Digital Humanities Now has published hundreds of Editor’s Choice pieces. For 2014 alone, roughly 165 Editor’s Choice articles from numerous authors were featured. Such a large corpus of documents provided a ready source of data about the publishing patterns of DHNow. In order to translate the documents into usable data, we needed to convert the Editor’s Choice articles into machine-readable plain text. The task, then, was to go through each Editor’s Choice article and scrape the body text down into a .txt file. I had never scraped a website before, so this project was going to be a great learning opportunity.
I began the project by reading through the Beautiful Soup web scraping tutorial on Programming Historian by Jeri Wieringa. It uses a Python library called Beautiful Soup to pull data out of a website’s HTML. During my rotation in the Research Division last semester, the three first-year Fellows had quickly worked through the Beautiful Soup tutorial, but I needed a refresher. However, I switched from Python to R. This change came at the suggestion of Amanda Regan, who has experience using R. As she explained it, R is a statistical computing language and would be a better resource for analyzing the corpus of Editor’s Choice articles than Python. After downloading RStudio (a great IDE) and playing around with R, I found it to be a fairly intuitive language (more so for those who have some background in coding). I came to rely on Mandy and Lincoln Mullen when running into issues, and they were both extremely helpful. Learning R was fun, and it was also exciting as R is the primary language taught and used in the Clio Wired III course, which I plan on taking the next time it is offered.
In order to scrape the body text of each post, I relied on the class names of the HTML tags containing the text. I imported a .csv file of all the Editor’s Choice articles and searched each page for a specific class name. When it was found, R would scrape all the text found in that tag and place it in a .txt file whose name corresponds to the article’s ID number. Finding the class name was a hang-up, but I was able to use the Selector Gadget tool to expedite the process. It essentially makes a webpage’s CSS structure interactive, allowing you to click on elements to view their extent and class names. I learned a lot about website structure while identifying each body text’s class name. In the end, I was able to scrape 150 of the 165 Editor’s Choice articles.
You can find my code on my GitHub account here.
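My actual scraping code was written in R, but the core technique is the same one the Beautiful Soup tutorial teaches and can be sketched in Python. Here the page is a stand-in string, and the class name (`entry-content`) and article ID are invented for illustration; the real script looped over the URLs in the .csv file:

```python
from bs4 import BeautifulSoup

# Stand-in for one Editor's Choice page; the real pages were fetched
# from the URLs listed in the .csv file of articles.
html = """
<html><body>
  <div class="sidebar">Navigation links</div>
  <div class="entry-content"><p>Body text of the article.</p></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# Grab the tag carrying the body text by its class name (the kind of
# name you can identify with the Selector Gadget tool).
body = soup.find("div", class_="entry-content")
text = body.get_text(strip=True)

# Save the text to a .txt file named after the article's ID number.
article_id = 42
with open(f"{article_id}.txt", "w") as f:
    f.write(text)
```

The sketch skips the error handling a real run needs: some of the fifteen articles I could not scrape simply did not expose their body text under a findable class name.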
The second project I was fortunate to work on was displaying our Editors-at-Large spatially on a map. My undergraduate work is in Geographic Information Systems (GIS), so this project in part came out of my interests and prior experience. In association with this project, I am writing two blog posts for the soon-to-launch DHNow blog. The first will detail the process of developing and designing the map, while the second will delve into what the map is “telling us.” For the sake of the Fellows blog, I will instead reflect on my experience in creating the Editors-at-Large map and will link to those two posts when they are published.
It had been almost a year since I devoted any real time to cartography. I decided to use the same model I followed in my undergraduate capstone class on web mapping. To begin with, I needed a dataset that I could use on the web. During my undergraduate work, I used ArcGIS to convert a .csv into a GeoJSON file that could be used on the web. However, since coming to GMU and the Center, I have embraced open source (both by choice and by financial force) and instead relied on Quantum GIS (QGIS). I had no real experience with QGIS, so this project provided me an opportunity to become familiar with the platform, an added benefit that I both appreciated and enjoyed. In the end, converting to the GeoJSON format was fairly straightforward.
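I did the conversion in QGIS, but what it produces under the hood is simple enough to sketch in a few lines of Python: each row of the .csv becomes a GeoJSON "Feature" (a point geometry plus properties) inside a "FeatureCollection." The column names, editor names, and coordinates below are invented for illustration:

```python
import csv
import io
import json

# A hypothetical .csv of Editors-at-Large; names and coordinates invented.
csv_text = """name,lat,lon
Editor A,38.83,-77.31
Editor B,51.50,-0.12
"""

features = []
for row in csv.DictReader(io.StringIO(csv_text)):
    features.append({
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # GeoJSON coordinate order is [longitude, latitude]
            "coordinates": [float(row["lon"]), float(row["lat"])],
        },
        "properties": {"name": row["name"]},
    })

geojson = {"type": "FeatureCollection", "features": features}
with open("editors.geojson", "w") as f:
    json.dump(geojson, f)
```

A file in this shape can be handed directly to a web mapping library like Leaflet, which is what made GeoJSON the right target format for the map.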
In the final days of the Fellowship, I feel both excited and melancholy. I am sad that the fellowship is coming to an end and I am moving out from the Center. It has been a wonderful experience working with great people on interesting and engaging projects. Yet, it is exciting to think back to myself on the first day of the Fellowship and realize how far I have come in my digital work.
Last semester I was able to live tweet the 20th Anniversary Conference for RRCHNM. It was a lot of fun and really interesting to pass information around the Twitter-verse about the various talks and sessions. My overall impression hasn't changed about the usefulness of Twitter for professionals and academics. In the time since creating my Twitter account, I have had several people ask me for my opinion on Twitter, and I have been able to expound on its usefulness in academia. As I have developed my online presence, I have become sensitive to what type of presence I am "putting out there." From the beginning, I have tried to separate my personal/family life from my professional life. One way I have tried to separate the two is by using Twitter for my professional life and keeping my Facebook account for family or personal use. While some crossover does exist (posting articles or conference material on Facebook, or about the Red Sox on Twitter), my impression of Twitter is very academic.
I chose to live tweet Monday, May 4th (May the 4th be with you...) because I knew I would be engaging in a multitude of projects. This week I am the Editor-in-Chief for Digital Humanities Now, so I would be reading through the nominated material to prep for Tuesday's posting. I would also be drafting a write-up for my Editors-at-Large map, discussing the process of making the map as well as the various ways to use the E@L data in a spatial platform. Additionally, the DH Support Space takes place on Mondays to help students working on digital projects. We were expecting higher traffic at the Support Space as the Clio II students' preliminary drafts of their websites were due that night. Compared to my live tweeting of the conference, Monday's live tweeting was a very different experience.
Live tweeting a work day meant that my tweets were not all centered on a specific topic. Live tweeting the conference was more of a collaborative effort, as each tweet was tagged with the conference's designated hashtag; all tweets were focused toward a single collaborative narrative. Tweeting my workflow for Monday allowed me to tweet about DHNow in the morning and then about Leaflet and mapping in the afternoon. At first my approach felt a bit scattered, but I would attribute that to my still limited proficiency and experience with tweeting.
This semester, my cohort of fellows was placed into different divisions. Since we are on the accelerated one-year fellowship track (the previous two cohorts each had a two-year fellowship), every division currently has two DH fellows. I was assigned to my first choice, the Research Division. This was the first division my cohort rotated through last semester, and it was a bit more technical than the other two divisions. However, I am excited to get involved in its current projects and to contribute as a member of the team. You can read my reflection on my rotation through the Research Division here.
For the first few weeks, I familiarized myself with various aspects of the division's work. Even though I came into the fellowship with experience in programming and web design, I was by no means at the same level as the rest of the division. Taking a few weeks to introduce myself to the tools used in the division's work would allow me to better understand the workflow and processes involved in the different projects.
Git and GitHub: The two main projects that Research is involved with are Zotero and PressForward. Both of these programs are open source and available online in their entirety on GitHub.com. GitHub is an online repository for source code and allows for collaboration in the development process. Currently, the Research Division is working on releasing updated versions of PressForward. By learning how GitHub and git commands work, I would be able to understand how these updated versions are created, shared, tested, and released. I went through a handful of tutorials on git commands from both GitHub and Code School. I even created my own project repository on GitHub and practiced pulling and pushing files. I worked through the command line (Terminal on Mac) to communicate with GitHub. It was an interesting and definitely new experience. I now understand, in theory, how to save various stages in the coding process and upload them to GitHub. Most importantly, I can follow people's conversations about GitHub and their online repositories. I am looking forward to learning more and becoming more comfortable with the process.
PHP: I came into this Fellowship with experience in a few programming languages. I had taken two programming classes in my undergraduate in C# and had some experience with HTML, CSS, and JavaScript from a capstone class. PressForward works a lot in a scripting language called PHP. I went through the tutorial on Codecademy for PHP and reviewed JavaScript as well. I didn't come out of the tutorials with a mastery of the language, but they did teach me how to follow the syntax and logic of the code. That really is half the battle in programming. I now have a greater appreciation for programmers who have expertise in multiple languages. I only have a basic knowledge of a handful of languages, and they are already bleeding into each other in my mind. In spite of this, I enjoyed the programming work and want to continue to improve my skills and utilize them in my own work as well as within the Research Division.
The crux of my time in the division thus far has been preparing for and working as Editor-in-Chief for Digital Humanities Now. The idea of being Editor-in-Chief was a bit daunting, especially with the immediate publication that comes with the digital medium. However, I was well prepared and well supported my first time through.
Preparation: In the weeks leading up to my assigned time, I shadowed Amanda Regan and Amanda Morton during their weeks as Editor-in-Chief. They showed me how to format each post for publication, how to find relevant information in the Google documents, and how to email the Editors-at-Large. The most imposing task was finding the Editor's Choice articles. I felt comfortable identifying the various news items for publication, but the Editor's Choice articles are more involved and are the focal point of the DHNow website. A helpful way of understanding Editor's Choice articles, as it was explained to me, is that they should be centered on an argument or position. With this understanding, I decided to spend some time going through Editor's Choice articles from the previous months to better ground my judgment. As my week as Editor-in-Chief approached, I felt prepared and excited for the task.
My Week: My week started with a suggestion from Ben Schneider that I look through the nominated material the day before publication. This would allow me to gauge whether we had enough material to publish or whether I needed to devote time to aggregating articles. So I spent an hour or so drafting posts and prepping for the following day. I left work on Monday feeling confident that I had plenty of material for the Tuesday publication. The following morning, I returned to find that most of the material I had drafted the day before was almost entirely humanities-focused, with little to no digital component. Luckily, I had started the day early to allow for "hiccups" such as this and was able to work through and find things to publish. Thursday went a little faster, after having already gone through the entire process on my own. I was able to find, with relative ease, plenty of news items to publish for both days. Editor's Choice articles were, however, more time consuming. In the end, I was able to publish two Editor's Choice pieces on both Tuesday and Thursday.
Reflection: I really enjoyed being Editor-in-Chief. It was somewhat empowering to be the individual who decides what is published. It also imbues a sense of responsibility that the posts you choose are of high quality and relevant to the digital humanities community. Taking on this role gave me a glimpse of the vast amount of material being published on the Internet. PressForward has over 400 RSS feeds coming into the All Content page, and this is only a fraction of the content being published daily. I can definitely see the need for programs such as PressForward to aggregate, organize, and publicize digital work. This being my first experience with online publishing, I found it very rewarding and encouraging. I have three more weeks at the helm as Editor-in-Chief, and I am looking forward to them.
This blog post will conclude my first semester in the PhD program here at George Mason University. The semester moved by very quickly, and it is amazing to see how much we have all learned from the fellowship. Each divisional rotation expanded my understanding of the field of Digital History as well as my own capabilities within it. Our final rotation for the semester was a seminar with Dr. Stephen Robertson. Largely focused on the Center's recent 20th Anniversary, the seminar taught me a lot more about digital history centers in general and how they function.
We started the seminar with a list of readings on digital humanities centers. It was interesting to compare what we were reading to our experiences over the last four months. I was surprised to see the complexity involved in establishing DH centers. Issues were raised over where a center should reside, what function it should have, and even whether creating a center is the right thing to do. I reflected on my own reasoning for applying to GMU. George Mason ranked high on my list of graduate schools because I wanted to work at CHNM. If I knew of a professor who worked in DH but didn't operate in a center, would that deter me from applying to that university or program? I really liked Stephen Ramsay's post Centers are People, where he articulated a preconceived notion that I had in applying to programs. He states that for many, centers are how you get into DH. However, he goes on to highlight how this isn't always the case. These questions about centers are really interesting to me as I aim to be a collegiate professor who is also a digital historian.
After our brief crash course in DH centers, we focused the remainder of our time on the 20th Anniversary. We thought it would be a great resource if we aggregated the tweets from the two days of the conference using Storify. Using the #rrchnm20 hashtag, we went back through Twitter and grouped the tweets according to the day and session of the conference. This exercise capped off the greatly increased appreciation for Twitter that I have developed over the semester. I have come to value it as a platform to communicate with other scholars and to disseminate information and ideas. It is also, as I learned, a convenient and easy way to categorize and save these communications. I can truly say that I have been converted. The saved tweets can be found here.
The majority of the seminar focused on creating an Omeka exhibit for the 20th Anniversary site. Having recently come out of our rotation through Public Projects, I was interested in the history of the September 11 Digital Archive. We, the first-year fellows, had the opportunity to add metadata to recently received items, so I had a little experience with the archive itself. The approach I took was to contextualize the Archive within CHNM's history. How was it influenced by other projects? Did it feed into or help bring about other projects? How does a center preserve a project over time? In locating the Archive in the overall story that is CHNM, I learned a lot. I was able to explore other digital memory banks that the Center has created, such as the Blackout History Project and the Hurricane Digital Memory Bank. I was able to read through the various documents for the Archive, which addressed interesting points such as what to do with submissions that could be falsified or blatantly racist. I also came to appreciate the magnitude of such a project and the importance of collecting history online. If you would like to see my exhibit, follow the link here.
I found the topic of the seminar, DH centers and the history of CHNM, to be a great way to wrap up this whirlwind of a semester. Each of the three divisions allowed us a glimpse into the various projects CHNM undertakes and broadened my understanding of the field. The seminar allowed us to step back and take in the broader role of centers in the discipline. I am coming away with a greater understanding and even more questions.
This past weekend, November 14th and 15th, was the RRCHNM 20th Anniversary Conference here at George Mason University. The attendee list included current and former staff, George Mason faculty, current grad students, and guests from other institutions and universities. Over the two days of this unconference, topics ranging from the history of CHNM to graduate student attribution were discussed.
I was excited to be able to meet the people whose work I have been reading in my Clio Wired class. The conference put the field in context for me, as I was able to interact with people like Tim Hitchcock, Dan Cohen, and Trevor Owens. In this gathering, I was able to place myself in the community of DH scholars. It was an interesting experience that really boosted my desire to engage the field and participate in the discussions.
Of the three sessions I attended on the first day, the first session (Digital Literacy Tool Kit for Undergraduates) has lingered with me the longest. I wanted to attend this session because I am just starting out in my PhD program and in my involvement with digital history. The discussion in this session, I felt, would help me as I learn and grow as a digital historian. The session centered on an attendee who was trying to develop an undergraduate course focused on digital methods. It began by taking a step back and asking, "What do you (the professor) want the students to leave with, ultimately?" Digital literacy and fluency, multilinear narratives, and interaction with digital sources were all addressed. One of the more important comments was made by Spencer Roberts on failure. He said, "Failure is productive if you value learning, it isn't if you value the end product." I have been reflecting on my own relationship with failure in my work. Moving forward, I have a greater sense of myself and my own progress as a digital historian. I hope to always improve my digital literacy and fluency through my work.
I also took the opportunity of the conference to fulfill my "live tweet a day" assignment for my Fellowship. I thought it would be a great time to tweet out the discussions and talks, especially for those who weren't able to make the first day. I wrote a blog post on that experience that can be found here.
The second day, I worked the registration table in the morning and acted as scribe for the two breakout sessions. Interestingly, both sessions I was assigned to ended up being on the same topic. In all, including the day before, there was a series of three sessions that carried on a long discussion of peer review of digital scholarship. There was a core group of individuals who attended all three of these sessions. It was fascinating to participate in this important discussion, as I have not published anything, let alone any digital scholarship. By the third session, on the afternoon of the second day, the discussion focused heavily on crafting a DHR - Digital History Review (coined by Fred Gibbs) - to provide the best platform to review the scholarship. I came away from the session invigorated and motivated to continue the discussion on peer review. It is an important part of digital history, not just for the overall review but also as a supplement for tenure and promotion committees. The notes from the latter two sessions are here and here.
Overall, the conference was a great success and a lot of fun to participate in. I enjoyed tweeting out the conference, especially because there is now a record of all the tweets under the #rrchnm20 hashtag. The sessions were incredibly helpful and insightful. I was surprised at how quickly I became invested in the discussions and the possible outputs from those sessions. I went home from the second day with a plethora of thoughts, ideas, questions, and concerns. I guess that is the mark of a great conference.
In some ways, public history was a field I spent little time engaging with before coming to George Mason. While I did work in a university museum for over two years during my undergraduate degree, I had always focused my career aspirations and attention on academic history alone. In part, this was because I had never formally been introduced to public history and the vastness of the field. Since starting the Digital History Fellowship, public history has quickly come into focus. My rotation through the Public Projects division introduced me to the plethora of opportunities that digital public history has to offer. Over the course of these four weeks, we worked on multiple projects, each with differing tasks.
During our first week, we were introduced to Omeka, a content management system (CMS) designed with a focus on the item rather than the word. Omeka is one of the flagship projects for Public Projects. I was excited to learn more about this program as I had heard so much about it from around the Center. We started by reading about Omeka and exploring Omeka sites. This was followed by Megan Brett walking us through a command-line install of Omeka on the dev server. It was really interesting to work with the command line, as I have little experience using Terminal or any command-line tools. In addition, the command-line install differed greatly from the one-click install we did in our Clio Wired class on Reclaim Hosting. Working on the back end with git commands definitely gave me a greater appreciation for the ease of the one-click install while also highlighting the control the command line gives to the user. We wrapped up our week on Omeka by installing PosterBuilder on our dev Omeka and testing the plugin.
In our second week, we moved on to Histories of the National Mall. Our main task was to do mobile testing of the website while on the National Mall. It doesn't matter how old you get, everyone loves going on field trips, especially to a place like the National Mall. We took a day off from the Center and traveled out to the Mall with the intention of testing the site on different devices. Alyssa brought an iPad, Stephanie had her Android phone, and I had my iPhone 5. Of the three devices, the Android phone worked the best (surprisingly). The Mall wireless network wasn't working, which ruled out Alyssa's iPad, and my iPhone was running very slowly. In spite of this, the whole experience was a lot of fun and very educational. Using the website on the Mall allowed us to experience it as it was intended. We had to work around the sun glare on the screens, get the map geolocation to work, and filter the tags for each item.
During our trip to the Mall, we were tasked with reading through an Exploration to gain a sense of the user experience. After we returned to the Center, we were each assigned a rough draft of an Exploration that needed to be both fact-checked and edited. My Exploration was "Who keeps the Mall so green?" It covered the history of the Mall's landscaping as well as its grounds maintenance. The fact-checking process took an exceptionally long time to complete, requiring me to read through various NPS documents as well as other government documents. As difficult and frustrating as it was, it was very rewarding in the end. I learned a lot about the McMillan Plan, the Commission of Fine Arts, and the new Turf Restoration Project.
Our final project was the September 11 Digital Archive. A retired FAA special agent from Boston sent in a collection of documents that needed to be cataloged into the archive. We each took five documents from the collection, read through them, and then populated the respective metadata fields. It was quite fascinating to read these testimonial accounts and internal memos from Logan International Airport. I learned ever more about metadata, as we had to follow the Dublin Core standard. While I have experience with metadata in general (metadata is important in GIS work), I was unaware of the differing standards. Through this project, I learned more about curating items in a digital archive as well as creating and maintaining metadata.
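For readers unfamiliar with the standard, a Dublin Core record is just a set of values keyed to fifteen fixed element names, any subset of which a record may use. The sketch below shows the shape of such a record in Python; the item and its field values are invented for illustration, not taken from the collection:

```python
# A sketch of a Dublin Core record for one archive item; the values
# below are invented, not drawn from the actual collection.
record = {
    "Title": "Internal memo, Logan International Airport",
    "Creator": "Unknown",
    "Date": "2001-09-12",
    "Type": "Text",
    "Format": "application/pdf",
    "Description": "A memo circulated to airport staff after the attacks.",
    "Rights": "Donated to the September 11 Digital Archive",
}

# The Dublin Core element set is fixed at fifteen names, so a simple
# validity check on a record is set containment.
DC_ELEMENTS = {
    "Title", "Creator", "Subject", "Description", "Publisher",
    "Contributor", "Date", "Type", "Format", "Identifier",
    "Source", "Language", "Relation", "Coverage", "Rights",
}
assert set(record) <= DC_ELEMENTS
```

Working within a fixed element set like this is what makes items from very different donors searchable and comparable within the same archive.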
Overall, my time in public projects was very beneficial. I was introduced to the expanse that is digital public history by taking part in multiple projects. Each project challenged me in different ways and helped me to become a more rounded digital historian. Truthfully, I am now contemplating and investigating Public History just as much as Academic History.
Prior to entering the PhD program at GMU and starting my Digital History Fellowship, I had little to no interest in Twitter. My opinion of Twitter centered on self-absorbed individuals who liked to tweet images of their breakfast or update the world on their commute to work. However, I found myself creating a Twitter account on my first day of graduate school. In the coming weeks, I learned that Twitter has become a very active platform for academics, especially digital historians, in the exchange of ideas and information. Two great articles on the topic, Heather Cox Richardson's “Should Historians Use Twitter, parts 1 & 2” and Ryan Cordell's “How to Start Tweeting (and Why You Might Want To),” were both helpful as I crafted my Twitter account.
As a Fellow, part of my curriculum for this semester is to do a day of tweeting. While the other two in my cohort (Stephanie and Alyssa) tweeted about a day in a division, I thought tweeting a day at the CHNM 20th Anniversary Conference would be interesting. The attendee list included current staff, graduate students, alumni, and other noted scholars all gathering to discuss CHNM, digital history, and digital history centers. Not only would this fulfill my Fellowship assignment, but I also thought live tweeting a conference would be a great experience.
The conference was great! There was a lot of good discussion on various digital history topics. I found that the live tweeting went exceptionally well. I noticed that I was listening to the presenters specifically to find a good point to tweet out. This made it harder at times to take notes, but in a way, the series of tweets serves as my notes. Twitter served as a collaboration platform through the #rrchnm20 hashtag that everyone else at the conference used. It was also interesting to see the interaction of others who could not attend the conference but could participate online through Twitter. It was a very interesting exercise that I continued on day two as well.
Overall, my view of Twitter has improved greatly, especially after tweeting the conference. It adds another layer to scholarly interaction and allows for communication beyond the conference hall. Furthermore, it expanded my network of scholars as I found new people to follow and others began following me. My tweets from the first day of the conference are below.
Our rotation through the Education division has come to an end. It was a great experience and I learned a lot from our assignments.
To begin, I want to contrast the fellows' experience in the Research division with our time spent in Education. Our entire rotation through Education was spent working on the 100 Leaders project. We were given various tasks and assignments to complete, all of which focused on 100 Leaders. This was different from our time in Research, where our rotation was divided between two very different projects. My time in Research, as I mentioned in my previous blog post, gave me a broad overview of one project and then a more “nuts and bolts” experience with the second. My time in Education working on one project allowed me to engage with that project at different stages along its progression. The experiences in each division are very different but, for me, complementary in their pedagogy. While Research allowed me to see how each division manages multiple projects, Education showed me how one project matures and develops.
Our first assignment was to build up a pool of images and videos for the people on the 100 Leaders website. We were each assigned 25 of the leaders to work on. It became somewhat like a treasure hunt as we tried to find a quality image for each leader. It was a lot of fun to find interesting or unique images and then share them with the other fellows. In a way, I felt more connected to each leader I spent time searching the web for. This assignment also forced me to think differently about certain leaders for whom I could not find many images. I had to reword my search phrases or approach them through their work as opposed to their name. The biggest hurdle in collecting images was copyright. I must admit that after this assignment I became somewhat disillusioned with copyright and the availability of public-domain images. By way of example, I was assigned George Washington. As I began to search, I was excited to work on him, as I had seen many amazing pieces of art depicting Washington. However, it turns out many of the paintings of Washington are not in the public domain. I felt saddened that the likeness of such an important figure in US History, a founding father and the first President of the United States, did not belong to the people but instead to some organization or government. Looking beyond the frustrations of copyright, this assignment was very rewarding, and it was even more exciting to see the images used in the instructional videos on the 100 Leaders webpage.
Our next task was to do some user testing. A major facet of the 100 Leaders website is the voting component. End users will be able to vote on how well each leader did in five different leadership qualities. The voting functionality is still being worked on, and we were tasked with testing it both on a laptop computer and on a mobile device. I had never user tested anything before, hardware or software. It was interesting to think about the various scenarios in which an end user would access the site. First, would they be using a PC or Mac? Second, what operating system would they be using? Third, what browser would they use? Safari? Chrome? Firefox? Internet Explorer? Directions were written up (how we should vote) and we had to run through them in different scenarios. Luckily, with three of us doing this, we were able to move through the directions fairly quickly. On my MacBook Pro, I have a dual boot with Windows 7, so I was able to test the voting on both OS X and Windows 7. Whenever an issue arose (the webpage froze or had a glitch, links broke, webpages loaded incorrectly, etc.), we took screenshots to document it. The mobile testing was even more interesting, as we had to interact with the site on touch screens. In the end, I learned there is a lot to consider and think about when user testing. This is the time the designers want the website to break, not when it is live and taking thousands of hits a day. I must say, it was fun to “try” to break a website.
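The OS/browser/device combinations we worked through can be sketched as a simple test matrix. This is only an illustration of the approach, not the project's actual checklist; the platform names are examples, and the exclusion rules (e.g., no Safari on Windows) are my own assumptions.

```python
from itertools import product

# Illustrative testing matrix -- the names here are examples, not the
# project's official list of supported platforms.
operating_systems = ["OS X", "Windows 7"]
browsers = ["Safari", "Chrome", "Firefox", "Internet Explorer"]
devices = ["laptop", "mobile"]

def build_test_matrix():
    """Enumerate every OS/browser/device combination a tester should walk through."""
    matrix = []
    for os_name, browser, device in product(operating_systems, browsers, devices):
        # Skip combinations that don't exist in practice (an assumption
        # for this sketch, not a rule from the 100 Leaders project).
        if browser == "Safari" and os_name == "Windows 7":
            continue
        if browser == "Internet Explorer" and os_name == "OS X":
            continue
        matrix.append({"os": os_name, "browser": browser, "device": device})
    return matrix

if __name__ == "__main__":
    for scenario in build_test_matrix():
        print(f"{scenario['os']:>9} / {scenario['browser']:<17} on {scenario['device']}")
```

Even a tiny script like this makes it harder to skip a combination by accident, which is the main failure mode when three people split up a testing checklist.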
Our final assignment was to draft a user manual for those who would be editing the 100 Leaders website. I had some experience writing a manual for a website. During my undergraduate degree, I worked for a museum whose website was being switched over to a new content management system. My task was to develop the content for the new website and then write a manual on how to manage and edit it. In all honesty, I did not entirely enjoy writing that manual. For the 100 Leaders website, I only had to cover a section of the biographical content of a leader. Writing a manual is a mental exercise: the author is trying to explain how to do something to someone who has never done it before. On top of that, the text is static, so there can be no conversation between the author and the user. The sections I was tasked with writing instructions for covered material that would change according to different scenarios. With help from my mentor, Jannelle Legg, I was able to work through them. While I don't necessarily want to write manuals, doing so stretched me as an academic and a student.
My time in Education was very rewarding. I am excited for the 100 Leaders project and will return to the site often to see its progress and to use its tools and information. Now on to Public Projects.
I am not sure what I was expecting when the first year fellows were assigned to the Research division. I came with a preconceived notion of what Digital History research is and what historians do with it. It turned out that the scope of my understanding was actually quite limited. My time in the division has taught me a lot about the vast applications and possibilities of Digital History. We (the first year fellows) were given chances to get our hands dirty and it proved very rewarding. Sadly, this blog post marks the end of our rotation through the Research division.
Our first assignment was PressForward. We started from the ground up by familiarizing ourselves with the project. We installed the plugin on the sandbox server and got to bang around on it. We explored the PressForward.org site as well as the digitalhumanitiesnow.org site. I must admit that my initial reaction was that PressForward was a glorified RSS reader with some added features for promoting articles. I use Feedly (an RSS reader) on my phone to follow various history blogs, and at first I did not see a big difference between the two. It wasn't until someone explained “gray literature” that the full purpose of PressForward came into view. Until that point, I had been ignorant of the issue of online scholarship. The PressForward site explains “gray literature” to be “conference papers, white papers, reports, scholarly blogs, and digital projects.” Online scholarship is being under-appreciated and forgotten in a discipline that has focused so heavily for so long on printed material. My assignment as an Editor-at-Large and then as an Editor-in-Chief brought this issue into focus for me.
Working as an Editor-at-Large and Editor-in-Chief really solidified the importance of PressForward. As Editors-at-Large, we worked through the live feed of articles and websites coming into Digital Humanities Now. I learned that it can be labor intensive to sift through the various websites and articles to find important, relevant material. It is not always easy to find the scholarship and pertinent information. I also learned firsthand about the limitations of the software. On a couple of occasions I fell victim to the browser's back button instead of closing a window; I then found myself back at the beginning of the feed instead of where I had been before I clicked on the article. After shadowing Amanda and Mandy when they were Editors-in-Chief, the first year fellows were able to make decisions on what would be published to DH Now. It was a very fun experience that helped me begin to grasp the extent of online scholarship and publishing. In addition, reading through the articles helped us stay informed on the various projects in the field. I even found articles that did not qualify for DH Now but were of interest to me. I bookmarked more than a handful that I wanted to return to later.
The final part of Programming Historian was the set of lessons on APIs, more specifically the Zotero API. I had never used Zotero, so these lessons introduced me to both Zotero and the Zotero API. Before I began the lessons, I played around in Zotero, starting my own library and learning to love the program. From the beginning, I wanted to use my personal library in the lessons and not the sample one provided. By doing this, Spencer and I found a problem in the lessons when my program couldn't access my library. Alyssa has since reported it, along with a problem she encountered, on GitHub. After finishing the API lessons, I wanted to do things that the lessons did not delve into. With help from Spencer, I was able to bang around on the API in an attempt to add/edit the author field of an item. While we did not find a solution, we did make headway, and it really piqued my interest in working with the Zotero API.
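For readers curious what editing an author field involves, here is a minimal sketch of the general shape of such an edit against Zotero's v3 web API: change the `creators` list in the item's JSON locally, then write the item back with the version header the API requires. This is not the approach from the Programming Historian lessons, just an illustration; the user ID, item key, and API key are placeholders you would supply from your own account.

```python
import json
import urllib.request

API_BASE = "https://api.zotero.org"

def set_author(item_data, first_name, last_name):
    """Replace the item's author creators with a single author (local edit only)."""
    # Keep non-author creators (editors, translators, etc.) untouched.
    others = [c for c in item_data.get("creators", [])
              if c.get("creatorType") != "author"]
    item_data["creators"] = [
        {"creatorType": "author", "firstName": first_name, "lastName": last_name}
    ] + others
    return item_data

def push_update(user_id, item_key, item_data, api_key):
    """PUT the edited item JSON back to the Zotero web API (v3)."""
    req = urllib.request.Request(
        f"{API_BASE}/users/{user_id}/items/{item_key}",
        data=json.dumps(item_data).encode("utf-8"),
        method="PUT",
        headers={
            "Zotero-API-Key": api_key,
            "Content-Type": "application/json",
            # Zotero rejects writes whose version doesn't match the server's copy,
            # which prevents two editors from silently overwriting each other.
            "If-Unmodified-Since-Version": str(item_data["version"]),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The version check in the write header is the part that tripped us up least obviously: edits fail if anything else has touched the item since you fetched it, so the fetch-edit-write cycle has to happen against a fresh copy.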
I am leaving the Research division a much-improved Digital Historian. Research had to help the first year fellows through a learning curve that, in some ways, Education and Public Projects won't. We now know the Center and feel comfortable in it. We got our feet wet and our hands dirty. The Research division was a great place to do that.