History Leaks

I am involved in a new project called History Leaks. The purpose of the site is to publish historically significant public domain documents and commentaries that are not available elsewhere on the open web. The basic idea is that historians and others often digitize vast amounts of information that remains locked away in their personal files. Sharing just a small portion of this information helps to increase access and draw attention to otherwise unknown or underappreciated material. It also supports the critically important work of archives and repositories at a time when these institutions face arbitrary cutbacks and other challenges to their democratic mission.

I hope that you will take a moment to explore the site and that you will check back often as it takes shape, grows, and develops. Spread the word to friends and colleagues. Contributions are warmly welcomed and encouraged. Any feedback, suggestions, or advice would also be of value. A more detailed statement of purpose is available here.

Posted in Archives, Research and Teaching Tools, Site Reviews

Combine JPEGs and PDFs with Automator

Like most digital historians, I have a personal computer packed to the gills with thousands upon thousands of documents in myriad formats and containers: JPEG, PDF, PNG, GIF, TIFF, DOC, DOCX, TXT, RTF, EPUB, MOBI, AVI, MP3, MP4, XLSX, CSV, HTML, XML, PHP, DMG, TAR, BIN, ZIP, OGG. Well, you get the idea. The folder for my dissertation alone contains almost 100,000 discrete files. As I mentioned last year, managing and preserving all of this data can be somewhat unwieldy. One solution to this dilemma is to do our work collaboratively on the open web. My esteemed colleague and fellow digital historian Caleb McDaniel is running a neat experiment in which he and his student assistants publish all of their research notes, primary documents, drafts, presentations, and other material online in a wiki.

Although I think there is a great deal of potential in projects like these, most of us remain hopelessly mired in virtual reams of data files spread across multiple directories and devices. A common issue is a folder with 200 JPEGs from some archival box or a folder with 1,000 PDFs from a microfilm scanner. One of my regular scholarly chores is to experiment with different ways to sort, tag, manipulate, and combine these files. This time around, I would like to focus on a potential solution for the latter task. So if, like most people, you have been itching for a way to compile your entire communist Christmas card collection into a single handy document, today is your lucky day. Now you can finally finish that article on why no one ever invited Stalin over to their house during the holidays.

Combining small numbers of image files or PDFs into larger, multipage PDFs is a relatively simple point-and-click operation using Preview (for Macs) or Adobe Acrobat. But larger, more complex operations can become annoying and repetitive pretty quickly. Since I began my IT career on Linux and since my Mac runs on a similar Unix core, I tend to fall back on shell scripting for exceptionally complicated operations. The venerable, if somewhat bloated, PDFtk suite is a popular choice for the programming historian, but there are plenty of other options as well. I’ve found standalone pdfsplit and pdfcat tools to be especially valuable. At the same time, I’ve been trying to use the Mac OS X Automator more often, and I’ve found that it offers what is arguably an easier, more user-friendly interface, especially for folks who may be a bit more hesitant about shell scripting.
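For a taste of the shell-scripting route, the whole folder-to-PDF operation boils down to a single pdftk invocation. The sketch below only composes and prints the command rather than running it (the paths are hypothetical, and pdftk must be installed separately), so you can inspect it before committing:

```shell
# Sketch: compose a pdftk command that concatenates every PDF in a folder
# into one file named after the folder itself. The function only prints
# the command; paste the output into a terminal (with pdftk installed) to run it.
build_merge_cmd() {
  local dir="$1"
  local name="${dir##*/}"   # the folder name becomes the output basename
  printf 'pdftk "%s"/*.pdf cat output "%s/%s.pdf"\n' "$dir" "$dir" "$name"
}

build_merge_cmd "/Users/me/Research/Box 12"
```

pdftk’s `cat` operation can also take explicit file handles and page ranges, which is handy when the simple glob order is not what you want.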

What follows is an Automator workflow that takes an input folder of JPEGs (or PDFs) and outputs a single combined PDF with the same name as the containing folder. It can be saved as a service, so you can simply right-click any folder and run the operation within the Mac Finder. I’ve used this workflow to combine thousands of research documents into searchable digests.

Step 1: Open Automator, create a new workflow and select the “Service” template. At the top right, set it to receive selected folders in the Finder.

Step 2: Insert the “Set Value of Variable” action from the library of actions on the left. Call the variable “Input.” Below this, add a “Run AppleScript” action and paste in the following commands:

on run {input}
    tell application "Finder"
        set FilePath to (container of (first item of input)) as alias
    end tell
    return FilePath
end run

Add another “Set Value of Variable” action below this and call it “Path.” This will establish the absolute path to the containing folder of your target folder for use later in the script. If this is all getting too confusing, just hang in there. It will probably make more sense by the end.
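If the AppleScript feels opaque, note that the “containing folder” it resolves is exactly what the standard shell utility dirname returns. A quick sanity check, using a hypothetical path:

```shell
# dirname strips the last path component, yielding the containing folder,
# which is the same value the AppleScript step stores in the "Path" variable.
target="/Users/me/Research/Box 12"   # hypothetical folder selected in the Finder
dirname "$target"                    # -> /Users/me/Research
```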

Step 3: Add a “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” This part is crucial, as you are starting a new stage of the process.

Step 4: Add the “Run Shell Script” action. Set the shell to Bash and pass input “as arguments.” Then paste the following code:

echo "${1##*/}"

I admit that I am cheating a little bit here. This Bash command will retrieve the title of the target folder so that your output file is named properly. There is probably an easier way to do this using AppleScript, but to be honest I’m just not that well versed in AppleScript. Add another “Set Value of Variable” action below the shell script and call it “FolderName” or whatever else you want to call the variable – it really doesn’t matter.
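For the curious, `${1##*/}` is Bash parameter expansion: it deletes the longest prefix of the variable that matches the pattern `*/`, leaving only the final path component. A quick illustration with a made-up path:

```shell
# "##" removes the longest match of the pattern from the front of the value,
# so everything up to and including the last slash is stripped away.
folder="/Users/me/Research/Box 12"   # hypothetical target folder
echo "${folder##*/}"                 # prints: Box 12
```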

Step 5: Add another “Get Value of Variable” action and set it to “Input.” Click on “Options” on the bottom of the action and select “Ignore this action’s input.” Once again, this step is crucial, as you are starting a new stage of the process.

Step 6: Add the action to “Get Folder Contents,” followed by the action to “Sort Finder Items.” Set the latter to sort by name in ascending order. This will assure that the pages of your output PDF are in the correct order, the same order in which they appeared in the source folder.

Step 7: Add the “New PDF from Images” action. This is where the actual parsing of the JPEGs will take place. Save the output to the “Path” variable. If you don’t see this option on the list, go to the top menu and click on View –> Variables. You should now see a list of variables at the bottom of the screen. At this point, you can simply drag and drop the “Path” variable into the output box. Set the output file name to something arbitrary like “combined.” If you want to combine individual PDF files instead of images, skip this step and scroll down to the end of this list for alternative instructions.

Step 8: Add the “Rename Finder Items” action and select “Replace Text.” Set it to find “combined” in the basename and replace it with the “FolderName” variable. Once again, you can drag and drop the appropriate variable from the list at the bottom of the screen. Save the workflow as something obvious like “Combine Images into PDF,” and you’re all set. When you right-click on a folder of JPEGs (or other images) in the Finder, you should be able to select your service. Try it out on some test folders with a small number of images to make sure all is working properly. The workflow should deposit your properly named output PDF in the same directory as the source folder.
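Incidentally, the find-and-replace rename in Step 8 has a direct Bash equivalent in pattern substitution, sketched here with a hypothetical folder:

```shell
# ${var/pattern/replacement} swaps the first match of the pattern,
# mirroring the "Replace Text" rename in the Automator workflow.
folder="/Users/me/Research/Box 12"   # hypothetical source folder
out="combined.pdf"
echo "${out/combined/${folder##*/}}" # prints: Box 12.pdf
```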

To combine PDFs rather than image files, follow steps 1-6 above. After retrieving and sorting the folder contents, add the “Combine PDF Pages” action and set it to combine documents by appending pages. Next add an action to “Rename Finder Items” and select “Name Single Item” from the pull-down menu. Set it to name the “Basename only” and drag and drop the “FolderName” variable into the text box. Lastly, add the “Move Finder Items” action and set the location to the “Path” variable. Save the service with a name like “Combine PDFs” and you’re done.

This procedure can be modified relatively easily to parse individually selected files rather than entire folders. A folder action worked best for me, though, so that’s what I did. Needless to say, the containing folder has to be labeled appropriately for this to work. I find that I’m much better at properly naming my research folders than I am at naming all of the individual files within them. So, again, this process worked best for me. A lot can go wrong with this workflow. Automator can be fickle, and scripting protocols are always being updated and revised, so I disavow any liability for your personal filesystem. I also welcome any comments or suggestions to improve or modify this process.

Posted in Research and Teaching Tools, Study Methods

The $14 Million Question

Yesterday a copy of the Bay Psalm Book, the first book composed and printed in British North America, sold at auction for a record-breaking $14.16 million. Members of Boston’s Old South Church decided to sell one of their two copies to help fund their cash-strapped congregation, and while the amount fell short of the auction house estimate of $15-30 million, it is certainly enough to buy a whole lot of snazzy sermons, baptismal fonts, and really uncomfortable pews. A number of talented and distinguished historians, including Jill Lepore and David Spadafora, have weighed in on the broader context and significance of this standard devotional text, printed in the fledgling Massachusetts Bay Colony in 1640. Amid all of the excellent scholarly analysis and public humanities work, however, no one seems to be asking the big question: why is someone willing to pay millions of dollars for a book that anyone with an internet connection can get for free? In an age of increasingly universal digitization, when nearly every major print publication prior to 1923 is available online, why do some public domain printed books sell for princely sums?

In 1947, when the last Bay Psalm Book sold at auction for $151,000, a researcher needed to physically travel to a major library in order to view an original copy. In the Northeast, there were plenty of options: Yale, Harvard, Brown, the Boston Public Library, the New York Public Library, the American Antiquarian Society. South of New York City, there was nothing. West of the Appalachians, the only choice was the private Huntington Library in California – and their copy was missing seven pages, including the title page. The only copy available to researchers outside of the United States was at the Bodleian Library at the University of Oxford. Bibliophiles published facsimile editions as early as 1862, but their production and circulation were limited. Depending on how far one had to travel, and factoring in layover times, scheduling, family and work obligations, and local arrangements, the onetime cost of consulting this small piece of religious history could be enormous. Gripes about the digital divide notwithstanding, the analog divide was and is much worse.

In 2013, copies of the Bay Psalm Book are everywhere – the Library of Congress, the World Digital Library, even the Old South Church. In fact, almost every single book, pamphlet, and broadside published in colonial America is available for free online or at a participating library through Readex’s Early American Imprints series. Yale’s copy of the Bay Psalm Book, which, coincidentally, was the one purchased at the aforementioned auction in 1947, is available in full here. That book sold for the equivalent of about $1.5 million in present-day terms. No copies of this august tome have been discovered or destroyed since 1947. So why is the same book worth over $14 million today? What accounts for this tenfold increase in value?

I can think of several reasons why someone would pay so much for a book that is available to everyone for free. If there are significant deviations or marginalia between and among different copies or editions, each copy is more or less unique and thus uniquely valuable. Yet the differences among the various Bay Psalm Books are fairly well documented by this point and are not that extreme. Another reason might be personal profit or prestige. To his credit, David Rubenstein, the billionaire investor who purchased the book at yesterday’s auction, plans to loan it out to libraries around the country and to place it on deposit with a public institution. Although he may derive a good deal of personal satisfaction from this arrangement, I do not think that private gain is his primary goal. That leaves one more motive – the simple pleasure of the physical artifact.

The Early Dawn - rarer than the Bay Psalm Book and just as significant, but considerably less expensive. Courtesy of Special Collections, Yale Divinity School Library.


Perhaps one reason why the value of the Bay Psalm Book has increased ten times over the past 60 years is that paper, photographic, and digital reproductions have increased exponentially over the same period. In an era of digital alienation, there is greater romance in the physical object. To touch, to feel, to smell, even to be in the near presence of a famous text creates a kind of living connection with history. Such documents become, as Jill Lepore writes of the United States Constitution, “a talisman held up against the uncertainties and abstractions of a meaningless, changeable, paperless age.”

This is nothing new, of course. Since the days when early Christians passed around the head of Saint Paul or the foreskin of Jesus, and probably long before that, people have always been fascinated by sacred relics. Presumably, this is why so many tourists flock to see the original Declaration of Independence or the Wright Flyer in Washington D.C. One can read almost everything there is to know about the Declaration or the Wright brothers on an iPad while waiting in line at Stop & Shop, but there is something ineffably special about being in the presence of the real thing.

Even so, what justifies such an outrageous price tag? There are almost a dozen copies of the Bay Psalm Book, all available, to some extent, to the public. And there are plenty of rare and valuable historical documents that seldom see the light of day. A few years ago, I found an 1864 edition of the Early Dawn for sale online for less than $200. Published by American abolitionists at the Mendi Mission in West Africa starting in 1861, it is a periodical that ties together the struggles against slavery and racism across two continents. It is invaluable to our understanding of global politics, history, religion, and the state of our world today. In this sense, it is just as significant as the Bay Psalm Book. It is also extremely rare. As far as I know, there is only one other extant issue from the same time period. Fortunately, I was able to convince my colleagues at the Yale Divinity School to purchase and properly preserve this one-of-a-kind artifact so that it would be available for future researchers (click the image above for a full scan of the paper). I am sure that every historian who has worked on a major project has a story similar to this. If not an online purchase, then it is a special document found in an archive, or an especially moving oral history.

There are countless unique and historically significant documents and manuscripts moldering in libraries and repositories around the world. Some of them are true gems, just waiting to be discovered. Most of them remain unavailable and unknown. And yet our society sees nothing wrong with a private citizen spending a small fortune to acquire a copy of the Bay Psalm Book. There is no question that the venerable Old South Church deserves our support, and I have no doubt that its congregants do important work in their community and abroad. But how many lost treasures could have been brought to the world for the first time for the cost of this single public domain text? How much digitization, transcription, or innovation could $14.16 million buy?

Cross-posted at HASTAC

Posted in Archives, Random Thoughts

The Assassination of Zachary Taylor

Today marks the fiftieth anniversary of the assassination of President John F. Kennedy, and the internet and airwaves are awash in an orgy of commentaries and memorials. What can a digital humanist add to this conversation? Well, for starters, one could ask what the assassination of President Kennedy would look like in the age of social networks, smart phones, and instantaneous communication (bigbopper69: JFK shot in dallas OMG!!! 2 soon 2 no who #grassyknoll). NPR’s Today in 1963 project, which is tweeting out the events of the assassination as they occurred, day-by-day, hour-by-hour, may actually provide a good sense of what it was like to be there in real time. For those of us born decades after the fact, the deluge of digitized photos, videos, documents, and other artifacts enables a kind of full historical immersion that is not quite the same as time travel but close enough to be educationally useful.

One of the more interesting statistics to come out of this year’s commemoration is that “a clear majority of Americans (61%) still believe others besides Lee Harvey Oswald were involved” in a conspiracy to kill President Kennedy. Indeed, historical data show that a majority of Americans have suspected a conspiracy since 1963, at times reaching as high as 81 percent of respondents. This raises all sorts of interesting questions for our current moment, when rumor and misinformation spread as easily as the truth and technophiles celebrate the wisdom of the crowd while solemnly proclaiming the death of the expert. Especially after the recent revelations of unprecedented government spying, including secret courts and secret backdoors built into consumer software, Americans seem to have little reason to trust authority. So what is the role of popular knowledge in the age of digital history?

It would be easy to dismiss the various JFK assassination theories as just another example of what Richard Hofstadter called “The Paranoid Style in American Politics.” Yet to do so would ignore the important function of rumor, gossip, conspiracy theories, and other forms of popular wisdom as material forces in the shaping of our world. 1 Getting at the truth behind major events is, of course, the prime directive of all good history, digital or otherwise. A certain degree of analytical distance, strict rules of evidence, and overt argumentation are what separate professional historiography from simple nostalgia. But what counts as truth can sometimes be just as revealing as the truth itself. The alleged assassination of President Zachary Taylor is a case in point.

When Taylor, the twelfth president, died suddenly of an unidentified gastrointestinal illness just sixteen months into his first term in office, rumors spread that he had been eliminated by political rivals. Taylor’s death, in July 1850, came at a time of heightened tension between supporters and opponents of slavery. Although a slaveholder himself and the hero of an expansionist war against Mexico, Taylor took a moderate position on the slavery question and appeared to oppose its extension into the western territories. His actions may have troubled some of the more ardent southern politicians, including Senator – and future Confederate President – Jefferson Davis. Not long after his predecessor’s tragic demise, newly-minted President Millard Fillmore signed the Compromise of 1850, which had stalled under Taylor’s administration. The legislation included territorial divisions and an aggressive fugitive slave law that helped to set the stage for the looming Civil War.

I will not rehash the specific circumstances of Taylor’s illness, which is conventionally ascribed to a tainted batch of cherries and milk. Suffice it to say that the rapid and inexplicable nature of his death, which fit the profile for acute arsenic poisoning, coupled with the laughably inept state of professional medicine, left plenty of room for speculation. 2 Members of the rising antislavery coalition, soon to be called the Republican Party, were suspicious that the President had met with foul play. Nor were their suspicions limited to Taylor. Over time, the list of alleged assassination victims grew to include Andrew Jackson, William Henry Harrison, and James Buchanan, among others.

Republicans worried that Abraham Lincoln would meet a similar fate after the contentious presidential election of 1860. Even before the election, letters poured in warning the candidate about attempts to poison his food and begging him to keep a close eye on his personal staff. I counted at least fourteen warning notes in a very cursory search of the Lincoln Papers at the Library of Congress. Many of them mention President Taylor by name. “Taylor was a vigorous man, of good habits and accustomed to active life and trying duties,” wrote a supporter from Ohio, “and that he should fall a solitary victim to cholera, in a time of health, after eating a little ice cream is quite unsatisfactory.” After carefully studying the circumstances of Taylor’s death, another concluded that “the Borgias were about.” Yet another consulted a clairvoyant who warned of an active conspiracy to poison the President. In a speech responding to Lincoln’s assassination five years later, railroad magnate and women’s rights advocate George Francis Train mentioned in passing that slaveholders had “poisoned Zachary Taylor,” as if it were a matter of fact. 3

John Armor Bingham, one of the three lawyers tasked with prosecuting the Lincoln assassination conspiracy and the primary author of the fourteenth amendment to the Constitution, reportedly spent some time investigating Taylor’s death. His research, presumably conducted during or shortly after the Lincoln trial in 1865, led him to believe that Taylor had been poisoned and that Jefferson Davis had helped to precipitate the plot. 4 It is a striking claim, if true. Davis was Taylor’s son-in-law by an earlier marriage, and the two were known to be friends. Indeed Taylor uttered his final words to Davis, who stood vigil at his deathbed. Bingham also suspected that Davis was involved in Lincoln’s death, which is unlikely, though not impossible, since there is evidence to suggest that Lincoln’s assassin had contact with Confederate spies in the period leading up to the attack. 5 Whatever the case, Davis was decidedly ambivalent about the effect of the President’s removal on the flagging war effort in the South.

Although historians have shown sporadic interest in Bingham – he was an early antislavery politician and U.S. Ambassador to Japan in addition to his important legal and constitutional roles – I could find no substantial information about his investigation into a conspiracy to murder Zachary Taylor. 6 The finding aids for Bingham’s manuscripts at the Ohio Historical Society and the Library of Congress did not reveal anything related to Taylor. A superficial perusal of similar material at the Pierpont Morgan Library in New York, which holds some of Bingham’s records pertaining to the Lincoln Assassination, also failed to turn up anything significant. Still, my search was limited to document titles and finding aids and did not dig very deep into the actual content of his papers. Perhaps some enterprising digital historian could investigate further?

Uncertainty about Taylor’s death continued to smolder until the early 1990s, when an assiduous biographer managed to secure permission to exhume his body and run scientific tests on the remains. Early results showed no evidence of arsenic poisoning, though later research concluded that those results were unreliable. According to presidential assassination experts Nancy Marion and Willard Oliver, there is no definitive proof either way, and thus the ultimate cause of Taylor’s death remains a mystery. 7 While I think the evidence for natural causes is persuasive, the assorted circumstantial and physical evidence for poisoning is certainly intriguing. More intriguing still is the fact that so many contemporaries, including major political figures, were convinced that Taylor had been intentionally targeted.

The confusion surrounding Taylor’s death speaks to the awesome influence of the “Slave Power Conspiracy” that gripped the nation for much of the nineteenth century. Aspects of this conspiracy theory could be extreme, but as the historian Leonard Richards has shown in great detail, the Slave Power was a quantitative reality that could be measured in votes, laws, institutions, and individuals. 8 Although historians can debate the extent to which it was a self-conscious or internally unified collusion, thanks to the three-fifths clause, the spread of the cotton gin, and other peculiarities of antebellum development, there really was a Slave Power in early American politics. Bingham may have been overzealous when it came to the sinister machinations of Jefferson Davis, but there is no question that Davis and his ilk shared a broadly similar agenda. Popular knowledge about the death of Zachary Taylor, whatever its veracity, reflected a real concern about the grip of a small group of wealthy aristocrats over the social, economic, and political life of the country, just as theories about the death of JFK reflect a real concern about the exponential growth of the U.S. national security state.

A few days ago, Americans celebrated the 150th anniversary of the Gettysburg Address, another epochal moment in their national history. Unlike the sadness and uncertainty surrounding the JFK assassination, this was a moment of optimism and unity, typified by the filmmaker Ken Burns, who solicited readings of the Address from everyone from Uma Thurman to Bill O’Reilly, including all five living U.S. Presidents. Lost in patriotic reverie, it is easy to lose sight of the bitter, divisive, and bloody conflict that formed the broader context for that document. It is no accident, perhaps, that the recently unmasked espionage programs developed by the United States and Great Britain were named after civil war battles – Manassas and Bullrun for the NSA, Edgehill for the GCHQ. The choice of names appears to be intentional. Both battles were pivotal moments, the first major engagements in a long and destructive war that would result in the birth of a modern nation. Likewise, these surveillance systems appear to be the first step in a prolonged global war for digital intelligence. Is this evidence of a conspiracy? Or is it yet more evidence of the extent to which conspiratorial thinking has infiltrated modern political culture – just another example of the new paranoid style?

Notes:

  1. Clare Birchall, Knowledge Goes Pop: From Conspiracy Theory to Gossip (New York: Berg, 2006); Jesse Walker, The United States of Paranoia: A Conspiracy Theory (New York: HarperCollins, 2013).
  2. K. Jack Bauer, Zachary Taylor: Soldier, Planter, Statesman of the Old Southwest (Baton Rouge: Louisiana State University Press, 1985), 314-328; Michael Parenti, History as Mystery (San Francisco: City Lights, 1999), 209-239; Willard Oliver and Nancy Marion, Killing the President: Assassinations, Attempts, and Rumored Attempts on U.S. Commanders-in-Chief (Santa Barbara: Praeger, 2010), 181-189.
  3. “Geo. Francis Train,” Philadelphia Inquirer, May 13, 1865.
  4. “Assassination of Presidents,” New York Times, Aug. 29, 1881.
  5. William A. Tidwell, April ’65: Confederate Covert Action in the American Civil War (Kent, OH: Kent State University Press, 1995).
  6. C. Russell Riggs, “The Ante-Bellum Career of John A. Bingham: A Case Study in the Coming of the Civil War” (PhD Thesis, New York University, 1958); Erving E. Beauregard, Bingham of the Hills: Politician and Diplomat Extraordinary (New York: P. Lang, 1989); Gerard N. Magliocca, American Founding Son: John Bingham and the Invention of the Fourteenth Amendment (New York: New York University Press, 2013).
  7. Oliver and Marion, Killing the President, 181-189.
  8. David Brion Davis, The Slave Power Conspiracy and the Paranoid Style (Baton Rouge: Louisiana State University Press, 1969); Leonard L. Richards, The Slave Power: The Free North and Southern Domination, 1780-1860 (Baton Rouge: Louisiana State University Press, 2000).
Posted in Digital Scholarship, Random Thoughts

WordPress as a Course Management System

I am a big fan of the WordPress publishing platform. It’s robust and intuitive with an elegant user interface, and best of all, it’s completely open source. Content management heavyweights such as Drupal or MediaWiki may be better equipped when it comes to highly complex, multimodal databases or custom scripting, but for small-scale, quick and dirty web publishing, I can think of few rivals to the WordPress dynasty. About 20% of all websites currently run on some form of WordPress. Considering that Google’s popular Blogger platform accounts for a measly 1.2% of the total, this is a staggering statistic. Like many digital humanists, I use WordPress for my personal blogging as well as for the courses that I teach. Yet I often wonder if I am using this wonderfully diverse free software to its full potential. Instead of an experimental sideshow or an incidental component of a larger course, what if I made digital publishing the core element, the central component of my research and teaching?

Jack Black as a course management system.

What follows are my suggestions for using a WordPress blog as a full-fledged course management system for a small discussion seminar. These days almost all colleges and universities have a centralized course management system of some sort. In the dark ages of IT, a proprietary and much-derided software package called Blackboard dominated the landscape. More recently, there is the free and open source Moodle, the Sakai Project, and many others (Yale uses a custom rendition of Sakai called Classes*v2). These platforms, sometimes called learning management systems, collaboration and learning environments, or virtual learning environments, are typically quite powerful. Historically, they have played an important role in bridging analog and digital pedagogy. Compared to WordPress, however, they can seem arcane and downright unfriendly. Although studies of course management systems are sporadic and anecdotal, one of the most common complaints is “the need for a better user interface.” Instead of working to improve these old methods, perhaps it is time to embrace a new paradigm. Why waste time training students and teachers on idiosyncratic in-house systems, when you can give them more valuable experience on a major web publishing platform? Why let technology determine the limits of our scholarship, when we can use our scholarship to push the boundaries of emerging technologies?

Before getting started, I should point out that there are already a wide variety of plugins that aim to transform WordPress into a more robust collaborative learning tool. Sensei and BuddyPress Courseware are good examples. The ScholarPress project was an early innovator and still shows great promise, but it has not been updated in several years and no longer works with the latest versions of WordPress. The majority of these systems are more appropriate for large lectures, distance learning, or MOOCs (massive open online courses). There is no one-size-fits-all approach. For smaller seminars and discussion sections, however, a custom assortment of plugins and settings is usually all that is required. I have benefited from previous conversations about this topic. I also collaborate closely with my colleagues at Yale’s Instructional Technology Group when designing a new course. It is worth repeating that the digital humanities are, at their heart, a community enterprise.

Step 1: Install WordPress. An increasing number of colleges and universities offer custom course blogs along with different levels of IT support. For faculty and students here, Yale Academic Commons serves as a one-stop shop for scholarly web publishing. Other options include building your own WordPress site or signing up for free hosting.
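If you go the self-hosted route, the whole installation can be scripted from the command line. A minimal sketch using the WP-CLI tool, assuming WP-CLI is already installed on the server and treating the database credentials, URL, and admin account below as placeholders:

```shell
# Download the WordPress core files into the current directory
wp core download

# Generate wp-config.php (placeholder database credentials)
wp config create --dbname=courseblog --dbuser=wpuser --dbpass=changeme

# Run the installer (placeholder URL, title, and admin account)
wp core install --url=https://example.edu/hist101 --title="HIST 101" \
  --admin_user=instructor --admin_password=changeme \
  --admin_email=instructor@example.edu
```

Hosted services like Yale Academic Commons handle all of this for you, so this sketch is only relevant if you manage your own server.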

Step 2: Find a good theme. There is an endless sea of WordPress themes out there, many of them free. For my course blogs, I prefer something that is both minimalist and intuitive, like the best academic blogs. The simpler the better. I also spend a lot of time choosing and editing an appropriate and provocative banner image. This will be the first thing that your students see every time they log in to the site, and it should reflect some of the central themes or problems of your course. It should be something worth pondering. Write a bit about the significance of the banner on the “About” page or as a separate blog post, but do not clutter your site with media. As Dan Cohen pointed out last year, effective design is all about foregrounding the content.

Step 3: Load up on plugins. Andrew Cullison provides a good list of course management plugins for WordPress. Although almost all of them are out of date now, many have newer counterparts that are easily discoverable in the official WordPress plugin directory. Among the more useful plugins are those that allow you to embed interactive polls, create tag clouds, sync calendars, and selectively hide sensitive content. ShareThis offers decent social media integration. WPtouch is a great way to streamline your site for mobile devices. Footnote and annotation plugins are helpful for posting and workshopping assignments. I also recommend typography plugins to do fancy things like pull quotes and drop caps. A well-configured WYSIWYG editor, such as TinyMCE, is essential.
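On a self-hosted site, plugins from the official directory can be searched, installed, and activated in one step with WP-CLI. A sketch, where the slugs below (WPtouch and the TinyMCE Advanced editor) are just examples of the kinds of plugins mentioned above:

```shell
# Browse the official directory for candidates
wp plugin search typography

# Install and activate plugins in a single step
wp plugin install wptouch --activate
wp plugin install tinymce-advanced --activate
```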

Step 4: Upload content. Post an interactive version of the syllabus, links to the course readings, films, image galleries, and any other pertinent data. Although your institution probably has a centralized reserves system, it is generally permissible under fair use to post short reading assignments directly to your course site, as long as they are available only to registered students. In some cases, this might actually be preferable to library reserves that jumble all of your documents together with missing endnotes and abstruse titles. Most WordPress installs do not have massive amounts of media storage space, but there is usually enough for a modest amount of data. If you need more room, use Google Drive or a similar cloud storage service.
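Bulk uploads can also be scripted. A hedged sketch with WP-CLI, assuming a local folder of scanned readings and that the resulting post should be visible only to logged-in users:

```shell
# Import a folder of scans into the media library
wp media import readings/*.pdf

# Create a reading-assignment post that is hidden from the public
wp post create --post_title="Week 3: Primary Documents" --post_status=private
```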

Step 5: Configure settings and metadata. Make sure your students are assigned the proper user roles when they are added to the blog. Also be sure to establish a semantic infrastructure, with content categories for announcements, news, reading responses, primary documents, project prospectuses, etc. Your WYSIWYG editor should be configured so that both you and your students can easily embed YouTube videos, cite sources, and create tables. Depending on the level of interaction you would like to encourage on your site, the discussion settings are worth going over carefully.
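Much of this semantic infrastructure can be set up in advance from the command line as well. A sketch, where the category names and student account are illustrative:

```shell
# Create the content categories for the semester
wp term create category "Announcements"
wp term create category "Reading Responses"
wp term create category "Primary Documents"

# Add a student with a role that allows posting but not site administration
wp user create student1 student1@example.edu --role=author
```

The author role lets students publish and manage their own posts; the more restrictive contributor role requires instructor approval before anything goes live.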

Step 6: Figure out how you’re going to grade. After a good deal of experimentation, I settled on a plugin called Grader. It allows instructors to post comments that are viewable only to them and the student. Check out Mark Sample’s rubric for evaluating student blogs. Rather than grade each individual post, I prefer to evaluate work in aggregate at certain points during the semester. I also tend to prefer the 0-100 or A-F scale to the alternatives. Providing substantial feedback on blog posts is probably better than the classic √ or √+. You should treat each post as a miniature essay and award extra points for creativity, interactivity, and careful deliberation. If you are serious about digital publishing, it should account for a meaningful share of the final grade, somewhere in the range of 30-50%. Although I have not experimented with them yet, there are gradebook plugins that purport to allow students to track their progress throughout the semester.
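Weighting components in aggregate is simple arithmetic. A minimal sketch, assuming illustrative scores and a hypothetical 40/35/25 split with the blog counting for 40%:

```shell
# Illustrative component scores on a 0-100 scale
blog=88           # aggregate blog grade (40% of final)
essay=92          # final essay (35%)
participation=85  # seminar participation (25%)

# Integer arithmetic on whole-percent weights avoids floating-point
# surprises in shell; total is the final grade scaled by 100
total=$(( 40*blog + 35*essay + 25*participation ))
final=$(printf "%d.%02d" $(( total / 100 )) $(( total % 100 )))
echo "$final"
```

Gradebook plugins do the equivalent bookkeeping for you, but it is worth being explicit about the weights in the syllabus.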

Step 7: Be clear about your expectations. It can be difficult to strike the correct balance between transparency and simplicity, but I usually prefer to spell out exactly what I want from my students. For a course blog, that probably means posting regular reading responses and commentaries. In addition to response papers, primary documents, and bibliographies, I ask students to post recent news items and events pertaining to the central themes of the course. I encourage them to embed relevant images, films, and documents and to link to both internal and external material. I also require students to properly title, categorize, and tag their posts. After all, what good is a blog if you are not making full use of the medium?

Step 8: Publish. Although there are good reasons for keeping course blogs behind an institutional firewall, there are equally good reasons for publishing them to the world. An open blog encourages students to put their best foot forward, teaches them to speak to a broader audience, and leaves a lasting record of their collective efforts. If you make your blog publicly accessible, have your students post under just their first names or a pseudonym. This will allow them to remain recognizable to class members but relatively anonymous to the rest of the world. It is also a good idea to restrict access to certain pages and posts, such as the course readings and gradebook, to comply with FERPA and fair use guidelines.
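Restricting individual posts on an otherwise public site can be done per post. A sketch with WP-CLI, where post ID 42 is hypothetical and stands in for the course readings page:

```shell
# Keep the readings post out of public view entirely (ID 42 is hypothetical)
wp post update 42 --post_status=private

# Or password-protect it instead, sharing the password only in class
wp post update 42 --post_password=seminar
```

The same visibility options are available from the post editor's Publish panel if you prefer the dashboard to the command line.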

I always review my course blogs on the first day of class, and I spend a fair amount of time explaining how to navigate the backend and post content. I also find it useful to reinforce these lessons periodically during the semester. It only takes a few minutes to review proper blogging protocol: how to embed images and videos, how to annotate documents, and so on. If possible, project the course site in the background during class discussions and refer back to it frequently. Make it a constant and normal presence. Depending on the class, discussing more advanced digital publishing techniques, such as SEO, CSS, and wikis, can be both challenging and exciting. It is also important to remember that course management systems, like all emerging technologies, are embedded in larger social structures, with all of their attendant histories, politics, and inequalities. So it is worth researching and supporting initiatives, such as Girl Develop It or the Center for Digital Inclusion, that seek to confront and redress these issues.

Please feel free to chime in if you’ve tried something similar with your courses, or if you have any questions, suggestions, or comments about my process.

Posted in Research and Teaching Tools, Yale Projects | 1 Comment