Next week I return to Texas A&M University, where I started my academic career and spent twenty indifferent years of it, to deliver a lecture on the digital humanities. The subject is an appropriate one for me, I guess, since I was a pioneer of the digital humanities a good decade before they were even called that. Along with the late Denis Dutton, I founded the listserv discussion group PHIL-LIT in the summer of 1994, just a few weeks after L-Soft launched its first version of LISTSERV software. I moderated PHIL-LIT for nine years until, sick unto death of the partisan politics that had crowded out any discussion of philosophy and literature, I pulled the plug on it.
All talk about the digital humanities is pretty evenly divided between those who are skeptical that computers will ever do anything more than lighten the drudgery of humanistic scholarship by speeding up its more mechanical tasks and those, like Alan Liu of the University of California at Santa Barbara, who are excited by the prospect of a “uniquely contemporary kind of discourse.”
Number me among the skeptics. My suspicion is that what Liu calls the “structured encoding of knowledge” is really only another way—a newer way, I grant you, and for now a stranger way—of preparing copy for the printer. The printer has been replaced by a machine; our copy must now be machine readable. But the copy itself remains fundamentally unchanged (I apologize for the swear word). The encoding is a superaddition to it.
One reason for my skepticism is that the digital humanities have been around for nearly half a century now, and the hoped-for breakthrough has yet to occur. Jerome McGann, a well-known scholar of romanticism, expresses the hope succinctly when he predicts that computers will be able to “expose textual features that lie outside the usual purview of human readers.” But even the most successful work in the digital humanities (like the four-author paper “The Expression of Emotion in 20th Century Books,” with its impressive equations and graphs) has produced what scientists call results of low statistical power: small sample sizes and small effects.
In 1965, IBM awarded a grant to Yale University to investigate the promise of computers in humanistic research. At the inevitable conference that ensued, the late Jacques Barzun was optimistic about the promise of computers for indexing, collating, verifying, drawing up concordances, and similar attention-to-detail work, but he warned that humanists who hope to rely upon the computer for more far-reaching results will only “reduce wholes to discrete parts that are disconnected from the value or nature of the whole.”
Barzun’s warning is even more timely now that digitization has opened up archives and library collections that were once closed to everyone outside a small elite. By means of topic modeling, a humanistic scholar can now search more text in an afternoon than he previously could in a lifetime. But the problem—the problem as defined by Barzun—remains. The excited advocates of the digital humanities, which they familiarly call DH (they don’t mean Lawrence), are worried about a different problem altogether, which they are confident the new computer-backed methods and conventions will solve.
Barzun’s warning is a reminder that mind, the moisture in the robot, is forever indispensable to human knowledge, including the humanities. The connection of discrete parts to the value and nature of the whole is an operation that can be performed only by a human being capable of judgment, not merely of designing search protocols.
Let me brag for a moment. Perhaps my only substantive contribution to humanistic learning is the discovery that Ralph Waldo Emerson coined the term creative writing, which he first used in “The American Scholar” (a discovery that has been incorporated, without attribution, into the third edition of the OED, by the way, thus giving the lie to Cassio’s claim that his reputation is a man’s immortal part). Without question, the selection of archival materials that I plowed through to study the history of the idea of creative writing was a product of my interpretive bias. But it would be a mistake to assume that my bias somehow skewed the search results illegitimately. You are not permitted to ignore the fact that I was right about the origin of the term. My bias (namely, that creative writing reeks of American romanticism) led me to the right materials.
The confidence that the new methods “will enable us to move beyond the traditional methodologies” might be called the Great White Hope of the digital humanities. It is overweight, overhyped, an expression of superstition and prejudice.
The real promise of the digital humanities is at once less exciting and more liberating. What the digital humanities promise is the death of the credential. Anyone at all can now undertake an inquiry into the human heritage, and anyone at all can now publish her findings. No one need any longer submit her research for prior approval to a figure in a position of institutional power. She is free to follow her inclinations and talents—free to follow them as far as they will carry her. This is what political conservatives, who complain incessantly about the “liberal bias” in academe, fail to understand. No one is in control of humanistic scholarship any longer, no party, no league of prestigious institutions, no system of acceptance and rejection. When credentials have lost their cultural influence, the only influence in the humanities will be the influence of brilliant undeterred minds. And that is the final hope of the digital humanities.
 Alan Liu, “Transcendental Data: Toward a Cultural History and Aesthetics of the New Encoded Discourse,” Critical Inquiry 31 (Autumn 2004).
 Jerome McGann, Radiant Textuality: Literature after the World Wide Web (New York: Palgrave, 2001), p. 190.
 Jacob Leed, Review of Computers for the Humanities? A Record of the Conference Sponsored by Yale University on a Grant from IBM, January 22–23, 1965, Computers and the Humanities 1 (September 1966): 13.