

Date: Sat, 24 May 2008 08:20:10 +0100
From: "Humanist Discussion Group \(by way of Willard McCarty              <willard.mccarty-AT-kcl.ac.uk>\)" <willard-AT-LISTS.VILLAGE.VIRGINIA.EDU>
Subject: 22.037 testing time
To: <humanist-AT-Princeton.EDU>


                Humanist Discussion Group, Vol. 22, No. 37.
       Centre for Computing in the Humanities, King's College London
  www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
                        www.princeton.edu/humanist/
                     Submit to: humanist-AT-princeton.edu



         Date: Fri, 23 May 2008 18:29:30 +0100
         From: Willard McCarty <willard.mccarty-AT-kcl.ac.uk>
         Subject: testing time

Sixteen years ago, in the Ninth British Library Research Lecture,
"Computers and the humanities" (British Library, 1992), Sir Anthony
Kenny surveyed "the use of computers in actual research" since the
late 1940s. He sketched work on three levels across the decades he
was then able to survey. At the lowest, most general, and most
unambiguously useful level, he noted the employment of computers to
perform humdrum tasks in less time than an unaided human would
need -- the kinds of things everyone now does without much if
any thought and without help from experts. This sort, he pointed out,
leaves little trace in the published work of researchers (though
its mostly unstudied effects on scholarship are undoubtedly great). At
the opposite end he put the showpiece explorations of computational
methods by the most ambitious projects, which seek results "not so
much to enrich the domain of research with fundamentally new findings
as to demonstrate the validity of some new form of automatic
processing". These "win acclaim in the literature of [computer
science] but pass almost without remark in the parent humanities
disciplines" -- and often never get beyond laboratory prototypes. His
interest was, as ours is, in the middle ground between these two. And
there he found a great absence:

 >in spite of the multiplication of new basic research tools in the
 >humanities, it is surprisingly difficult to point, in specific
 >areas, to solid, uncontroverted gains to scholarship which could not
 >have been achieved without the new technology. The high hopes which
 >some computer enthusiasts held out that the computer would
 >revolutionize humanistic study have been proved, over and over
 >again, to be unrealistic. Sometimes the initial claims made were
 >much exaggerated...  But even in areas where there was no hubris in
 >the initial claims, the results delivered have often been
 >disappointing. Between humdrum research and showpiece research,
 >what the humanities scholarly community is really anxious to see is
 >work which is both (a) respected as an original scholarly
 >contribution within its own discipline and (b) could clearly not
 >have been done without a computer....
 >Indeed throughout humanities disciplines, after thirty-odd years of
 >this kind of research, there are embarrassingly few books and
 >articles which can be confidently pointed out as passing both tests.
 >This has meant that many enthusiasts for computing in the humanities
 >have an uncomfortable sense of crisis, a feeling of promise
 >unfulfilled. Gone is the glad confident morning in which Ladurie
 >could say, "L'historien de demain sera programmeur ou il ne sera
 >plus" [the historian of tomorrow will be a programmer or he will be
 >no more]. The feeling of disillusion is indeed partly the result of the
 >misplaced optimism and exaggerated claims of some of the pioneers:
 >the belief was sometimes encouraged in the past that feeding data
 >into a computer would automatically solve a scholar's problems. Rare
 >has been the computer project which did not, in the course of
 >execution, bring to light an initial overestimation of the technical
 >possibilities, and an underestimation of the problems of data
 >preparation. The proliferation of personal computers in the last
 >decade has often, embarrassingly, gone with an actual diminution in
 >methodological sophistication.

This is a deeply familiar observation and, especially among the
text-analysis crowd, a frequent lament that continues to attract a
number of diagnoses, the most recent I know of being in the first
number of Literary and Linguistic Computing for 2008, by Patrick
Juola. The immediately previous one, by David Hoover, appeared
in Digital Humanities Quarterly 1.2 (Summer 2007). The list is a long
one, going back at least to the late 1970s. Rosanne Potter's retrospective
in CHum 25 (1991): 401-29 fingers the most significant ones up to that
date.

I wonder, however, if the problem is as much in the question being
asked as in the answer not forthcoming. For one thing, writers tend
to assume a single answer, or a single failure to answer, across all
disciplines, irrespective of their materials, styles of reasoning,
and goals. Clearly that cannot be right. When your goal, as in
epigraphy, is principally to report factually rich details of what
you have seen, details that may not be there the next time someone
wants to take a look, you may well regard digital imaging, markup,
relational database, and online publication tools as having made an
enormous,
discipline-changing difference. You may well be tempted to point out
that now those stuck-up folks in literary studies have to revise
their ideas of what, exactly, belongs in the corpus of literature.
You may well have the sense of a new renaissance of discoveries. If
your goal is to interpret the literature, you may well be glad for
the additional text but then hasten to point out that other than
delivering text obediently and allowing you to do all those humdrum
things faster, etc., computing has not really made much of a difference
to what centrally counts: the interpretative operations of criticism.
The best statement from this perspective to date is Jerome McGann's.

Wouldn't it be better to ask what sort of differences are making
real differences in which disciplines? Wouldn't it be better to take
account of what practitioners in each specialism are actually
trying to do?

Kenny concluded, in 1992, by saying, "the testing time has now
arrived". Indeed -- and something else folks have been saying again
and again for a long time. But if, as I think to be the case,
humanities computing is fundamentally an experimental practice, then
wasn't Kenny noticing a perpetual dawn rather than a final sunset?
And where does all this anxiety about whether we are being seen to be
useful come from? Are we quaking at the wagging finger of the ghost
of Imre Lakatos ("Beware degenerate research programmes!"), or are
we suffering from the general lack of self-respect afflicting academics
these days?

Comments?

Yours,
WM




Willard McCarty | Professor of Humanities Computing | Centre for
Computing in the Humanities | King's College London |
http://staff.cch.kcl.ac.uk/~wmccarty/. Et sic in infinitum [And so on to infinity] (Fludd 1617, p. 26).


   
