From: Humanist Discussion Group <willard.mccarty-AT-mccarty.org.uk>
To: humanist-AT-lists.digitalhumanities.org
Date: Sun, 16 Nov 2008 11:21:08 +0000 (GMT)
Subject: [Humanist]  22.332 hardware and interpretation


                 Humanist Discussion Group, Vol. 22, No. 332.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist-AT-lists.digitalhumanities.org



        Date: Sun, 16 Nov 2008 11:06:18 +0000
        From: Willard McCarty <willard.mccarty-AT-mccarty.org.uk>
        Subject: Re: [Humanist]  22.324 hardware and interpretation
        In-Reply-To: <20081114100622.D62922633E-AT-woodward.joyent.us>

The discussion gets better. As James Cummings points out, in the early
days of my experience with computing,

> ... you were not a 'more or less ordinary person'
> at that point.  The ordinary people weren't using computers in the
> mid-60s, so if you were, then you were in the same category as those
> who are now pushing the boundaries of computing.  In those
> boundary-stretching types of research, I think the computing model is
> still client/server, but just in an increasingly distributed way.  It
> isn't that I imagine one computer out there processing my request, but
> an amorphous cloud of them.

My (refined) point is that what computing meant culturally then was
different from what it is now. True, ordinary people did not encounter
the machines themselves, but they encountered stories and pictures about
them, showing the big machine-rooms, white-coated assistants (of whom I
was one) and so forth. Now "computer" means something quite different.

I should say that I am asking my questions in order better to understand
how scholars in those early days encountered computing, what they made
of it, and how they conceptualized the machine, its purposes and
implications. Their conceptualizations went straight into the
professional and para-professional literature. Furthermore, as I have
just been reminded, these ideas of computing are still quite deeply
rooted in the academy, in the sort of scholars who are the most
influential shapers of opinion. Those opposed to computing are not so
important any more; they're out of circulation or simply keeping their
heads down. Those who concern me much more are the ones who like the
stuff but say silly things which are then taken as gospel, in either a
positive or negative sense.

James goes on,

> At a very basic level, ordinary people of
> the 60s had the same experience as I have in front of my desktop
> computer today; like me they pressed a key and saw a letter appear
> before them. The difference was that they were using a typewriter.

This is why the typewriter analogy was so common then, and why people
like me were hopping up and down insisting that the computer was NOT a
typewriter. In talking to colleagues then, one of the prevalent
confusions I witnessed centred on the fact that the connection between
a keystroke and the resultant action was neither simple nor even
determinate (if, say, the program crashed). That wedge of indeterminacy
or complexity, that bit of artificial intelligence, clearly puzzled if
not upset people then. The wonkiness that I wrote about (when, now,
something happens to my machine) is related, I'd think, but it is also
an important clue to what has changed.

James writes that,

> For a modern Turing test,
> we might imagine whether not only a single individual could be
> mimicked, but an entire online community of individuals with varying
> levels and methods of interaction. But I can't think of how you would
> set up a modern Turing test without the perception that the other
> 'person' or persons are remote in some way; otherwise how does the
> test work?  That's my creative limitation, I'm sure.  I don't
> disagree that our interaction with computers has fundamentally
> changed; I just don't think it has changed to another single form of
> perceiving it, but to multiple, complex and shifting forms.

This is an important point -- that we are now dealing with "multiple,
complex and shifting forms" of computing in actuality. (They were always
so in potentia, thanks to Turing's design.) But my point, and implicit
question, here is at a greater level of abstraction. Would a modern
Turing, observing the scene without attachment to what the old Turing
proposed, even think of such a test? Such a test presupposes an idea of
difference between us and computing, and that difference is rapidly
vanishing.

To get back to my earlier, political argument, I think it's crucial to
get such questions straight among ourselves, because the case still has
to be made to our extra-Humanist colleagues. What many of them are
hearing and, I infer, thinking can at times be so wrong as to lead a
thoughtful person with critical awareness of computing into a deep state
of melancholia. A number of very prominent individuals (i.e., those in
charge of jobs and funding) still think that the digital humanities,
humanities computing, or whatever we call our practice, should be folded
into the traditional disciplines and so become a set of shrink-wrapped
ideas which hordes of busy academics have no time to examine, criticise
and develop.

We talk of interdisciplinarity yet seem to do everything to guarantee 
that it will continue to be impossible in any sense worth thinking about.

Comments?

Yours,
WM



_______________________________________________
List posts to: humanist-AT-lists.digitalhumanities.org
List info and archives at: http://digitalhumanities.org/humanist
Listmember interface at: http://digitalhumanities.org/humanist/Restricted/listmember_interface.php
Subscribe at: http://www.digitalhumanities.org/humanist/membership_form.php




   
