

From: Humanist Discussion Group <willard.mccarty-AT-mccarty.org.uk>
To: humanist-AT-lists.digitalhumanities.org
Date: Thu,  5 Mar 2009 06:07:05 +0000 (GMT)
Subject: [Humanist] 22.594 two questions from the dustbin


                 Humanist Discussion Group, Vol. 22, No. 594.
         Centre for Computing in the Humanities, King's College London
                       www.digitalhumanities.org/humanist
                Submit to: humanist-AT-lists.digitalhumanities.org

  [1]   From:    Willard McCarty <willard.mccarty-AT-mccarty.org.uk>          (46)
        Subject: KWIC before Luhn

  [2]   From:    Willard McCarty <willard.mccarty-AT-mccarty.org.uk>          (58)
        Subject: psychotic computers?


--[1]------------------------------------------------------------------------
        Date: Wed, 04 Mar 2009 12:47:56 +0000
        From: Willard McCarty <willard.mccarty-AT-mccarty.org.uk>
        Subject: KWIC before Luhn

At the risk of committing the same historiographical sin I identified a 
couple of days ago, namely walking into the who-got-there-first quagmire 
as if it were firm ground for history, allow me to ask about an instance 
in the pre-history of KWIC (keyword-in-context) concording.

Many here will know that KWIC was invented by H. P. Luhn ca 1959, for 
which see "Keyword-in-context index for technical literature (KWIC 
index)", in Readings in Automatic Language Processing, ed David G Hays 
(American Elsevier, 1966). As these same many will know, KWIC is 
fundamental to corpus linguistics and to the text-analysis which 
preceded and has followed it.

My question begins with the fact that the central idea of KWIC, or more 
accurately an idea of which KWIC is a somewhat less ambitious 
expression, was let out into the world by Warren Weaver ten years 
earlier in a memorandum of 15 July 1949, later published in William N 
Locke and A Donald Booth, Machine translation of languages: fourteen 
essays (MIT Press, 1955). Weaver's idea begins as follows:

> If one examines the words in a book, one at a time as through an opaque
> mask with a hole in it one word wide, then it is obviously impossible to
> determine, one at a time, the meaning of the words. "Fast" may mean
> "rapid"; or it may mean "motionless"; and there is no way of telling
> which. 
> 
> But, if one lengthens the slit in the opaque mask, until one can see not
> only the central word in question but also say N words on either side,
> then, if N is large enough one can unambiguously decide the meaning of
> the central word. The formal truth of this statement becomes clear when
> one mentions that the middle word of a whole article or a whole book is
> unambiguous if one has read the whole article or book, providing of
> course that the article or book is sufficiently well written to
> communicate at all. 
> 
> The practical question is: "What minimum value of N will, at least in a
> tolerable fraction of cases, lead to the correct choice of meaning for
> the central word?"

His scheme is broader than the notion of "span" alone would suggest, 
because he moves from looking at so many words on either side of the 
target to the idea of discarding all but the nouns, for example, or 
nouns and adjectives, or nouns, adjectives and verbs, and so forth.
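
By way of illustration only (nothing of the sort appears in Weaver's 
memorandum), here is a minimal sketch of the windowing idea in Python. 
The function name, its span parameter n and the sample sentence are 
inventions of my own, with n standing in for Weaver's N:

    # A minimal KWIC sketch: for each occurrence of a keyword, show
    # the n words on either side, Weaver's "slit in the opaque mask".
    # Names and parameters here are illustrative, not historical.

    def kwic(text, keyword, n=4):
        """Return one line of context per occurrence of keyword."""
        words = text.split()
        lines = []
        for i, w in enumerate(words):
            if w.lower().strip('.,;:"!?') == keyword.lower():
                left = " ".join(words[max(0, i - n):i])
                right = " ".join(words[i + 1:i + 1 + n])
                lines.append(f"{left:>35}  {words[i]}  {right}")
        return lines

    sample = ("He made the rope fast to the mast, and the fast ship "
              "sailed on while the crew kept a fast until dawn.")
    for line in kwic(sample, "fast", n=3):
        print(line)

With n=3 this prints one aligned line of context per occurrence of 
"fast", and one can see at a glance which sense each context selects.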

My question is, was Weaver blazing a new trail here?

Yours,
WM
-- 
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.



--[2]------------------------------------------------------------------------
        Date: Wed, 04 Mar 2009 13:51:47 +0000
        From: Willard McCarty <willard.mccarty-AT-mccarty.org.uk>
        Subject: psychotic computers?

Users of Windows Vista will take particular enjoyment in the following 
snippet from "The Thinking Machine", in Time Magazine for Monday, 23 
January 1950  --  the issue with the Harvard Mark III ("Can man build a 
superman?") on the cover:

> Nearly all the computermen are worried about the effect the machines
> will have on society. But most of them are not so pessimistic as
> [Norbert] Wiener....
> 
> Psychotic Robots. In the larger, "biological" sense, there is room
> for nervous speculation. Some philosophical worriers suggest that the
> computers, growing superhumanly intelligent in more & more ways, will
> develop wills, desires and unpleasant foibles of their own, as did
> the famous robots in Capek's R.U.R.
> 
> Professor Wiener says that some computers are already "human" enough
> to suffer from typical psychiatric troubles. Unruly memories, he
> says, sometimes spread through a machine as fears and fixations
> spread through a psychotic human brain. Such psychoses may be cured,
> says Wiener, by rest (shutting down the machine), by electric shock
> treatment (increasing the voltage in the tubes), or by lobotomy
> (disconnecting part of the machine).
> 
> Some practical computermen scoff at such picturesque talk, but others
> recall odd behavior in their own machines. Robert Seeber of I.B.M.
> says that his big computer has a very human foible: it hates to wake
> up in the morning. The operators turn it on, the tubes light up and
> reach a proper temperature, but the machine is not really awake. A
> problem sent through its sleepy wits does not get far. Red lights
> flash, indicating that the machine has made an error. The patient
> operators try the problem again. This time the machine thinks a
> little more clearly. At last, after several tries, it is fully awake
> and willing to think straight.
> 
> Neurotic Exchange. Bell Laboratories' Dr. Shannon has a similar
> story. During World War II, he says, one of the Manhattan dial
> exchanges (very similar to computers) was overloaded with work. It
> began to behave queerly, acting with an irrationality that disturbed
> the company. Flocks of engineers, sent to treat the patient, could
> find nothing organically wrong. After the war was over, the work load
> decreased. The ailing exchange recovered and is now entirely normal.
> Its trouble had been "functional": like other hard-driven war
> workers, it had suffered a nervous breakdown.

It is interesting that already in 1950 either computers were behaving 
oddly in a human-like sense (unlikely, I think) or people, even such 
as Norbert Wiener, were interpreting their malfunctioning as such. 
Engineers and other scientists are certainly not immune to fanciful 
imaginings. But would it not be reasonable to think that, if one takes 
the idea of emergent phenomena seriously, there is no reason to rule 
out something analogous to psychotic behaviour in computers?

Comments?

Yours,
WM
-- 
Willard McCarty, Professor of Humanities Computing,
King's College London, staff.cch.kcl.ac.uk/~wmccarty/;
Editor, Humanist, www.digitalhumanities.org/humanist;
Interdisciplinary Science Reviews, www.isr-journal.org.



_______________________________________________
List posts to: humanist-AT-lists.digitalhumanities.org
List info and archives at: http://digitalhumanities.org/humanist
Listmember interface at: http://digitalhumanities.org/humanist/Restricted/listmember_interface.php
Subscribe at: http://www.digitalhumanities.org/humanist/membership_form.php




   
