              Humanist Discussion Group, Vol. 38, No. 370.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Gabriel Egan <mail@gabrielegan.com>
           Subject: Re: [Humanist] 38.367: AI, poetry and readers (139)

    [2]    From: James Rovira <jamesrovira@gmail.com>
           Subject: Re: [Humanist] 38.367: AI, poetry and readers: Calvino, neuroscience & intention (37)


--[1]------------------------------------------------------------------------
        Date: 2025-02-20 20:31:31+00:00
        From: Gabriel Egan <mail@gabrielegan.com>
        Subject: Re: [Humanist] 38.367: AI, poetry and readers

Tim Smithers wants us to treat as axiomatic
the idea that when humans make a machine
they necessarily understand how it works.
That is what I take him to be claiming
when he writes of Artificial Intelligence
machines:

<<
... we know how these systems are designed and
specified using linear matrix arithmetic ...
This is all we need to know and understand
to build these systems, and all we need to
know to understand how they work and what
they do.
 >>

I would disagree and say that there are
plenty of examples of machines that we
have made that we understood at the
level of their engineering when we made
them, but that turned out to have
emergent behaviours that we did not
expect and that we do not understand.

Indeed, if that were not the case there
would scarcely be any point to the field
of experimental physics. We know what we
built when we built the Large Hadron
Collider, but the point of building
it was to explore what its component
parts -- including the subatomic parts
that its operation inevitably generates
-- will do under particular circumstances.
If we had known that in advance, there
would have been no point in building it.

To take a case from computing, John Conway
knew what he was making when he programmed
his Game of Life. But his descriptions of
his explorations with it make clear that
he did not anticipate its emergent
behaviours.

It was not obvious to Conway or anyone
else that a particular instance of the
Game of Life (that is, a particular
starting state of its cells) would
constitute a computing machine that
could itself run an instance of the
Game of Life. (YouTube shows some
fascinating videos in response to a
search for 'game of life running game
of life'.)
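
For anyone curious how little is
actually written into those rules,
here is a minimal sketch in Python
(my own illustration, not Conway's
own notation):

    # One generation of Conway's Game of Life.
    # 'live' is a set of (x, y) coordinates of live cells.
    def step(live):
        counts = {}
        for (x, y) in live:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    if (dx, dy) != (0, 0):
                        key = (x + dx, y + dy)
                        counts[key] = counts.get(key, 0) + 1
        # Alive next generation: exactly three live
        # neighbours, or two if already alive.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # These five cells form a 'glider' that crawls
    # diagonally forever -- behaviour stated nowhere above.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)

Nothing in those dozen lines announces
gliders, guns, or the self-hosting
machines in those videos; that is the
sense in which I mean 'emergent'.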

Smithers reiterates his point when I refer
to what we don't know about the function
of Artificial Intelligence systems:

<<
I [Smithers] would ask you [Egan] to
provide some evidence for this apparently
near complete ignorance of how machines
we design and build actually work.
What is it you would say we are not
understanding here?
 >>

I would say that we do not understand
the emergent behaviour of some complex
machines. Indeed, that is why we build
them in the first place: to see what
they do, what they come up with. The
response most commonly expressed even
by experts using AI is surprise at
what it does. An example was in
the news today:

   https://www.bbc.co.uk/news/articles/clyz6e9edy3o

When a Machine Learning technique
is used to optimize some particular
process in the world, say the
best way to organize the internal
structure and procedures of an airport
(such as the security checks and
baggage handling), then we can know when
we have produced an optimization that
is better than what we do now. We know
this by implementing the optimized
arrangement and finding that more
people pass through the airport.

But we cannot know if that optimization
is the best possible one, since to know
that would require us to already know
what is the best possible one, in order
to compare it with the one created by
Machine Learning. If we already knew the
best possible optimization then we would
not need the Machine Learning approach
in the first place. (I am indebted
to my colleague Mario Gongora and
his inaugural professorial lecture
on AI, given earlier this week, for
this example.)
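
To make the asymmetry concrete, here
is a toy sketch (the 'throughput'
function is a stand-in of my own
devising, not anything from that
lecture). Random search can certify
that a candidate beats today's
arrangement; nothing in it can certify
that no better arrangement exists.

    import random

    def throughput(order):
        # Stand-in for a simulation scoring passengers per
        # hour under one ordering of airport procedures.
        return -sum(abs(a - b) for a, b in zip(order, order[1:]))

    procedures = list(range(8))
    today = [3, 7, 0, 5, 1, 6, 2, 4]    # how the airport runs now
    best, best_score = today, throughput(today)

    for _ in range(10_000):             # blind random search
        candidate = random.sample(procedures, len(procedures))
        score = throughput(candidate)
        if score > best_score:
            best, best_score = candidate, score

    # Easy to verify an improvement over 'today'...
    print(best_score > throughput(today))
    # ...but only knowledge of every possible ordering could
    # tell us whether 'best' is the best possible one.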

I think few people who have used
ChatGPT would agree with Smithers's
assertion that "It does not generate
words". This is only true if we imbue
the concept of 'words' with the intrinsic
attribute of human origin, so that in
principle a machine cannot create them.
Meanwhile, the rest of the world continues
to communicate with the LLMs using what
we all agree are words.

It is comforting to believe that only
humans can think, but technology has
a tendency to ride roughshod over
our sensitivities in these matters.
Recall the philosophers in 'The Hitchhiker's
Guide to the Galaxy' who object to the
machine Deep Thought challenging the
human monopoly on philosophical thinking.
As they see it, this is a matter of the
'demarcation' of roles -- here, the roles
of 'human' and 'machine' -- which was an
idea often invoked in 1970s trades union
disputes about the automation of work,
and which Douglas Adams is gently mocking.
As one of the philosophers reflects,
"... what's the use of our sitting
up half the night arguing that there
may or may not be a God if this
machine only goes and gives us his
bleeding phone number the next morning?"

Gabriel Egan

--[2]------------------------------------------------------------------------
        Date: 2025-02-20 16:42:15+00:00
        From: James Rovira <jamesrovira@gmail.com>
        Subject: Re: [Humanist] 38.367: AI, poetry and readers: Calvino, neuroscience & intention

First, thank you, Tim, for the kind words. I feel the same way about our
conversation. You have helped me understand my own ideas better than I did.

Next, to add to Tim's response to Gabriel, I would like to say that
Gabriel's comparison between systems such as a wing and a lens and the
human brain is deeply flawed. He's comparing two very simple, fixed systems
(wings and neurons) to two highly complex ones (brains and computers)
involving billions of connections. That's hardly a valid analogy.

Furthermore, we know that computer and brain systems have at least these
two fundamental differences:

1. Brain systems are part of an organic neural network spread throughout an
organic, physical body that has connections with its external environment
that cannot be shut off. Computer systems are inorganic, limited to the
processor, and are not connected in the same way to any external
environment. In fact, they can be completely disconnected except for a
single interface that does not in any way represent the external
environment (say, a keyboard). My only interface right now is my keyboard.
I have a camera/microphone that is shut off. It is unnecessary to the
functioning of the system. I cannot similarly shut off my eyes and ears. I
can block inputs to them with, say, a mask and earplugs, but they are still
constantly operating. They are never shut off. And while I can similarly
plug my nose, thank God, I can do nothing to shut off my skin and sense of
taste or even block inputs from them.

2. Brain systems process human language and computer systems do not. Human
words do not exist for computer systems except as something rendered on a
screen. The computer system itself does not "think" using words. It
"thinks" in high and low voltage states that are converted to numbers and
then rendered as words in LLMs. This is a fundamentally different kind of
processing from that of the human brain.
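
A bare-bones illustration of that point, using byte values as a stand-in
for the token ids an actual LLM manipulates:

    # What the machine traffics in: integers, not words.
    text = "poetry"
    numbers = list(text.encode("utf-8"))
    print(numbers)                          # [112, 111, 101, 116, 114, 121]

    # An LLM's input and output are likewise sequences of integers
    # (token ids); "words" appear only when those integers are decoded
    # for display to a human reader.
    print(bytes(numbers).decode("utf-8"))   # back to "poetry"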

One needs to be thinking very reductively and ignoring what we do know
about human brains and computer processors to equate the two.

Jim R


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php