Humanist Discussion Group

              Humanist Discussion Group, Vol. 38, No. 374.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org




        Date: 2025-02-24 09:21:25+00:00
        From: Tim Smithers <tim.smithers@cantab.net>
        Subject: Re: [Humanist] 38.370: AI, poetry and readers

Here we go again.

Gabriel, you assert that ...

    "Tim Smithers wants us to treat as axiomatic the idea
     that when humans make a machine they necessarily
     understand how it works. ..."

No, I don't.  Read my words more carefully, please.

I know, and have had to try to do things with, various
machines, devices, and systems that people say they have
designed and built, but which they clearly didn't do a good
job of, because they do not understand well the machines they
say they have designed and built.  I imagine this is an
experience had by others here.  It's annoying when this
happens, but, so what?  I didn't say this couldn't happen.  I
just said that we do know and understand all there is to know
and understand about the way today's Generative AI systems are
built, at least in the case of the Open Source systems.  So,
if you want to say the [mostly commercial] non-Open Source
Generative AI systems have some special magical properties,
pray, do tell us.

As a designer and engineer I think I am obliged to know and
understand all there is to know and understand about the
machines or systems I design and build.  And, as a designer
and engineer, I would insist that there are no other "levels
of understanding" to be had.  To suggest, or think, there is,
is to believe in some kind of mysticism of machines.  Which I
do not believe in.  It's a notion I reject, forcefully.  We
all should, I would say.  Good research has no use for
mystical anything.

If the builders of things like ChatGPT took their failure to
understand how it "knows" Paris is in France as a indication
they are mistaken, we we'd probably have a lot less of all
this hype' and nonsense about Generative AI. And, we'd have
fewer people flogging fake AI.

Good researchers take failure to understand what they
investigate, or some aspect of it, as a strong indication they
may be mistaken in how they are trying to understand what they
study.  Research everywhere, across all the disciplines, is
full of examples of this, of course.

So, yes, of course, we -- a big we in this case [but not
including me] -- do know and understand what was designed and
built when the Large Hadron Collider was made, but, all this
good designing and building did not require all, or any of,
the people who did this designing and building to also say
what would be discovered when this machine was used to conduct
the experiments it was designed and built for.  Not even
Humpty Dumpty would say they should have done, or could have
done, this.

Knowing and understanding all there is to know and understand
about some machine we have designed and built does not include
having to specify all and every kind of behaviour any
operation of this machine will display.  To insist it does is
nonsense.  Fully knowing and understanding the machine we have
designed and built means when we see some behaviour not seen
before, or which puzzles us, or which surprises us, we can
fully explain it, albeit after plenty of effort, perhaps.  In
the case of Generative AI systems, this is what we can do.
Just because you are mistaken about what needs explaining
here does not mean we can't do this.

Being surprised by some behaviour of some machine we have
designed and built does not mean we don't know and understand
it fully.  John Horton Conway did not think he did not fully
know and understand The Game of Life machine he designed and
made.  I know; he told me this, with some other people also
present.  Conway definitely did not believe in any mysticism
of machines.  Conway had a lifelong interest in games, often
games of a mathematical flavour, and this informed and shaped
the mathematical research he did.  For Conway, a good game
must surprise us.  That's why games made, for him, an
effective way to investigate things.  Surprises are
opportunities to "de-mystify," and, to see what other things
there are which perhaps we don't yet understand.

Conway's Game of Life is an example of a generative system,
sometimes also called a generative grammar system.  These use
a deterministic rule, usually simple to state, which, on
continued application, often results in unpredictable
outcomes.  And often a small change to the initial starting
state results, on application of the rule, in a very
different, and thus hard to predict, outcome.  In this sense
these generative systems share the sensitivity to initial
conditions characteristic of so-called chaotic systems.
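
To make this concrete, here is a minimal sketch, in Python
(my own illustration, not Conway's code), of such a generative
system: one deterministic rule, applied repeatedly, to two
seeds that differ by a single cell.

    from collections import Counter

    def step(live):
        # Count the live neighbours of every cell adjacent
        # to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Conway's rule: birth on exactly 3 neighbours,
        # survival on 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    # An R-pentomino seed, and the same seed plus one
    # adjacent cell.
    a = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}
    b = a | {(3, 1)}

    for _ in range(100):
        a, b = step(a), step(b)

    # Populations after 100 steps, and the cells on which
    # the two runs now differ.
    print(len(a), len(b), len(a ^ b))

The rule is fully known and stated in a few lines, yet the
only practical way to find out where the two runs end up, and
how far apart they are, is to run them.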

Talk of "emergent behaviour" is empty without first
presenting, explaining, illustrating, and defending, a careful
and precise characterisation of what "emergent behaviour"
means here.  (More nouning maketh not the nouned.)  To have
any force, and thus any meaning, emergent behaviour cannot
be, in any way, to any degree, observer-dependent or
observer-relative.  So, please tell us: what is your
observer-independent characterisation of so-called "emergent
behaviour"?  Just saying "Oh!  Wow!  I never thought we'd
see it do that" won't do.

You also assert ...

    "I think few people who have used ChatGPT would agree
     with Smithers's assertion that "It does not generate
     words".  This is only true if we imbue the concept of
     'words' with the intrinsic attribute of human origin, so
     that on principle a machine cannot create them.
     Meantime, the rest of the world continues to communicate,
     using what we all agree are words, with the LLMs."

I can't make you look carefully at what ChatGPT does and how
it does it if you don't want to look, but nothing it does
involves words.  It takes in text, not words.  It outputs
text, not words.  And, together with some other needed
matrices, it crunches numerical vector representations of
text-tokens, most of which are not recognisable as words to us
as readers.  Something is only a word [in some language] if it
can be read by something capable of reading.  ChatGPT does no
reading, nor, of course, any writing of words.  It just
generates sequences of text-tokens and turns these into text
which it presents to you, after first adding all the "sugar
coating" to make this text look, to you, like it came from
something that can read and write, and to make it look like it
knows, understands, and reasons about, what the text is about
to you when you read it.  ChatGPT doesn't do any of this
reading, writing, knowing, understanding, and reasoning, no
matter how much it looks like it does to you.  What you
think, and say, ChatGPT does is not necessarily what it
actually does.  To know and understand what ChatGPT really
does, and how, you need to understand how it is designed and
built.  To make any good use of ChatGPT, just as in making
good use of any tool or system, you must know how it is
designed and built to work.  Machine mysticism is not, and
cannot be, a basis for sensible use of any machine, ChatGPT
included.  If automatically generated text can be useful to
you, fine, but, thinking, or, worse, claiming, what you're
getting are real written words is fundamentally mistaken.
Unless, of course, you think all we do when we write is
generate sequences of text-tokens which we turn into sugar
coated text for our readers.  But not even Humpty Dumpty
thinks this.
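
If you do want to look, here is a small sketch using OpenAI's
open-source tiktoken tokeniser library (my choice of
illustration; pip install tiktoken), which shows what such a
system actually receives and emits: integer IDs for
text-tokens, not words.

    import tiktoken

    # cl100k_base is the vocabulary used by GPT-4-era models.
    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("Unquestionably, Paris is in France.")

    for tok in ids:
        # Each integer ID maps back to a fragment of text,
        # which is often not a word.
        print(tok, enc.decode_single_token_bytes(tok))

The exact splits depend on the vocabulary, but a sentence
typically breaks into fragments like b'Un', b'question',
b'ably': text-tokens, not words.  What the model then
crunches are numerical vector representations indexed by
these IDs.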

Finally, my position, now often expressed here, perhaps too
often, is _not_ that only humans can know, understand, and
reason, read, and write.  My position is that today's
so-called Generative AI systems do not, and cannot, do any of
these things; they are just built to make it look like they
do.  They are a kind of Artificial Flower AI, and not a kind
of Artificial Light AI -- real intelligence by artificial
means.  And my complaint is that this Artificial Flower AI is
a deliberate deception.  One that is working well these days,
as we see.

-- Tim


PS: I'll use the excuse that I need to prepare the next PhD
course to make this my last post on this matter.  For now, at
least.


> On 21 Feb 2025, at 09:34, Humanist <humanist@dhhumanist.org> wrote:
>
>
>              Humanist Discussion Group, Vol. 38, No. 370.
>        Department of Digital Humanities, University of Cologne
>                      Hosted by DH-Cologne
>                       www.dhhumanist.org
>                Submit to: humanist@dhhumanist.org
>
>
>    [1]    From: Gabriel Egan <mail@gabrielegan.com>
>           Subject: Re: [Humanist] 38.367: AI, poetry and readers (139)

<snip>
>
> --[1]------------------------------------------------------------------------
>        Date: 2025-02-20 20:31:31+00:00
>        From: Gabriel Egan <mail@gabrielegan.com>
>        Subject: Re: [Humanist] 38.367: AI, poetry and readers
>
> Tim Smithers wants us to treat as axiomatic
> the idea that when humans make a machine
> they necessarily understand how it works.
> That is what I take him to be claiming
> when he writes of Artificial Intelligence
> machines:
>
> <<
> ... we know how these systems are designed and
> specified using linear matrix arithmetic ...
> This is all we need to know and understand
> to build these systems, and all we need to
> know to understand how they work and what
> they do.
>>>
>
> I would disagree and say that there are
> plenty of examples of machines that we
> have made that we understood at the
> level of their engineering when we made
> them, but that turned out to have
> emergent behaviours that we did not
> expect and that we do not understand.
>
> Indeed, if that were not the case there
> would scarcely be any point to the field
> of experimental physics. We know what we
> built when we built the Large Hadron
> Collider, but the point of building
> it was to explore what its component
> parts -- including the subatomic parts
> that its operation inevitably generates
> -- will do under particular circumstances.
> If we knew that in advance, there would
> be no point in building it.
>
> To take a case from computing, John Conway
> knew what he was making when he programmed
> his Game of Life. But his descriptions of
> his explorations with it make clear that
> he did not anticipate its emergent
> behaviours.
>
> It was not obvious to Conway or anyone
> else that a particular instance of the
> Game of Life (that is, a particular
> starting state of its cells) would
> constitute a computing machine that
> could itself run an instance of the
> Game of Life. (YouTube shows some
> fascinating videos in response to a
> search for 'game of life running game
> of life'.)
>
> Smithers reiterates his point when I refer
> to what we don't know about the function
> of Artificial Intelligence systems:
>
> <<
> I [Smithers] would ask you [Egan] to
> provide some evidence for this apparently
> near complete ignorance of how machines
> we design and build actually work.
> What is it you would say we are not
> understanding here?
>>>
>
> I would say that we do not understand
> the emergent behavior of some complex
> machines. Indeed, that is why we build
> them in the first place: to see what
> they do, what they come up with. The
> most commonly expressed response of
> even experts using AI is surprise
> at what it does. An example was in
> the news today:
>
>   https://www.bbc.co.uk/news/articles/clyz6e9edy3o
>
> When a Machine Learning technique
> is used to optimize a particular
> problem in the world, say the
> best way to organize the internal
> structure and procedures of an airport
> (such as the security checks and
> baggage handling), then we can know when
> we have produced an optimization that
> is better than what we do now. We know
> this by implementing the optimized
> arrangement and finding that more
> people pass through the airport.
>
> But we cannot know if that optimization
> is the best possible one, since to know
> that would require us to already know
> what is the best possible one, in order
> to compare it with the one created by
> Machine Learning. If we already knew the
> best possible optimization then we would
> not need the Machine Learning approach
> in the first place. (I am indebted
> to my colleague Mario Gongora and
> his inaugural professorial lecture
> on AI, given earlier this week, for
> this example.)
>
> I think few people who have used
> ChatGPT would agree with Smithers's
> assertion that "It does not generate
> words". This is only true if we imbue
> the concept of 'words' with the intrinsic
> attribute of human origin, so that on
> principle a machine cannot create them.
> Meantime, the rest of the world continues
> to communicate, using what we all agree
> are words, with the LLMs.
>
> It is comforting to believe that only
> humans can think, but technology has
> a tendency to ride roughshod over
> our sensitivities in these matters.
> Recall the philosophers in 'The Hitchhiker's
> Guide to the Galaxy' who object to the
> machine Deep Thought challenging the
> human monopoly on philosophical thinking.
> As they see it, this is a matter of the
> 'demarcation' of roles -- here, the roles
> of 'human' and 'machine' -- which was an
> idea often invoked in 1970s trades union
> disputes about the automation of work,
> and which Douglas Adams is gently mocking.
> As one of the philosophers reflects,
> "... what's the use of our sitting
> up half the night arguing that there
> may or may not be a God if this
> machine only goes and gives us his
> bleeding phone number the next morning?"
>
> Gabriel Egan
>

<snip>


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php