Humanist Discussion Group

Humanist Archives: Feb. 26, 2025, 7:30 a.m. Humanist 38.376 - AI, poetry and readers; Apple Intelligence

				
              Humanist Discussion Group, Vol. 38, No. 376.
        Department of Digital Humanities, University of Cologne
                      Hosted by DH-Cologne
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: Gabriel Egan <mail@gabrielegan.com>
           Subject: Re: [Humanist] 38.374: AI, poetry and readers (48)

    [2]    From: Willard McCarty <willard.mccarty@mccarty.org.uk>
           Subject: Apple Intelligence and our own (25)


--[1]------------------------------------------------------------------------
        Date: 2025-02-25 08:36:01+00:00
        From: Gabriel Egan <mail@gabrielegan.com>
        Subject: Re: [Humanist] 38.374: AI, poetry and readers

Tim Smithers writes:

 > John Horton Conway did not think he
 > did not fully know and understand
 > The Game of Life machine he designed
 > and made.

The point is unpredictability. Conway
said that what was interesting in his
Game of Life was its unpredictability.
He said that he tinkered with the rules
and finally came up with the ones he
settled on because with those "you didn't
seem to be able to predict what will
happen". He says these words at 4:53
to 5:08 of this interview:

   https://www.youtube.com/watch?v=R9Plq-D1gEk

Conway goes on: "there's no algorithmic way of
telling whether a thing [a configuration of
the Game] is going to die off . . . that's
one of the astonishing properties" (8:39
to 9:19).
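The rules Conway settled on are simple enough to state in a few lines, which makes the unpredictability he describes all the more striking. A minimal sketch in Python, assuming the standard rules (a dead cell with exactly three live neighbours is born; a live cell with two or three live neighbours survives); the function and pattern names are mine, for illustration only:

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count, for every cell adjacent to a live cell, how many live
    # neighbours it has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2. Simple patterns like this are
# easy to follow by hand, yet (as Conway says in the interview) there
# is no general algorithm for deciding whether an arbitrary starting
# pattern will eventually die off.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Iterating `step` on the blinker returns it to its starting configuration every two generations, while other equally small patterns grow without any obvious pattern.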

Conway engineered into his invention
the unpredictability he wanted. AI
makers engineer into their machines
the learning ability that they want,
and are then unable to anticipate
what the machines will learn to be
able to do. (This list's readers might
agree that human learners are rather
like that too.)

Smithers asserts that "we do know and
understand all there is to know and
understand about the way today's
Generative AI systems are built".

I agree with that assertion. What I
disagreed with was Smithers's earlier
one, that because we know how we built
them we understand how they work.

Regards

Gabriel

--[2]------------------------------------------------------------------------
        Date: 2025-02-25 09:41:21+00:00
        From: Willard McCarty <willard.mccarty@mccarty.org.uk>
        Subject: Apple Intelligence and our own

Earlier I (and doubtless millions of others) received email announcing
that "Apple Intelligence is here",
<https://www.apple.com/uk/apple-intelligence/?cid=CDM-GB-DM-c01523-M00000>.
I'd like to suggest a discussion of the consequences that might
ensue were many of our fellow citizens to make it their 'familiar'.

My colleague Alan Blackwell has noted in his illuminating book, Moral
Codes: Designing Alternatives to AI (MIT Press, 2024, also online), that
the actual effect of AI as presently conceived is to make machines seem
smarter by making humans stupider (OED: "Of a person: slow to learn or
understand; lacking intelligence or perceptiveness; acting without
common sense or good judgement.").

In light of what Apple Intelligence suggests, it seems to me that the 
shoe fits rather well.

Kindly let rip.

Best,
WM
--
Willard McCarty,
Professor emeritus, King's College London;
Editor, Humanist
www.mccarty.org.uk


_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php