Yes, humor is important. Good point. Laughing is one of the things we do and apes do not. For me this is where it starts to get interesting: when we look at the things we do that apes do not, like language, culture, art, or writing systems. I mean, before the first civilizations appeared in Mesopotamia, ancient Greece, and ancient Egypt, there were just clans fighting each other to determine their place in the pecking order. Primates do this too.

We have this constant drive to resolve inconsistencies, which is related to confirmation bias. Every joke starts with an inconsistency that is resolved by an insight. Maybe we need just one basic mechanism to create a self-supervised agent that gets smarter bit by bit: artificial curiosity, i.e. a mechanism that seeks out new or inconsistent information and rewards the resolution of inconsistencies. A bit like science itself.

-J.
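P.S. A minimal sketch, in Python, of what such a curiosity mechanism might look like (purely illustrative; the CuriousAgent class, the toy "world", and the learning rate are all invented for this example and have nothing to do with LaMDA): the agent keeps a crude predictive model, seeks out whatever it currently predicts worst, and is rewarded by how much its prediction error drops after updating, i.e. by the resolution of the inconsistency.

class CuriousAgent:
    def __init__(self, lr=0.5):
        self.model = {}  # observation context -> predicted value
        self.lr = lr

    def prediction_error(self, context, value):
        # How badly the current model explains this observation.
        return abs(value - self.model.get(context, 0.0))

    def choose(self, candidates):
        # Curiosity as selection: seek the observation the model
        # currently explains worst (the most "inconsistent" one).
        return max(candidates, key=lambda c: self.prediction_error(*c))

    def learn(self, context, value):
        # Intrinsic reward: how much inconsistency is resolved by
        # updating the model on this observation.
        before = self.prediction_error(context, value)
        pred = self.model.get(context, 0.0)
        self.model[context] = pred + self.lr * (value - pred)
        after = self.prediction_error(context, value)
        return before - after

# Toy world: three observable "facts"; the agent keeps returning to the
# surprising one only as long as it still learns something from it.
agent = CuriousAgent()
world = [("sky", 1.0), ("grass", 0.2), ("anomaly", 5.0)]
for step in range(10):
    context, value = agent.choose(world)
    reward = agent.learn(context, value)
    print(step, context, round(reward, 3))

Once a source of surprise stops paying off, the agent drifts to the next one, which is roughly the "gets smarter bit by bit" loop described above.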
-------- Original message --------
From: glen <[email protected]>
Date: 6/13/22 23:14 (GMT+01:00)
To: [email protected]
Subject: Re: [FRIAM] Google Engineer Thinks AI Bot Has Become Sentient

IDK, I wouldn't say the dialog was
indistinguishable from a human. When I ask people things like "Do you have
feelings?", they respond pretty aggressively or defensively. While I agree that
all the sentences were well-formed and sensible (SSI), they lacked the
reflective quality of actual human responses. Plus, there wasn't any humor as
far as I could tell. You'd expect that in a conversation with such ridiculous
questions. That would be true even if, especially if, you were talking to a kid
or a typical blue-collar sort. [⛧] That's why it read, to me, like one of
those fake dialogs intended to teach some lesson or other. And it wasn't even
Socratic. This is where Aaronson's comment ("but can I run my own tests?")
plays in. Meno or Euthyphro might *seem* indistinguishable from a human ... but
they're not, they're fantastically designed to render the just-so condition the
ideologue intends. Perhaps Lahontan's Kondiaronk was different?

[⛧] I once picked up a hitchhiker on my way home from work back in TX. Since the ride was quite long, we discussed quite a bit. As we were driving through town, I commented that most of the people looked, to me, like they were asleep ... metaphorically. The hitcher said, "They look awake to me", literally.

On 6/13/22 13:40, Jochen Fromm wrote:
> I think the capabilities of large language models are really impressive. The language of these models is not grounded, as this article says, but in principle it is possible to do it.
> https://techmonitor.ai/technology/ai-and-automation/foundation-models-may-be-future-of-ai-theyre-also-deeply-flawed
>
> Take for example a robot, connect it to the Internet and a large language model, and add an additional OCR layer in between. The result? Probably creepy and uncanny, but if it works we would most likely think such an actor would be sentient. The replies in the LaMDA dialog transcript look indistinguishable from a human.
> https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
>
> -J.
>
>
> -------- Original message --------
> From: glen <[email protected]>
> Date: 6/13/22 17:14 (GMT+01:00)
> To: [email protected]
> Subject: Re: [FRIAM] Google Engineer Thinks AI Bot Has Become Sentient
>
> "Remarkable" in the sense of "worthy of remark"? Yeah, maybe.
>
> LaMDA: Language Models for Dialog Applications
> https://arxiv.org/abs/2201.08239
>
> Personally, I think we can attribute Lemoine's belief in LaMDA's sentience to an artifact of his religious belief. It's not exclusive to Christianity, though. One of the risks of the positions taken by those who believe in the reality of things like Jungian archetypes is false attribution. And it's not limited to anthropomorphic attribution. To the person with a hammer, everything looks like a nail. Even if such beliefs have some objective utility in some contexts, that utility is not likely to be that transitive to other contexts.
>
> I suppose this is why I'm more sympathetic to the (obviously still false in its extreme) behaviorist or skeptical position (cf. https://onlinelibrary.wiley.com/doi/abs/10.1111/phpr.12445). I.e. it's completely irrelevant whether or not you *claim* to have feelings and emotions. What's needed for knowledge (justified true belief) is a parallax pointing to the same conclusion, preferably including some largely objective angles.
>
> An objective angle on LaMDA might well be available from IIT operating over some (very large) log/trace data from the executing program. *That* plus the bot claiming it's sentient would give me pause.
>
> On 6/12/22 08:28, Jochen Fromm wrote:
> > A Google engineer said he was placed on leave after claiming an AI chatbot was sentient. The fact that he thinks it would be sentient is remarkable, isn't it?
> > https://www.businessinsider.com/google-engineer-thinks-artificial-intelligence-bot-has-become-sentient-2022-6
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe / Thursdays 9a-12p Zoom
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: 5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
1/2003 thru 6/2021 http://friam.383.s1.nabble.com/