Thanks Ben, Nils, and all who participated. I appreciate the enriching 
discussion; I have many trails to investigate after this.

I would enjoy it very much if, at some point, those of us interested could go 
"down that rabbit hole" of the hard problem of consciousness, and possibly 
explore why it may, or may not, matter whether an artificial system is having 
an experience as we seem to, or is completely unaware, just flipping switches 
based on the position of other switches, so to speak.  And why we even 
care...

Lastly, a quick comment on "intention":  I have been working on the 
conversational ability of my robots for a few years; it was very limited at 
best.  I have also experimented extensively with GPT-2, which improved their 
"conversational" ability but was also prone to going off the rails.  What made 
the biggest difference was giving the robot a hard-coded "intention," or goal, 
for the conversation.  This allowed it to redirect the conversation toward an 
expected direction and subject via more (or less) elegant segues, and 
prolonged the time a conversation continued to make reasonable sense before it 
got way off track.  I think there is something to the idea of encoding systems 
with a group or hierarchy of goals for interaction, an artificial 
"intentionality," right out of the starting gate, if the robot creator's goal 
is to have their robot interact in a more "human-like" fashion.
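For concreteness, the steering idea above can be sketched roughly as follows. 
This is only a minimal illustration under my own assumptions: the intention 
structure, the keyword heuristic, and the `generate` stand-in for a GPT-2 
style completion call are all hypothetical, not the actual implementation.

```python
# Minimal sketch of a hard-coded conversational "intention".
# Everything here is illustrative: the goal, the topic-word heuristic,
# and the segue line are assumptions, not a real robot's configuration.

INTENTION = {
    "goal": "learn the visitor's hobbies",
    "topic_words": {"hobby", "hobbies", "collect", "build", "play"},
    "segue": "Speaking of interests, what do you like to do for fun?",
}

def on_topic(utterance: str, topic_words: set) -> bool:
    """Crude check: does the utterance touch the goal topic at all?"""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & topic_words)

def respond(utterance: str, generate, intention=INTENTION) -> str:
    """If the user has drifted from the goal topic, redirect with a
    segue; otherwise defer to the language model, prompted with the
    goal so completions stay on subject."""
    if not on_topic(utterance, intention["topic_words"]):
        return intention["segue"]
    # `generate` stands in for any text-completion callable,
    # e.g. a GPT-2 inference wrapper.
    prompt = f"Goal: {intention['goal']}\nUser: {utterance}\nRobot:"
    return generate(prompt)
```

A hierarchy of goals, as suggested above, would replace the single `INTENTION` 
dict with an ordered list and pick the highest-priority goal not yet satisfied.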

Thanks again all,

Dave


-----Original Message-----
From: 'Ben Goertzel' via opencog <open...@googlegroups.com> 
Sent: Thursday, September 9, 2021 4:10 PM
To: AGI <agi@agi.topicbox.com>; opencog <open...@googlegroups.com>
Subject: [opencog-dev] AGI discussion group, Sep 10 7AM Pacific: Characterizing 
and Implementing Human-Like Consciousness

Theme for discussion this week: Characterizing and Implementing Human-Like 
Consciousness

See

https://wiki.opencog.org/w/AGI_Discussion_Forum#Sessions

URL for video-chat: https://singularitynet.zoom.us/my/benbot ...

Background reading:
https://www.researchgate.net/publication/275541457_Characterizing_Human-like_Consciousness_An_Integrative_Approach

Pretty soon we will have some new Hyperon design documents/ideas to discuss in 
the AGI Discussion Group -- lots of progress occurring on that front, but much 
of it not quite yet fully baked enough for public-ish discussion ... so for 
this session we're going to explore some highly relevant but more general 
topics... ;)


On Thu, Aug 26, 2021 at 9:22 AM Mike Archbold <jazzbo...@gmail.com> wrote:
>
> Is there a discussion tomorrow?
>
> On 8/16/21, magnuswootto...@gmail.com <magnuswootto...@gmail.com> wrote:
> > Brute forcing is about reducing the search in a way, to spread it 
> > out further with the same amount of computation power.
>
> ------------------------------------------
> Artificial General Intelligence List: AGI
> Permalink: 
> https://agi.topicbox.com/groups/agi/T5b614d3e3bb8e0da-Me75a2ed05b46ea5198406b29
> Delivery options: 
> https://agi.topicbox.com/groups/agi/subscription



--
Ben Goertzel, PhD
http://goertzel.org

“He not busy being born is busy dying" -- Bob Dylan

--
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to opencog+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/CACYTDBfnBV-29VTEBP3rr_WsEb_Wh0E%2B6n%3DsOCCmJi7bTSTtgQ%40mail.gmail.com.


------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T5e30c339c3bfa713-M56f4269c9c12dccd457a9a96
Delivery options: https://agi.topicbox.com/groups/agi/subscription
