Could it be around 9pm EET?
It's a completely different time, but that way it should work for everyone.

Michele
On Thursday, April 1, 2021 at 16:29:53 UTC+2, Nil wrote:

> Sure! The place is
>
> https://meet.jit.si/proto-agi
>
> the time is
>
> 10:45am EET
>
> Unfortunately that's probably too early if you're in the US.
>
> Michele, maybe we could do a last-minute change to fit the US timezone 
> as well? With the risk of adding confusion, though.
>
> I'll try to record the call, BTW.
>
> Nil
>
> On 4/1/21 5:08 PM, Douglas Miles wrote:
> > May I sit in on the meeting as a fly on the wall?
> > If so, when/how shall I connect?
> > 
> > Thanks in advance!
> > Douglas Miles
> > 
> > On Thu, Apr 1, 2021 at 2:52 AM Michele Thiella <[email protected]> wrote:
> > 
> > Hi Nil,
> > You're right! Currently EET corresponds to Italian time!
> > Great. I might be a few minutes late because I have a lesson
> > first, but 10:45am EET can surely work!
> > 
> > No problem on my side for anyone who wants to join!
> > Thanks for the PLN link. See you tomorrow.
> > 
> > Michele
> > On Wednesday, March 31, 2021 at 14:40:38 UTC+2, Nil wrote:
> > 
> > Hi Michele,
> > 
> > On 3/27/21 12:12 PM, Michele Thiella wrote:
> > > Is there any recommended book/paper to study before the code
> > of PLN rules?
> > 
> > Search for Probabilistic Logic Networks in
> > 
> > 
> https://wiki.opencog.org/w/Background_Publications#Books_Directly_Related_to_OpenCog_AI
> > 
> > 
> > > For the meeting, could it be at 11:30am EET?
> > 
> > 11:30am EET works for me. But maybe you mean 10:30am EET. With
> > daylight saving time it seems EET corresponds to Italy time. I'm not
> > sure, so double-check, but anyway 10:30am Italy time works for me.
> > 
> > Nil
> > 
> > >
> > > Michele
> > >
> > > On Friday, March 26, 2021 at 08:56:11 UTC+1, Nil wrote:
> > >
> > > On 3/25/21 9:03 PM, Michele Thiella wrote:
> > > > Can I ask you to say something about the decision trees in Eva?
> > > > Was it a separate Scheme/Python module that analyzed SequentialAnd?
> > > > While I'm at it, I can't place some components in your architecture:
> > > > I read Moshe Looks' thesis on MOSES and what I found on OpenPsi.
> > > > But in practice what were they used for?
> > >
> > > MOSES is a program learner. In principle it could learn any
> > > program; in practice it is mostly used to learn multivariable
> > > boolean functions (as it doesn't work very well on anything
> > > else, so far anyway).
> > >
> > > See for more info
> > >
> > >
> > > https://wiki.opencog.org/w/Meta-Optimizing_Semantic_Evolutionary_Search
> > >
> > > > Finally, in practice what does PLN do/have more than URE?
> > >
> > > The URE is a generic rewriting system that needs a rule set to
> > > operate.
> > >
> > > See for more info
> > >
> > > https://wiki.opencog.org/w/Unified_rule_engine
> > >
> > > Such a rule set can be PLN, which has been specifically
> > > tailored to handle uncertain reasoning
> > >
> > > https://github.com/opencog/pln
> > >
> > > or the Miner, which has been tailored to find frequent
> > > subgraphs
> > >
> > > https://github.com/opencog/miner
> > >
> > > or more, though these are the two most used/mature.
> > >
> > > Nil
> > >
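Nil's description above (a generic rewriting engine parameterized by a rule set) can be sketched in a few lines. This is a hedged illustration only, not the actual URE API: the engine just applies whatever rules it is handed until a fixpoint, and PLN or the Miner would correspond to different rule sets plugged into the same loop.

```python
# Hedged sketch, NOT the actual URE API: a generic forward chainer
# parameterized by a rule set, in the spirit of the URE design.
# A rule maps the current fact set to newly derivable facts.

def forward_chain(facts, rules, max_steps=100):
    """Apply every rule to the facts until nothing new is derived."""
    facts = set(facts)
    for _ in range(max_steps):
        new = set()
        for rule in rules:
            new |= rule(facts) - facts
        if not new:          # fixpoint: no rule produced anything new
            break
        facts |= new
    return facts

# A toy stand-in for a "rule set": modus ponens over hard-coded
# (premise, conclusion) pairs.
def modus_ponens(facts):
    implications = {("rain", "wet"), ("wet", "slippery")}
    return {c for (p, c) in implications if p in facts}

print(sorted(forward_chain({"rain"}, [modus_ponens])))
# ['rain', 'slippery', 'wet']
```

Swapping in a different `rules` list changes what the same engine computes, which is the sense in which PLN (uncertain reasoning) and the Miner (frequent subgraphs) are rule sets rather than separate engines.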
> > > >
> > > >
> > > > Before reasoning is possible, one must have a world-model.
> > > > This model has several parts to it:
> > > > * The people in the room, and their 3D coordinates.
> > > > * The objects on the table, and their 3D coordinates.
> > > > * The self-model (current position of the robot, and of its
> > > > arms, etc.)
> > > > The above is updated rapidly, by sensor information.
> > > >
> > > > Then there is some long-term knowledge:
> > > > * The names of everyone who is known: a dictionary linking
> > > > names to faces.
> > > >
> > > > Then there is some common-sense knowledge:
> > > > * you can talk to people,
> > > > * you can pick up bottles on a table,
> > > > * you cannot talk to bottles,
> > > > * you cannot pick up people,
> > > > * bottles can be picked up with the arm,
> > > > * facial expressions and arm movements can be used to
> > > > communicate with people.
> > > >
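For illustration only (hypothetical names and representation, nothing like actual AtomSpace code), the three kinds of knowledge listed above could all live in one uniform store of predicates, so that a reasoner or a language system can query them the same way:

```python
# Hedged sketch with hypothetical names: sensor-updated poses, the
# long-term name dictionary, and common-sense affordances stored as
# uniform (predicate, ...) tuples in a single "world model" set.

world = {
    ("position", "person:john", (0.5, 2.0, 1.7)),  # sensor-updated
    ("position", "bottle:1",    (1.0, 1.0, 1.1)),
    ("name", "person:john", "John"),               # long-term
    ("can", "talk-to", "person"),                  # common sense
    ("can", "pick-up", "bottle"),
    ("cannot", "talk-to", "bottle"),
    ("cannot", "pick-up", "person"),
}

def affordances(kind):
    """What actions does common sense allow on this kind of thing?"""
    return sorted(a for (m, a, k) in world
                  if m == "can" and k == kind)

print(affordances("bottle"))   # ['pick-up']
print(affordances("person"))   # ['talk-to']
```

Because everything is one representation, the same query machinery answers "what can I do with a bottle?" and "what is John's position?", which is the uniformity the paragraph below argues for.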
> > > > The world model needs to represent all of this. It also needs
> > > > to store all of the above in a representation that is
> > > > accessible to natural language, so that it can talk about the
> > > > position of its arm, the location of the bottle, and the name
> > > > of the person it is talking to.
> > > >
> > > > Reasoning is possible only *after* all of the above has been
> > > > satisfied, not before. Attempts to do reasoning before the
> > > > above has been built will always come up short, because some
> > > > important piece of information will be missing, or will be
> > > > stored somewhere, in some format that the reasoning system
> > > > does not have access to.
> > > >
> > > > The point here is that people have been building "reasoning
> > > > systems" for the last 30 or 40 years. They are always frail
> > > > and fragile. They are always missing key information. I think
> > > > it is important to try to understand how to represent
> > > > information in a uniform manner, so that reasoning does not
> > > > stumble.
> > > >
> > > >
> > > > Atomspace:
> > > >
> > > >   Concepts: "name" - "3D pose"
> > > >   - bottle - Na
> > > >   - table - Na
> > > >   (Predicate: "over" List ("bottle") ("table"))
> > > >   Actions:
> > > >   - Go random
> > > >   - Go to coord
> > > >   - Grab obj
> > > >
> > > > Goal: (bottle in hand)    // = grab bottle
> > > >
> > > > Inference rules: all the necessary rules, e.g.
> > > > * grab-rule: preconditions: (robot-coord = obj-coord) ...,
> > > > effects: (obj in hand) ...
> > > > * coord-rule: if x is in "coord1" and y is over x then y is in
> > > > "coord1"
> > > >
> > > > -> So, the robot tries backward chaining to find the behavior
> > > > tree to run. It doesn't find one; it lacks knowledge: it
> > > > doesn't know where the bottle is (let's leave out partial
> > > > trees).
> > > > -> Go random ...
> > > > -> Vision sensor recognizes table
> > > > -> atomspace update: table in coord (1,1,1)
> > > > -> forward chaining -> bottle in coord (1,1,1)
> > > > -> backward chaining finds a tree, that is
> > > > Go to coord (1,1,1) + Grab obj
> > > > -> goal achieved
> > > >
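The walkthrough above can be condensed into a runnable toy (hypothetical names; plain tuples rather than Atoms, nothing like real AtomSpace/URE code): planning fails while the bottle's position is unknown, the vision update adds the table's coordinates, the coord-rule then infers the bottle's coordinates by forward chaining, and a plan to the goal exists.

```python
# Toy sketch of the scenario above; all names are hypothetical.

facts = {("over", "bottle", "table")}        # long-term knowledge

def coord_rule(facts):
    """coord-rule: if x is at coord c and y is over x, y is at c."""
    new = set()
    for f in facts:
        if f[0] == "at":
            _, x, c = f
            for g in facts:
                if g[0] == "over" and g[2] == x:
                    new.add(("at", g[1], c))
    return new

def plan(facts, goal_obj):
    """Backward step for the grab-rule: grabbing needs coordinates."""
    for f in facts:
        if f[0] == "at" and f[1] == goal_obj:
            return [("go-to", f[2]), ("grab", goal_obj)]
    return None                              # knowledge is missing

assert plan(facts, "bottle") is None         # backward chaining fails
facts.add(("at", "table", (1, 1, 1)))        # vision sensor update
facts |= coord_rule(facts)                   # forward chaining
print(plan(facts, "bottle"))
# [('go-to', (1, 1, 1)), ('grab', 'bottle')]
```

The point of the sketch is the ordering: no amount of backward chaining succeeds until perception has written the missing fact into the same store the planner reads.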
> > > >
> > > > This is a more-or-less textbook robotics homework assignment.
> > > > It has certainly been solved in many different ways by many
> > > > different people using many different technologies, over the
> > > > last 40-60 years. Algorithms like A-star search are one of
> > > > the research results of trying to solve the above. The
> > > > AtomSpace would be a horrible technology to solve the above
> > > > problem; it's too slow, too bulky, too complicated.
> > > >
> > > > The chaining steps can be called "inference", but it is
> > > > inference devoid of natural language, devoid of "true
> > > > understanding". My goal is to have a conversation with the
> > > > robot:
> > > >
> > > > "What do you see?"
> > > > "A bottle"
> > > > "where is it?"
> > > > "on the table"
> > > > "can you reach it?"
> > > > "no"
> > > > "could you reach it if you move to a different place?"
> > > > "yes"
> > > > "where would you move?"
> > > > "closer to the bottle"
> > > > "can you please move closer to the bottle?"
> > > > (robot moves)
> > > >
> > > >
> > > > This is now clear to me, but why natural language?
> > > > If I didn't want interactions with humans, could I do it
> > > > differently? A certain variation of the sensor values already
> > > > represents "the forward movement"; I do not need to associate
> > > > a name with it if I don't speak, and for the Atom "bottle" I
> > > > could use its ID instead.
> > > > I don't understand why removing natural language implies
> > > > having an inference devoid of "true understanding".
> > > >
> > > > Stupid example: if I speak Italian with a Frenchman, neither
> > > > of us understands the other. But a bottle remains a bottle
> > > > for both of us, and if I offer him my hand he will probably
> > > > shake it... or he will leave without saying goodbye.
> > > >
> > > > I'm probably missing something big, but until I bang my head
> > > > against it, I won't see it.
> > > >
> > > >
> > > > This can be solved by carefully hand-crafting a chatbot dialog
> > > > tree. (The ghost chatbot system in opencog was designed to
> > > > allow such dialog trees to be created.) Over the decades, many
> > > > chatbots have been written. Again, there are common problems:
> > > >
> > > > -- the text is hard-coded, and not linguistic. Minor changes
> > > > in wording cause the chatbot to get confused.
> > > > -- there is no world-model, or it is ad hoc and scattered over
> > > > many places.
> > > > -- no ability to perform reasoning.
> > > > -- no memory of the dialog ("what were we talking about?").
> > > > Well, chatbots do have a one-word "topic" variable, so the
> > > > chatbot can answer "we are talking about baseball", but that's
> > > > it. There is no "world model" of the conversation, and no
> > > > "world model" of who the conversation was with ("On Sunday, I
> > > > talked to John about a bottle on a table and how to grasp it").
> > > >
> > > > Note that ghost has all of the above problems. It's not
> > > > linguistic, it has no world-model, it has no defined
> > > > representation that can be reasoned over, and it has no
> > > > memory.
> > > >
> > > > 20 years ago, it was hard to build a robot that could grasp a
> > > > bottle. It was hard to create a good chatbot.
> > > >
> > > > What is the state of the art, today? Well, Tesla has
> > > > self-driving cars, and Amazon and Apple have chatbots that
> > > > are very sophisticated. There is no open source for any of
> > > > this, and there are no open standards, so if you are a
> > > > university grad student (or a university professor) it is
> > > > still very, very hard to build a robot that can grasp a
> > > > bottle, or a robot that you can talk to. And yet, these basic
> > > > tasks have become "engineering"; they are no longer
> > > > "science". The science resides at a more abstract level.
> > > >
> > > > --linas
> > > >
> > > >
> > > > I find the abstract level incredible, both in terms of beauty
> > > > and difficulty!
> > > >
> > > > Michele
> > > >
> > 

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/a2120141-b415-41f7-8b29-c7cba23db310n%40googlegroups.com.
