Hi Nil, sorry for the late response; I'm drowning in email.

AsyncLaunchLink already exists; it's called ParallelLink and JoinLink:

http://wiki.opencog.org/w/JoinLink
http://wiki.opencog.org/wikihome/index.php/ParallelLink
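
Roughly, per those wiki pages: ParallelLink launches each of its
subexpressions in its own thread and returns immediately, while
JoinLink launches them and then waits for all of them to finish.
A minimal sketch (the predicate names are invented for illustration):

   ;; Fire and forget: both actions start in their own threads,
   ;; and evaluation returns right away.
   (Parallel
      (DefinedPredicate "raise arm")
      (DefinedPredicate "say hello"))

   ;; Same, but evaluation blocks until both actions have completed.
   (Join
      (DefinedPredicate "raise arm")
      (DefinedPredicate "say hello"))

So the "launch asynchronously and report only launch success"
semantics you describe below is what ParallelLink already provides.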

--linas


On Fri, Jun 10, 2016 at 4:39 AM, Nil Geisweiller <[email protected]>
wrote:

> Hi Linas,
>
> On 06/10/2016 12:10 PM, Linas Vepstas wrote:
>
>>         A lesser issue is that SequentialAnd proceeds to the next step
>>         only if
>>         the previous step returned "true". What is the truth value of a
>>         schema?
>>
>>
>>     The truth value of a schema would be how much the universe inherits
>>     from it. However the truth value of a certain ExecutionLink is
>>     whether such inputs return such outputs.
>>
>>
>> Sure. Everything in OpenCog is a Markov process, according to the theory.
>> But in practice, such general advice isn't useful.  Here is an explicit
>> example of where things are today:
>>
>> https://github.com/opencog/ros-behavior-scripting/blob/master/src/behavior.scm#L251
>>
>> Yes, the above should be split up into multiple, distinct OpenPsi rules,
>> but even after that happens, there will still need to be short sequences
>> of imperatives written in atomese.  Everything in that file is an
>> example of the general issue -- currently, actions are de facto
>> implemented as predicates, and NOT as schema, and changing them to
>> Schema is confusing and opaque.  It breaks C++ code. It's not clear how
>> to fix the C++ code so that it will work with Schema, and not break
>> other things (e.g. the pattern matcher, which treats schema and
>> predicates very differently)
>>
>
> It's OK that they are predicates and not schema, and it's OK that they
> occur over a certain duration and have conditionals, etc.
>
> What ExecutionLink could mean in that case is that you trigger the
> action asynchronously; the return value would indicate that you
> successfully launched the action, independent of the end result. Then the time lag
> between the action being triggered and the goal would be indicated in the
> OpenPsi rule. For instance you may have the following OpenPsi rule
>
> PredictiveImplication <0.7 0.8>
>   TypedVariable
>     Variable "X"
>     Type "ConceptNode"
>   TimeInterval
>     TimeNode "1s"
>     TimeNode "20s"
>   And
>     Evaluation
>       Predicate "face-in-front-of-me"
>       Variable "X"
>     Execution
>       DefinedPredicate "Interact with face"
>       Variable "X"
>       Concept "async-action-successfully-launched"
>   <goal>
>
> Maybe we could wrap that predicate into some
>
> ASyncLaunchLink or something...
>
> Nil
>
>
>
>> I mean, we can change things, but let's not be flip: it's a lot of work,
>> it impacts a lot of subsystems, and it's not easy work.
>>
>>
>>         Yes,. There's a blurry boundary between imperative code written in
>>         python, to send ROS messages, and imperative sequences that more
>>         naturally fit in the atomspace.  For example: if someone left
>> the
>>         room, then glance at where they were last seen, then clear the
>>         face-visibility flag, and then update the room state. These
>>         three are
>>         currently done in atomese, and they are "naturally" atomese, since
>>         visibility and room state are in the atomspace.
>>
>>
>>     I don't know these parts, but as far as Atomese is concerned the
>>     boundary between imperative and declarative is pretty clear, use
>>     ExecutionLink for declarative, use ExecutionOutputLink for imperative.
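>>
>>     For instance (the names here are invented): an ExecutionLink is
>>     just a declarative record that a schema, applied to some inputs,
>>     yields some output,
>>
>>         Execution
>>             Schema "plus"
>>             List (Number 2) (Number 2)
>>             Number 4
>>
>>     whereas an ExecutionOutputLink, when executed, actually invokes
>>     the grounded schema on its arguments:
>>
>>         ExecutionOutput
>>             GroundedSchema "scm: my-add"  ; hypothetical procedure
>>             List (Number 2) (Number 2)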
>>
>>
>> Again, ponder the contents of
>>
>> https://github.com/opencog/ros-behavior-scripting/blob/master/src/behavior.scm
>> and you'll see what the issue is.
>>
>>
>>
>>
>>              As your above comments indirectly suggest, a SequentialAND
>>         of a series
>>              of ExecutionOutputLinks isn't really a predicate in the
>>              straightforward sense; I think it's got to be treated as
>>         effectively a
>>              "macro execution output link" right?
>>
>>
>>     SequentialAnd has a pretty clear definition with respect to temporal
>>     reasoning (SeqAnd A B === A occurs, then B occurs). The fact that
>>     this link is used to construct imperative action sequences I guess
>>     is OK.
>>
>>
>> What if the execution of A fails? Then what?  Do we proceed to B?  How
>> can we even know that the execution of A failed?
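>>
>> To make the problem concrete (the predicate names are invented): in
>>
>>     SequentialAnd
>>         Evaluation (GroundedPredicate "scm: do-A") (List)
>>         Evaluation (GroundedPredicate "scm: do-B") (List)
>>
>> do-B runs only if do-A evaluated to true -- so do-A would have to
>> signal its own failure by returning a false truth value, conflating
>> "the action failed" with "the predicate is false".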
>>
>>
>>
>>
>>         Yes, exactly. That's the narrow technical issue -- should a new
>>         MacroExecutionOutputLink be invented, or can we find some other
>>         way that
>>         does not require inventing yet another link type?
>>
>>
>>     Why would you need a MacroExecutionOutputLink? How would you use it
>>     exactly? Like a Lambda? Like a DefinedSchemaNode?
>>
>>
>> Well ... again, look at behavior.scm and ponder how to re-write it so it
>> uses only schema, and never predicates.
>>
>>
>>
>>         A related, more general issue is how to get the system to learn
>> new
>>         behaviors, and then remember them -- i.e. how to represent them.
>> For
>>         example:  "if someone says something nice, then raise arm, wave,
>>         and say
>>         'you're welcome' ". This can't really be two or three psi rules,
>>         it has
>>         to be one. The talking and the arm movement can be done in
>>         parallel, but
>>         the raising of the arm must occur before the waving.
>>
>>
>>     We're not there yet, but yes definitely PLN, MOSES, pattern miner
>>     would do that.
>>
>>
>> ?? We're definitely there, and have been for not quite a year now.  Look
>> at behavior.scm -- it's rife with this kind of stuff.  It's only
>> non-verbal behavior, but we're now adding verbal behaviors.   This is
>> not an academic imponderable, it's a current, real-world issue, and has
>> been for a while.
>>
>>
>>
>>         Clearly, we need such compound actions, but it's not clear how to
>>         represent them in such a way that either PLN or moses could do
>>         something
>>         useful.   A slightly unrealistic example: should one wave hello,
>>         while
>>         brandishing a knife? How, exactly, would that work?
>>
>>
>>     If you have rules expressing stuff about waving a knife, then PLN
>>     can use that to estimate the likelihood of something bad happening
>>     if such a composite action is taken. But again, we're not there yet (BC
>>     needs to be completed first, I'm experimenting with FC inference on
>>     NLP so I'm not working on the BC ATM).
>>
>>
>> OK, so this can be deferred for a while.
>>
>>
>>              If you want the rule to look in a certain place for a
>>         certain Atom,
>>              can't you just specify the Atom's location explicitly in
>>         the predicate
>>              constructs used in the context part of the Psi Implication
>>         rule?
>>
>>
>>         Sure, but it's computationally infeasible to run 200K rule
>>         evaluations
>>         per conversational turn.  Based on recent experience, those 200K
>>         evaluations took 15 minutes on my admittedly under-powered cheapo
>>         laptop.  You can't have a conversation if each turn takes 15
>>         minutes.
>>
>>
>>     But wouldn't that be the job of ECAN? Only rules in the attentional
>>     span (getting there via Hebbian or semantic relationships with
>>     contexts, etc.) would need to be taken into account.
>>
>>
>> Ehh? How can ECAN possibly know which of the 200K rules should be in the
>> attention span? Below is an example, from AIML; there are 200K more of
>> these roughly similar, many with variables in them.  The fact that
>> they're AIML is a red herring -- they could be any kind of rules.  How
>> can ECAN know that this is the rule, as opposed to some other one?
>>
>> (psi-rule-nocheck
>>     ; context
>>     (list (AndLink
>>        (Evaluation
>>           (Predicate "*-AIML-pattern-*")
>>           (ListLink
>>              (Word "i")
>>              (Word "am")
>>              (Word "an")
>>              (Word "astronaut")
>>           ))
>>        ; Context with topic!
>>        (Evaluation
>>           (Predicate "*-AIML-topic-*")
>>           (ListLink
>>              (Word "astronaut")
>>           ))
>>     )) ;TEMPLATECODE
>>
>>     ; action
>>     (ListLink
>>        (Word "what")
>>        (Word "missions")
>>        (Word "have")
>>        (Word "you")
>>        (Word "been")
>>        (Word "on?")
>>        (ExecutionOutput
>>           (DefinedSchema "AIML-tag think")
>>           (ListLink
>>              (ListLink
>>                 (ExecutionOutput
>>                    (DefinedSchema "AIML-tag set")
>>                    (ListLink
>>                       (Concept "job")
>>                       (ListLink
>>                          (ExecutionOutput
>>                             (DefinedSchema "AIML-tag set")
>>                             (ListLink
>>                                (Concept "topic")
>>                                (ListLink
>>                                   (Word "astronaut")
>>                             )))
>>                    )))
>>           )))
>>     )
>>     (Concept "AIML chat subsystem goal")
>>     (stv 1 0.555555555555555)
>>     (psi-demand "AIML chat demand" 0.97)
>> )
>>
>>
>> --linas
>>
>>
>>     Nil
>>
>>
>>
>>
>>              Or are you looking for some sort of Atomese "library
>>         function" that
>>              makes this concise and elegant, since it has to be done
>>         over and over
>>              again... I guess? ...
>>
>>
>>         No, this is purely a computational performance thing. The goal
>>         is to cut
>>         down the number of psi rules by many orders of magnitude, before
>> one
>> even begins to evaluate them.  In the good old days, this was
>>         called the
>>         "frame problem".  In modern times, all AIML engines solve this
>>         by using
>>         a Trie, and OpenCog solves this by using a DualLink.  But in
>> either
>>         case, this can be applied only to a single chunk of context: the
>>         current
>>         sentence.  It's now time to generalize this: handle not just
>>         the current input sentence, but state in general.
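>>
>>         For instance, for the current sentence, something like
>>
>>            (cog-execute!
>>               (Dual
>>                  (List
>>                     (Word "i") (Word "am") (Word "an") (Word "astronaut"))))
>>
>>         returns just the stored patterns that this sentence can
>>         ground, instead of forcing an evaluation of all 200K rules.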
>>
>>
>>              > Thus, I'm thinking that every part of a Psi rule context
>>         should have
>>              > associated with it some method, some means of localizing
>>         where this context
>>              > is stored, or provide some way of fetching it -- this can
>>         then be used to
>>              > cut down on the search for applicable Psi rules.
>>
>>              I see the need, but I wonder if the localization can just
>>         be done
>>              explicitly within the context predicates .. so that the PM
>>         will then
>>              take account of it automatically...
>>
>>
>>         Well, some sort of explicit syntax is needed that allows this to
>>         happen.
>>
>>         --linas
>>
>>
>>
>>              > (How this could interact with PLN and/or moses is unclear)
>>              >
>>
>>              Hmm, well if the localization is just some logical Atoms,
>>         then PLN can
>>              leverage it explicitly...
>>
>>              ben
>>
>>
>>
>>
