Patrick, Thanks a lot; I will add this to the notes document. I was 
actually thinking of asking you for more details; I should have. I suppose 
I felt the generic summary I copied was a good description for getting a 
"feel" for the way NARS handles time (my notes were kind of uneven, and the 
topic really deserves proper research).

The folder with all my materials from the "time" event is here 
--> 
https://drive.google.com/drive/folders/1HijCci1_USpw6X8S6F5I-Skah_TQkf2-?usp=sharing

There is a second "time" event, by the Philosophy Club, that I'm not 
linking from our AGI forum, but here it is in case anyone is interested in 
some Sunday armchair philosophizing. It's a discussion of the Bergson vs. 
Einstein debate, which will probably be interesting (philosophy vs. physics):
https://www.meetup.com/The-Philosophy-Club/events/283106532/ 

On Thursday, February 17, 2022 at 7:20:59 AM UTC-8 [email protected] wrote:

> Hi Mike!
>
> This is a valuable summary, thank you! The NARS descriptions regarding 
> time in your document are still valid; my only critique is that they are 
> quite generic. Over the years we have filled in many details, which we are 
> happy to share with you if you want. We have also implemented the principles 
> in an efficient way, and demonstrated that they work well in rich streams of 
> events, such as those necessary to control a robot with multiple sensor 
> modalities. Getting this right was pretty much our main focus between 2015 
> and 2020, together with other sensorimotor aspects and attention allocation, 
> which depends highly on timing. Since we are very happy with the outcome, we 
> have moved on to other issues and to the strengths of NAL-based declarative 
> reasoning.
>
> Here is a very simple example of an a-b event sequence using "OpenNARS for 
> Applications" ( https://github.com/opennars/OpenNARS-for-Applications ):
>
>
>
> Input: a. :|: occurrenceTime=1 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
> Input: b. :|: occurrenceTime=2 Priority=1.000000 Truth: frequency=1.000000, confidence=0.900000
> Derived: dt=1.000000 <a =/> b>. Priority=0.348301 Truth: frequency=1.000000, confidence=0.282230
> As you see, an occurrence-time value is assigned to each event; this is 
> how before/after can be decided, and how the dt (time delta) of the induced 
> hypothesis is calculated.
> The time delta is required to decide the occurrence time of a prediction 
> of *b*. Additionally, if *b* does not happen after *a*, negative evidence 
> is attributed to the hypothesis *<a =/> b>*.
> Additionally, a projection formula is used to decay the confidence of a 
> conclusion depending on the time distance between the premises; however, 
> patterns which span larger time distances can still become more confident 
> than short-distance ones via revision (repeated occurrence). Projection 
> also makes it possible to revise hypotheses with varying time deltas, so 
> the system can learn proper timing expectations (since timing itself is 
> uncertain) and handle timing variations in decision making. 
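To make the mechanism concrete, here is a toy Python sketch of occurrence-time bookkeeping, temporal induction, and confidence projection. The decay constant and the induction penalty are illustrative assumptions, not ONA's actual values or code:

```python
# Toy sketch of NARS-style temporal induction and projection.
# PROJECTION_DECAY and induction_penalty are illustrative assumptions.

PROJECTION_DECAY = 0.8  # assumed per-time-step confidence decay factor

def temporal_induction(t_a, t_b, conf=0.9, induction_penalty=0.31):
    """Induce <a =/> b> from event a at time t_a followed by b at t_b."""
    assert t_b > t_a, "b must occur after a"
    dt = t_b - t_a
    # induced hypotheses start with reduced confidence (illustrative penalty)
    return {"dt": dt, "frequency": 1.0, "confidence": conf * induction_penalty}

def project(confidence, t_event, t_target):
    """Decay confidence exponentially with temporal distance."""
    return confidence * PROJECTION_DECAY ** abs(t_target - t_event)

hyp = temporal_induction(1, 2)
print(hyp["dt"])                                      # 1
print(round(project(0.9, t_event=2, t_target=5), 3))  # 0.461
```

The dt stored with the hypothesis lets the system predict *when* b should occur, and projection lets evidence from different time distances be weighed before revision.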
>
> In the reasoning literature there are many ideas about how to formalize 
> and handle time, but most of them wouldn't work for AGI, or aren't practical 
> for various reasons, such as not being able to handle timing variations and 
> uncertainty in timing in general, even though these are crucial. As a rule 
> of thumb, I suggest being skeptical about anything which only exists in 
> papers; it's far easier to describe something than to describe something 
> which could really work, and then to make it work reliably. I'm convinced 
> that timing is a key aspect of AGI, as I think you have rightly identified; 
> it's one of the things which have to be handled properly from the beginning 
> and are hard to add to a system later. Humans' attention allocation and 
> decision making are strongly bound to the current moment, though our 
> decisions are not fully determined by the current moment; they are also 
> strongly controlled by our intentions and by previous results of reasoning.
>
> Best regards,
> Patrick
>
>
> On Sunday, February 13, 2022 at 7:49:36 AM UTC+1 Mike Archbold wrote:
>
>> On Friday, February 11, 2022 at 10:28:03 PM UTC-8 linas wrote:
>>
>>> Hi,
>>>
>>> Yeah, I understand. I'm just doing a sales job here. Besides the systems 
>>> you mention, there are at least another dozen or two, at various 
>>> Universities, in assorted robotics and AI labs. And more recently, Lord 
>>> knows how many dozens, if not hundreds, being created in big companies and 
>>> small startups. The overwhelming modus operandi is that no one 
>>> collaborates with anyone; everyone goes it alone, re-inventing the same 
>>> stuff, rediscovering the same ideas, over and over. 
>>>
>>> I'm doing what little I can to promote collaboration, to get everyone 
>>> working on a common, shared software base and infrastructure. And part of 
>>> that, in this case, is trying to sell you on the wonders and miracles that 
>>> await in opencog/atomspace-land.  I don't expect you or anyone in your 
>>> audience to roll up their sleeves and start coding or anything like that, 
>>> but by pumping it up, doing a hard-sell thing, I'm hoping to spread the 
>>> word, to say "Hey everyone! Let's at least collaborate on a common generic 
>>> infrastructure, something that can benefit everyone." Get that message 
>>> out. You are just today's target, that's all.
>>>
>>> -- Linas
>>>
>>> On Fri, Feb 11, 2022 at 11:26 PM Mike Archbold <[email protected]> 
>>> wrote:
>>>
>>>> Thanks... I will update the notes. The notes are VERY general; my 
>>>> intent was just to get a "feel," not to criticize or draw comparisons... 
>>>> the coverage is uneven. So, I call them "notes," and they're not proper 
>>>> research... I bolded ACT-R only for emphasis on what sounds like a key 
>>>> feature of the design, not even related to time. I don't know much about 
>>>> ACT-R, which is why I stuck a long description in there. It sounds like 
>>>> you've done a lot of great work on OpenCog.
>>>>
>>>> On Friday, February 11, 2022 at 7:48:38 PM UTC-8 linas wrote:
>>>>
>>>>> And one final hopefully short comment:
>>>>>
>>>>> > NARS, SOAR, ACT-R
>>>>>
>>>>> I want to draw a few more distinctions.  First, "classic" OpenCog is 
>>>>> (was?) a theory of mind or a theory of cognition (a "cognitive model"?), 
>>>>> having more than a few similarities to the above systems.  This "classic" 
>>>>> OpenCog is described in several books by Goertzel et al., and in 
>>>>> assorted papers, conference proceedings, etc. Assorted variants of it 
>>>>> were built.
>>>>>
>>>>> All of these incarnations of OpenCog were built on a generic 
>>>>> infrastructure, the "Atomspace". The AtomSpace is meant to provide an 
>>>>> "easy-to-use" base on which different cognitive theories can be created, 
>>>>> explored, developed.  It tries to be impartial, providing a collection of 
>>>>> tinker-toy parts which you can assemble yourself, or extend, implement, 
>>>>> re-implement as needed to pursue any one particular theory or vision of 
>>>>> what cognition is. 
>>>>>
>>>>> Because we've turned the crank on this 3 or 4 or 5 times, the lower 
>>>>> layers have gotten fairly generic, and are debugged, stable, 
>>>>> performance-optimized and can support the weight of more complex devices 
>>>>> to 
>>>>> be built on top of them. The exploration of higher layers continues 
>>>>> unabated.  Most of what you abstracted about NARS, SOAR, ACT-R would 
>>>>> count 
>>>>> as "higher layers". 
>>>>>
>>>>> To rephrase: the AtomSpace allows you to "roll your own" temporal 
>>>>> logic. I don't care; have at it, use your favorite theory. You mention 
>>>>> ACT-R as having declarative and procedural memory, and ACT-R being a 
>>>>> production system. Sure, we can do all three styles in the AtomSpace, 
>>>>> simultaneously, on the same data. I don't care: do it however you want. 
>>>>> You bolded: "At each moment, an internal pattern matcher [in ACT-R] 
>>>>> searches for a production that matches the current state of the buffers. 
>>>>> Only one such production can be executed at a given moment." By 
>>>>> contrast, in the AtomSpace, you can run productions one at a time, or in 
>>>>> parallel, or distributed across the network. Don't care. Or, instead of 
>>>>> productions, you can use term rewriting or graph rewriting; don't care. 
>>>>> The toolset is there. 
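For readers unfamiliar with the one-production-per-cycle behavior being contrasted here, a minimal toy production loop looks like this. The state, rules, and first-match-wins policy are all hypothetical, purely for illustration:

```python
# Toy production-system loop: each cycle, scan the productions for one whose
# condition matches the current state, fire exactly one, and repeat until
# quiescence. Rules and state are hypothetical, for illustration only.

state = {"hungry": True, "has_food": False}

productions = [
    (lambda s: s["hungry"] and not s["has_food"],
     lambda s: s.update(has_food=True)),          # rule 1: get food
    (lambda s: s["hungry"] and s["has_food"],
     lambda s: s.update(hungry=False)),           # rule 2: eat
]

def step(state):
    for condition, action in productions:
        if condition(state):       # first match wins: one firing per cycle
            action(state)
            return True
    return False                   # quiescence: no production matched

while step(state):
    pass
print(state)   # {'hungry': False, 'has_food': True}
```

Running productions in parallel or distributed, as described above, would amount to replacing this single serial loop with many concurrent matchers over shared state.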
>>>>>
>>>>> -- Linas
>>>>>
>>>>> On Fri, Feb 11, 2022 at 7:50 PM Linas Vepstas <[email protected]> 
>>>>> wrote:
>>>>>
>>>>>> Hi Mike,
>>>>>>
>>>>>> > looking like CLIPS a bit to me.
>>>>>>
>>>>>>
>>>>>> And not by accident. There are, however, some deep and fundamental 
>>>>>> differences. These are:
>>>>>>
>>>>>>
>>>>>> * The "rules" are kept in a graph database that can be saved to disk 
>>>>>> in several formats, saved to SQL, no-SQL, and transmitted by network to 
>>>>>> other network nodes.
>>>>>>
>>>>>>
>>>>>> * The graph store is more generic than just "rules", you can store 
>>>>>> anything you want in it. It's a generalized KR system. If you don't like 
>>>>>> the default KR style, you can invent your own: all knowledge graphs are 
>>>>>> not 
>>>>>> just static graphs, but are also executable, and you get to pick how 
>>>>>> that's 
>>>>>> done. (OK, so if you invent your own, it might not work so well with 
>>>>>> PLN, 
>>>>>> and whatever temporal subsystem gets created. So compatibility is your 
>>>>>> responsibility, too.)
>>>>>>
>>>>>>
>>>>>> * Unlike CLIPS (or Prolog), rules/expressions can have more than just 
>>>>>> true/false values. They can be given floating-point valuations, for 
>>>>>> example Bayesian probabilities or fuzzy-logic percentages. They can be 
>>>>>> given a vector of floats, e.g. two numbers: probability & confidence. 
>>>>>> Or a vector of 653 floats from some neural net. Or a vector of strings. 
>>>>>> Or a nested tree of floats and strings. Or whatever. Each atom carries 
>>>>>> a generic key-value store of valuations, not just a "true/false" flag.
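The valuation idea can be sketched in a few lines of Python. This mimics the concept only; it is not the real AtomSpace API, and the class, keys, and values below are invented for illustration:

```python
# Conceptual sketch: graph atoms carrying arbitrary per-key valuations.
# This is NOT the real AtomSpace API; names and keys are hypothetical.

class Atom:
    def __init__(self, name):
        self.name = name
        self.values = {}              # generic key -> value store

    def set_value(self, key, value):
        self.values[key] = value

    def get_value(self, key):
        return self.values.get(key)   # None if the key was never set

a = Atom("inheritance:cat->animal")
a.set_value("simple-truth", (0.95, 0.9))   # probability & confidence pair
a.set_value("embedding", [0.1] * 653)      # vector from some neural net
a.set_value("tags", ["from-corpus", "reviewed"])

print(a.get_value("simple-truth"))   # (0.95, 0.9)
```

The point is that the truth-value is just one key among many; different subsystems can attach and read their own valuations on the same atom without interfering.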
>>>>>>
>>>>>>
>>>>>> The default PLN rules that are CLIPS-like use a blend of probability 
>>>>>> theory and fuzzy logic. But again, you don't have to use these, you can 
>>>>>> invent your own.
>>>>>>
>>>>>>
>>>>>> -- Linas
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Feb 11, 2022 at 6:02 PM Mike Archbold <[email protected]> 
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks everybody for your comments. There is a time-philosophy 
>>>>>>> meetup event this Sunday, and I put together some very general time 
>>>>>>> notes:  
>>>>>>> https://docs.google.com/document/d/1_PLknbLKL7ZGEupy6tBQR-J5rkFgt3s-dOHKn4LZKIU/edit?usp=sharing
>>>>>>> Please let me know if you have further comments. I appreciate your help!
>>>>>>> Mike Archbold
>>>>>>>
>>>>>>> -- 
>>>>>>> You received this message because you are subscribed to the Google 
>>>>>>> Groups "opencog" group.
>>>>>>> To unsubscribe from this group and stop receiving emails from it, 
>>>>>>> send an email to [email protected].
>>>>>>> To view this discussion on the web visit 
>>>>>>> https://groups.google.com/d/msgid/opencog/55185ace-56c1-4564-8b4a-4d5c175379c9n%40googlegroups.com
>>>>>>>  
>>>>>>> <https://groups.google.com/d/msgid/opencog/55185ace-56c1-4564-8b4a-4d5c175379c9n%40googlegroups.com?utm_medium=email&utm_source=footer>
>>>>>>> .
>>>>>>>
>>>>>>
>>>>>>
>>>>>> -- 
>>>>>> Patrick: Are they laughing at us?
>>>>>> Sponge Bob: No, Patrick, they are laughing next to us.
>>>>>>  
>>>>>>
>>>>>>
>>>>>
>>>
>>>
