Re: [opencog-dev] About Attention Values and Truth Values
Hmm! Yes, I am getting it now.

Thanks and regards,
Vishnu
Re: [opencog-dev] About Attention Values and Truth Values
That makes the problem harder. You still have to somehow deal with different word-senses for "apple", and in addition, you also need to create a model of the mental state of id1. So, if id1 is a child, the word-senses for "apple" and "sweet" are probably different than if id1 is an iPhone fanboi. This opens a can of worms: what are id1's beliefs and world-view?

(And it is context-dependent: did id1 say that while standing in front of a store-front selling Apple computer products, or while standing in front of a grocery display?)

I think this is "solvable", but it's at/past the cutting edge of what anyone else is doing with OpenCog. I've been trying to work on "mental models", but it's currently hard.

--linas
Re: [opencog-dev] About Attention Values and Truth Values
I also had another idea, of coupling the sentences with their id. E.g., why can't I give sentences like "Apples are sweet, said by id1" and "Farmers are starving, said by id2", so that I would know which sentence has which id? What do you say?

Thanks,
Vishnu

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to opencog+unsubscr...@googlegroups.com.
To post to this group, send email to opencog@googlegroups.com.
Visit this group at https://groups.google.com/group/opencog.
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/1333af72-6237-4499-a060-0db4a88b08b0%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Re: [opencog-dev] About Attention Values and Truth Values
A better design would be to explicitly acknowledge that words have meanings. The way that this is currently done looks roughly like this:

   (EvaluationLink
      (PredicateNode "is")
      (ListLink
         (ConceptNode "apple@meaning-42")
         (ConceptNode "fruit@meaning-66")))

I hope the above is "obvious": the 42nd kind of meaning of the word "apple" is a kind of "fruit", where by "fruit" we mean the 66th entry in Webster's dictionary.

   (ReferenceLink
      (ConceptNode "apple@meaning-42")
      (WordNode "apple"))

That tells you the actual word that gets used for meaning-42. This is a lexical function: https://en.wikipedia.org/wiki/Lexical_function

   (WordInstanceLink
      (SentenceNode "id1")
      (WordInstanceNode "apple@bf71826c-487e-42df-a941-0ecd3c942a76"))

This tells you that the word "apple" occurred in sentence id1.

   (ReferenceLink
      (WordInstanceNode "apple@bf71826c-487e-42df-a941-0ecd3c942a76")
      (ConceptNode "apple@meaning-42"))

This tells you that the word "apple" in sentence id1 actually corresponds to meaning 42.

See? No ContextLink at all.

The above oversimplifies things a little bit. Some of the ReferenceLinks should probably be EvaluationLinks. The lexical functions need to be improved, a lot. The current output is documented here: http://wiki.opencog.org/w/RelEx_OpenCog_format but it could be overhauled and improved; it's not perfect.

I believe that the above should work well with PLN, but that remains to be seen: again, Nil is working on this now.

--linas

On Mon, Nov 14, 2016 at 9:34 AM, Vishnu Priya wrote:

> Hey Linas!
>
> Thanks for the reply. It's ok, totally understandable!
>
> Yeah, just read about ContextLink on the wiki.
>
> I have a scenario where I have sentences that I want to give to the NLP pipeline. Along with the sentences, I also have an attribute called id, like a reference for the sentence. Each sentence is associated with an identifier. For me, it would be useful to have the sentences parsed along with their id. Later, say I stimulate and get STI; whatever I do, finally I should know to what id an atom belongs.
>
> So I thought, with something like the below, I might achieve that: "apple is fruit in the context of id1".
>
>    (EvaluationLink
>       (ContextLink id1
>          (PredicateNode "is")
>          (ListLink
>             (ConceptNode "apple")
>             (ConceptNode "fruit"))))
>
> But I don't know how to input my sentences along with their identifiers. Is it possible somehow to incorporate identifiers, or is it totally not doable?
>
> --vishnu
>
> On Friday, 11 November 2016 02:30:44 UTC+1, linas wrote:
>
>> Hi,
>>
>> Sorry, just now recovering from system outages and an email overload.
>>
>> ContextLink and how to use it is documented on the wiki. Currently it is not used very much, or at all.
>>
>> ContextLinks only make sense once you know how to assign meaning to things -- syntax parsing of sentences is far too low-level for this, because you don't yet know what the word "apple" is.
>>
>> --linas
>>
>> On Fri, Oct 21, 2016 at 10:27 AM, Vishnu Priya wrote:
>>
>>> Hey Linas,
>>>
>>> I would like to know how to use ContextLink.
>>>
>>> - The Apple is red in color.
>>> - The Headquarters of apple is in California.
>>>
>>> Each and every sentence of mine has a certain context word. I want the former sentence to be parsed along with ContextLink "fruit" and the latter as "company", so that later I can identify which atom belongs to which context.
>>> Should I make changes at the parser level? What should I do?
>>>
>>> Cheers,
>>> Vishnu
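The four link types in the message above compose into a lookup chain: sentence id -> word instance -> meaning -> word. Here is a toy Python sketch of that chain, with plain dicts standing in for Atoms; this is not the AtomSpace API, just the shape of the data, using the names and UUID from the message:

```python
# ReferenceLink: meaning -> word used to express it (the "lexical function")
reference_word = {"apple@meaning-42": "apple"}

# WordInstanceLink: sentence id -> word instances occurring in it
word_instances = {"id1": ["apple@bf71826c-487e-42df-a941-0ecd3c942a76"]}

# ReferenceLink: word instance -> the word-sense it resolved to
instance_meaning = {"apple@bf71826c-487e-42df-a941-0ecd3c942a76": "apple@meaning-42"}

def meanings_in_sentence(sent_id):
    """Resolve every word instance in a sentence to its word-sense."""
    return [instance_meaning[w] for w in word_instances.get(sent_id, [])]

print(meanings_in_sentence("id1"))  # -> ['apple@meaning-42']
```

Note that the sentence id is recoverable from any atom in the chain by walking it backwards, which is exactly why no ContextLink is needed.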
Re: [opencog-dev] About Attention Values and Truth Values
On Mon, Oct 17, 2016 at 5:46 AM, Vishnu Priya wrote:

> Thanks, Linas, for the reply.
>
> I would like to know some more info about truth values.
> How is an atom's truth value updated based on new observations?

They are not. Only PLN updates TVs, and some other specialized subsystems that you are not using. Currently, TV update is up to the user to do as they please. That's mostly because we don't know of, or have, any one-size-fits-all algos for this.

> How can truth values of certain atoms in a particular context change a lot? (I came across this line in the book: "*if truth values of a certain sort of atom in a certain context change a lot, then the confidence decay rate of the atoms of that sort should be increased.*")
> Could you please explain with a few example sentences.
>
> Is the ConfidenceDecay MindAgent already implemented?

No.

> If so, then I assume that confidence-decaying predicates, which are important atoms but are unconfident, are given STI, so as to make it likely that they may be used for inference, and this is how atoms become important.

We will find out. Nil is working on inference right now ... PLN inference. You are welcome to create your own inference rules that do something completely different -- the rule engine doesn't care about what rules you create. It will just apply the rules and twiddle the TVs according to your desires.

--linas
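The book's rule quoted above (increase the confidence-decay rate for a sort of atom whose truth values fluctuate a lot) could be sketched like this. As stated in the thread, no such agent is implemented; the functions and the variance-based formula below are entirely invented for illustration:

```python
import statistics

def decay_rate(strength_history, base_rate=0.01, gain=0.5):
    """Heuristic: the more an atom-sort's TV strength has fluctuated,
    the faster confidence in old observations should decay.
    Hypothetical formula, not an OpenCog API."""
    if len(strength_history) < 2:
        return base_rate
    return base_rate + gain * statistics.pstdev(strength_history)

def decay_confidence(confidence, rate):
    # One step of exponential decay of confidence toward zero.
    return confidence * (1.0 - rate)

stable = decay_rate([0.9, 0.9, 0.9, 0.9])    # zero variance -> slow decay
volatile = decay_rate([0.1, 0.9, 0.2, 0.8])  # high variance -> fast decay
assert volatile > stable
```

The point of the sketch: "change a lot" is operationalized as the standard deviation of recent strength values, per sort of atom, and that statistic scales the per-step confidence decay.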
Re: [opencog-dev] About Attention Values and Truth Values
Thanks, Linas, for the reply.

I would like to know some more info about truth values.

How is an atom's truth value updated based on new observations?

How can truth values of certain atoms in a particular context change a lot? (I came across this line in the book: "*if truth values of a certain sort of atom in a certain context change a lot, then the confidence decay rate of the atoms of that sort should be increased.*") Could you please explain with a few example sentences.

Is the ConfidenceDecay MindAgent already implemented? If so, then I assume that confidence-decaying predicates, which are important atoms but are unconfident, are given STI, so as to make it likely that they may be used for inference, and this is how atoms become important.

regards,
--Vishnu
Re: [opencog-dev] About Attention Values and Truth Values
Well, a very olde-fashioned idea (olde in "internet time") is to use the PageRank algorithm -- https://en.wikipedia.org/wiki/PageRank -- and so you would take something like STI and diffuse it to other atoms, based on how many incoming and outgoing links there are.

ECAN does something like PageRank, but different; I don't recall what the differences are. If you don't like how it distributes the STI, you could create a different variant -- e.g. a PageRankImportanceDiffusionAgent -- and it would use the PageRank algo instead of the ECAN algo. I don't know which would give better results for you.

--linas
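A minimal sketch of the PageRank-style STI diffusion described above. This is not the actual ECAN ImportanceDiffusionAgent; it is the textbook PageRank update applied to STI values, with made-up example atoms:

```python
def diffuse_sti(sti, links, rounds=20, damping=0.85):
    """Diffuse STI along links, PageRank-style.
    `links` maps each atom to the atoms it points at."""
    total = sum(sti.values())
    n = len(sti)
    for _ in range(rounds):
        # Every atom keeps a small base share; the rest flows along links.
        new = {atom: (1 - damping) * total / n for atom in sti}
        for src, targets in links.items():
            if targets:
                share = damping * sti[src] / len(targets)
                for t in targets:
                    new[t] += share
        sti = new
    return sti

# Toy tweet graph: both candidate atoms link to "election",
# so "election" accumulates the most STI.
sti = {"Trump": 1.0, "Clinton": 1.0, "election": 1.0}
links = {"Trump": ["election"], "Clinton": ["election"], "election": []}
result = diffuse_sti(sti, links)
print(max(result, key=result.get))  # -> election
```

The damping factor plays the same role as STI "rent": it keeps importance from piling up forever in one place.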
Re: [opencog-dev] About Attention Values and Truth Values
Hey Roman,

Thanks, that helped a lot to get more insight. :-) I shall ask Misgana about stimulating atoms.

Cheers,
Vishnu
Re: [opencog-dev] About Attention Values and Truth Values
Hey Vishnu,

What you are suggesting does sound doable. In your case, you would just want to stimulate atoms every time they have been parsed by the NLP pipeline. Something like this might already exist; not sure, ask Misgana.

More generally, there are many MindAgents running in the CogServer, and every time any of those agents deems it useful, it can stimulate an atom. The actual STI value given to an atom is dynamic, based on other variables, but you can provide a factor to indicate "stimulate this atom a lot" or "only a little".

Examples of this: the PLN system successfully used an atom for a deduction it is trying to make, so it stimulates it so that similar atoms come into focus and help the continued process. Or some agents related to perception have just seen X, which corresponds to one or more atoms, so they stimulate them, as they will likely be of interest in the moment.

Now, for retrieving these values, you will probably have to write a CogServer module that implements a command that gives you the top-ranking atoms. If you want this to be done automatically, you need to have an agent (which is always part of a module).

Hope that helps.
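The two suggestions above (stimulate atoms as the NLP pipeline parses them, and a command that returns the top-ranking atoms) could look roughly like this. Real CogServer modules are written in C++ or Scheme, so this Python sketch only conveys the idea; all names are invented:

```python
import heapq

def stimulate(atom_sti, atom, amount=1.0, factor=1.0):
    """Give an atom a stimulus; `factor` scales how strongly
    ("a lot or only a little"). Invented stand-in for the real
    stimulation call."""
    atom_sti[atom] = atom_sti.get(atom, 0.0) + amount * factor

def top_sti_atoms(atom_sti, n=3):
    """The 'top ranking atoms' command, as a plain function."""
    return heapq.nlargest(n, atom_sti, key=atom_sti.get)

sti = {}
# e.g. one stimulus per appearance in parsed tweets:
for word in ["election", "election", "trump", "apple"]:
    stimulate(sti, word)
print(top_sti_atoms(sti, 1))  # -> ['election']
```

Frequently-parsed atoms end up with the highest STI, which is exactly the behaviour asked about earlier in the thread.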
Re: [opencog-dev] About Attention Values and Truth Values
Hey Roman,

Thanks for the reply :-)

> I am not sure what exactly you want to use the AttentionValues for

With attention values, I thought I could do the following: I have 24x7 tweets coming in. So I thought I can send them to the NLP pipeline and get atoms. Let's say most of the people tweet about the presidential election. I assumed that, when feeding these atoms into the AtomSpace, somehow atoms related to "Election" will get high STI, since they occur more often nowadays. Say there will be a lot of Trump and Hillary Clinton atoms. Somehow they get high STI (?!!) and I can retrieve those top-ranked atoms and their related atoms (since STI is also diffused to similar atoms). That was the idea. But I don't know whether attention values work like this. :-( What am I missing??

> Boosting STI/LTI would be done when they become relevant, i.e. they just entered the AtomSpace or NLP found them to be useful.

I could not figure out on what basis stimulus is given to an atom. In general, how do atoms become important/relevant?
Re: [opencog-dev] About Attention Values and Truth Values
Hey,

Short explanation first:

- STI: indicates how relevant this atom is to the currently running process/context.
- LTI: indicates how relevant this atom might be in future processes/contexts. (Atoms with low LTI have no future use and get deleted if the AtomSpace gets too big.)
- VLTI: a simple boolean indicating that this atom should never be deleted. (Useful for system components that are written in Atomese.)

So STI values are only ever useful at the current point in time, so storing them in a DB makes no sense. Storing only those atoms which have a high STI/LTI in the DB might be useful, but there is no code that does that currently. In my opinion, just storing the whole AtomSpace in a DB works just as well. When you load the atoms back into the AtomSpace, it might make sense to give all of them an LTI boost.

I am not sure what exactly you want to use the attention values for, but in general they are supposed to speed up other processes, like the PLN system, by restricting their search space to only atoms with a high STI. Misgana has a working but experimental implementation of this, as far as I know.

In regards to using ECAN: generate fake sentences --> feed atoms into the AtomSpace --> boost STI/LTI --> set memory capacity. I assume you got that from one of the experiments. You obviously don't want to generate fake sentences, so we can ignore that. Feeding atoms into the AtomSpace would be done by the NLP pipeline. Boosting STI/LTI would be done when they become relevant, i.e. they just entered the AtomSpace or NLP found them to be useful (again, ask Misgana about his implementation of this). Setting the memory capacity you probably don't have to do; it's just a value in the config file.

If you have some more specific questions about how ECAN works (i.e. spreading of STI, rent collection, setting of the AFB), I can answer those.

regards
/Roman
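The three attention values and the forgetting behaviour described above (delete the lowest-LTI atoms when the AtomSpace gets too big, but never delete VLTI-pinned atoms) can be sketched as follows. This is not the real ECAN code; the class, size limit, and example atoms are all invented for illustration:

```python
class AttentionBank:
    """Toy model of STI (current relevance), LTI (future relevance)
    and VLTI (never delete)."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.atoms = {}  # name -> {"sti", "lti", "vlti"}

    def add(self, name, sti=0.0, lti=0.0, vlti=False):
        self.atoms[name] = {"sti": sti, "lti": lti, "vlti": vlti}

    def forget(self):
        """If the space is too big, delete the lowest-LTI atoms,
        skipping any pinned with VLTI."""
        while len(self.atoms) > self.max_size:
            candidates = [n for n, a in self.atoms.items() if not a["vlti"]]
            if not candidates:
                break
            victim = min(candidates, key=lambda n: self.atoms[n]["lti"])
            del self.atoms[victim]

bank = AttentionBank(max_size=2)
bank.add("system-atom", vlti=True)  # Atomese system component: keep forever
bank.add("election", lti=5.0)       # likely useful again
bank.add("old-tweet", lti=0.1)      # no future use
bank.forget()
print(sorted(bank.atoms))  # -> ['election', 'system-atom']
```

The VLTI flag is what lets system components written in Atomese survive forgetting regardless of their LTI.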