On Thu, Sep 15, 2016 at 10:12 AM, <vishnupriya...@gmail.com> wrote:

>
> So should I post this segmentation fault on GitHub?
>

Sure. It would be better if you fixed it!


>
> --Thanks
> Vishnu
>
>
>
>>
>> :-(
>> OK, so .. here's the deal:
>>
>> -- Clearly, the segfault is bad, and needs to be fixed!
>>
>> -- There are two versions of the pattern miner, the one here, and the one
>> in a different (older) branch of opencog.  Shujing Ke did most of her work
>> in the older branch, and no one has ported her changes to the current
>> code.  This should also probably be done.  The older branch is here:
>> https://github.com/opencog/opencog/branches  PatternMinerEmbodiment --
>> you can see that she has made 65 updates, but that her code is 4639 commits
>> behind master!  It might be the case that her code will not segfault; no one
>> knows.
>>
>> -- It's not entirely obvious to Nil or to me that the Pattern Miner is
>> correctly written, anyway.  We need to review it.  There is a very highly
>> specialized version of a pattern miner in the language-learning code, and I
>> was planning on perhaps replacing that with a general-purpose miner, but have
>> not gotten around to it.  It's a big project.
>>
>> TL;DR: We need someone to roll up their sleeves, and take control of the
>> Pattern Miner, and fix it, advance it, improve it, etc.
>>
>>>
>>> How can I give a bunch of sentences and get R2L outputs, which in turn I
>>> can give to the pattern miner?
>>>
>>
>> Well, that is the magic question, isn't it?  I'm not sure what state the
>> pattern-miner demos and examples are in. A good place to start would be to
>> review those, and then write a new one, explicitly dealing with language
>> issues.
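>>
>> Roughly, the parse side would look something like the sketch below.  This is
>> untested, and the module path and the utility names (nlp-parse,
>> sentence-get-parses, parse-get-r2l-outputs) are from memory, so double-check
>> them against the current nlp scm code; it also assumes a RelEx server is
>> running and the R2L rules are loaded.
>>
>> (use-modules (opencog) (opencog nlp) (opencog nlp chatbot))
>>
>> ; nlp-parse sends the text through RelEx (and R2L, when the rules are
>> ; loaded) and returns the resulting SentenceNode.
>> (define sent (nlp-parse "John threw the ball."))
>>
>> ; Walk SentenceNode -> ParseNode -> R2L outputs, which is roughly the
>> ; form you'd want to hand to the pattern miner.
>> (map parse-get-r2l-outputs (sentence-get-parses sent))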
>>
>>>
>>>
>>> I also thought of a way to do this:
>>> --->  converting a bunch of sentences into cff by using "batch-process.sh",
>>> and in turn converting that into scm using ./cff-to-opencog.pl
>>>
>>>
>>
>> cff is useful only for saving some CPU time during bulk processing.
>> Right now, the system is not ready for bulk processing, so saving some CPU
>> cycles is not worth the effort.
>>
>>
>>> But it will be in the form of RelEx output.
>>> So, picking some WordInstanceNode of each sentence from the RelEx output,
>>> and doing the below to get R2L outputs:
>>>
>>> (cog-incoming-set
>>>     (car (cog-incoming-set
>>>         (ConceptNode (cog-name
>>>             (WordInstanceNode "apple@2d15518b-c626-4ce3-8e6d-ecd07d3f9e46"))))))
>>> But it would be tedious!!
>>>
>>
>> Why is that tedious?  That's more or less how you're supposed to do it:
>> it's a giant graph, you have to chase the edges of the graph to get what you
>> want.  Your code is not the most elegant way to chase through an edge, but
>> it's not atypical.  There are various InheritanceLinks, etc. in place to
>> simplify such searches.  There are also various utilities and macros for
>> some of this stuff (in the utilities.scm and nlp-utilities.scm files).
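>>
>> For example, instead of the double cog-incoming-set above, something like
>> cog-chase-link (from utilities.scm) can do the edge-chasing in one call.
>> Untested sketch, reusing the word instance from your example:
>>
>> ; Get every ConceptNode connected to the word-instance concept by an
>> ; InheritanceLink -- e.g. the general concept it inherits from in the
>> ; R2L output.
>> (cog-chase-link 'InheritanceLink 'ConceptNode
>>     (ConceptNode "apple@2d15518b-c626-4ce3-8e6d-ecd07d3f9e46"))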
>>
>> --linas
>>
