I'm quite interested in using the pattern miner for the last chapter of my 
dissertation, so I'm excited to work with it and try things out.  
However, compiling even small test knowledge bases seems to take a lot of 
memory.  For serious data sets, such as the Environment Ontology 
(https://raw.githubusercontent.com/EnvironmentOntology/envo/master/envo.obo), 
compilation always crashes.  Is there a way to compile the data in a more 
memory-efficient way, or to compile it somewhere else and upload the result?  
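For what it's worth, the OBO format itself can be processed incrementally, one [Term] stanza at a time, so the raw file never has to sit in memory all at once.  Here's a rough sketch of what I mean (plain Python, not OpenCog code — the idea is that each term could be converted and written out as it streams by; whether the pattern miner's compilation step can work that way is exactly my question):

```python
# Hedged sketch: iterate over an OBO file stanza-by-stanza with bounded
# memory.  Each [Term] stanza is yielded as a dict mapping tag -> values,
# so a converter could emit one term at a time instead of loading the
# whole ontology up front.

def iter_obo_stanzas(lines):
    """Yield each [Term] stanza as a dict of tag -> list of values."""
    stanza = None
    for raw in lines:
        line = raw.strip()
        if line == "[Term]":
            if stanza is not None:
                yield stanza          # previous term is complete
            stanza = {}
        elif stanza is not None:
            if line.startswith("[") and line.endswith("]"):
                # A non-Term stanza (e.g. [Typedef]) ends the current term.
                yield stanza
                stanza = None
            elif ":" in line:
                tag, _, value = line.partition(":")
                stanza.setdefault(tag.strip(), []).append(value.strip())
    if stanza is not None:
        yield stanza                  # final term at end of file

# Example: count terms in envo.obo without holding the file in memory.
# with open("envo.obo") as fh:
#     n_terms = sum(1 for _ in iter_obo_stanzas(fh))
```

The file handle is consumed lazily, so peak memory is roughly one stanza, regardless of ontology size.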

-- 
You received this message because you are subscribed to the Google Groups 
"opencog" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/opencog/da7f4204-8b1d-47d8-8cc1-0d05da2048e8%40googlegroups.com.