I have a large pool of 'facts', and I initially had memory problems (2 GB not being enough) trying to stuff all of them into the engine, along with other facts, and let a small set of rules process them. The rules are mostly date-based, along the lines of: "I have one of these, there is nothing earlier in history, so remove everything within 2 weeks of this 'earliest' fact."
Well.. I got OutOfMemoryErrors with 2 GB allocated to the JVM.
So.. I decided to break the pool into '50' segments with Java code beforehand. I assert one of the 50 segments' chunk of facts with definstance, do an engine.run(), then engine.undefinstance() all of those facts... and repeat the process 50 times.
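Roughly, the pre-chunking code looks like the sketch below. The partitioning part is plain, runnable Java; the Jess engine calls are only indicated in comments (they need the Jess jar on the classpath, and the exact definstance signature may differ by Jess version), so treat them as an assumption about my setup, not working code:

```java
import java.util.ArrayList;
import java.util.List;

public class FactBatcher {
    // Split the full fact pool into roughly equal segments (last one may be smaller).
    static <T> List<List<T>> partition(List<T> facts, int segments) {
        List<List<T>> chunks = new ArrayList<>();
        int size = (facts.size() + segments - 1) / segments;  // ceiling division
        for (int i = 0; i < facts.size(); i += size) {
            chunks.add(new ArrayList<>(facts.subList(i, Math.min(i + size, facts.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Stand-in for my ~400,000 JavaBean facts.
        List<Integer> facts = new ArrayList<>();
        for (int i = 0; i < 400_000; i++) facts.add(i);

        for (List<Integer> chunk : partition(facts, 50)) {
            // Per chunk (sketch only; assumes a jess.Rete 'engine' and a
            // deftemplate named "fact" mapped to the bean class):
            //   for (Object bean : chunk) engine.definstance("fact", bean, false);
            //   engine.run();
            //   for (Object bean : chunk) engine.undefinstance(bean);
        }
        System.out.println(partition(facts, 50).size());
    }
}
```

With 400,000 facts and 50 segments this yields 50 chunks of 8,000 beans each, which is what keeps the heap around 240 MB per pass.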
Now I use only about 240 MB of RAM.. but the CPU sits at around 49.2% forever, and the run never completes...
What is the 'advised' strategy for handling problems like this: a massive fact pool (around 400,000 facts) all at once, versus breaking it up? They are all JavaBeans.
Thanks,
Roger
- JESS: using engine.undefinstance Roger Studner
