I am starting to build an expert system that needs to run off a database. The dataset 
is quite large (data arrives at a rate of gigabytes per hour).

The approach I am taking is to run multiple rule engines, with each engine looking at 
the data in a different way. Partitioning the data like this gives each engine a view 
that should be a vastly smaller set of data than the entire database. Partitioning 
makes sense for us, and very little (if any) relational data is lost; if any is lost, 
we can tolerate it at this point.
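To make the partitioning idea concrete, here is a minimal sketch in plain Java of routing incoming records to per-engine partitions by a key. The class and method names (`Partitioner`, `partitionFor`, the `"region"` column) are hypothetical, just to illustrate that each rule engine would only ever see its own slice of the stream:

```java
import java.util.*;

// Sketch: route each incoming record to a partition keyed on some
// attribute (here a hypothetical "region" field), so each rule engine
// consumes a much smaller slice than the whole database.
public class Partitioner {
    private final Map<String, List<Map<String, String>>> partitions = new HashMap<>();

    // Decide which partition a record belongs to; keyed on "region" here.
    String partitionFor(Map<String, String> record) {
        return record.getOrDefault("region", "default");
    }

    // Append the record to its partition's list, creating it on first use.
    void route(Map<String, String> record) {
        partitions.computeIfAbsent(partitionFor(record), k -> new ArrayList<>())
                  .add(record);
    }

    // The view a single rule engine would consume.
    List<Map<String, String>> slice(String key) {
        return partitions.getOrDefault(key, Collections.emptyList());
    }
}
```

In a real system each slice would feed its own Rete engine rather than sit in an in-memory list, but the routing decision is the same.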

First question: does that approach make sense, or have I not given enough detail?

I bought "Jess in Action" today and have made it through about 100 pages. So far, I 
haven't encountered any examples of grafting Jess onto a database. I noticed on the 
Jess website (http://herzberg.ca.sandia.gov/jess/user.shtml) that someone had written 
something to do this grafting (the Fact Storage Provider Framework), but I cannot 
seem to find a simple example of it.

Second question: where might I find a simple example I could learn from for grafting 
a fact base to a database in Jess?
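While waiting for a pointer, one low-tech approach I am considering (short of the Fact Storage Provider Framework) is to read rows over JDBC and assert each one into the engine as a fact string via `jess.Rete.assertString()`. Below is a sketch of just the row-to-fact translation in plain Java; the template name `db-row` and the column set are hypothetical, and in real code a matching `deftemplate` would be declared first and the rows would come from a JDBC ResultSet:

```java
import java.util.*;

// Sketch: format one database row (column -> value) as a Jess
// unordered-fact string, e.g. (db-row (id "1") (name "widget")),
// suitable for passing to jess.Rete.assertString(). The template
// name "db-row" is illustrative; a matching deftemplate must exist.
public class RowToFact {
    static String toFact(String template, SortedMap<String, String> row) {
        StringBuilder sb = new StringBuilder("(").append(template);
        for (Map.Entry<String, String> e : row.entrySet()) {
            sb.append(" (").append(e.getKey())
              .append(" \"").append(e.getValue()).append("\")");
        }
        return sb.append(")").toString();
    }
}
```

With jess.jar on the classpath, the loop over the result set would then call `engine.assertString(RowToFact.toFact("db-row", row))` for each row. Whether that scales to gigabytes per hour is exactly what I am unsure about.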

I hope my questions are understandable and not too bothersome. Thank you for your time!

--Nate

--------------------------------------------------------------------
To unsubscribe, send the words 'unsubscribe jess-users [EMAIL PROTECTED]'
in the BODY of a message to [EMAIL PROTECTED], NOT to the list
(use your own address!) List problems? Notify [EMAIL PROTECTED]
--------------------------------------------------------------------
