Yes, I agree. Done.

Holger
On Apr 16, 2009, at 10:21 AM, Schmitz, Jeffrey A wrote:

>
> Hi Holger,
> One note on the below if you are going to implement it: you may want
> to pass the predicate into the SPINInferences.run function for use in
> the explanation text. Currently 'spin:rule' is hardcoded in the
> explanation string, which you may want to change to something like:
>
> E.g.
> String explanationText = "Inferred by " + rulePredicate.getURI() +
>     " at class " + SPINLabels.get().getLabel(cls) + ":\n\n" +
>     arqWrapper.getText();
>
> -----Original Message-----
> From: Holger Knublauch [mailto:[email protected]]
> Sent: Thursday, April 09, 2009 1:34 PM
> To: [email protected]
> Subject: [tbc-users] Re: SPIN API suggestion
>
>
> Hi Jeff,
>
> thanks for your feedback. Your suggestion makes a lot of sense, and I
> have added another run method, as you propose, to the next release. Your
> use case sounds familiar, and we often discover similar patterns in
> which finer control over the order of inferences is needed. If these
> patterns become better understood, we may add new properties to the SPIN
> namespace to allow for distinguishing between transforms and other
> rule types.
>
> Please feel free to continue to post SPIN API related questions here.
> Ideally put something like [SPIN API] into the subject line so that
> people not interested in this topic can skip it. We may set up a
> separate mailing list at some point if more people start using it.
>
> Holger
>
>
> On Apr 9, 2009, at 9:45 AM, Schmitz, Jeffrey A wrote:
>
>>
>> Hello,
>>
>> Thought I'd give a quick suggestion for the SPIN API. The spin:rule
>> based inferencing is a very powerful way to assign inference rules to
>> classes in an object-oriented way. However, as currently implemented
>> it's pretty much an all-or-nothing way to create and add inferred
>> triples to a single model. This is because spin:rule is hardcoded in
>> the SPINInferences.run function, and at runtime all of a class's
>> specified spin:rule rules (or subproperties thereof) are executed en
>> masse, with all inferred triples added to the single 'newTriples' model:
>>
>> public static int run(
>>         Model queryModel,
>>         Model newTriples,
>>         SPINExplanations explanations,
>>         List<SPINStatistics> statistics,
>>         boolean singlePass,
>>         ProgressMonitor monitor) {
>>     Map<QueryWrapper, Map<String,RDFNode>> initialTemplateBindings =
>>         new HashMap<QueryWrapper, Map<String,RDFNode>>();
>>     Map<Resource,List<QueryWrapper>> cls2Query =
>>         SPINQueryFinder.getClass2QueryMap(queryModel, queryModel,
>>             SPIN.rule, true, initialTemplateBindings, false);
>>     return run(queryModel, newTriples, cls2Query,
>>         initialTemplateBindings, explanations, statistics, singlePass,
>>         monitor);
>> }
>>
>> To make this powerful capability more flexible, what I've done is
>> re-create the run function with the rule predicate parameterized.
>>
>> public int runSPINInferences(
>>         Model queryModel,
>>         Model newTriples,
>>         Property rulePredicate,
>>         SPINExplanations explanations,
>>         List<SPINStatistics> statistics,
>>         boolean singlePass,
>>         ProgressMonitor monitor) {
>>     Map<QueryWrapper, Map<String,RDFNode>> initialTemplateBindings =
>>         new HashMap<QueryWrapper, Map<String,RDFNode>>();
>>     Map<Resource,List<QueryWrapper>> cls2Query =
>>         SPINQueryFinder.getClass2QueryMap(queryModel, queryModel,
>>             rulePredicate, true, initialTemplateBindings, false);
>>     return SPINInferences.run(queryModel, newTriples, cls2Query,
>>         initialTemplateBindings, explanations, statistics, singlePass,
>>         monitor);
>> }
>>
>> This way I can create sibling subproperties of spin:rule, and in my
>> SPARQL engine I can pick and choose exactly which rules get run based
>> on the current state/progress of the engine, as well as specify the
>> model to be updated with the "inferred" triples based on the type of
>> rule being executed. For example, I've set up two subproperties of
>> spin:rule:
>>
>> SpinLib:inferenceRule
>> SpinLib:transformRule
>>
>> Our SPARQL engine first runs all the SpinLib:inferenceRule rules, which
>> adds all the triples back into the source model:
>>
>> runSPINInferences(baseModel, baseModel, inferenceRule, exp, null,
>>     true, null);
>>
>> These are for rules like calculating the area of a rectangle based on
>> its height and width.
>>
>> After these new triples are created, the engine then runs the transform
>> rules on the source model:
>>
>> runSPINInferences(baseModel, destModel, transformRule, exp, null,
>>     true, null);
>>
>> For these transforms the triples are added to the model being
>> transformed into (destModel), and not back into the source model.
>>
>> Anyway, it was a very simple change for me to make locally, but I
>> thought perhaps allowing this flexibility might be something you might
>> want to consider adding directly to the API (and/or, perhaps more
>> importantly, documenting the capability/pattern). Perhaps some typical
>> subproperties could even be added to the SPIN model. I would think
>> model transforms such as we're using would be a very useful and
>> general type of inference that people could use. It also seems like
>> something that could be combined with SPARQLMotion in some way to
>> allow transforms to be a little more object-oriented (e.g. the
>> classes transform themselves).
>>
>> Btw, is this the proper forum for SPIN API questions/comments?
>>
>> Thanks,
>> Jeff
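
For readers who want to try the approach Jeff describes, here is a minimal
sketch of how the two sibling subproperties of spin:rule might be declared
with the plain Jena API before running rules selectively. It assumes the
parameterized runSPINInferences helper quoted above; the SpinLib namespace
URI, the class name, and the model variables are illustrative only and not
part of the SPIN API.

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.vocabulary.RDFS;

public class SelectiveRuleExample {

    // The standard SPIN namespace; the SpinLib namespace is a made-up example.
    private static final String SPIN_NS    = "http://spinrdf.org/spin#";
    private static final String SPINLIB_NS = "http://example.org/SpinLib#";

    public static void main(String[] args) {
        Model baseModel = ModelFactory.createDefaultModel();  // source/query model
        Model destModel = ModelFactory.createDefaultModel();  // target of the transforms

        // Declare two sibling subproperties of spin:rule so that rules can be
        // grouped by purpose and executed separately.
        Property spinRule      = baseModel.createProperty(SPIN_NS + "rule");
        Property inferenceRule = baseModel.createProperty(SPINLIB_NS + "inferenceRule");
        Property transformRule = baseModel.createProperty(SPINLIB_NS + "transformRule");
        baseModel.add(inferenceRule, RDFS.subPropertyOf, spinRule);
        baseModel.add(transformRule, RDFS.subPropertyOf, spinRule);

        // Rules attached to classes via SpinLib:inferenceRule would write their
        // inferred triples back into the source model, e.g. (using the helper
        // quoted above, with exp a SPINExplanations instance):
        //   runSPINInferences(baseModel, baseModel, inferenceRule, exp, null, true, null);

        // Rules attached via SpinLib:transformRule would instead populate a
        // separate destination model, leaving the source model untouched:
        //   runSPINInferences(baseModel, destModel, transformRule, exp, null, true, null);
    }
}

With the rule predicate parameterized this way, each group of rules can be run
in whatever order the engine's state requires, which is the finer-grained
control discussed in the thread.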
