Hi Rob,
Did you notice that in each case I display the full ResultSet (and
therefore consume it) in the console? In each case, I gave two numbers:
one that measures the time taken by execSelect() (this number is
relevant since it can change a lot depending upon the query) and a
second that measures (precisely) the time to consume the ResultSet
using ResultSetFormatter.asText(ResultSet res).
I am not benchmarking Jena, so I have no interest in the timing values
per se. My issue is simply that I cannot use the API because it takes
too long to return 86 results from a model comprising approximately
4600 statements.
You are also telling me that "InfModel infMod =
ModelFactory.createInfModel(reasoner, m);" doesn't actually provide a
usable Model, and that inference rules are applied when the ResultSet
is consumed. That does indeed appear to be the case: execSelect()
takes 1 ms on the InfModel, while
"System.out.println(ResultSetFormatter.asText(rs));" takes almost 19
seconds to complete.
Moreover, I ran the same test twice on the same InfModel object, and
indeed, the second time it took 2 ms for execSelect() and 7 ms to
consume the ResultSet.
My conclusion is that one cannot query an InfModel created at run time
(or: any InfModel must be queried once before it becomes really
usable).
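If the point is to pay the inference cost once, up front, rather than on the first query, one option (a sketch against Jena's standard API; the data file, reasoner choice, and query string are placeholders, not your actual code) is to call InfModel.prepare() before querying, and to time query execution by materialising the results with ResultSetFactory.copyResults():

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;

public class InfModelTiming {
    public static void main(String[] args) {
        Model base = ModelFactory.createDefaultModel();
        base.read("data.ttl");  // placeholder ontology/data file

        Reasoner reasoner = ReasonerRegistry.getOWLReasoner();
        InfModel inf = ModelFactory.createInfModel(reasoner, base);

        // Force initial processing/caching now instead of lazily on first access.
        long t0 = System.currentTimeMillis();
        inf.prepare();
        System.out.println("prepare: " + (System.currentTimeMillis() - t0) + " ms");

        String q = "SELECT * WHERE { ?s ?p ?o } LIMIT 100";  // placeholder query
        try (QueryExecution qe = QueryExecutionFactory.create(q, inf)) {
            long t1 = System.currentTimeMillis();
            // copyResults() consumes the iterator, forcing full query execution.
            ResultSetRewindable rs = ResultSetFactory.copyResults(qe.execSelect());
            System.out.println("query: " + (System.currentTimeMillis() - t1) + " ms");
            System.out.println(ResultSetFormatter.asText(rs));
        }
    }
}
```

With this split, the first number covers inference start-up and the second covers execution plus consumption of one query, so re-running the query should no longer show the huge first-run penalty.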

Marc

On Wednesday, 14 March 2018 at 16:41 +0000, Rob Vesse wrote:
> You've made a common error that people trying to benchmark Jena
> make.  execSelect() simply prepares a result set backed by an
> iterator that is capable of answering the query; until you consume
> that result set no execution actually takes place.  All query
> execution in Jena is lazy. If you want to time the execution of the
> full results, use a method that consumes/copies the returned
> iterator, such as ResultSetFactory.copyResults(), thus forcing full
> execution to happen.
> 
> So what you are timing as the results processing is actually results
> processing + query execution.  Over an inference model the act of
> executing a query will cause inference rules to be applied which
> depending on the ontology and rules may take a long time.
> 
> Rob
> 
> On 14/03/2018, 16:26, "[email protected]" <[email protected]>
> wrote:
> 
>     Hi,
>     
>     I have included here (https://gist.github.com/MarcAgate/8bbe334fd
> 852817977c909af107a9c6b) some code that illustrates the issue.
>     It runs the same query against three different models (Model,
> InfModel and OntModel) of the same ontology.
>     There's obviously a problem with InfModel.
>     
>     Any idea ?
>     
>     Thanks
>     
>     Marc
>     
>     
