Hi Yao,

Yao Harrison wrote:
Sure, my question is the same as you thought, and I agree with your opinion that paging of query results should be the responsibility of the application. But imagine you have to filter more than ten thousand nodes and pick out only 10 objects each time: the application would have to execute the query over this big result set every time just to get back a small subset, and that hurts the application's performance.
Not necessarily; an implementation should be able to handle this scenario. The JCR API allows result nodes to be instantiated lazily, and a client application may skip results (NodeIterator.skip()) to avoid stepping through e.g. 10,000 nodes to find the ones it is interested in.
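For illustration, here is a minimal sketch of that pattern against the plain JCR API. The XPath statement, offset, and page size are placeholder values of my choosing, not anything prescribed by the spec:

    import javax.jcr.Node;
    import javax.jcr.NodeIterator;
    import javax.jcr.RepositoryException;
    import javax.jcr.Session;
    import javax.jcr.query.Query;
    import javax.jcr.query.QueryManager;
    import javax.jcr.query.QueryResult;

    public class PagedQuery {

        // prints one "page" of result nodes without iterating over the rest
        public static void printPage(Session session, long offset, int pageSize)
                throws RepositoryException {
            QueryManager qm = session.getWorkspace().getQueryManager();
            Query query = qm.createQuery(
                    "//element(*, nt:unstructured)", Query.XPATH);
            QueryResult result = query.execute();

            NodeIterator nodes = result.getNodes();
            // jump over the preceding results; with a lazy iterator these
            // need not be fetched (caller must ensure offset is in range,
            // otherwise skip() throws NoSuchElementException)
            nodes.skip(offset);
            for (int i = 0; i < pageSize && nodes.hasNext(); i++) {
                Node node = nodes.nextNode();
                System.out.println(node.getPath());
            }
        }
    }

With a lazy iterator, only the nodes in the requested page ever need to be instantiated.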
I have to admit that the current query handler implementation in Jackrabbit only takes partial advantage of this fact: query results are always calculated as a whole with respect to the UUIDs of the result nodes. The actual result node instances, however, are fetched lazily.
There was another thread about a similar topic: http://thread.gmane.org/gmane.comp.apache.jackrabbit.devel/3602 So, there's still room for improvement.
So, I hope to limit the size of the result set when the query is executed, just like MySQL's LIMIT statement.
This would be a proprietary extension, and I don't know how you would specify such a keyword in XPath.
Regards,
Marcel