Just a theoretical question: would it make sense to add something like
StoredDocument[] bulkGet(int[] docIds) to fetch multiple stored documents in
one go?

The reasoning behind it: now that blocks are compressed, random access gets
more expensive, and in some cases a user needs to fetch several documents in
one go. If it happens that several of those documents come from the same
block, it is a win. I would also assume that, even without compression, bulk
access on sorted docIds could be a win (sequential access)?
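The block-reuse argument can be illustrated with a small standalone sketch (no Lucene APIs involved; BLOCK_SIZE, the one-block cache, and the decompression counter are all invented for illustration): if docIds are sorted before fetching, all documents that live in the same compressed block are read back-to-back, so each block only needs to be decompressed once.

```java
import java.util.Arrays;

public class BulkGetSketch {
    // Hypothetical number of documents per compressed block.
    static final int BLOCK_SIZE = 16;

    // Count how many block decompressions a sequence of fetches triggers,
    // assuming only the most recently decompressed block stays cached.
    static int decompressions(int[] docIds) {
        int count = 0;
        int cachedBlock = -1;
        for (int docId : docIds) {
            int block = docId / BLOCK_SIZE;
            if (block != cachedBlock) {
                count++;              // cache miss: decompress this block
                cachedBlock = block;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Six fetches interleaved across two blocks (block 0 and block 2).
        int[] ids = {3, 40, 5, 41, 7, 42};
        int unsortedCost = decompressions(ids);

        int[] sorted = ids.clone();
        Arrays.sort(sorted);
        int sortedCost = decompressions(sorted);

        // Unsorted alternates between blocks on every fetch; sorted touches
        // each block exactly once.
        System.out.println(unsortedCost + " vs " + sortedCost);
    }
}
```

Here the unsorted order pays one decompression per fetch (6), while the sorted order pays one per distinct block (2) — which is roughly why a bulkGet that sorts its input internally could help even for uncompressed, purely sequential access.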

Does that make sense, and is it doable? Or, even worse, does it already exist? :)

By the way, I am impressed by how well compression does, even on really short
stored documents: at approx. 150 bytes per document we observe a 35%
reduction. Fetching 1000 short documents on a fully cached index is observably
slower (2-3 times), but as soon as your memory gets low, compression wins
quickly. I did not test it thoroughly, but it looks good so far. Great job!

