[ https://issues.apache.org/jira/browse/LUCENE-855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12487590 ]

Otis Gospodnetic commented on LUCENE-855:
-----------------------------------------

Comments about the patch so far:
Cosmetics:
- You don't want to refer to Andy's class in javadocs, as that class won't go 
in unless Andy makes it faster.
- I see some incorrect (copy/paste error) javadocs, and javadocs/comments with 
typos, in both the test classes and non-test classes.
- Please configure your Lucene project in Eclipse to use 2 spaces instead of 4. 
 In general, once you get the code formatting settings right, it's good 
practice to format your code with that setting before submitting a patch.

Testing:
- You can put the testPerformance() code from 
TestFieldCacheRangeFilterPerformance in the other unit test class you have 
there.
- Your testPerformance() doesn't actually assert...() anything, it just prints 
numbers to stdout.  You can keep the printing, but it would be better to also 
do some asserts, so we can always verify that FieldCacheRangeFilter beats the 
vanilla RangeFilter without looking at the stdout.
- You may want to close that searcher in testPerformance() before opening a new 
one.  Probably won't make any difference, but still.
- You may also want to just close the searcher at the end of the method.
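To make the assert suggestion concrete, here is a minimal, self-contained sketch of the pattern: time both code paths and fail the test if the cached one stops winning. The two Runnables are hypothetical stand-ins for RangeFilter.bits() and FieldCacheRangeFilter.bits() (those calls would need a real index and reader, which this sketch deliberately avoids).

```java
public class TimingAssertSketch {
    static volatile long sink; // keep the JIT from eliminating the work

    // Time reps invocations of work, in milliseconds.
    static long timeMillis(Runnable work, int reps) {
        long t0 = System.currentTimeMillis();
        for (int i = 0; i < reps; i++) work.run();
        return System.currentTimeMillis() - t0;
    }

    public static void main(String[] args) {
        // Hypothetical stand-ins: "vanilla" scans much more data per call,
        // mimicking RangeFilter's term walk vs. the cached binary search.
        Runnable vanilla = () -> { long s = 0; for (int i = 0; i < 5_000_000; i++) s += i; sink = s; };
        Runnable cached  = () -> { long s = 0; for (int i = 0; i < 1_000; i++) s += i; sink = s; };

        long vanillaMs = timeMillis(vanilla, 100);
        long cachedMs  = timeMillis(cached, 100);
        System.out.println("vanilla=" + vanillaMs + "ms cached=" + cachedMs + "ms");

        // The assert the review asks for: the printout stays, but the test
        // now fails loudly if the cached filter is no longer faster.
        if (cachedMs > vanillaMs) throw new AssertionError("cached filter should be faster");
        System.out.println("cached filter wins");
    }
}
```

In the actual unit test this would be a JUnit assertTrue() around the two measured times rather than a thrown AssertionError.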


Impl:
- In the inner FieldCacheBitSet class, I see:
+        public boolean intersects(BitSet set)  {
+            for (int i =0; i < length; i++) {
+                if (get(i) && set.get(i)) {
+                    return true;
+                }
+            }
+            return false;
+        }

Is there room for a small optimization?  What if the BitSets are not of equal 
size?  Wouldn't it make sense to loop only through the smaller BitSet then? 
 Sorry if I'm off, I hardly ever work with BitSets.
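For illustration, here is a runnable sketch of that optimization using java.util.BitSet: since bits beyond either set's length() are zero and can never contribute a hit, it is enough to loop up to the smaller logical length (the class and method names here are mine, not from the patch).

```java
import java.util.BitSet;

public class IntersectsSketch {
    // Loop only up to the smaller of the two logical lengths; bits past
    // either length() are guaranteed zero, so they can't intersect.
    static boolean intersects(BitSet a, BitSet b) {
        int limit = Math.min(a.length(), b.length());
        for (int i = 0; i < limit; i++) {
            if (a.get(i) && b.get(i)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        BitSet small = new BitSet();
        small.set(3);
        BitSet large = new BitSet();
        large.set(3);
        large.set(1_000_000);
        System.out.println(intersects(small, large)); // true: bit 3 is shared
        large.clear(3);
        System.out.println(intersects(small, large)); // false: no common bit
    }
}
```

A further step would be iterating only over set bits with a.nextSetBit(i), which skips long runs of zeros entirely; java.util.BitSet even ships its own intersects(BitSet) doing word-at-a-time ANDs, though the patch's inner class is a custom BitSet subclass, so it may not be able to reuse that directly.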

- I see you made the *_PARSERs in FCImpl public (they were private).  Is that 
really needed?  Would package-protected be enough?

- Make sure ASL is in all test and non-test classes, I don't see it there now.


Overall, I like it - slick and elegant usage of FC!

I'd love to know what Hoss and other big Filter users think about this.  Solr 
makes a lot of use of (Range?)Filters, I believe.


> MemoryCachedRangeFilter to boost performance of Range queries
> -------------------------------------------------------------
>
>                 Key: LUCENE-855
>                 URL: https://issues.apache.org/jira/browse/LUCENE-855
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.1
>            Reporter: Andy Liu
>         Attachments: FieldCacheRangeFilter.patch, 
> FieldCacheRangeFilter.patch, FieldCacheRangeFilter.patch, 
> MemoryCachedRangeFilter.patch, MemoryCachedRangeFilter_1.4.patch
>
>
> Currently RangeFilter uses TermEnum and TermDocs to find documents that fall 
> within the specified range.  This requires iterating through every single 
> term in the index and can get rather slow for large document sets.
> MemoryCachedRangeFilter reads all <docId, value> pairs of a given field, 
> sorts by value, and stores in a SortedFieldCache.  During bits(), binary 
> searches are used to find the start and end indices of the lower and upper 
> bound values.  The BitSet is populated by all the docId values that fall in 
> between the start and end indices.
> TestMemoryCachedRangeFilterPerformance creates a 100K RAMDirectory-backed 
> index with random date values within a 5 year range.  Executing bits() 1000 
> times on standard RangeQuery using random date intervals took 63904ms.  Using 
> MemoryCachedRangeFilter, it took 876ms.  The performance increase is less 
> dramatic when a field has fewer unique terms or the index has fewer 
> documents.
> Currently MemoryCachedRangeFilter only works with numeric values (values are 
> stored in a long[] array) but it can be easily changed to support Strings.  A 
> side "benefit" of storing the values as longs is that there's no longer a 
> need to make the values lexicographically comparable, i.e. padding numeric 
> values with zeros.
> The downside of using MemoryCachedRangeFilter is there's a fairly significant 
> memory requirement.  So it's designed to be used in situations where range 
> filter performance is critical and memory consumption is not an issue.  The 
> memory requirements are: (sizeof(int) + sizeof(long)) * numDocs.  
> MemoryCachedRangeFilter also requires a warmup step which can take a while to 
> run in large datasets (it took 40s to run on a 3M document corpus).  Warmup 
> can be called explicitly or is automatically called the first time 
> MemoryCachedRangeFilter is applied using a given field.
> So in summary, MemoryCachedRangeFilter can be useful when:
> - Performance is critical
> - Memory is not an issue
> - The field contains many unique numeric values
> - The index contains a large number of documents
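The binary-search step the description sketches can be illustrated with a small self-contained example: values[] is the sorted long[] from the cache, docIds[] is the parallel array of document ids, and bits() finds the index range covering [lower, upper] with two binary searches before setting the matching doc ids. All names here are illustrative, not the patch's actual API.

```java
import java.util.Arrays;
import java.util.BitSet;

public class RangeBitsSketch {
    // values[] is sorted ascending; docIds[i] is the doc whose field value
    // is values[i]. Both arrays are stand-ins for the SortedFieldCache.
    static BitSet bits(long[] values, int[] docIds, long lower, long upper, int maxDoc) {
        int start = lowerBound(values, lower); // first index with values[i] >= lower
        int end = upperBound(values, upper);   // first index with values[i] > upper
        BitSet bits = new BitSet(maxDoc);
        for (int i = start; i < end; i++) {
            bits.set(docIds[i]);
        }
        return bits;
    }

    // Arrays.binarySearch returns an arbitrary match among duplicates,
    // so step to the first/last duplicate explicitly.
    static int lowerBound(long[] a, long key) {
        int i = Arrays.binarySearch(a, key);
        if (i < 0) return -i - 1; // insertion point when key is absent
        while (i > 0 && a[i - 1] == key) i--;
        return i;
    }

    static int upperBound(long[] a, long key) {
        int i = Arrays.binarySearch(a, key);
        if (i < 0) return -i - 1;
        while (i < a.length - 1 && a[i + 1] == key) i++;
        return i + 1;
    }

    public static void main(String[] args) {
        long[] values = {10, 20, 20, 30, 40};
        int[] docIds  = { 4,  0,  2,  1,  3};
        // Range [20, 30] covers sorted indices 1..3 -> docs 0, 2, 1.
        System.out.println(bits(values, docIds, 20, 30, 5)); // {0, 1, 2}
    }
}
```

For scale, the stated (sizeof(int) + sizeof(long)) * numDocs accounting is 12 bytes per document, so the 100K-doc test index costs roughly 1.2 MB per cached field and the 3M-doc corpus roughly 36 MB.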
