[ https://issues.apache.org/jira/browse/SOLR-3393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13541153#comment-13541153 ]

Shawn Heisey commented on SOLR-3393:
------------------------------------

When I built the 4x patch, I accidentally checked out a specific old revision 
instead of the newest one.  The patch will still apply successfully to the most 
recent revision as long as the SVN URL glitch is dealt with first, or SVN-aware 
tools are used.

                
> Implement an optimized LFUCache
> -------------------------------
>
>                 Key: SOLR-3393
>                 URL: https://issues.apache.org/jira/browse/SOLR-3393
>             Project: Solr
>          Issue Type: Improvement
>          Components: search
>    Affects Versions: 3.6, 4.0-ALPHA
>            Reporter: Shawn Heisey
>            Assignee: Hoss Man
>            Priority: Minor
>             Fix For: 4.1
>
>         Attachments: SOLR-3393-4x-withdecay.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, SOLR-3393.patch, 
> SOLR-3393.patch, SOLR-3393.patch, SOLR-3393-trunk-withdecay.patch
>
>
> SOLR-2906 gave us an inefficient LFU cache modeled on 
> FastLRUCache/ConcurrentLRUCache.  It could use some serious improvement.  The 
> following project includes an Apache 2.0 licensed O(1) implementation.  The 
> second link is the paper (PDF warning) it was based on:
> https://github.com/chirino/hawtdb
> http://dhruvbird.com/lfu.pdf
> Using this project and paper, I will attempt to make a new O(1) cache called 
> FastLFUCache that is modeled on LRUCache.java.  This will (for now) leave the 
> existing LFUCache/ConcurrentLFUCache implementation in place.
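For anyone reading along, here is a rough sketch of the O(1) structure the paper
describes: a doubly linked list of frequency buckets, each holding the keys that
share a use count, plus a hash map from key to entry, which makes get, put and
eviction constant-time.  The class and member names below are illustrative only
(this is not code from any attached patch), and all of the SolrCache plumbing a
real FastLFUCache would need (init, warming, statistics) is omitted; coarse
synchronized methods stand in for the finer-grained concurrency a real cache
would want.

import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;

// Illustrative sketch of the O(1) LFU structure from the linked paper.
// Not Solr code; names are hypothetical.
public class LfuSketch<K, V> {

    // One bucket per distinct use count, kept in ascending order.
    private static final class FreqNode<K> {
        final long count;
        final LinkedHashSet<K> keys = new LinkedHashSet<>();
        FreqNode<K> prev, next;
        FreqNode(long count) { this.count = count; }
    }

    private final class Entry {
        V value;
        FreqNode<K> node;   // bucket whose count is this key's frequency
    }

    private final int capacity;
    private final Map<K, Entry> table = new HashMap<>();
    private final FreqNode<K> head = new FreqNode<>(0);  // sentinel, count 0

    public LfuSketch(int capacity) { this.capacity = capacity; }

    // O(1): look up the entry and move its key to the next-higher bucket.
    public synchronized V get(K key) {
        Entry e = table.get(key);
        if (e == null) return null;
        promote(key, e);
        return e.value;
    }

    // O(1): insert or update; evict one least-frequently-used key if full.
    public synchronized void put(K key, V value) {
        Entry e = table.get(key);
        if (e != null) {
            e.value = value;
            promote(key, e);
            return;
        }
        if (table.size() >= capacity) evictOne();
        e = new Entry();
        e.value = value;
        e.node = bucketAfter(head, 1);   // new keys start with frequency 1
        e.node.keys.add(key);
        table.put(key, e);
    }

    // Move key from its current bucket to the bucket for count + 1.
    private void promote(K key, Entry e) {
        FreqNode<K> cur = e.node;
        FreqNode<K> next = bucketAfter(cur, cur.count + 1);
        cur.keys.remove(key);
        next.keys.add(key);
        e.node = next;
        if (cur.keys.isEmpty() && cur != head) unlink(cur);
    }

    // Remove one key from the first (lowest-frequency) bucket.
    private void evictOne() {
        FreqNode<K> first = head.next;
        if (first == null) return;
        K victim = first.keys.iterator().next();
        first.keys.remove(victim);
        table.remove(victim);
        if (first.keys.isEmpty()) unlink(first);
    }

    // Return the bucket with the given count directly after 'pos',
    // creating and linking it if it does not exist yet.
    private FreqNode<K> bucketAfter(FreqNode<K> pos, long count) {
        if (pos.next != null && pos.next.count == count) return pos.next;
        FreqNode<K> node = new FreqNode<>(count);
        node.prev = pos;
        node.next = pos.next;
        if (pos.next != null) pos.next.prev = node;
        pos.next = node;
        return node;
    }

    private void unlink(FreqNode<K> node) {
        node.prev.next = node.next;
        if (node.next != null) node.next.prev = node.prev;
    }
}

Eviction here simply takes the oldest key in the lowest-frequency bucket, so
ties within a frequency fall back to insertion order; the decay behavior that
the -withdecay patches add is not modeled in this sketch.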

