Thanks Aaron... It would be great if the block-cache could do something
like that...

I was also looking at deeper caching... Let's say that 95% of the time,
only 10 fields are used for search.

My postings-list [DOC/POS] files are too large to fit in the cache. But
during file-writes, is there a way for me to selectively cache only the
common fields? In other words, I need to selectively cache small parts of
a big file...

For example, let's say I have a custom codec as follows (sketched against
the 4.x FieldsConsumer API; isCommonField() below stands in for the actual
common-field check):

import java.io.IOException;
import org.apache.lucene.codecs.FieldsConsumer;
import org.apache.lucene.codecs.TermsConsumer;
import org.apache.lucene.index.FieldInfo;

public class CachingCommonTermsConsumer extends FieldsConsumer {
  private final FieldsConsumer delegate;

  public CachingCommonTermsConsumer(FieldsConsumer delegate) {
    this.delegate = delegate;
  }

  @Override
  public TermsConsumer addField(FieldInfo fInfo) throws IOException {
    if (isCommonField(fInfo.name)) {
      // Start-write-via block-cache
    }
    return delegate.addField(fInfo);
  }

  @Override
  public void close() throws IOException {
    // Stop-caching; all fields written
    delegate.close();
  }

  private boolean isCommonField(String name) {
    return false; // placeholder for the common-field check
  }
}
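
To make the start/stop part concrete, here is a rough sketch of how the
writes themselves might get routed through the block-cache. This is just
an illustration, not block-cache v2's real API: SelectiveCachingDirectory,
CACHE_WRITES and cachedOutput() are all names I made up.

import java.io.IOException;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FilterDirectory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexOutput;

public class SelectiveCachingDirectory extends FilterDirectory {

  // flipped by the codec: true while a common field is being written
  static final ThreadLocal<Boolean> CACHE_WRITES =
      ThreadLocal.withInitial(() -> Boolean.FALSE);

  public SelectiveCachingDirectory(Directory in) {
    super(in);
  }

  @Override
  public IndexOutput createOutput(String name, IOContext context)
      throws IOException {
    // always wrap; the wrapper checks CACHE_WRITES on each write, so
    // only byte ranges written while the flag is set land in the cache
    return cachedOutput(in.createOutput(name, context));
  }

  private IndexOutput cachedOutput(IndexOutput out) {
    // would return an IndexOutput that also inserts written blocks
    // into the block-cache; elided here
    return out;
  }
}

The codec would then set CACHE_WRITES inside addField() for common fields
and clear it otherwise, so only those fields' byte ranges of the big
DOC/POS files take up cache space.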


On Thu, Dec 11, 2014 at 3:48 AM, Aaron McCurry <[email protected]> wrote:
>
> On Wed, Dec 10, 2014 at 1:24 AM, Ravikumar Govindarajan <
> [email protected]> wrote:
>
> > We would like to implement write-thru caching in the following manner
> >
> > a. X GB of block-cache for FDT/TIM files of lucene
> > b. Y GB of block-cache for DOC file of lucene
> >
> > Is this possible in block-cache v2?
> >
> > The reason is that DOC files are quite huge and they negatively impact
> > FDT/TIM cache-data, so there is a need to isolate them.
> >
>
> I wanted to implement this in v2 but didn't get to it.  My guess is that
> most of the pieces needed are already in place, so it shouldn't be that
> hard to implement.  Also, by default FDT files are not cached, but caching
> them can be enabled via configuration.
>
> Aaron
>
>
> >
> > --
> > Ravi
> >
>
