[
https://issues.apache.org/jira/browse/LUCENE-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477536#comment-13477536
]
Mark Miller commented on LUCENE-4484:
-------------------------------------
Right, we have changed the defaults to favor NRT.
We can always tell people to switch it if someone runs into a problem, but of course
it would be nicer if NRTCachingDir were more versatile and could deal well with
term vectors / stored fields.
I agree it's more of a niche situation (it's not likely a common problem), but
it would be my preference.
> NRTCachingDir can't handle large files
> --------------------------------------
>
> Key: LUCENE-4484
> URL: https://issues.apache.org/jira/browse/LUCENE-4484
> Project: Lucene - Core
> Issue Type: Bug
> Reporter: Michael McCandless
>
> I dug into this OOME, which easily repros for me on rev 1398268:
> {noformat}
> ant test -Dtestcase=Test4GBStoredFields -Dtests.method=test
> -Dtests.seed=2D89DD229CD304F5 -Dtests.multiplier=3 -Dtests.nightly=true
> -Dtests.slow=true
> -Dtests.linedocsfile=/home/hudson/lucene-data/enwiki.random.lines.txt
> -Dtests.locale=ru -Dtests.timezone=Asia/Vladivostok
> -Dtests.file.encoding=UTF-8 -Dtests.verbose=true
> {noformat}
> The problem is the test got NRTCachingDir, which cannot handle large files:
> it decides up front (when createOutput is called) whether the file will live
> in the RAMDir vs the wrapped dir, so if that file turns out to be immense
> (as happens in this test, since stored fields files can grow arbitrarily huge
> without any flush happening) it consumes unbounded RAM.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]