I have built a distributed index using the source code of
hadoop/contrib/index, but I found that when the input files become large
(for example, a single file of 16 GB), an OOM exception is thrown. The
cause is that the combiner's call to writer.addIndexesNoOptimize() uses a
lot of memory, which leads to the OOM; it is a Lucene OOM rather than a
MapReduce OOM. I would like to add a mechanism similar to MapReduce's
"spill" to solve this problem. How can I do that? Sorry, my English is
poor.
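
One idea I am considering is the following rough sketch (not the actual
contrib/index code, just an illustration of what I mean by "spill"):
instead of merging all sub-indexes in one addIndexesNoOptimize() call, add
them in small batches and commit() after each batch, so merged segments
are flushed to disk instead of accumulating in RAM. The class name, batch
size, and RAM buffer value below are hypothetical and assume a Lucene
3.x-style API; they would need tuning for a real job.

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

// Sketch: merge many sub-indexes into one target index in small batches,
// committing after each batch so segments are spilled to disk.
public class BatchedIndexMerger {

    private static final int BATCH_SIZE = 4;        // hypothetical: sub-indexes merged per commit
    private static final double RAM_BUFFER_MB = 64; // hypothetical: cap on IndexWriter's RAM buffer

    public static void merge(File targetDir, File[] shardDirs) throws IOException {
        IndexWriter writer = new IndexWriter(
                FSDirectory.open(targetDir),
                new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        try {
            writer.setRAMBufferSizeMB(RAM_BUFFER_MB);

            Directory[] batch = new Directory[BATCH_SIZE];
            int filled = 0;
            for (File shard : shardDirs) {
                batch[filled++] = FSDirectory.open(shard);
                if (filled == BATCH_SIZE) {
                    writer.addIndexesNoOptimize(batch);
                    writer.commit();   // flush merged segments to disk ("spill")
                    filled = 0;
                }
            }
            if (filled > 0) {
                // merge whatever is left over in a final partial batch
                Directory[] rest = new Directory[filled];
                System.arraycopy(batch, 0, rest, 0, filled);
                writer.addIndexesNoOptimize(rest);
                writer.commit();
            }
        } finally {
            writer.close();
        }
    }
}

Would something along these lines work inside the combiner, or is there a
better place to hook this in?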


Thanks

--
View this message in context: 
http://lucene.472066.n3.nabble.com/mapreduce-combiner-tp3612513p3612513.html
Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.
