Thanks Otis,

I don't know enough about Hadoop to understand its advantage in this use
case. How would indexing with Hadoop differ from distributing the indexing
over 10 shards on 10 machines with Solr?
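
To make the comparison concrete, this is the kind of setup I have in mind
on the Solr side: a minimal SolrJ sketch that hash-routes documents to N
shards. The shard URLs and the id field are just placeholders, not our
actual setup:

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ShardedIndexer {
    private final SolrServer[] shards;

    // e.g. shardUrls = { "http://shard1:8983/solr", ...,
    //                    "http://shard10:8983/solr" } (hypothetical hosts)
    public ShardedIndexer(String[] shardUrls) throws Exception {
        shards = new SolrServer[shardUrls.length];
        for (int i = 0; i < shardUrls.length; i++) {
            shards[i] = new CommonsHttpSolrServer(shardUrls[i]);
        }
    }

    // Hash the unique key so a given document always lands on the
    // same shard.
    public void index(String id, SolrInputDocument doc) throws Exception {
        int n = (id.hashCode() & Integer.MAX_VALUE) % shards.length;
        shards[n].add(doc);
    }

    // Commit on every shard once a batch is done.
    public void commitAll() throws Exception {
        for (SolrServer s : shards) {
            s.commit();
        }
    }
}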

Tom



Otis Gospodnetic wrote:
> 
> Hi Tom,
> 
> 32MB is very low, 320MB is medium, and I think you could go higher; just
> pick whichever garbage collector is good for throughput. I know Java 1.6
> update 18 also has some HotSpot and possibly GC fixes, so I'd use that.
> Finally, this sounds like a good use case for reindexing with Hadoop!
> 
>  Otis
> ----
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Hadoop ecosystem search :: http://search-hadoop.com/
> 
> 

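For the archives: ramBufferSizeMB is set in solrconfig.xml, under
<indexDefaults> (and/or <mainIndex>). A minimal sketch using Otis's
"medium" value:

  <indexDefaults>
    <ramBufferSizeMB>320</ramBufferSizeMB>
  </indexDefaults>

and for a throughput-oriented collector on the Sun JVM, something like:

  java -XX:+UseParallelGC -XX:+UseParallelOldGC -jar start.jar

(start.jar is the example Jetty launcher; adjust for your own container.)
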
-- 
View this message in context: 
http://old.nabble.com/What-is-largest-reasonable-setting-for-ramBufferSizeMB--tp27631231p27645167.html
Sent from the Solr - User mailing list archive at Nabble.com.
