On Tue, Nov 12, 2013 at 9:56 PM, Eric Katherman <kather...@gmail.com> wrote:
> Stats:
> Default config for Solr 4.3.1 on a high-memory AWS instance using Jetty.
> Two collections, each with fewer than 700k docs.
>
> We seem to hit some performance lags when doing large commits.  Our front-end 
> service allows customers to import data, which is stored in Mongo and then 
> indexed in Solr.  We keep all of that data and do one big commit at the end 
> rather than committing each record along the way.

What do you mean by a performance lag? Higher query times right after a
commit are expected: the commit opens a new searcher whose caches start
out cold. They can be managed better with good autowarming queries.
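
For example, a newSearcher listener in solrconfig.xml can replay a few
representative queries against the new searcher before it starts serving
traffic. A minimal sketch (the query and sort below are placeholders; use
queries and sorts that match your real traffic):

  <listener event="newSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">*:*</str>
        <str name="sort">id asc</str>
      </lst>
    </arr>
  </listener>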

>
> Would it be better to use something like autoSoftCommit and just commit each 
> record as it comes in?  Or is the problem more about disk I/O?  Are there 
> some other "low hanging fruit" things we should consider?  The Solr dashboard 
> shows that there is still plenty of free memory during these imports, so it 
> isn't running out of memory and falling back to disk.

Committing after every record is bound to slow things down even more.
Batched updates are almost always better. Perhaps you need to tune
your autocommit settings to commit in smaller batches rather than in
one big bang at the end; see the sketch below.
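
For example, in solrconfig.xml you could let hard commits happen
automatically without opening a new searcher, and rely on soft commits
for visibility. A rough sketch (the thresholds are illustrative, not
recommendations; tune them against your import rate):

  <autoCommit>
    <maxDocs>10000</maxDocs>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>

  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>

With openSearcher=false the hard commit only flushes the transaction log
and segments to disk, so you avoid paying the searcher-warming cost on
every batch.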

>
> Thanks!
> Eric



-- 
Regards,
Shalin Shekhar Mangar.
