Yago:

Batches of 100K docs at a time are pretty big; you're way past the point
of diminishing returns. I rarely go over 1,000. That said, reducing the
batch size might be a workaround, perhaps even down to one doc per request.

All:

Look on your Solr servers (not client) for a stack trace fragment similar to:

at org.apache.solr.util.AdjustableSemaphore.acquire(AdjustableSemaphore.java:61)
at org.apache.solr.update.SolrCmdDistributor.submit(SolrCmdDistributor.java:349)
at org.apache.solr.update.SolrCmdDistributor.submit(SolrCmdDistributor.java:299)

This one has been lurking in the background for a while; work that
should address it is being done here:
https://issues.apache.org/jira/browse/SOLR-4816

It'd be great if either or both of you could try this patch and see if
it cures your problem!

Of course, this may be unrelated to what you're seeing, so look at the
stack trace on your server before jumping in....

In the meantime, another way around this would be to very
significantly reduce the number of docs per update request. I _think_
that the more docs you send in a single request, the more likely you
are to get into a deadlocked state.

FWIW,
Erick



On Fri, May 31, 2013 at 1:51 PM, bbarani <bbar...@gmail.com> wrote:
> As far as I know, partial update in Solr 4.X doesn't partially update the
> Lucene index; instead it removes the document from the index and indexes an
> updated one. The underlying Lucene engine always requires deleting the old
> document and indexing the new one.
>
>
> We usually don't use partial updates when updating a huge number of documents.
> They are really useful for a small number of documents (mostly during push
> indexing)...
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/updating-docs-in-solr-cloud-hangs-tp4067388p4067416.html
> Sent from the Solr - User mailing list archive at Nabble.com.
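(A footnote on the partial/atomic update bbarani describes: from SolrJ it
looks roughly like the sketch below. The core URL, field name, and value
are made up for the example, and all fields other than the uniqueKey need
to be stored for this to work; under the hood Solr still deletes the old
document and re-indexes the whole thing.)

import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class AtomicUpdateExample {
  public static void main(String[] args) throws Exception {
    // Placeholder core/collection URL.
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-42");  // which document to update
    // The "set" modifier replaces just this field's value.
    doc.addField("popularity_i", Collections.singletonMap("set", 99));

    server.add(doc);
    server.commit();
    server.shutdown();
  }
}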
