On 5/22/2013 11:25 AM, Justin Babuscio wrote:
On your overflow theory, why would this impact the client? Is it possible that a write attempt to Solr would block indefinitely while the Solr server is running wild or in a bad state due to the overflow?
That's the general notion. I could be completely wrong about this, but since that limit is the only thing you changed, it was the first explanation that came to mind.
One other thing I thought of, though it would be a band-aid rather than a real solution: if there's a definable maximum amount of time that an individual update request should take to complete (one minute? five minutes?), you might be able to use the setSoTimeout call on your server object. In the 3.5.0 source code that method is inherited, so it might not actually work correctly, but I'm hopeful.
If the problem is stuck update requests (and not a bug in blockUntilFinished), setting the SoTimeout (assuming it works) might unplug the works. The stuck requests might fail, but your SolrJ log might contain enough info to help you track that down. I don't think your application would ever be notified about such failures, but they should be logged.
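For illustration only, here is a rough sketch of what I mean, assuming you're constructing a StreamingUpdateSolrServer from SolrJ 3.5.0. The URL, queue size, thread count, and timeout values are placeholders, not recommendations; verify against your own setup that the timeout actually takes effect, since setSoTimeout is inherited from the base HTTP server class.

    import java.net.MalformedURLException;
    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;

    public class SolrClientSetup {
        public static StreamingUpdateSolrServer buildServer()
                throws MalformedURLException {
            // Placeholder URL, queue size, and thread count -- adjust to your setup.
            StreamingUpdateSolrServer server =
                new StreamingUpdateSolrServer("http://localhost:8983/solr/core0", 100, 4);

            // Socket (read) timeout in milliseconds: a request that stalls longer
            // than this should fail instead of blocking forever. Inherited in
            // 3.5.0, so test whether it really applies to the streaming updates.
            server.setSoTimeout(300000);   // 5 minutes

            // Connection timeout is separate from the socket timeout.
            server.setConnectionTimeout(15000);

            return server;
        }
    }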
Good luck with the upgrade plan. Would you be able to upgrade the dependent jars for the existing SolrJ without an extensive approval process? I wouldn't be surprised if the answer is no.
On SOLR-1990, I don't think that's it, because unless blockUntilFinished() itself is broken, calling it more often than strictly necessary shouldn't be an issue.
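To put that another way, here is a hypothetical usage sketch (the method name and parameters are made up for illustration): once the internal queue has drained, an extra blockUntilFinished() call should simply return, so the redundant calls described in SOLR-1990 shouldn't by themselves cause a hang.

    import java.io.IOException;
    import java.util.Collection;
    import org.apache.solr.client.solrj.SolrServerException;
    import org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class IndexBatch {
        static void indexBatch(StreamingUpdateSolrServer server,
                               Collection<SolrInputDocument> docs)
                throws SolrServerException, IOException {
            // Queue the documents; background runner threads stream them to Solr.
            server.add(docs);

            // Waits until the internal queue is empty and the runners are done.
            server.blockUntilFinished();

            // A second call with nothing pending should return immediately,
            // unless blockUntilFinished() itself is broken.
            server.blockUntilFinished();
        }
    }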
Do you see any problems in the server log?

Thanks,
Shawn