Re: Solr LockObtainFailedException and NPEs for CoreAdmin STATUS
I've created a JIRA ticket now: https://issues.apache.org/jira/browse/SOLR-14969

I'd be really glad if a Solr developer could help or comment on the issue.

Thank you,
Andreas
Re: Solr LockObtainFailedException and NPEs for CoreAdmin STATUS
Hi,

after reading some Solr source code, I might have found the cause: there was indeed a change in Solr 8.6 that leads to the NullPointerException for the CoreAdmin STATUS request in CoreAdminOperation#getCoreStatus. The instancePath is no longer retrieved from the ResourceLoader, but from the registered CoreDescriptor; see commit [1].

SolrCore.getInstancePath(SolrCore.java:333) throws an NPE because the CoreContainer does not have a CoreDescriptor for the name, even though a SolrCore is available in the CoreContainer under that name (retrieved some lines above). This inconsistency is persistent: all STATUS requests keep failing until Solr is restarted.

IIUC, the underlying problem is that CoreContainer#create does not correctly handle concurrent requests to create the same core. There's a race condition (see TODO comment [2]), and CoreContainer#createFromDescriptor may be called subsequently for the same core. The second call then fails to create an IndexWriter (LockObtainFailedException), which causes a call to SolrCores#removeCoreDescriptor [3]. This means the second call removes the CoreDescriptor for the SolrCore created by the first call. That is the inconsistency that causes the NPE in CoreAdminOperation#getCoreStatus.

Does this sound reasonable? I'll create a JIRA ticket tomorrow, if that's okay.

Thank you,
Andreas

[1] https://github.com/apache/lucene-solr/commit/17ae79b0905b2bf8635c1b260b30807cae2f5463#diff-9652fe8353b7eff59cd6f128bb2699d88361e670b840ee5ca1018b1bc45584d1R324
[2] https://github.com/apache/lucene-solr/blob/15241573d3c8da0db3dfd380d99e4efcfe500c2e/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L1242
[3] https://github.com/apache/lucene-solr/blob/15241573d3c8da0db3dfd380d99e4efcfe500c2e/solr/core/src/java/org/apache/solr/core/CoreContainer.java#L1407
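If this analysis is right, the descriptor loss can be reproduced without any threads at all: a second CREATE that fails after the first one succeeded removes the first core's descriptor. A minimal sketch of that flow (the class and map names are illustrative models of the behavior described above, not Solr's actual internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model of the inconsistency described above; not Solr code.
// "descriptors" stands for the CoreDescriptor registry in SolrCores,
// "cores" for the live SolrCore instances.
class CoreRegistryModel {
    final Map<String, String> descriptors = new ConcurrentHashMap<>();
    final Map<String, Object> cores = new ConcurrentHashMap<>();

    void create(String name) {
        // createFromDescriptor registers the descriptor first...
        descriptors.put(name, "descriptor-for-" + name);
        try {
            if (cores.containsKey(name)) {
                // ...then fails for a duplicate request, like the
                // LockObtainFailedException on write.lock.
                throw new IllegalStateException("lock held for " + name);
            }
            cores.put(name, new Object());
        } catch (IllegalStateException e) {
            // Cleanup (removeCoreDescriptor) also discards the descriptor
            // belonging to the core the *first* call created successfully.
            descriptors.remove(name);
        }
    }

    // STATUS finds a SolrCore but no CoreDescriptor -> NPE in getInstancePath
    boolean statusWouldFail(String name) {
        return cores.containsKey(name) && !descriptors.containsKey(name);
    }
}
```

Calling create() twice for the same name leaves statusWouldFail() permanently true, mirroring how every subsequent STATUS request keeps failing until restart.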
Solr LockObtainFailedException and NPEs for CoreAdmin STATUS
Hi,

we're running tests on a stand-alone Solr instance, creating Solr cores from multiple applications using CoreAdmin (via SolrJ). Lately, we upgraded from 8.4.1 to 8.6.3, and sometimes we now see a LockObtainFailedException for a lock held by the same JVM, after which Solr is broken and runs into NullPointerExceptions for simple CoreAdmin STATUS requests. We then have to restart Solr. I've never seen this with 8.4.1 or previous releases. This bug is quite severe for us because it breaks our system tests with Solr, and we fear that it may also happen in production. Is this a known bug?

Our applications use a CoreAdmin STATUS request to check whether a core exists, followed by a CREATE request if the core does not exist. With multiple applications and bad timing, two concurrent CREATE requests for the same core are of course still possible. Solr 8.4.1 rejected duplicate requests and logged ERRORs but kept working correctly [1]. I can still see the same log messages in 8.6.3 ("Core with name ... already exists" or "Error CREATEing SolrCore ... Could not create a new core in ... as another core is already defined there") - but sometimes also the following error, after which Solr is broken:

2020-10-27 00:29:25.350 ERROR (qtp2029754983-24) [   ] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Error CREATEing SolrCore 'blueprint_acgqqafsogyc_comments': Unable to create core [blueprint_acgqqafsogyc_comments]
Caused by: Lock held by this virtual machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1312)
	at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:95)
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:367)
	...
Caused by: org.apache.solr.common.SolrException: Unable to create core [blueprint_acgqqafsogyc_comments]
	at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1408)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1273)
	... 47 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1071)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:906)
	at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1387)
	... 48 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2184)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2308)
	at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1130)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1012)
	... 50 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: /var/solr/data/blueprint_acgqqafsogyc_comments/data/index/write.lock
	at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:139)
	at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
	at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
	at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:785)
	at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:126)
	at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
	at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:261)
	at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:135)
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2145)

2020-10-27 00:29:25.353 INFO  (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={core=blueprint_acgqqafsogyc_comments=STATUS=false=javabin=2} status=500 QTime=0
2020-10-27 00:29:25.353 ERROR (qtp2029754983-19) [   ] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Error handling 'STATUS' action
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:372)
	at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:397)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:181)
	...
Caused by: java.lang.NullPointerException
	at org.apache.solr.core.SolrCore.getInstancePath(SolrCore.java:333)
	at org.apache.solr.handler.admin.CoreAdminOperation.getCoreStatus(CoreAdminOperation.java:329)
	at org.apache.solr.handler.admin.StatusOp.execute(StatusOp.java:54)
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:36
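Until the server-side race is fixed, one client-side mitigation is to drop the STATUS pre-check (a classic check-then-act race across processes) and issue CREATE unconditionally, treating an "already exists" rejection as success. A language-level sketch of that pattern, with a hypothetical CoreClient interface standing in for the SolrJ CoreAdmin CREATE call:

```java
// Hypothetical client interface; stands in for a SolrJ CoreAdmin CREATE call.
interface CoreClient {
    void createCore(String name) throws Exception; // throws if the core exists
}

class EnsureCore {
    /**
     * Returns true if this call created the core, false if it already
     * existed (possibly created concurrently by another application).
     */
    static boolean ensure(CoreClient client, String name) throws Exception {
        try {
            client.createCore(name);
            return true;
        } catch (Exception e) {
            String msg = String.valueOf(e.getMessage());
            // Match the duplicate-create messages quoted above.
            if (msg.contains("already exists") || msg.contains("already defined")) {
                return false; // lost the race; the core is there, carry on
            }
            throw e; // a real failure
        }
    }
}
```

This makes core creation idempotent from the client's point of view, so two applications racing on the same core name both end up in a valid state regardless of which CREATE wins.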
Re: Error Opening new IndexSearcher - LockObtainFailedException
Hmmm, 6.4 was considerably before the refactoring that this patch addresses, so it's not a surprise that it doesn't apply.

On Thu, Sep 21, 2017 at 10:28 PM, Shashank Pedamallu wrote:
> Hi Luiz,
>
> Unfortunately, I’m on version Solr-6.4.2 and the patch does not apply
> straight away.
>
> Thanks,
> Shashank
>
> On 9/21/17, 8:35 PM, "Luiz Armesto" wrote:
>
> Hi Shashank,
>
> There is an open issue about this exception [1]. Can you take a look and
> test the patch to see if it works in your case?
>
> [1] https://issues.apache.org/jira/browse/SOLR-11297
>
> On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" wrote:
>
> Hi,
>
> I’m seeing the following exception in Solr that gets automatically
> resolved eventually.
>
> [...]
Re: Error Opening new IndexSearcher - LockObtainFailedException
Hi Luiz,

Unfortunately, I’m on version Solr-6.4.2 and the patch does not apply straight away.

Thanks,
Shashank

On 9/21/17, 8:35 PM, "Luiz Armesto" wrote:

> Hi Shashank,
>
> There is an open issue about this exception [1]. Can you take a look and
> test the patch to see if it works in your case?
>
> [1] https://issues.apache.org/jira/browse/SOLR-11297
>
> On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" wrote:
>
> Hi,
>
> I’m seeing the following exception in Solr that gets automatically
> resolved eventually.
>
> [...]
Re: Error Opening new IndexSearcher - LockObtainFailedException
Hi Shashank,

There is an open issue about this exception [1]. Can you take a look and test the patch to see if it works in your case?

[1] https://issues.apache.org/jira/browse/SOLR-11297

On Sep 21, 2017 10:19 PM, "Shashank Pedamallu" wrote:

> Hi,
>
> I’m seeing the following exception in Solr that gets automatically
> resolved eventually.
>
> [...]
Error Opening new IndexSearcher - LockObtainFailedException
Hi,

I’m seeing the following exception in Solr that gets automatically resolved eventually.

2017-09-22 00:18:17.243 ERROR (qtp1702660825-17) [   x:spedamallu1-core-1] o.a.s.c.CoreContainer Error creating core [spedamallu1-core-1]: Error opening new searcher
org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:952)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:816)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
	at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1167)
	at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.eclipse.jetty.server.Server.handle(Server.java:534)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
	at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:925)
	... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: /Users/spedamallu/Desktop/mount-1/spedamallu1-core-1/data/index/write.lock
	at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
	at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
	at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
	at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:804)
	at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:125)
	at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
	at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:240)
	at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:114)
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1852)

I kind of have a theory of why this is happening. Can someone please confirm if this is indeed the case?

* I was trying to run a long running operation on a transient core, and it was getting evicted because of LRU
* So, to ensure my operation's completion, I have added a solrCore.open() call before
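The theory above involves SolrCore reference counting: SolrCore handles are refcounted, and the index writer's lock is only released once the last reference is closed, so an open() with no matching close() keeps write.lock held indefinitely. A minimal refcounting sketch (illustrative names, not Solr's actual SolrCore implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative refcounted core handle; not Solr's actual SolrCore class.
class RefCountedCore {
    private final AtomicInteger refCount = new AtomicInteger(1); // container's ref
    private volatile boolean lockHeld = true; // stands in for index write.lock

    RefCountedCore open() {          // each borrower takes a reference
        refCount.incrementAndGet();
        return this;
    }

    void close() {                   // each borrower must release its reference
        if (refCount.decrementAndGet() == 0) {
            lockHeld = false;        // only the final close frees the lock
        }
    }

    boolean isLockHeld() { return lockHeld; }
}
```

Under this model, an extra open() with no matching close() keeps the count above zero forever, which is consistent with a "lock held by this virtual machine" error that clears only once the leaked reference is eventually released.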
Re: Error opening new searcher due to LockObtainFailedException
Hmmm. Oddly, another poster was seeing this due to permissions issues, although I don't know why that would clear up after a while. But it's something to check.

Erick

On Wed, Aug 30, 2017 at 3:24 PM, Sundeep T wrote:
> Hello,
>
> Occasionally we are seeing errors opening new searcher for certain solr
> cores. Whenever this happens, we are unable to query or ingest new data
> into these cores. It seems to clear up after some time though. The root
> cause seems to be - "org.apache.lucene.store.LockObtainFailedException:
> Lock held by this virtual machine:
> /opt/solr/volumes/data9/7d50b38e114af075-core-24/data/index/write.lock"
>
> Below is the full stack trace. Any ideas on what could be going on that
> causes such an exception and how to mitigate this? thanks a lot for your
> help!
>
> Unable to create core
> [7d50b38e114af075-core-24],trace=org.apache.solr.common.SolrException:
> Unable to create core [7d50b38e114af075-core-24]
> [...]
Error opening new searcher due to LockObtainFailedException
Hello,

Occasionally we are seeing errors opening new searcher for certain solr cores. Whenever this happens, we are unable to query or ingest new data into these cores. It seems to clear up after some time though. The root cause seems to be: "org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: /opt/solr/volumes/data9/7d50b38e114af075-core-24/data/index/write.lock"

Below is the full stack trace. Any ideas on what could be going on that causes such an exception and how to mitigate this? Thanks a lot for your help!

Unable to create core [7d50b38e114af075-core-24],trace=org.apache.solr.common.SolrException: Unable to create core [7d50b38e114af075-core-24]
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:903)
	at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:1167)
	at org.apache.solr.servlet.HttpSolrCall.init(HttpSolrCall.java:252)
	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:418)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
	at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.eclipse.jetty.server.Server.handle(Server.java:534)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:952)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:816)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:890)
	... 30 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1891)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2011)
	at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1041)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:925)
	... 32 more
Caused by: org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine: /opt/solr/volumes/data9/7d50b38e114af075-core-24/data/index/write.lock
	at org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:127)
	at org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
	at org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
	at org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:104)
	at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:804)
	at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:125)
	at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:100)
	at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:240)
	at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:114)
	at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:1852)
	... 35 more
,code=500}
Gracefully stopping jetty server - LockObtainFailedException
Hi, I have a SolrCloud (4.1) setup with an embedded Jetty server. I use the commands below to start and stop the server.

Start server: nohup java -DSTOP.PORT=8085 -DSTOP.KEY=key -DnumShards=2 -Dbootstrap_confdir=./solr/nlp/conf -Dcollection.configName=myconf -DzkHost=10.88.139.206:2181,10.88.139.206:2182,10.88.139.206:2183 -jar start.jar > output.log 2>&1 &

Stop server: java -DSTOP.PORT=8085 -DSTOP.KEY=key -jar start.jar --stop

What I have observed is that once I stop the server and start it again, indexing fails with 'org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:' on 'NativeFSLock@solr/nlp/data/index.20130924205253479/write.lock'. After I delete the lock file manually and restart the server, indexing works fine. Please let me know how we can resolve this. If this issue was answered earlier, I would appreciate a pointer to the URL; I tried to find it but could not. Thanks in advance, Ashwin
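For stale write locks left behind by an unclean shutdown, Solr 4.x had a configuration-level mitigation. This is a sketch; verify the option names against your exact version (`unlockOnStartup` existed in the 4.x index configuration and was later removed in Solr 5), and note it is only safe when no other process can be writing to the same index:

```xml
<!-- solrconfig.xml, Solr 4.x sketch: clear a leftover write.lock at startup.
     unlockOnStartup was removed in Solr 5; do not use if another process
     might hold the lock legitimately. -->
<indexConfig>
  <lockType>native</lockType>
  <unlockOnStartup>true</unlockOnStartup>
</indexConfig>
```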
LockObtainFailedException and older segment_N files are present
Hi, I have a searcher server replicating its index from a master server. Recently I noticed a huge difference in index size between master and slave, followed by a LockObtainFailedException in the catalina.out log. When I inspected the searcher's index folder, I saw more than 100 segments_N files in it. After debugging, I found the root cause to be a misconfigured solrconfig.xml: I was using Solr 3.4, and the file had an indexConfig section instead of the mainIndex section, so Solr was using the simple file lock rather than the configured native lock (http://wiki.apache.org/solr/SolrConfigXml#indexConfig). Rectifying this configuration fixed the error; replication subsequently worked fine, the older segments_N files got deleted, and the searcher core size finally matched the indexer core size. What I would like to understand here is:

1. Why would it cause a lock obtain timeout with the simple file lock? (Maybe the default writeLockTimeout was too short.)
2. What happens when this LockObtainFailedException occurs during replication - will it fail to replicate the docs? I have observed that the searcher has the same numFound as the indexer, which means searching clients wouldn't see any missing documents.
3. Why did the index size bloat, and why were the older segments_N files still present?

Thanks in advance! -- -JAME
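For reference, the misconfiguration described above comes down to which element name a given Solr version actually reads. In Solr 3.x the lock settings lived under `<mainIndex>` (with `<indexDefaults>` as fallback); a `<indexConfig>` section, which later versions use, was ignored there, silently leaving the default lock in place. A sketch for 3.x, where 1000 ms is the commonly documented default `writeLockTimeout` rather than a recommendation:

```xml
<!-- solrconfig.xml, Solr 3.x sketch: lock settings must be under mainIndex,
     not indexConfig, for this version to pick them up. -->
<mainIndex>
  <lockType>native</lockType>
  <writeLockTimeout>1000</writeLockTimeout>
</mainIndex>
```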
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
Will check later whether using different data dirs for the core on each instance helps. But because each Solr sits in its own OpenVZ instance (a separate virtual server), they should be totally separated, at least by my understanding of virtualization. Will check and get back here... Thanks.

On Wed, Jun 13, 2012 at 8:10 PM, Mark Miller markrmil...@gmail.com wrote:

Thats an interesting data dir location: NativeFSLock@/home/myuser/data/index/write.lock. Where are the other data dirs located? Are you sharing one drive or something? It looks like something already has a writer lock - are you sure another solr instance is not running somehow?
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
OK, I think I have found it. When starting the 4 Solr instances via start.jar, I always provided the data directory property *-Dsolr.data.dir=/home/myuser/data*. After removing this, it worked fine. What is weird is that all 4 instances are totally separated, so instance-2 should never conflict with instance-1; they could also be on totally different physical servers. Thanks. Daniel

On Wed, Jun 13, 2012 at 8:10 PM, Mark Miller markrmil...@gmail.com wrote:

Thats an interesting data dir location: NativeFSLock@/home/myuser/data/index/write.lock. Where are the other data dirs located? Are you sharing one drive or something? It looks like something already has a writer lock - are you sure another solr instance is not running somehow?
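The fix described above amounts to not pointing every instance at the same path: if an explicit data dir is wanted at all, it must be unique per instance. A sketch with hypothetical paths (other startup flags omitted):

```
# one unique index directory per Solr instance (hypothetical paths)
java -Dsolr.data.dir=/home/myuser/instance1/data -jar start.jar   # instance 1
java -Dsolr.data.dir=/home/myuser/instance2/data -jar start.jar   # instance 2
```

Leaving -Dsolr.data.dir unset, as Daniel did, makes each core default to its own data directory under its instanceDir, which avoids the collision entirely.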
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
On 6/14/2012 2:05 AM, Daniel Brügge wrote: Will check later to use different data dirs for the core on each instance. But because each Solr sits in its own OpenVZ instance (virtual server respectively) they should be totally separated. At least from my point of understanding virtualization.

Depending on how your VMs are configured, their filesystems could be mapped to the same place on the host's filesystem. What you describe sounds like this is the case.
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
Aha, OK. That was new to me. Will check this. Thanks.

On Thu, Jun 14, 2012 at 3:52 PM, Yury Kats yuryk...@yahoo.com wrote:

Depending on how your VMs are configured, their filesystems could be mapped to the same place on the host's filesystem. What you describe sounds like this is the case.
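One quick way to test whether two containers really see separate storage (assuming standard Linux tools are available inside each container; the path is the one from the lock error) is to compare the filesystem and the lock file's inode from inside both:

```
# run inside each OpenVZ container and compare the output
df /home/myuser/data
ls -li /home/myuser/data/index/write.lock
```

If both containers report the same device and the lock file shows the same inode number, the directory is backed by the same host storage, and the two Solr instances are fighting over one write.lock.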
LockObtainFailedException after trying to create cores on second SolrCloud instance
Hi, I am struggling with creating multiple collections on a 4-instance SolrCloud setup: I have 4 virtual OpenVZ instances with SolrCloud installed on each, and a standalone ZooKeeper running on one of them. Loading the Solr configuration into ZK works fine. Then I start up the 4 instances and everything runs smoothly. After that I add one core with the name e.g. '123'. This core is correctly visible on the instance I used to create it. It maps like '123' shard1 - virtual-instance-1. After that I create a core with the same name '123' on the second instance; it is created, but an exception is thrown after a while and the cluster state of the newly created core goes to 'recovering':

123:{shard1:{
  virtual-instance-1:8983_solr_123:{
    shard:shard1, roles:null, leader:true, state:active,
    core:123, collection:123,
    node_name:virtual-instance-1:8983_solr,
    base_url:http://virtual-instance-1:8983/solr},
  virtual-instance-2:8983_solr_123:{
    shard:shard1, roles:null, state:recovering,
    core:123, collection:123,
    node_name:virtual-instance-2:8983_solr,
    base_url:http://virtual-instance-2:8983/solr}}}

The exception is thrown on the first virtual instance:

Jun 13, 2012 2:18:40 PM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/myuser/data/index/write.lock
    at org.apache.lucene.store.Lock.obtain(Lock.java:84)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:607)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:58)
    at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:112)
    at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:52)
    at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:364)
    at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:82)
    at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
    at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:919)
    at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
    at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
    at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:1566)
    at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:442)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:263)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
    at org.eclipse.jetty.server.Server.handle(Server.java:351)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
    at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:900)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:954)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:857)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
    at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
    at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
BTW: I am running the Solr instances with -Xms512M -Xmx1024M, so not so little memory. Daniel
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
Thats an interesting data dir location: NativeFSLock@/home/myuser/data/index/write.lock. Where are the other data dirs located? Are you sharing one drive or something? It looks like something already has a writer lock - are you sure another solr instance is not running somehow?
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
What command are you using to create the cores? I had this sort of problem, and it was because I'd accidentally created two cores with the same instanceDir within the same Solr process. Make sure you don't have that kind of collision. The easiest way is to specify an explicit instanceDir and dataDir. Best, Casey Callendrello
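Casey's suggestion of explicit directories can be passed directly on the CoreAdmin CREATE call. A sketch against the core name from this thread, with hypothetical host and paths:

```
# CoreAdmin CREATE with an explicit instanceDir and dataDir
# (hypothetical host and paths; core/collection names from the thread)
curl "http://virtual-instance-2:8983/solr/admin/cores?action=CREATE&name=123&collection=123&shard=shard1&instanceDir=/home/myuser/solr/123&dataDir=/home/myuser/solr/123/data"
```

With a unique dataDir per core, two cores can never end up contending for the same index/write.lock even if they share an instanceDir by accident.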
Re: LockObtainFailedException
Hi Peter, I found the issue. We were indeed getting this exception because of JVM heap space: I had allocated 512m Xms and 1024m Xmx. I also increased the write-lock timeout to 20 secs; things seemed to work, but that alone did not help. On closer analysis of the docs we were indexing, we saw we were using commitWithin of 10 secs, which was the root cause of the long indexing time because of so many segments to be committed. Issuing a separate commit command using curl solved the issue. The performance improved from 3 mins to 1.5 secs :) Thanks a lot, Naveen On Thu, Aug 11, 2011 at 6:27 PM, Peter Sturge peter.stu...@gmail.com wrote: Optimizing indexing time is a very different question. I'm guessing the 3mins+ time you refer to is the commit time. There are a whole host of things to take into account regarding indexing, like: number of segments, schema, how many fields, storing fields, omitting norms, caching, autowarming, search activity etc. - the list goes on... The trouble is, you can look at 100 different Solr installations with slow indexing, and find 200 different reasons why each is slow. The best place to start is to get a full understanding of precisely how your data is being stored in the index, starting with adding docs, going through your schema, Lucene segments, solrconfig.xml etc., looking at caches, commit triggers etc. - really getting to know how each step is affecting performance. Once you really have a handle on all the indexing steps, you'll be able to spot the bottlenecks that relate to your particular environment. An index of 4.5GB isn't that big (but the number of documents tends to have more of an effect than the physical size), so the bottleneck(s) should be findable once you trace through the indexing operations.
On Thu, Aug 11, 2011 at 1:02 PM, Naveen Gupta nkgiit...@gmail.com wrote: Yes, this was happening because of the JVM heap size. But the real issue is that as our index size grows (very high), indexing takes very long (using streaming): earlier, indexing 15,000 docs at a time (commit after 15,000 docs) took 3 mins 20 secs; after deleting the index data, it takes 9 secs. What would be the approach to get better indexing performance while also keeping the index size down? The index size was around 4.5 GB. Thanks Naveen On Thu, Aug 11, 2011 at 3:47 PM, Peter Sturge peter.stu...@gmail.com wrote: Hi, When you get this exception with no other error or explanation in the logs, this is almost always because the JVM has run out of memory. Have you checked/profiled your mem usage/GC during the stream operation? On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta nkgiit...@gmail.com wrote: Hi, We are doing streaming updates to Solr for multiple users. We are getting: Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped; see the original message below]
Re: LockObtainFailedException
Hi, When you get this exception with no other error or explanation in the logs, this is almost always because the JVM has run out of memory. Have you checked/profiled your mem usage/GC during the stream operation? On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta nkgiit...@gmail.com wrote: Hi, We are doing streaming updates to Solr for multiple users. We are getting: Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped] Aug 10, 2011 12:00:16 PM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped]
Re: LockObtainFailedException
Yes, this was happening because of the JVM heap size. But the real issue is that as our index size grows (very high), indexing takes very long (using streaming): earlier, indexing 15,000 docs at a time (commit after 15,000 docs) took 3 mins 20 secs; after deleting the index data, it takes 9 secs. What would be the approach to get better indexing performance while also keeping the index size down? The index size was around 4.5 GB. Thanks Naveen On Thu, Aug 11, 2011 at 3:47 PM, Peter Sturge peter.stu...@gmail.com wrote: Hi, When you get this exception with no other error or explanation in the logs, this is almost always because the JVM has run out of memory. Have you checked/profiled your mem usage/GC during the stream operation? On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta nkgiit...@gmail.com wrote: Hi, We are doing streaming updates to Solr for multiple users. We are getting: Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped] Aug 10, 2011 12:00:16 PM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped]
Re: LockObtainFailedException
Optimizing indexing time is a very different question. I'm guessing the 3mins+ time you refer to is the commit time. There are a whole host of things to take into account regarding indexing, like: number of segments, schema, how many fields, storing fields, omitting norms, caching, autowarming, search activity etc. - the list goes on... The trouble is, you can look at 100 different Solr installations with slow indexing, and find 200 different reasons why each is slow. The best place to start is to get a full understanding of precisely how your data is being stored in the index, starting with adding docs, going through your schema, Lucene segments, solrconfig.xml etc., looking at caches, commit triggers etc. - really getting to know how each step is affecting performance. Once you really have a handle on all the indexing steps, you'll be able to spot the bottlenecks that relate to your particular environment. An index of 4.5GB isn't that big (but the number of documents tends to have more of an effect than the physical size), so the bottleneck(s) should be findable once you trace through the indexing operations. On Thu, Aug 11, 2011 at 1:02 PM, Naveen Gupta nkgiit...@gmail.com wrote: Yes, this was happening because of the JVM heap size. But the real issue is that as our index size grows (very high), indexing takes very long (using streaming): earlier, indexing 15,000 docs at a time (commit after 15,000 docs) took 3 mins 20 secs; after deleting the index data, it takes 9 secs. What would be the approach to get better indexing performance while also keeping the index size down? The index size was around 4.5 GB. Thanks Naveen On Thu, Aug 11, 2011 at 3:47 PM, Peter Sturge peter.stu...@gmail.com wrote: Hi, When you get this exception with no other error or explanation in the logs, this is almost always because the JVM has run out of memory. Have you checked/profiled your mem usage/GC during the stream operation? On Thu, Aug 11, 2011 at 3:18 AM, Naveen Gupta nkgiit...@gmail.com wrote: Hi, We are doing streaming updates to Solr for multiple users. We are getting: Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped] Aug 10, 2011 12:00:16 PM org.apache.solr.common.SolrException log SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock [stack trace snipped]
LockObtainFailedException
Hi, We are doing streaming updates to Solr for multiple users. We are getting:

Aug 10, 2011 11:56:55 AM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1097)
at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:83)
at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:102)
at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:174)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:222)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:147)
at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:77)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:55)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1360)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint

Aug 10, 2011 12:00:16 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/var/lib/solr/data/index/write.lock
[same stack trace as above, ending with:]
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)
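For anyone curious about the mechanics behind the Lock.obtain frame at the top of the trace: the lock is a file on disk, and obtaining it means creating that file atomically, polling until a timeout. Below is a minimal, self-contained Java sketch of that obtain-with-timeout pattern. All names (SimpleLockDemo, the poll interval) are illustrative assumptions, not Lucene's actual code:

```java
import java.io.File;
import java.io.IOException;

// Sketch of file-based lock acquisition with a timeout, the pattern behind
// "Lock obtain timed out" above. NOT Lucene's real SimpleFSLock/NativeFSLock code.
public class SimpleLockDemo {
    static final long POLL_MS = 50;

    // Try to create the lock file atomically, polling until timeoutMs elapses.
    // Returning false corresponds to Lucene throwing LockObtainFailedException.
    public static boolean obtain(File lockFile, long timeoutMs)
            throws IOException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        do {
            if (lockFile.createNewFile()) { // atomic: succeeds only if the file was absent
                return true;                // lock acquired
            }
            Thread.sleep(POLL_MS);          // someone else holds it; wait and retry
        } while (System.currentTimeMillis() < deadline);
        return false;                       // timed out: "Lock obtain timed out"
    }

    public static void main(String[] args) throws Exception {
        File lock = File.createTempFile("demo", ".lock");
        lock.delete();                               // start unlocked
        System.out.println(obtain(lock, 200));       // true: first writer wins
        System.out.println(obtain(lock, 200));       // false: lock still held
        lock.delete();                               // release the lock
    }
}
```

This also shows why a stale write.lock left behind by a crashed or leaked writer blocks every later writer until the file is removed.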
LockObtainFailedException and open finalizing IndexWriters
Hi, we are running Solr 3.2.0 on Jetty for a web application. Since we just went online and are still in beta tests, we don't have very much load on our servers (indeed, they're currently much oversized for the current usage), and our index size on the file system is just 1.1 MB. We have one dedicated Solr instance for updates, and two replicated read-only servers for requests. The update server gets filled by three different Java web servers; each has a distinct Quartz job for its updates. Every such Quartz job takes all collected updates, sends them via Solrj's addBeans() method, and from time to time sends an additional commit() after that. Each update job has a CommonsHttpSolrServer instance, which is a Spring-controlled singleton. We already had LockObtainFailedExceptions before, occurring every few days. Sometimes, we had such an exception before it: org.apache.solr.common.SolrException: java.io.IOException: directory '/data/solr/data/index' exists and is a directory, but cannot be listed: list() returned null This looks as if there were no more file handles available from the operating system. That is strange, since the only index directory never had more than 100 files, if ever. However, we raised ulimit -n from 1024 to 4096, and reduced mergeFactor from 10 to 5, which at first helped with our problem. Until yesterday.
Again, we had this:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@solr/main/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:84)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1114)
at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:83)
at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:101)
at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:174)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:222)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:61)
...

When we deleted the write.lock file without restarting Solr, several hours later we had 441 identical log entries: Jul 18, 2011 7:20:29 AM org.apache.solr.update.SolrIndexWriter finalize SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE LEAK!!! Wow, if there really were 441 open IndexWriters trying to access the index directory, it's no wonder that there will be lock timeouts sooner or later! However, I have no clue why there are so many IndexWriters opened and never closed. The only accessing Solr instances are pure Java applications using Solrj. Each application only has one SolrServer instance - and even if not, this shouldn't harm, AFAIK. The update job is started every five seconds. The installation is a pure 3.2.0 Solr, without additional jars. And all jars are of the correct revision. The solrconfig.xml is based on the example configuration, with nothing special. We currently don't have any extensions of our own running. There is absolutely only one Jetty instance running on the machine. And I checked the solr.xml: only one core is defined, and we don't do any additional core administration. I'm using Solr since the beginning of 2010, but never had such a problem. Any help is welcome. Greetings, Kuli
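The 441 "not closed prior to finalize()" warnings mean writers were opened and never closed, so each kept the write lock alive until garbage collection. As a generic illustration of the underlying pattern (assumption: Writer, OPEN, and indexSafely are hypothetical names standing in for SolrIndexWriter, not Solr's actual API), close-in-finally / try-with-resources guarantees the lock is released even when indexing throws:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustration of why writers leak: if close() only happens on the happy path,
// an exception during indexing leaves the write lock held until finalize().
// All names here are hypothetical, not Solr's API.
public class WriterLeakDemo {
    public static final AtomicInteger OPEN = new AtomicInteger();

    static class Writer implements AutoCloseable {
        Writer() { OPEN.incrementAndGet(); }            // stands in for "acquire write.lock"
        void addDoc(String doc) {
            if (doc == null) throw new IllegalArgumentException("bad doc");
        }
        @Override public void close() { OPEN.decrementAndGet(); } // "release write.lock"
    }

    // try-with-resources guarantees close() even when addDoc throws.
    public static void indexSafely(String doc) {
        try (Writer w = new Writer()) {
            w.addDoc(doc);
        } catch (IllegalArgumentException e) {
            // the document was rejected, but the lock was still released
        }
    }

    public static void main(String[] args) {
        indexSafely("ok");
        indexSafely(null);                // throws internally, still closes
        System.out.println(OPEN.get());   // 0: no leaked writers
    }
}
```

If a code path opens a writer and skips close() on failure, the open count climbs exactly the way the 441 finalizer warnings suggest.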
LockObtainFailedException
Will anyone help me with why and how?

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/usr/local/searchengine/apache-solr-1.2.0/fr_companies/solr/data/index/write.lock
at org.apache.lucene.store.Lock.obtain(Lock.java:70)
at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:579)
at org.apache.lucene.index.IndexWriter.&lt;init&gt;(IndexWriter.java:341)
at org.apache.solr.update.SolrIndexWriter.&lt;init&gt;(SolrIndexWriter.java:65)
at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:120)
at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:181)
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:259)
at org.apache.solr.handler.XmlUpdateRequestHandler.update(XmlUpdateRequestHandler.java:166)
at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:84)

Thanks, Jae Joo
Re: LockObtainFailedException
Quick fix: look for a Lucene lock file in your tmp directory and delete it, then restart Solr; it should start. I am an idiot though, so be careful; in fact, I'm worse than an idiot, I know a little :-) You've got a lock file somewhere though; deleting it will help you out. For me it was in my /tmp directory. On 27 Sep 2007, at 14:10, Jae Joo wrote: Will anyone help me with why and how? org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/usr/local/searchengine/apache-solr-1.2.0/fr_companies/solr/data/index/write.lock [stack trace snipped] Thanks, Jae Joo
Re: LockObtainFailedException
In solrconfig.xml:

&lt;useCompoundFile&gt;false&lt;/useCompoundFile&gt;
&lt;mergeFactor&gt;10&lt;/mergeFactor&gt;
&lt;maxBufferedDocs&gt;25000&lt;/maxBufferedDocs&gt;
&lt;maxMergeDocs&gt;1400&lt;/maxMergeDocs&gt;
&lt;maxFieldLength&gt;500&lt;/maxFieldLength&gt;
&lt;writeLockTimeout&gt;1000&lt;/writeLockTimeout&gt;
&lt;commitLockTimeout&gt;1&lt;/commitLockTimeout&gt;

Is writeLockTimeout too small? Thanks, Jae

On 9/27/07, matt davies [EMAIL PROTECTED] wrote: Quick fix: look for a Lucene lock file in your tmp directory and delete it, then restart Solr; it should start. I am an idiot though, so be careful; in fact, I'm worse than an idiot, I know a little :-) You've got a lock file somewhere though; deleting it will help you out. For me it was in my /tmp directory. On 27 Sep 2007, at 14:10, Jae Joo wrote: Will anyone help me with why and how? org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/usr/local/searchengine/apache-solr-1.2.0/fr_companies/solr/data/index/write.lock [stack trace snipped] Thanks, Jae Joo
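For context: to my knowledge these timeouts are in milliseconds, so the values above wait 1 second for the write lock and only 1 ms for the commit lock, which is very short. A sketch of more forgiving values (assumption: the exact numbers are illustrative only; 20000 ms matches the 20-sec write-lock timeout another poster in this thread reported using):

```xml
<!-- Sketch: values are in milliseconds and are illustrative, not a recommendation -->
<writeLockTimeout>20000</writeLockTimeout>   <!-- wait up to 20 s for write.lock -->
<commitLockTimeout>10000</commitLockTimeout> <!-- wait up to 10 s for the commit lock -->
```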