Re: Transient commit errors during autocommit
Lance, I have seen this error when the Solr process hit its maximum file descriptor limit (because the commit triggered an optimize). Make sure your maxfds is set as high as possible; in my case, 1024 was not nearly sufficient. --Casey

On 10/19/12 6:20 PM, Lance Norskog wrote:
> When a transient error happens during an autocommit, the error does not cause a safe rollback or notify the user that there was a problem. Instead, there is a write lock failure and Solr has to be restarted. It runs fine after restart. Is this a known problem? Is it fixable? Is it unit-testable?
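For readers hitting the same wall: a quick way to check the descriptor limits in play. This is a minimal sketch; the `start.jar` process name is an assumption about a typical Jetty-based Solr install, and `/proc` is Linux-specific.

```shell
# Show the soft and hard file-descriptor limits for the current shell.
# Solr inherits these from the shell/init script that launched it.
ulimit -Sn
ulimit -Hn

# For an already-running Solr process (assuming it was started via start.jar),
# inspect its effective limits directly:
# pid=$(pgrep -f start.jar)
# grep 'Max open files' "/proc/${pid}/limits"
```

Raising the limit (e.g. via `ulimit -n` before launch, or `/etc/security/limits.conf`) must happen before Solr starts; a running JVM cannot pick up a new limit.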
Re: Can solr return matched fields?
What about using the FastVectorHighlighter? It should get you what you're looking for (the fields with matches) without much of a query-time performance impact. --Casey

On 9/12/12 3:01 PM, Dan Foley wrote:
> Is there a way for Solr to tell me which fields the query matched, other than turning debug on? I'd like my application to take different actions based on which fields were matched.
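Not part of the original reply, but for context: the FastVectorHighlighter requires term vectors with positions and offsets on each highlighted field. A sketch of the schema and query parameters involved (the field names are hypothetical):

```xml
<!-- schema.xml: FVH only works on fields indexed with term vectors,
     positions, and offsets -->
<field name="title" type="text_general" indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>

<!-- Query-time parameters, appended to a /select request:
       hl=true
       hl.fl=title,body
       hl.useFastVectorHighlighter=true
       hl.requireFieldMatch=true   (only report fields that actually matched)
-->
```

With `hl.requireFieldMatch=true`, the highlighting section of the response only lists fields in which the query actually matched, which is the signal the application can branch on.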
Re: Solr 4.0 beta deadlock / file descriptor spike
For the record, this was caused by a rookie mistake: FD exhaustion. --Casey

On 8/24/12 11:24 AM, Casey Callendrello wrote:
> Hi there, I have been doing some load testing with Solr 4 beta (now, trunk). My configuration is fairly simple: two servers, replicating via SolrCloud. SolrCloud is configured as recommended in the wiki:
>
>   <updateRequestProcessorChain name="standard">
>     <processor class="solr.LogUpdateProcessorFactory" />
>     <processor class="solr.DistributedUpdateProcessorFactory" />
>     <processor class="solr.RunUpdateProcessorFactory" />
>   </updateRequestProcessorChain>
>
> Twice now I've seen sudden thread and file-descriptor spikes along with a complete deadlock, simultaneously on both machines. My max FDs is set to 1024, and (excepting the spikes) I never see usage over 375 FDs.
>
> The first FD spike was with an older trunk revision. It was coincident with a corrupt transaction log. I've lost the logs, unfortunately, but Solr tried to re-process the same log over and over, leaking FDs and dying. The upgraded version has not reported the corrupt-transaction issue prior to deadlock. However, according to the log files, the deadlock persists for about 5 minutes prior to FD exhaustion. The last log line is simply "INFO: end_commit_flush". Upon restart, I see a frightening number of "corrupt transaction log" exceptions and "New transaction log already exists" exceptions.
>
> Any thoughts? Contact me for the thread dump; it's 1 MiB.
>
> Thanks,
> --Casey C.
Solr 4.0 beta deadlock / file descriptor spike
Hi there, I have been doing some load testing with Solr 4 beta (now, trunk). My configuration is fairly simple: two servers, replicating via SolrCloud. SolrCloud is configured as recommended in the wiki:

  <updateRequestProcessorChain name="standard">
    <processor class="solr.LogUpdateProcessorFactory" />
    <processor class="solr.DistributedUpdateProcessorFactory" />
    <processor class="solr.RunUpdateProcessorFactory" />
  </updateRequestProcessorChain>

Twice now I've seen sudden thread and file-descriptor spikes along with a complete deadlock, simultaneously on both machines. My max FDs is set to 1024, and (excepting the spikes) I never see usage over 375 FDs.

The first FD spike was with an older trunk revision. It was coincident with a corrupt transaction log. I've lost the logs, unfortunately, but Solr tried to re-process the same log over and over, leaking FDs and dying. The upgraded version has not reported the corrupt-transaction issue prior to deadlock. However, according to the log files, the deadlock persists for about 5 minutes prior to FD exhaustion. The last log line is simply "INFO: end_commit_flush". Upon restart, I see a frightening number of "corrupt transaction log" exceptions and "New transaction log already exists" exceptions.

Any thoughts? Contact me for the thread dump; it's 1 MiB.

Thanks,
--Casey C.
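For anyone reproducing this kind of load test: a lightweight way to watch descriptor usage over time. A minimal sketch; it counts the current shell's own descriptors as a self-contained demo, and you would substitute the Solr pid (the `start.jar` pattern is an assumption). `/proc` is Linux-specific.

```shell
# Count open file descriptors for a process via /proc (Linux).
# In practice, set pid=$(pgrep -f start.jar) to watch Solr itself;
# here we use our own shell so the example is runnable anywhere.
pid=$$
count=$(ls "/proc/${pid}/fd" | wc -l)
echo "pid ${pid} has ${count} open file descriptors"
```

Sampling this in a loop during load testing makes an FD leak visible well before the hard limit is hit.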
Re: Solr 4.0 UI issue
This is also seen when there are no cores defined in solr.xml. Check that your solr.xml is in a useful place and has cores defined. Alternatively, issue an appropriate CoreAdmin request to create one. --Casey Callendrello

On 7/6/12 9:57 AM, anarchos78 wrote:
> Didn't help.
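For reference, a minimal solr.xml in the Solr 4.0 style, defining one core. The core name and instanceDir are placeholders; this is a sketch, not the full file shipped with the examples.

```xml
<!-- $SOLR_HOME/solr.xml: the admin UI needs at least one core defined -->
<solr persistent="true">
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="collection1" />
  </cores>
</solr>
```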
Re: LockObtainFailedException after trying to create cores on second SolrCloud instance
What command are you using to create the cores? I had this sort of problem, and it was because I'd accidentally created two cores with the same instanceDir within the same Solr process. Make sure you don't have that kind of collision. The easiest way is to specify an explicit instanceDir and dataDir.

Best,
Casey Callendrello

On 6/13/12 7:28 AM, Daniel Brügge wrote:
> Hi,
>
> I am struggling with creating multiple collections on a 4-instance SolrCloud setup: I have 4 virtual OpenVZ instances, with SolrCloud installed on each, and a standalone ZooKeeper also running on one of them. Loading the Solr configuration into ZK works fine. Then I start up the 4 instances and everything runs smoothly.
>
> After that I add one core with the name e.g. '123'. This core is correctly visible on the instance I used to create it. It maps like: '123' shard1 - virtual-instance-1.
>
> After that I create a core with the same name '123' on the second instance. It is created, but an exception is thrown after a while, and the cluster state of the newly created core goes to 'recovering':
>
>   123:{shard1:{
>       virtual-instance-1:8983_solr_123:{
>         shard:shard1,
>         roles:null,
>         leader:true,
>         state:active,
>         core:123,
>         collection:123,
>         node_name:virtual-instance-1:8983_solr,
>         base_url:http://virtual-instance-1:8983/solr},
>       virtual-instance-2:8983_solr_123:{
>         shard:shard1,
>         roles:null,
>         state:recovering,
>         core:123,
>         collection:123,
>         node_name:virtual-instance-2:8983_solr,
>         base_url:http://virtual-instance-2:8983/solr}}}
>
> The exception is thrown on the first virtual instance:
>
>   Jun 13, 2012 2:18:40 PM org.apache.solr.common.SolrException log
>   SEVERE: null:org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock@/home/myuser/data/index/write.lock
>     at org.apache.lucene.store.Lock.obtain(Lock.java:84)
>     at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:607)
>     at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:58)
>     at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:112)
>     at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:52)
>     at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:364)
>     at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:82)
>     at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
>     at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:919)
>     at org.apache.solr.update.processor.LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:154)
>     at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
>     at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
>     at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
>     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1566)
>     at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:442)
>     at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:263)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
>     at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
>     at org.eclipse.jetty.server.Server.handle(Server.java:351)
>     at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
>     at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47
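The advice above — give each core an explicit instanceDir and dataDir — looks like this as a CoreAdmin CREATE request. The host, core name, and directory values are hypothetical; this just sketches the shape of the URL so no two cores can collide on the same index directory (the usual cause of the NativeFSLock "Lock obtain timed out" on write.lock).

```shell
# Build a CoreAdmin CREATE request with explicit instanceDir and dataDir.
# Distinct directories per core prevent two IndexWriters from fighting
# over the same write.lock.
SOLR_URL="http://localhost:8983/solr"
CORE="123"
CREATE="${SOLR_URL}/admin/cores?action=CREATE&name=${CORE}&collection=123&shard=shard1&instanceDir=${CORE}&dataDir=data"
echo "${CREATE}"
# curl "${CREATE}"   # uncomment to issue against a live instance
```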
field type for Icelandic
Hi there, Are there any best practices / recommendations for indexing and searching Icelandic text? It's not one of the languages provided as a field type in the example schema. I've been using text_general, but (as you all surely know) that is only a stopgap solution at best. I'm looking for a reasonable way to stem, expand synonyms, and remove stopwords. Thanks! --Casey
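For the archive: Snowball ships no Icelandic stemmer, so one option in Solr 3.5+/4.x is solr.HunspellStemFilterFactory with an Icelandic (is_IS) Hunspell dictionary, plus your own stopword and synonym files. A sketch only; the dictionary, stopword, and synonym files named here are assumptions you must supply yourself:

```xml
<!-- schema.xml: a hypothetical Icelandic field type.
     lang/is_IS.dic, lang/is_IS.aff, lang/stopwords_is.txt and
     lang/synonyms_is.txt are NOT shipped with Solr; provide your own. -->
<fieldType name="text_is" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="lang/stopwords_is.txt"/>
    <filter class="solr.SynonymFilterFactory" synonyms="lang/synonyms_is.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.HunspellStemFilterFactory" dictionary="lang/is_IS.dic"
            affix="lang/is_IS.aff" ignoreCase="true"/>
  </analyzer>
</fieldType>
```

Hunspell stemming is dictionary-driven rather than algorithmic, so quality depends entirely on the is_IS dictionary you use; test it against real queries before committing to it.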