[jira] [Commented] (PYLUCENE-32) pylucene CharArraySet jvm error
[ https://issues.apache.org/jira/browse/PYLUCENE-32?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174781#comment-14174781 ] Alex commented on PYLUCENE-32: --

Thanks Andi. But I am using PyLucene version 3.6.2. I think the problem has to do with JVM instantiation caused by Java-Python array incompatibility issues, but I don't know how to solve this. Below are the Java files whose classes I added to lucene core; perhaps you will have more understanding of what the issue is:

The lemmatizer:

/*
 * Lemmatizing library for Lucene
 * Copyright (C) 2010 Lars Buitinck
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see http://www.gnu.org/licenses/.
 */

package englishlemma;

import java.io.*;

import edu.stanford.nlp.tagger.maxent.MaxentTagger;
import edu.stanford.nlp.tagger.maxent.TaggerConfig;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;

/**
 * An analyzer that uses an {@link EnglishLemmaTokenizer}.
 *
 * @author Lars Buitinck
 * @version 2010.1006
 */
public class EnglishLemmaAnalyzer extends Analyzer {
    private MaxentTagger posTagger;

    /**
     * Construct an analyzer with a tagger using the given model file.
     */
    public EnglishLemmaAnalyzer(String posModelFile) throws Exception {
        this(makeTagger(posModelFile));
    }

    /**
     * Construct an analyzer using the given tagger.
     */
    public EnglishLemmaAnalyzer(MaxentTagger tagger) {
        posTagger = tagger;
    }

    /**
     * Factory method for loading a POS tagger.
     */
    public static MaxentTagger makeTagger(String modelFile) throws Exception {
        TaggerConfig config = new TaggerConfig("-model", modelFile);
        // The final argument suppresses a loading message on stderr.
        return new MaxentTagger(modelFile, config, false);
    }

    @Override
    public TokenStream tokenStream(String fieldName, Reader input) {
        return new EnglishLemmaTokenizer(input, posTagger);
    }
}

The tokenizer for the lemmatizer:

/*
 * Lemmatizing library for Lucene
 * Copyright (c) 2010-2011 Lars Buitinck
 *
 * This program is free software: you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program. If not, see http://www.gnu.org/licenses/.
 */

package englishlemma;

import java.io.*;
import java.util.*;
import java.util.regex.*;

import com.google.common.collect.Iterables;

import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.process.Morphology;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;

/**
 * A tokenizer that retrieves the lemmas (base forms) of English words.
 * Relies internally on the sentence splitter and tokenizer supplied with
 * the Stanford POS tagger.
 *
 * @author Lars Buitinck
 * @version 2011.0122
 */
public class EnglishLemmaTokenizer extends TokenStream {
    private Iterator<TaggedWord> tagged;
    private PositionIncrementAttribute posIncr;
    private TaggedWord currentWord;
    private TermAttribute termAtt;
    private boolean lemmaNext;

    /**
     * Construct a tokenizer processing the given input and a tagger
     * using the given model file.
     */
    public EnglishLemmaTokenizer(Reader input, String posModelFile)
            throws Exception {
        this(input, EnglishLemmaAnalyzer.makeTagger(posModelFile));
    }

    /**
     * Construct a tokenizer processing the given input using the given tagger.
     */
    public EnglishLemmaTokenizer(Reader input, MaxentTagger tagger) {
        super();
        lemmaNext = false;
        posIncr = addAttribute(PositionIncrementAttribute.class);
        termAtt = addAttribute(TermAttribute.class);
        List<List<HasWord>> tokenized = MaxentTagger.tokenizeText(input);
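The truncated constructor above tokenizes the input into a List<List<HasWord>> (a list of sentences, each a list of words) and, judging by the Guava import, flattens it with com.google.common.collect.Iterables before tagging. As a hedged illustration only (not the original code, and using Java 8 streams that postdate this Lucene 3.6 era source), the same flattening can be done with the JDK alone, which would take the Guava jar out of the classpath equation entirely:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical stand-in for the flattening that Iterables.concat performs:
// turn a list of sentences (each a list of word strings) into one flat
// list over all words, preserving order.
public class FlattenSketch {
    static List<String> flatten(List<List<String>> sentences) {
        return sentences.stream()
                        .flatMap(List::stream)   // concatenate the inner lists
                        .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<String>> tokenized = Arrays.asList(
                Arrays.asList("The", "cats", "sat"),
                Arrays.asList("They", "purred"));
        Iterator<String> words = flatten(tokenized).iterator();
        while (words.hasNext())
            System.out.println(words.next());
    }
}
```

This is a modernization sketch under stated assumptions, not a drop-in replacement for the attached class.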
Pylucene jvm CharArraySet Error
I added a customized Lucene analyzer class to lucene core in PyLucene. This class uses Google Guava as a dependency, because of the array handling function available in com.google.common.collect.Iterables in Guava. When I tried to index using this analyzer, I got the following error:

Traceback (most recent call last):
  File "C:\IndexFiles.py", line 78, in <module>
    lucene.initVM()
JavaError: java.lang.NoClassDefFoundError: org/apache/lucene/analysis/CharArraySet
Java stacktrace:
java.lang.NoClassDefFoundError: org/apache/lucene/analysis/CharArraySet
Caused by: java.lang.ClassNotFoundException: org.apache.lucene.analysis.CharArraySet
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)

Even the example indexing code from Lucene in Action, which I tried earlier and which worked, returns the same error when I retry it after adding this class. I am not too familiar with the CharArraySet class, but I can see the problem is from it. How do I handle this? Attached are the Java files whose classes were added to lucene core in PyLucene. Thanks
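A NoClassDefFoundError at initVM time usually means the JVM's classpath no longer contains (or contains a broken copy of) the jar that defines the class, rather than anything about Java/Python array compatibility. As a hedged diagnostic sketch (plain Java, not PyLucene code), a class's presence on the classpath can be probed with Class.forName:

```java
// Minimal classpath probe: reports whether a fully qualified class name can
// be loaded by the application class loader. Useful for checking whether a
// modified lucene-core jar still serves org.apache.lucene.analysis.CharArraySet.
public class ClasspathProbe {
    static boolean isLoadable(String fqcn) {
        try {
            Class.forName(fqcn);
            return true;
        } catch (ClassNotFoundException | NoClassDefFoundError e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // java.lang.String is always present; the Lucene class will only be
        // found when the (intact) lucene-core jar is on the classpath.
        System.out.println(isLoadable("java.lang.String"));
        System.out.println(isLoadable("org.apache.lucene.analysis.CharArraySet"));
    }
}
```

Running this with the modified lucene-core-3.6.2.jar on the classpath would show whether repacking the jar broke it.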
Fwd: Pylucene jvm CharArraySet Error
Meanwhile, I am using the Lucene 3.6.2 version. The problem is JVM instantiation from any Python code using Lucene, caused as a result of the classes I added to lucene core.

-- Forwarded message --
From: Alexander Alex greatalexander4r...@gmail.com
Date: Fri, Oct 17, 2014 at 12:31 PM
Subject: Pylucene jvm CharArraySet Error
To: pylucene-dev@lucene.apache.org
Re: Fwd: Pylucene jvm CharArraySet Error
On Sat, 18 Oct 2014, Alexander Alex wrote:

ok. I built the class files for the java files attached herein, added them to lucene-core-3.6.2.jar at org.apache.lucene.analysis and lucene-analyzers-3.6.2.jar at org.apache.lucene.analysis. I then added the path of the dependencies to the classpath in the init.py file.

What init.py file? Can you paste the contents of that file here, please?

Andi..

I ran the typical index file using this customized analyzer through PythonAnalyzer and got the above error. Meanwhile, I had earlier run the index file using the standard analyzer, before adding the classes, and it worked. After running the index file with the customized analyzer failed, I tried again with the standard analyzer, which had worked before the classes were added, but it failed this time around with the same error message as above. I guess the problem has to do with array compatibility between Java and Python, but I don't really know. Thanks.

On Fri, Oct 17, 2014 at 7:23 PM, Andi Vajda va...@apache.org wrote:

On Fri, 17 Oct 2014, Alexander Alex wrote:

Meanwhile, I am using the Lucene 3.6.2 version. The problem is JVM instantiation from any Python code using Lucene, caused as a result of the classes I added to lucene core.

-- Forwarded message --

I added a customized Lucene analyzer class to lucene core in PyLucene.

Please explain in _detail_ the steps you followed to accomplish this. A log of all the commands you ran would be ideal. Thanks !

Andi..

This class uses Google Guava as a dependency because of the array handling function available in com.google.common.collect.Iterables in Guava.
[jira] [Comment Edited] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
[ https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174751#comment-14174751 ] Anshum Gupta edited comment on SOLR-6623 at 10/17/14 6:00 AM: --

A patch that fixes the issue, but it would really be resolved when SOLR-6616 is taken care of (I think next week; it's high on my agenda). It would be good to get some feedback on this, though. SolrIndexSearcher.getDocSetNC() is where the code flows from and where the ExitingReaderException is caught (and consumed) and collector.getDocSet() is returned. This isn't null, but a few entries in the response aren't added due to this (looking at the code flow). Ideally, an exception should be returned, and a partial result should be returned only if shards.tolerant is set to true. That is something that SOLR-6616 would handle.

was (Author: anshumg): A patch that fixes the issue but it would really be resolved when SOLR-6616 is taken care of (I think next week, it's high on my agenda). It would be good to get some feedback on this though.

NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
Key: SOLR-6623
URL: https://issues.apache.org/jira/browse/SOLR-6623
Project: Solr
Issue Type: Bug
Reporter: Hoss Man
Assignee: Anshum Gupta
Attachments: SOLR-6623.patch

I'm not sure if this is an existing bug, or something new caused by changes in SOLR-5986, but it just popped up in jenkins today...
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1844 Revision: 1631656
{noformat}
[junit4]   2 NOTE: reproduce with: ant test -Dtestcase=TestDistributedGrouping -Dtests.method=testDistribSearch -Dtests.seed=E9460FA0973F6672 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=Indian/Mayotte -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR   55.9s | TestDistributedGrouping.testDistribSearch
[junit4]    Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: java.lang.NullPointerException
[junit4]    	at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45)
[junit4]    	at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708)
[junit4]    	at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691)
[junit4]    	at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337)
[junit4]    	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136)
[junit4]    	at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
[junit4]    	at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773)
[junit4]    	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408)
[junit4]    	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
[junit4]    	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]    	at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
[junit4]    	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]    	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
[junit4]    	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
[junit4]    	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
[junit4]    	at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
[junit4]    	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
[junit4]    	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
[junit4]    	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
[junit4]    	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
[junit4]    	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
[junit4]    	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
[junit4]    	at org.eclipse.jetty.server.Server.handle(Server.java:368)
[junit4]    	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
[junit4]    	at
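The fix direction described in the comment above, surfacing the timeout instead of silently returning a partial DocSet unless shards.tolerant is set, can be sketched in plain Java. This is an illustration of the proposed behavior only, not Solr's actual code; the TimeoutException and the boolean flag here stand in for ExitingReaderException and the shards.tolerant parameter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeoutException;

// Sketch of "partial results only when tolerant": a collector-like loop that
// either rethrows on timeout or returns whatever was gathered so far.
public class TolerantSearchSketch {
    static List<String> search(List<String> docs, int timeoutAfter, boolean tolerant)
            throws TimeoutException {
        List<String> hits = new ArrayList<>();
        try {
            for (String doc : docs) {
                if (hits.size() >= timeoutAfter)
                    throw new TimeoutException("timeAllowed exceeded");
                hits.add(doc);
            }
        } catch (TimeoutException e) {
            if (!tolerant)
                throw e;          // surface the timeout to the caller
            // tolerant: fall through and return the partial result
        }
        return hits;
    }
}
```

The NPE in the report arises downstream precisely because callers receive the silently truncated result and assume it is complete.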
[jira] [Commented] (SOLR-6474) Smoke tester should use the Solr start scripts to start Solr
[ https://issues.apache.org/jira/browse/SOLR-6474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174770#comment-14174770 ] Hoss Man commented on SOLR-6474:

https://svn.apache.org/repos/asf/lucene/dev/trunk/dev-tools/scripts/smokeTestRelease.py

Smoke tester should use the Solr start scripts to start Solr
Key: SOLR-6474
URL: https://issues.apache.org/jira/browse/SOLR-6474
Project: Solr
Issue Type: Task
Components: scripts and tools
Reporter: Shalin Shekhar Mangar
Labels: difficulty-easy, impact-low
Fix For: 5.0

We should use the Solr bin scripts created by SOLR-3617 in the smoke tester to test Solr.

-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception
[ https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174776#comment-14174776 ] Hoss Man commented on SOLR-6547:

I suspect this has something to do with the code in CloudSolrServer.condenseResponse? I honestly have no idea what that method is for or what it's doing, but a quick grep suggests that's the only place in the code base where a long value is getting associated with QTime.

* CloudSolrServer.condenseResponse should be fixed to use an int for QTime
* SolrResponseBase.getQTime can (and probably should) be hardened to do something like this instead of just a direct Integer cast... {code} return ((Number) header.get("QTime")).intValue() {code} ...that kind of pattern can help protect us in the future if we decide that QTime *should* be a long

CloudSolrServer query getqtime Exception
Key: SOLR-6547
URL: https://issues.apache.org/jira/browse/SOLR-6547
Project: Solr
Issue Type: Bug
Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

We are using CloudSolrServer to query, but solrj throws an exception:
java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
	at org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)
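The hardening Hoss suggests works because both Integer and Long extend java.lang.Number, so ((Number) value).intValue() succeeds where a direct (Integer) cast throws ClassCastException. A small self-contained illustration (the Map below merely stands in for the response header; "QTime" is the real key, everything else is hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class QTimeCastDemo {
    // Fragile: throws ClassCastException if the stored value is a Long.
    static int directCast(Map<String, Object> header) {
        return (Integer) header.get("QTime");
    }

    // Hardened: works for any Number (Integer, Long, ...).
    static int viaNumber(Map<String, Object> header) {
        return ((Number) header.get("QTime")).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> header = new HashMap<>();
        header.put("QTime", 42L); // a long, as condenseResponse apparently stores
        System.out.println(viaNumber(header)); // prints 42
        try {
            directCast(header);
        } catch (ClassCastException e) {
            System.out.println("direct Integer cast failed");
        }
    }
}
```

The cast fails at runtime because the boxed Long is not an Integer; Number.intValue() sidesteps the boxing type entirely.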
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1849 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.

FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null
	at __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
	at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
	at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174891#comment-14174891 ] ASF subversion and git services commented on SOLR-6476: ---

Commit 1632525 from [~noble.paul] in branch 'dev/trunk' [ https://svn.apache.org/r1632525 ]
SOLR-6476 error message fixed

Create a bulk mode for schema API
Key: SOLR-6476
URL: https://issues.apache.org/jira/browse/SOLR-6476
Project: Solr
Issue Type: New Feature
Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
Labels: managedResource
Fix For: 5.0, Trunk
Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch

The current schema API does one operation at a time and the normal usecase is that users add multiple fields/fieldtypes/copyFields etc in one shot. For example:
{code}
curl http://localhost:8983/solr/collection1/schema -H 'Content-type:application/json' -d '{
  "add-field": { "name":"sell-by", "type":"tdate", "stored":true },
  "add-field": { "name":"catchall", "type":"text_general", "stored":false }
}'
{code}
or
{code}
curl http://localhost:8983/solr/collection1/schema -H 'Content-type:application/json' -d '{
  "add-field": [
    { "name":"sell-by", "type":"tdate", "stored":true },
    { "name":"catchall", "type":"text_general", "stored":false }
  ]
}'
{code}
[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API
[ https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174893#comment-14174893 ] ASF subversion and git services commented on SOLR-6476: ---

Commit 1632526 from [~noble.paul] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1632526 ]
SOLR-6476 error message fixed
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4375 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4375/
Java: 64bit/jdk1.8.0_20 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.

REGRESSION: org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
commitWithin did not work on node: http://127.0.0.1:64298/collection1 expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:64298/collection1 expected:<68> but was:<67>
	at __randomizedtesting.SeedInfo.seed([E6A5B7B4C95ACE8B:674339ACBE05AEB7]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at
[jira] [Commented] (LUCENE-5317) [PATCH] Concordance capability
[ https://issues.apache.org/jira/browse/LUCENE-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174938#comment-14174938 ] Tim Allison commented on LUCENE-5317: -

Steve, thank you! I had abandoned hope and haven't been updating this patch on Jira. The current version in my local repo looks a bit different now. Let me apply your patch and see what the diff is between your cleanup/fixes and my current version.

[PATCH] Concordance capability
Key: LUCENE-5317
URL: https://issues.apache.org/jira/browse/LUCENE-5317
Project: Lucene - Core
Issue Type: New Feature
Components: core/search
Affects Versions: 4.5
Reporter: Tim Allison
Labels: patch
Fix For: 4.9
Attachments: LUCENE-5317.patch, concordance_v1.patch.gz

This patch enables a Lucene-powered concordance search capability. Concordances are extremely useful for linguists, lawyers and other analysts performing analytic search vs. traditional snippeting/document retrieval tasks. By analytic search, I mean that the user wants to browse every time a term appears (or at least the top n) in a subset of documents and see the words before and after. Concordance technology is far simpler and less interesting than IR relevance models/methods, but it can be extremely useful for some use cases. Traditional concordance sort orders are available (sort on words before the target, words after, target then words before, and target then words after). Under the hood, this is running SpanQuery's getSpans() and reanalyzing to obtain character offsets. There is plenty of room for optimizations and refactoring. Many thanks to my colleague, Jason Robinson, for input on the design of this patch.
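The traditional sort orders mentioned in the description, e.g. sorting on the words before the target read right-to-left from the hit, can be captured with a small comparator. This is a hedged sketch of the general technique only, not code from the patch; the Line class and field names are hypothetical.

```java
import java.util.Comparator;
import java.util.List;

// A concordance line: tokens to the left of the hit, the hit, tokens to the right.
public class ConcordanceSort {
    static final class Line {
        final List<String> left, right;
        final String target;
        Line(List<String> left, String target, List<String> right) {
            this.left = left; this.target = target; this.right = right;
        }
    }

    // "Sort on words before the target": compare left contexts starting from
    // the token nearest the hit and moving outward (right-to-left).
    static final Comparator<Line> BY_LEFT_CONTEXT = (a, b) -> {
        int i = a.left.size() - 1, j = b.left.size() - 1;
        while (i >= 0 && j >= 0) {
            int c = a.left.get(i--).compareTo(b.left.get(j--));
            if (c != 0) return c;
        }
        return Integer.compare(i, j); // shorter context sorts first
    };
}
```

Sorting on the words after the target is the same idea read left-to-right from the hit.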
[jira] [Commented] (SOLR-6573) Query elevation fails when localParams are used
[ https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174974#comment-14174974 ] Liwiusz Ociepa commented on SOLR-6573: -- Yes - I've tested the patch on the example collection1 and the results for the query provided by Radek are OK with that fix (after removing the backslashes added by Jira). Query elevation fails when localParams are used --- Key: SOLR-6573 URL: https://issues.apache.org/jira/browse/SOLR-6573 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.1 Reporter: Radek Urbas Assignee: Jan Høydahl Fix For: 5.0 Attachments: SOLR-6573.patch, SOLR-6573.patch Elevation does not work when localParams are specified. In the example collection1 shipped with Solr, a query like this one {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code} properly returns elevated documents on top. If localParams are specified, e.g. {!q.op=AND} in a query like {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code} documents are not elevated anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-6573) Query elevation fails when localParams are used
[ https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174974#comment-14174974 ] Liwiusz Ociepa edited comment on SOLR-6573 at 10/17/14 12:01 PM: - Yes - I've tested the patch on the example collection1 and the results for the query provided by Radek are OK with that fix (after removing the backslashes added by Jira). edit: I patched 4.10.1 was (Author: liwiusz): Yes - I've tested the patch on the example collection1 and the results for the query provided by Radek are OK with that fix (after removing the backslashes added by Jira). Query elevation fails when localParams are used --- Key: SOLR-6573 URL: https://issues.apache.org/jira/browse/SOLR-6573 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.1 Reporter: Radek Urbas Assignee: Jan Høydahl Fix For: 5.0 Attachments: SOLR-6573.patch, SOLR-6573.patch Elevation does not work when localParams are specified. In the example collection1 shipped with Solr, a query like this one {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code} properly returns elevated documents on top. If localParams are specified, e.g. {!q.op=AND} in a query like {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code} documents are not elevated anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6533) Support editing common solrconfig.xml values
[ https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6533: - Description: There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property:updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . was: There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). 
Instead of saving in the same format we will have well known properties which users can directly edit {code} cores.transientCacheSize indexConfig.mergeFactor {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {index.mergeFactor:5}, unset-property:{cores.transientCacheSize} }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . Support editing common solrconfig.xml values Key: SOLR-6533 URL: https://issues.apache.org/jira/browse/SOLR-6533 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). 
Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property:updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6533) Support editing common solrconfig.xml values
[ https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6533: - Description: There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property: updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . was: There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). 
Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property:updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . Support editing common solrconfig.xml values Key: SOLR-6533 URL: https://issues.apache.org/jira/browse/SOLR-6533 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). 
Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property: updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6517) CollectionsAPI call REBALANCELEADERS
[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-6517: - Summary: CollectionsAPI call REBALANCELEADERS (was: CollectionsAPI call ELECTPREFERREDLEADERS) CollectionsAPI call REBALANCELEADERS Key: SOLR-6517 URL: https://issues.apache.org/jira/browse/SOLR-6517 Project: Solr Issue Type: New Feature Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Attachments: SOLR-6517.patch, SOLR-6517.patch Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are assigned, there has to be a command to make it so, Mr. Solr. This is something of a placeholder to collect ideas. One wouldn't want to flood the system with hundreds of re-assignments at once. Should this be synchronous or async? Should it make the best attempt but not worry about perfection? Should it??? A collection=name parameter would be required, and it would re-elect all the leaders that were on the 'wrong' node. I'm thinking of optionally allowing one to specify a shard in the case where you want to make a very specific change. Note that there's no need to specify a particular replica, since there should be only a single preferredLeader per slice. This command would do nothing to any slice that did not have a replica with a preferredLeader role. Likewise it would do nothing if the slice in question already had the leader role assigned to the node with the preferredLeader role. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6517) CollectionsAPI call REBALANCELEADERS
[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-6517: - Attachment: SOLR-6517.patch Final patch with CHANGES.txt entry. Committing momentarily CollectionsAPI call REBALANCELEADERS Key: SOLR-6517 URL: https://issues.apache.org/jira/browse/SOLR-6517 Project: Solr Issue Type: New Feature Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are assigned, there has to be a command to make it so, Mr. Solr. This is something of a placeholder to collect ideas. One wouldn't want to flood the system with hundreds of re-assignments at once. Should this be synchronous or async? Should it make the best attempt but not worry about perfection? Should it??? A collection=name parameter would be required, and it would re-elect all the leaders that were on the 'wrong' node. I'm thinking of optionally allowing one to specify a shard in the case where you want to make a very specific change. Note that there's no need to specify a particular replica, since there should be only a single preferredLeader per slice. This command would do nothing to any slice that did not have a replica with a preferredLeader role. Likewise it would do nothing if the slice in question already had the leader role assigned to the node with the preferredLeader role. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Batch replica transfer from master shard solr cloud
Hi, We were getting pretty low ingest throughput when using replicas. We traced it to the following logic in indexing code: - Read SolrInputDocument from stream - Index a document - Add to ConcurrentUpdateSolrServer instance for sending to replica So, we changed the logic to: - Read SolrInputDocument objects from stream in batches of 500. - Add documents to ConcurrentUpdateSolrServer instance - Index documents in a loop This has improved indexing speed significantly. What are the caveats to this approach? Thanks, CP
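The reordering described above amounts to a buffer-and-flush pattern: accumulate documents until the batch is full, then hand the whole batch off at once. A minimal stand-alone Java sketch of that pattern follows — the type parameter and the sink are illustrative stand-ins; in the real pipeline the items would be SolrInputDocuments and the sink a ConcurrentUpdateSolrServer:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchIndexer<T> {
    private final int batchSize;
    private final Consumer<List<T>> sink;   // e.g. hands a batch to the update server
    private final List<T> buffer = new ArrayList<>();

    public BatchIndexer(int batchSize, Consumer<List<T>> sink) {
        this.batchSize = batchSize;
        this.sink = sink;
    }

    /** Buffer one document; flush automatically when the batch is full. */
    public void add(T doc) {
        buffer.add(doc);
        if (buffer.size() >= batchSize) flush();
    }

    /** Must also be called at end-of-stream so a partial final batch is not lost. */
    public void flush() {
        if (buffer.isEmpty()) return;
        sink.accept(new ArrayList<>(buffer));
        buffer.clear();
    }

    public static void main(String[] args) {
        BatchIndexer<String> indexer =
                new BatchIndexer<>(500, batch -> System.out.println("sent " + batch.size()));
        for (int i = 0; i < 1200; i++) indexer.add("doc" + i);
        indexer.flush();  // final partial batch of 200
    }
}
```

One caveat worth noting with any such batching: a failure mid-batch now affects up to 500 buffered documents rather than one, so error handling and retry granularity change.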
[jira] [Updated] (SOLR-6533) Support editing common solrconfig.xml values
[ https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6533: - Attachment: SOLR-6533.patch All features done. Cleanup is left and of course more testcases. Please give your inputs Support editing common solrconfig.xml values Key: SOLR-6533 URL: https://issues.apache.org/jira/browse/SOLR-6533 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first These properties will be persisted to a separate file called config.json (or whatever file). Instead of saving in the same format we will have well known properties which users can directly edit {code} updateHandler.autoCommit.maxDocs query.filterCache.initialSize {code} The api will be modeled around the bulk schema API {code:javascript} curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-property : {updateHandler.autoCommit.maxDocs:5}, unset-property: updateHandler.autoCommit.maxDocs }' {code} {code:javascript} //or use this to set ${mypropname} values curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{ set-user-property : {mypropname:my_prop_val}, unset-user-property:{mypropname} }' {code} The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. An http GET on /config path will give the real config that is applied . -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
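The precedence rule in the description — values stored in config.json always take precedence and are applied after loading solrconfig.xml — can be illustrated with a small stand-alone sketch. This is a generic nested-map overlay written for this note, not Solr's actual implementation; the dotted-path helper is hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfigOverlaySketch {
    /** Set a dotted property like "updateHandler.autoCommit.maxDocs" in a nested map.
     *  Applying overlay values after the base load means the overlay always wins. */
    @SuppressWarnings("unchecked")
    public static void apply(Map<String, Object> config, String dottedKey, Object value) {
        String[] path = dottedKey.split("\\.");
        Map<String, Object> node = config;
        for (int i = 0; i < path.length - 1; i++) {
            Object child = node.get(path[i]);
            if (!(child instanceof Map)) {           // create intermediate nodes as needed
                child = new LinkedHashMap<String, Object>();
                node.put(path[i], child);
            }
            node = (Map<String, Object>) child;
        }
        node.put(path[path.length - 1], value);      // last write (the overlay) wins
    }

    public static void main(String[] args) {
        Map<String, Object> cfg = new LinkedHashMap<>();
        apply(cfg, "updateHandler.autoCommit.maxDocs", 100); // as loaded from solrconfig.xml
        apply(cfg, "updateHandler.autoCommit.maxDocs", 5);   // config.json overlay
        System.out.println(cfg);
    }
}
```

A GET on the /config path would then serialize this merged map, which is the "real config that is applied" mentioned in the description.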
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11468 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11468/ Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. REGRESSION: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch Error Message: There were too many update fails - we expect it can happen, but shouldn't easily Stack Trace: java.lang.AssertionError: There were too many update fails - we expect it can happen, but shouldn't easily at __randomizedtesting.SeedInfo.seed([69C32A820CBDD445:E825A49A7BE2B479]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223) at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr
[ https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175174#comment-14175174 ] Karol Abramczyk commented on SOLR-6266: --- Kwan-I Lee, Handling replication failure hasn't been implemented yet in this plugin. I think that the most important improvements now are: * automatic XDCR configuration in Couchbase Server at plugin startup * tests for the current functionality I have had difficulties with both of these improvements so far and couldn't succeed. However, once this is done, all following work should be much easier. In the meantime I fixed some important bugs, so I attach the latest source code. Couchbase plug-in for Solr -- Key: SOLR-6266 URL: https://issues.apache.org/jira/browse/SOLR-6266 Project: Solr Issue Type: New Feature Reporter: Varun Assignee: Joel Bernstein Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5.1.tar.gz, solr-couchbase-plugin.tar.gz, solr-couchbase-plugin.tar.gz It would be great if users could connect Couchbase and Solr so that updates to Couchbase can automatically flow to Solr. Couchbase provides some very nice API's which allow applications to mimic the behavior of a Couchbase server so that it can receive updates via Couchbase's normal cross data center replication (XDCR). One possible design for this is to create a CouchbaseLoader that extends ContentStreamLoader. This new loader would embed the couchbase api's that listen for incoming updates from couchbase, then marshal the couchbase updates into the normal Solr update process. Instead of marshaling couchbase updates into the normal Solr update process, we could also embed a SolrJ client to relay the request through the http interfaces. This may be necessary if we have to handle mapping couchbase buckets to Solr collections on the Solr side. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6266) Couchbase plug-in for Solr
[ https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karol Abramczyk updated SOLR-6266: -- Attachment: solr-couchbase-plugin-0.0.5.1.tar.gz Couchbase plug-in for Solr -- Key: SOLR-6266 URL: https://issues.apache.org/jira/browse/SOLR-6266 Project: Solr Issue Type: New Feature Reporter: Varun Assignee: Joel Bernstein Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5.1.tar.gz, solr-couchbase-plugin.tar.gz, solr-couchbase-plugin.tar.gz It would be great if users could connect Couchbase and Solr so that updates to Couchbase can automatically flow to Solr. Couchbase provides some very nice API's which allow applications to mimic the behavior of a Couchbase server so that it can receive updates via Couchbase's normal cross data center replication (XDCR). One possible design for this is to create a CouchbaseLoader that extends ContentStreamLoader. This new loader would embed the couchbase api's that listen for incoming updates from couchbase, then marshal the couchbase updates into the normal Solr update process. Instead of marshaling couchbase updates into the normal Solr update process, we could also embed a SolrJ client to relay the request through the http interfaces. This may be necessary if we have to handle mapping couchbase buckets to Solr collections on the Solr side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6266) Couchbase plug-in for Solr
[ https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karol Abramczyk updated SOLR-6266: -- Attachment: (was: solr-couchbase-plugin-0.0.5.1.tar.gz) Couchbase plug-in for Solr -- Key: SOLR-6266 URL: https://issues.apache.org/jira/browse/SOLR-6266 Project: Solr Issue Type: New Feature Reporter: Varun Assignee: Joel Bernstein Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin.tar.gz, solr-couchbase-plugin.tar.gz It would be great if users could connect Couchbase and Solr so that updates to Couchbase can automatically flow to Solr. Couchbase provides some very nice API's which allow applications to mimic the behavior of a Couchbase server so that it can receive updates via Couchbase's normal cross data center replication (XDCR). One possible design for this is to create a CouchbaseLoader that extends ContentStreamLoader. This new loader would embed the couchbase api's that listen for incoming updates from couchbase, then marshal the couchbase updates into the normal Solr update process. Instead of marshaling couchbase updates into the normal Solr update process, we could also embed a SolrJ client to relay the request through the http interfaces. This may be necessary if we have to handle mapping couchbase buckets to Solr collections on the Solr side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6266) Couchbase plug-in for Solr
[ https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karol Abramczyk updated SOLR-6266: -- Attachment: solr-couchbase-plugin-0.0.5.1-SNAPSHOT.tar.gz Couchbase plug-in for Solr -- Key: SOLR-6266 URL: https://issues.apache.org/jira/browse/SOLR-6266 Project: Solr Issue Type: New Feature Reporter: Varun Assignee: Joel Bernstein Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5.1-SNAPSHOT.tar.gz, solr-couchbase-plugin.tar.gz, solr-couchbase-plugin.tar.gz It would be great if users could connect Couchbase and Solr so that updates to Couchbase can automatically flow to Solr. Couchbase provides some very nice API's which allow applications to mimic the behavior of a Couchbase server so that it can receive updates via Couchbase's normal cross data center replication (XDCR). One possible design for this is to create a CouchbaseLoader that extends ContentStreamLoader. This new loader would embed the couchbase api's that listen for incoming updates from couchbase, then marshal the couchbase updates into the normal Solr update process. Instead of marshaling couchbase updates into the normal Solr update process, we could also embed a SolrJ client to relay the request through the http interfaces. This may be necessary if we have to handle mapping couchbase buckets to Solr collections on the Solr side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6151) Intermittent TestReplicationHandlerBackup failures
[ https://issues.apache.org/jira/browse/SOLR-6151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175209#comment-14175209 ] Tomás Fernández Löbbe commented on SOLR-6151: - I'll keep this Jira open for a couple of days to see if we see the failures again in jenkins. Intermittent TestReplicationHandlerBackup failures -- Key: SOLR-6151 URL: https://issues.apache.org/jira/browse/SOLR-6151 Project: Solr Issue Type: Bug Reporter: Dawid Weiss Assignee: Dawid Weiss Priority: Minor Attachments: SOLR-6151.patch, SOLR-6151.patch, SOLR-6151.patch, SOLR-6151.patch {code} [junit4] 2 4236563 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4236567 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=cphlpigzwamrxxekj} status=0 QTime=5 [junit4] 2 4236567 T14511 oash.SnapShooter.createSnapshot Creating backup snapshot... [junit4] 2 4236682 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237270 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237275 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=backup&name=zviqwpynhbjdbiqofwa} status=0 QTime=4 [junit4] 2 4237277 T14513 oash.SnapShooter.createSnapshot Creating backup snapshot... 
[junit4] 2 4237390 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237508 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237626 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 [junit4] 2 4237743 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237861 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4237979 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238097 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238214 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238332 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238450 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 [junit4] 2 4238567 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238686 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238804 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4238922 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 [junit4] 2 4239039 T14502 
C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4239158 T14504 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4239276 T14500 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4239394 T14505 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=1 [junit4] 2 4239511 T14503 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4239629 T14502 C3403 oasc.SolrCore.execute [collection1] webapp=/solr path=/replication params={command=details} status=0 QTime=0 [junit4] 2 4239747 T14496 oas.SolrTestCaseJ4.tearDown ###Ending doTestBackup [junit4] 2 4239756 T14496 oasc.CoreContainer.shutdown Shutting down CoreContainer instance=5885809
Re: Batch replica transfer from master shard solr cloud
On 10/17/2014 8:50 AM, Cp Mishra wrote:
> So, we changed the logic to:
> - Read SolrInputDocument objects from the stream in batches of 500.
> - Add the documents to a ConcurrentUpdateSolrServer instance.
> - Index the documents in a loop.
> This has improved indexing speed significantly. What are the caveats to this approach?

Thinking back to when we were having deadlock problems with heavy indexing on SolrCloud, I seem to recall that one of the experts said that SolrCloud already batches the documents, only it was 10 at a time. I also seem to remember that making the batch size configurable was discussed, but I don't know how the discussion ended. Am I remembering incorrectly? I'm not familiar with the actual code for this part of Solr at all.

Thanks,
Shawn

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
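The batch-of-500 approach described in the quoted steps can be sketched from the command line. This is a minimal illustration, not the code from the thread: the sample input file, the toy batch size of 2 (500 in the real setup), and the local collection1 URL are all assumptions, and the curl commands are echoed rather than executed so no running Solr instance is required.

```shell
# Create a tiny sample input: one JSON document per line.
printf '{"id":"1"}\n{"id":"2"}\n{"id":"3"}\n' > docs.jsonl

BATCH=2                              # 500 in the setup described above
split -l "$BATCH" docs.jsonl batch_  # one file per batch: batch_aa, batch_ab, ...

# Echo (rather than run) one update request per batch. A real indexer
# would wrap each chunk in Solr's JSON update format and commit once
# at the end, instead of posting raw lines.
for f in batch_*; do
  echo "curl -s http://localhost:8983/solr/collection1/update -H 'Content-type:application/json' --data-binary @$f"
done
```

A Java indexer would instead hand each batch to ConcurrentUpdateSolrServer, which queues documents and sends them on background threads.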
[jira] [Commented] (SOLR-6517) CollectionsAPI call REBALANCELEADERS
[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175217#comment-14175217 ] ASF subversion and git services commented on SOLR-6517: --- Commit 1632628 from [~erickoerickson] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1632628 ] SOLR-6517: CollectionsAPI call REBALANCELEADERS CollectionsAPI call REBALANCELEADERS Key: SOLR-6517 URL: https://issues.apache.org/jira/browse/SOLR-6517 Project: Solr Issue Type: New Feature Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are assigned, there has to be a command to make it so, Mr. Solr. This is something of a placeholder to collect ideas. One wouldn't want to flood the system with hundreds of re-assignments at once. Should this be synchronous or async? Should it make the best attempt but not worry about perfection? Should it??? A collection=name parameter would be required, and it would re-elect all the leaders that were on the 'wrong' node. I'm thinking of optionally allowing one to specify a shard in the case where you wanted to make a very specific change. Note that there's no need to specify a particular replica, since there should be only a single preferredLeader per slice. This command would do nothing to any slice that did not have a replica with a preferredLeader role. Likewise it would do nothing if the slice in question already had the leader role assigned to the node with the preferredLeader role. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6517) CollectionsAPI call REBALANCELEADERS
[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175218#comment-14175218 ] Erick Erickson commented on SOLR-6517: -- Forgot to mention the JIRA when I committed the trunk version. SVN revision for trunk is: 1632594. I _thought_ the -m param on commit looked a little wrong. Siiigggh. CollectionsAPI call REBALANCELEADERS Key: SOLR-6517 URL: https://issues.apache.org/jira/browse/SOLR-6517 Project: Solr Issue Type: New Feature Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Fix For: 5.0, Trunk Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are assigned, there has to be a command to make it so, Mr. Solr. This is something of a placeholder to collect ideas. One wouldn't want to flood the system with hundreds of re-assignments at once. Should this be synchronous or async? Should it make the best attempt but not worry about perfection? Should it??? A collection=name parameter would be required, and it would re-elect all the leaders that were on the 'wrong' node. I'm thinking of optionally allowing one to specify a shard in the case where you wanted to make a very specific change. Note that there's no need to specify a particular replica, since there should be only a single preferredLeader per slice. This command would do nothing to any slice that did not have a replica with a preferredLeader role. Likewise it would do nothing if the slice in question already had the leader role assigned to the node with the preferredLeader role. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-6517) CollectionsAPI call REBALANCELEADERS
[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-6517. -- Resolution: Fixed Fix Version/s: Trunk 5.0 CollectionsAPI call REBALANCELEADERS Key: SOLR-6517 URL: https://issues.apache.org/jira/browse/SOLR-6517 Project: Solr Issue Type: New Feature Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Fix For: 5.0, Trunk Attachments: SOLR-6517.patch, SOLR-6517.patch, SOLR-6517.patch Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are assigned, there has to be a command to make it so, Mr. Solr. This is something of a placeholder to collect ideas. One wouldn't want to flood the system with hundreds of re-assignments at once. Should this be synchronous or async? Should it make the best attempt but not worry about perfection? Should it??? A collection=name parameter would be required, and it would re-elect all the leaders that were on the 'wrong' node. I'm thinking of optionally allowing one to specify a shard in the case where you wanted to make a very specific change. Note that there's no need to specify a particular replica, since there should be only a single preferredLeader per slice. This command would do nothing to any slice that did not have a replica with a preferredLeader role. Likewise it would do nothing if the slice in question already had the leader role assigned to the node with the preferredLeader role. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-6527) Give preference during leader election to nodes with the preferredLeader role
[ https://issues.apache.org/jira/browse/SOLR-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-6527. -- Resolution: Fixed Fixed with the checkins under SOLR-6491 Give preference during leader election to nodes with the preferredLeader role - Key: SOLR-6527 URL: https://issues.apache.org/jira/browse/SOLR-6527 Project: Solr Issue Type: Improvement Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5969) Add Lucene50Codec
[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175227#comment-14175227 ] ASF subversion and git services commented on LUCENE-5969: - Commit 1632631 from [~rcmuir] in branch 'dev/branches/lucene5969' [ https://svn.apache.org/r1632631 ] LUCENE-5969: use sparsebitset to expand sparse encoding to cover more absurd cases Add Lucene50Codec - Key: LUCENE-5969 URL: https://issues.apache.org/jira/browse/LUCENE-5969 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Fix For: 5.0, Trunk Attachments: LUCENE-5969.patch, LUCENE-5969.patch, LUCENE-5969_part2.patch Spinoff from LUCENE-5952: * Fix .si to write Version as 3 ints, not a String that requires parsing at read time. * Lucene42TermVectorsFormat should not use the same codecName as Lucene41StoredFieldsFormat It would also be nice if we had a bumpCodecVersion script so rolling a new codec is not so daunting. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-5991) SolrCloud: Add API to move leader off a Solr instance
[ https://issues.apache.org/jira/browse/SOLR-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-5991. -- Resolution: Fixed Fixed with the checkins under SOLR-6491. I'm calling this closed after the rebalancing done in SOLR-6517. There's still room for causing a specific node to become leader (or stop being leader), but let's raise that as a new JIRA if necessary. There's also probably some new functionality in the offing (no JIRA yet) to take a node offline for everything _except_ indexing. The idea here is to do the minimal work to keep from having to re-synchronize. This'll avoid being Overseer, shard leader, and processing queries and only accept incoming updates. Details TBD. Anyway, between the two I think the functionality of this JIRA can be accomplished. SolrCloud: Add API to move leader off a Solr instance - Key: SOLR-5991 URL: https://issues.apache.org/jira/browse/SOLR-5991 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.7.1 Reporter: Rich Mayfield Assignee: Erick Erickson Common maintenance chores require restarting Solr instances. The process of a shutdown becomes a whole lot more reliable if we can proactively move any leadership roles off of the Solr instance we are going to shut down. The leadership election process then runs immediately. I am not sure what the semantics should be (either accomplishes the goal but one of these might be best): * A call to tell a core to give up leadership (thus the next replica is chosen) * A call to specify which core should become the leader -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-4492) Please add support for Collection API CREATE method to evenly distribute leader roles among instances
[ https://issues.apache.org/jira/browse/SOLR-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-4492. -- Resolution: Fixed Fix Version/s: Trunk 5.0 Fixed with the checkins under SOLR-6491. Needs two steps: (1) either manually assign the preferredLeader property to nodes of your choice, OR use the BALANCESLICEUNIQUE command on the preferredLeader property to auto-assign it; (2) issue the REBALANCELEADERS call to make it so. See the reference guide. Please add support for Collection API CREATE method to evenly distribute leader roles among instances - Key: SOLR-4492 URL: https://issues.apache.org/jira/browse/SOLR-4492 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Tim Vaillancourt Assignee: Erick Erickson Priority: Minor Fix For: 5.0, Trunk Currently in SolrCloud 4.1, a CREATE call to the Collection API will cause the server receiving the CREATE call to become the leader of all shards. I would like to ask for the ability for the CREATE call to evenly distribute the leader role across all instances, ie: if I create 3 shards over 3 SOLR 4.1 instances, each instance/node would only be the leader of 1 shard. This would be logically consistent with the way replicas are randomly distributed by this same call across instances/nodes. Currently, this CREATE call will cause the server receiving the call to become the leader of 3 shards. curl -v 'http://HOST:8983/solr/admin/collections?action=CREATE&name=test&numShards=3&replicationFactor=2&maxShardsPerNode=2' PS: Thank you SOLR developers for your contributions! Tim Vaillancourt -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
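The two-step procedure above can be written out as Collections API calls. The sketch below only constructs and prints the URLs, so no cluster is assumed; the host, collection, shard, and replica names are placeholders, and the exact parameter spellings should be checked against the reference guide for your Solr version.

```shell
HOST="http://localhost:8983"   # placeholder host
COLL="mycollection"            # placeholder collection name

# Step 1, option A: manually mark a specific replica as preferredLeader
# (shard1 and core_node1 are placeholders).
STEP1A="$HOST/solr/admin/collections?action=ADDREPLICAPROP&collection=$COLL&shard=shard1&replica=core_node1&property=preferredLeader&property.value=true"

# Step 1, option B: let Solr spread the preferredLeader property evenly,
# one replica per shard.
STEP1B="$HOST/solr/admin/collections?action=BALANCESLICEUNIQUE&collection=$COLL&property=preferredLeader"

# Step 2: ask Solr to move leadership onto the marked replicas.
STEP2="$HOST/solr/admin/collections?action=REBALANCELEADERS&collection=$COLL"

printf '%s\n' "$STEP1A" "$STEP1B" "$STEP2"
```

Against a running cluster, each URL would be issued with curl, e.g. `curl "$STEP2"`.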
[jira] [Updated] (SOLR-6527) Give preference during leader election to nodes with the preferredLeader role
[ https://issues.apache.org/jira/browse/SOLR-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson updated SOLR-6527: - Fix Version/s: Trunk 5.0 Give preference during leader election to nodes with the preferredLeader role - Key: SOLR-6527 URL: https://issues.apache.org/jira/browse/SOLR-6527 Project: Solr Issue Type: Improvement Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Fix For: 5.0, Trunk -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-4491) Please add support for manual leader election/promotion
[ https://issues.apache.org/jira/browse/SOLR-4491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-4491. -- Resolution: Fixed Fix Version/s: Trunk 5.0 Fixed with the checkins under SOLR-6491, and especially SOLR-6517. See the Reference guide for the Collections API for the mechanisms. Please add support for manual leader election/promotion --- Key: SOLR-4491 URL: https://issues.apache.org/jira/browse/SOLR-4491 Project: Solr Issue Type: New Feature Components: SolrCloud Affects Versions: 4.1 Reporter: Tim Vaillancourt Assignee: Erick Erickson Fix For: 5.0, Trunk After some discussions on the solr-user list, it appears there is a need for the ability to promote a specific core/instance to be the leader of a shard. As I understand it, currently any master election is random, and the leader is chosen by chance. Workarounds are removing/adding cores until the right server is elected, but this is not ideal/efficient. PS: Thank you SOLR developers for your contributions! Tim Vaillancourt -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-6491) Umbrella JIRA for managing the leader assignments
[ https://issues.apache.org/jira/browse/SOLR-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-6491. -- Resolution: Fixed Fix Version/s: Trunk 5.0 SOLR-6517 is the final piece here. See the Reference Guide for ADDREPLICAPROP BALANCESLICEUNIQUE REBALANCELEADERS Umbrella JIRA for managing the leader assignments - Key: SOLR-6491 URL: https://issues.apache.org/jira/browse/SOLR-6491 Project: Solr Issue Type: Improvement Affects Versions: 5.0, Trunk Reporter: Erick Erickson Assignee: Erick Erickson Fix For: 5.0, Trunk Leaders can currently get out of balance due to the sequence of how nodes are brought up in a cluster. For very good reasons shard leadership cannot be permanently assigned. However, it seems reasonable that a sys admin could optionally specify that a particular node be the _preferred_ leader for a particular collection/shard. During leader election, preference would be given to any node so marked when electing any leader. So the proposal here is to add another role for preferredLeader to the collections API, something like ADDROLE?role=preferredLeader&collection=collection_name&shard=shardId Second, it would be good to have a new collections API call like ELECTPREFERREDLEADERS?collection=collection_name (I really hate that name so far, but you see the idea). That command would (asynchronously?) make an attempt to transfer leadership for each shard in a collection to the leader labeled as the preferred leader by the new ADDROLE role. I'm going to start working on this, any suggestions welcome! This will subsume several other JIRAs, I'll link them momentarily. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5317) [PATCH] Concordance capability
[ https://issues.apache.org/jira/browse/LUCENE-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175274#comment-14175274 ] Steve Rowe commented on LUCENE-5317: Tim, FYI, I've used the ASF's ReviewBoard instance a few times recently - it's very nice for comparing two patches against each other, and it can be useful for detailed review too: https://reviews.apache.org/. After creating an account there, the workflow is: manually upload a patch, assign a reviewer (could be the lucene group, in which case review requests go to the dev list, or a RB account-holder, including yourself), then publish. Thereafter anybody can review by clicking on one or more adjacent lines in a patch and attaching a comment, repeating till done, then publishing. The original review request creator can update the patch, and anybody can view differences between any two patched versions and attach reviews to those differences as well. [PATCH] Concordance capability -- Key: LUCENE-5317 URL: https://issues.apache.org/jira/browse/LUCENE-5317 Project: Lucene - Core Issue Type: New Feature Components: core/search Affects Versions: 4.5 Reporter: Tim Allison Labels: patch Fix For: 4.9 Attachments: LUCENE-5317.patch, concordance_v1.patch.gz This patch enables a Lucene-powered concordance search capability. Concordances are extremely useful for linguists, lawyers and other analysts performing analytic search vs. traditional snippeting/document retrieval tasks. By analytic search, I mean that the user wants to browse every time a term appears (or at least the top n) in a subset of documents and see the words before and after. Concordance technology is far simpler and less interesting than IR relevance models/methods, but it can be extremely useful for some use cases. Traditional concordance sort orders are available (sort on words before the target, words after, target then words before and target then words after). 
Under the hood, this is running SpanQuery's getSpans() and reanalyzing to obtain character offsets. There is plenty of room for optimizations and refactoring. Many thanks to my colleague, Jason Robinson, for input on the design of this patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 11312 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11312/
Java: 32bit/jdk1.8.0_40-ea-b09 -client -XX:+UseG1GC

2 tests failed.

REGRESSION: org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:52857, https://127.0.0.1:34173, https://127.0.0.1:34253]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:52857, https://127.0.0.1:34173, https://127.0.0.1:34253]
	at __randomizedtesting.SeedInfo.seed([5E39969D5E5D7335:DFDF188529021309]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
	at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.removeAndWaitForLastReplicaGone(DeleteLastCustomShardedReplicaTest.java:117)
	at org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.doTest(DeleteLastCustomShardedReplicaTest.java:107)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Commented] (SOLR-6533) Support editing common solrconfig.xml values
[ https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175277#comment-14175277 ] Tomás Fernández Löbbe commented on SOLR-6533: - {quote} bq. Can this handler be enabled/disabled? No, that is not possible {quote} I think this should be supported. My understanding is that one can enable/disable the schema API from the solrconfig by setting the SchemaFactory. In addition, one could have its own custom schema factory, right? bq. if you have a property defined in solrconfig.xml using the ${variable-name:default_value} format, you can use the command set-user-property That sounds really useful. Support editing common solrconfig.xml values Key: SOLR-6533 URL: https://issues.apache.org/jira/browse/SOLR-6533 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch There are a bunch of properties in solrconfig.xml which users want to edit. We will attack them first. These properties will be persisted to a separate file called config.json (or whatever file). Instead of saving in the same format we will have well-known properties which users can directly edit
{code}
updateHandler.autoCommit.maxDocs
query.filterCache.initialSize
{code}
The API will be modeled around the bulk schema API
{code:javascript}
curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{
  set-property : {updateHandler.autoCommit.maxDocs:5},
  unset-property: updateHandler.autoCommit.maxDocs
}'
{code}
{code:javascript}
//or use this to set ${mypropname} values
curl http://localhost:8983/solr/collection1/config -H 'Content-type:application/json' -d '{
  set-user-property : {mypropname:my_prop_val},
  unset-user-property:{mypropname}
}'
{code}
The values stored in the config.json will always take precedence and will be applied after loading solrconfig.xml. 
An http GET on /config path will give the real config that is applied . -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6058) Solr needs a new website
[ https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francis Lukesh updated SOLR-6058: - Attachment: Solr_Logo_on_white.png Solr_Logo_on_white.pdf Solr_Logo_on_orange.png Solr_Logo_on_orange.pdf Solr_Logo_on_black.png Solr_Logo_on_black.pdf Solr_Icons.pdf SOLR-6058 Solr_Styleguide.pdf In thinking through the impetus behind creating a new website, it became clear that it was a good time to put some effort behind the Solr identity system as well. I certainly believe that the content of the website needs to be thoughtfully considered, but I felt that I could best contribute an identity system and templates such that the community could use them as a toolset and visual framework for building the new site. Additionally, a new identity system will help me and other designers in the Solr community create cohesive graphics for other things as well, such as t-shirts, meetup resources, and the like. I am hopeful that a beautiful new website and identity will help revitalize the outward-facing brand of Solr and help bolster its position in the market and the search community. Manifest: # *Solr_Styleguide.pdf* - This is the primary general-use guide for the new identity system. # *Solr_Icons.pdf* - This is an editable PDF of the vector source images for the new Solr icon set, and could serve as the main source file for the icon library. # *SOLR-6058* - This is the diff of the website. It's a huge patchfile. It applies to the lucene TLP website ([http://lucene.apache.org/site-instructions.html]). Included in the diff are SVG exports of the icons that are used on the website, but can also be used anywhere else they make sense. (You can see a preview of this patch [here|http://lucene.lukesh.com/solr].) # *Solr_Logo_on** - These are editable PDF (vector) and high-resolution PNG (raster) versions of the logo for use everywhere. If you're applying it to a white or light background, use _on_white. 
If you're applying to a dark background, use _on_black. The _on_orange variant is specifically intended for use on a field of the orange specified in the styleguide. A few notes: # These files are Adobe Illustrator *editable* PDFs. This means that if you have a PDF viewer, you can view the assets, but they are preserved as editable documents as well, so that we don't need to maintain separate AI files. # Getting the Apache CMS to build is not necessarily the most straightforward thing. If you'd like to, start [here|http://www.apache.org/dev/cms.html] and [here|http://www.apache.org/dev/cmsref.html]. These docs are a little out of date, and I'd like to say I'm going to submit a patch to them as well, but we'll see how this goes. If you'd just like to view the new site, I have the patch installed and running on my server here: [http://lucene.lukesh.com/solr] # There are a few major page types (not necessarily templates in the Apache CMS sense of the word, but more like boilerplate examples): #* [http://lucene.lukesh.com/solr/] - Clearly a homepage / landing page layout, with example of the content bar visuals going down the page. #* [http://lucene.lukesh.com/solr/news.html] - A basic content page. This is an example of how it can easily be used for the News section. #* [http://lucene.lukesh.com/solr/features.html] - This showcases some more layout options. #* [http://lucene.lukesh.com/solr/resources.html] - This showcases a 2nd-tier navigation bar that can anchor to the top of the page when a user scrolls down. #* [http://lucene.lukesh.com/solr/tutorials.html] - This is another way to create a secondary navigation bar. This also allows for very deep content. 
Solr needs a new website Key: SOLR-6058 URL: https://issues.apache.org/jira/browse/SOLR-6058 Project: Solr Issue Type: Task Reporter: Grant Ingersoll Assignee: Grant Ingersoll Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf Solr needs a new website: better organization of content, less verbose, more pleasing graphics, etc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-6058) Solr needs a new website
[ https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175318#comment-14175318 ] Francis Lukesh edited comment on SOLR-6058 at 10/17/14 6:04 PM: In thinking through the impetus behind creating a new website, it became clear that it was a good time to put some effort behind the Solr identity system as well. I certainly believe that the content of the website needs to be thoughtfully considered, but I felt that I could best contribute an identity system and templates such that the community could use them as a toolset and visual framework for building the new site. Additionally, a new identity system will help me and other designers in the Solr community create cohesive graphics for other things as well, such as t-shirts, meetup resources, and the like. I am hopeful that a beautiful new website and identity will help revitalize the outward-facing brand of Solr and help bolster its position in the market and the search community. Manifest: # *Solr_Styleguide.pdf* - This is the primary general-use guide for the new identity system. # *Solr_Icons.pdf* - This is an editable PDF of the vector source images for the new Solr icon set, and could serve as the main source file for the icon library. # *SOLR-6058* - This is the diff of the website. It's a huge patchfile. It applies to the lucene TLP website ([http://lucene.apache.org/site-instructions.html]). Included in the diff are SVG exports of the icons that are used on the website, but can also be used anywhere else they make sense. (You can see a preview of this patch [here|http://lucene.lukesh.com/solr].) # *Solr_Logo_on** - These are editable PDF (vector) and high-resolution PNG (raster) versions of the logo for use everywhere. If you're applying it to a white or light background, use _on_white. If you're applying to a dark background, use _on_black. 
The _on_orange variant is specifically intended for use on a field of the orange specified in the styleguide. A few notes: # These files are Adobe Illustrator *editable* PDFs. This means that if you have a PDF viewer, you can view the assets, but they are preserved as editable documents as well, so that we don't need to maintain separate AI files. # Getting the Apache CMS to build is not necessarily the most straightforward thing. If you'd like to, start [here|http://www.apache.org/dev/cms.html] and [here|http://www.apache.org/dev/cmsref.html]. These docs are a little out of date, and I'd like to say I'm going to submit a patch to them as well, but we'll see how this goes. If you'd just like to view the new site, I have the patch installed and running on my server here: [http://lucene.lukesh.com/solr] # There are a few major page types (not necessarily templates in the Apache CMS sense of the word, but more like boilerplate examples): #* [http://lucene.lukesh.com/solr/] - Clearly a homepage / landing page layout, with example of the content bar visuals going down the page. #* [http://lucene.lukesh.com/solr/news.html] - A basic content page. This is an example of how it can easily be used for the News section. #* [http://lucene.lukesh.com/solr/features.html] - This showcases some more layout options. #* [http://lucene.lukesh.com/solr/resources.html] - This showcases a 2nd-tier navigation bar that can anchor to the top of the page when a user scrolls down. #* [http://lucene.lukesh.com/solr/tutorials.html] - This is another way to create a secondary navigation bar. This also allows for very deep content.
Re: Batch replica transfer from master shard solr cloud
Pretty sure that's never been made configurable. I've seen anecdotal evidence of a 30-40% slowdown when adding the first replica; from there the penalty is much less. Cp Mishra: Any time you change code you're absolutely invited to open a JIRA and attach the code for people to look at. Please feel free to! Best, Erick On Fri, Oct 17, 2014 at 12:31 PM, Shawn Heisey apa...@elyograg.org wrote: On 10/17/2014 8:50 AM, Cp Mishra wrote: So, we changed the logic to: -Read SolrInputDocument objects from stream in batches of 500. -Add documents to ConcurrentUpdateSolrServer instance -Index documents in a loop This has improved indexing speed significantly. What are the caveats to this approach? Thinking back to when we were having deadlock problems with heavy indexing on SolrCloud, I seem to recall that one of the experts said that SolrCloud already does batch the documents, only it was 10 at a time. I also seem to remember that making the batch size configurable was discussed, but I don't know how the discussion ended. Am I remembering incorrectly? I'm not familiar with the actual code for this part of Solr at all. Thanks, Shawn - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
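The batched approach Cp Mishra describes above can be sketched generically. This is a hypothetical illustration, not Solr code: `BatchIndexer` and its `sink` are invented names for the sketch, with the sink standing in for the `ConcurrentUpdateSolrServer` add call.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical sketch of the batching approach described above: drain a
// stream of documents in fixed-size batches (500 in the report) and hand
// each full batch to an indexing sink. In the real setup the sink would
// wrap the Solr server instance; a plain Consumer keeps the sketch
// self-contained.
final class BatchIndexer<D> {
    private final int batchSize;
    private final Consumer<List<D>> sink;

    BatchIndexer(int batchSize, Consumer<List<D>> sink) {
        this.batchSize = batchSize;
        this.sink = sink;
    }

    void indexAll(Iterator<D> docs) {
        List<D> batch = new ArrayList<>(batchSize);
        while (docs.hasNext()) {
            batch.add(docs.next());
            if (batch.size() == batchSize) {
                sink.accept(batch);              // one hand-off per full batch
                batch = new ArrayList<>(batchSize);
            }
        }
        if (!batch.isEmpty()) {
            sink.accept(batch);                  // trailing partial batch
        }
    }
}
```

One caveat of any client-side batching is that a failure now affects a whole batch rather than a single document, so error handling has to decide whether to retry, split, or drop the failed batch.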
[jira] [Created] (LUCENE-6011) CompressingTermVectors should put checkDoc and checkPosition within assert
David Smiley created LUCENE-6011: Summary: CompressingTermVectors should put checkDoc and checkPosition within assert Key: LUCENE-6011 URL: https://issues.apache.org/jira/browse/LUCENE-6011 Project: Lucene - Core Issue Type: Improvement Components: core/codecs Reporter: David Smiley Priority: Minor CompressingTermVectorsReader.TVDocsEnum calls checkPosition() and checkDoc() but not from within asserts. And nextPosition() has some leading checks, also not within asserts. I believe these should all be within asserts. This is low-level code that can get called a ton of times, particularly via TokenSources.getTokenStream. I'd be happy to create a patch if there is preliminary agreement. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
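The change proposed here, running validation only when assertions are enabled, follows a standard Java pattern. Below is a minimal, hypothetical sketch with invented names; it is not the actual CompressingTermVectorsReader code.

```java
// Hypothetical sketch: hot-path validation is moved inside `assert`, so it
// runs under `java -ea` (as Lucene's test suite does) but costs nothing in
// production, where assertions are disabled by default.
class PositionsReader {
    private final int[] positions;
    private int upto = 0;

    PositionsReader(int[] positions) {
        this.positions = positions;
    }

    // Before: an explicit check paid on every call.
    int nextPositionChecked() {
        if (upto >= positions.length) {
            throw new IllegalStateException("read past last position");
        }
        return positions[upto++];
    }

    // After: the same invariant, evaluated only when -ea is set.
    int nextPosition() {
        assert upto < positions.length : "read past last position";
        return positions[upto++];
    }
}
```

The trade-off is that with assertions disabled a misuse surfaces as garbage data or an ArrayIndexOutOfBoundsException instead of a clear message, which is why this pattern is normally reserved for internal invariants rather than user-facing API checks.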
[jira] [Created] (SOLR-6632) Error CREATEing SolrCore .. Caused by: null
Hoss Man created SOLR-6632: -- Summary: Error CREATEing SolrCore .. Caused by: null Key: SOLR-6632 URL: https://issues.apache.org/jira/browse/SOLR-6632 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Noble Paul We've seen 6 nearly identical non-reproducible jenkins failures with errors stemming from an NPE in ClusterState since r1624556 (SOLR-5473, SOLR-5474, SOLR-5810) was committed (Sep 12th) Example... http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/consoleText {noformat} [junit4] ERROR111s | CollectionsAPIDistributedZkTest.testDistribSearch [junit4] Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null [junit4]at __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0) [junit4]at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569) [junit4]at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) [junit4]at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) [junit4]at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583) [junit4]at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205) [junit4]at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) [junit4]at java.lang.Thread.run(Thread.java:745) {noformat}
[jira] [Updated] (SOLR-6632) Error CREATEing SolrCore .. Caused by: null
[ https://issues.apache.org/jira/browse/SOLR-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-6632: --- Attachment: SOLR-6632_jenkins_policeman_Lucene-Solr-5.x-MacOSX_1849.txt Full jenkins log from most recent failure attached, root cause from log... {noformat} Updating http://svn.apache.org/repos/asf/lucene/dev/branches/branch_5x at revision '2014-10-17T03:55:36.671 +' ... At revision 1632469 ... [junit4] 2Caused by: java.lang.NullPointerException [junit4] 2at org.apache.solr.common.cloud.ClusterState.getShardId(ClusterState.java:209) [junit4] 2at org.apache.solr.common.cloud.ClusterState.getShardId(ClusterState.java:198) [junit4] 2at org.apache.solr.cloud.ZkController.needsToBeAssignedShardId(ZkController.java:1165) [junit4] 2at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1520) [junit4] 2at org.apache.solr.core.CoreContainer.create(CoreContainer.java:504) {noformat} Error CREATEing SolrCore .. Caused by: null --- Key: SOLR-6632 URL: https://issues.apache.org/jira/browse/SOLR-6632 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Noble Paul Attachments: SOLR-6632_jenkins_policeman_Lucene-Solr-5.x-MacOSX_1849.txt We've seen 6 nearly identical non-reproducible jenkins failures with errors stemming from an NPE in ClusterState since r1624556 (SOLR-5473, SOLR-5474, SOLR-5810) was committed (Sep 12th) Example... 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/consoleText {noformat} [junit4] ERROR111s | CollectionsAPIDistributedZkTest.testDistribSearch [junit4] Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null [junit4] at __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0) [junit4] at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569) [junit4] at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) [junit4] at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) [junit4] at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583) [junit4] at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205) [junit4] at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) [junit4] at java.lang.Thread.run(Thread.java:745) {noformat}
[jira] [Commented] (SOLR-6058) Solr needs a new website
[ https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175383#comment-14175383 ] Grant Ingersoll commented on SOLR-6058: --- Looks great, Fran! I should be able to find some time to help on content. Hopefully others can pitch in, too. I have the CMS running locally (finally) and can help others if needed. https://github.com/waylan/Python-Markdown/issues/357 may help for anyone that runs into trouble w/ the Python Markdown process. Solr needs a new website Key: SOLR-6058 URL: https://issues.apache.org/jira/browse/SOLR-6058 Project: Solr Issue Type: Task Reporter: Grant Ingersoll Assignee: Grant Ingersoll Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, Solr_Styleguide.pdf Solr needs a new website: better organization of content, less verbose, more pleasing graphics, etc.
Re: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1849 - Still Failing!
There is an NPE happening here, triggered by something timing related / not easily reproducible. Filed SOLR-6632, noble mentioned on IRC that he'd dig into it. : Date: Fri, 17 Oct 2014 07:06:57 + (UTC) : From: Policeman Jenkins Server jenk...@thetaphi.de : To: sar...@gmail.com, tflo...@apache.org, gcha...@apache.org, : hoss...@apache.org, dev@lucene.apache.org : Subject: [JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1849 - : Still Failing! : : Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1849/ : Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseParallelGC : : 1 tests failed. : FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch : : Error Message: : Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null : : Stack Trace: : org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core [halfcollection_shard1_replica1] Caused by: null : at __randomizedtesting.SeedInfo.seed([7FF06594A345DF76:FE16EB8CD41ABF4A]:0) : at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569) : at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215) : at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211) : at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583) : at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205) : at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869) : at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source) : at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) : at java.lang.reflect.Method.invoke(Method.java:606) : at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) : at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) : at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) : at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) : at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) : at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) : at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) : at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) : at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) : at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) : at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) : at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) : at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) : at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) : at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) : at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) : at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) : at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) : at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) : at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) : at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) : at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) : at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) : at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) : at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) : at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) : at
[jira] [Commented] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
[ https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175454#comment-14175454 ] Steve Rowe commented on SOLR-6623: -- Anshum, in {{SearchHandler.handleRequestBody()}}, shouldn't debug and explain only be added to the response if {{rb.isDebug()}}? I.e. starting at line #246:
{code:java}
} catch (ExitableDirectoryReader.ExitingReaderException ex) {
  SolrDocumentList r = (SolrDocumentList) rb.rsp.getValues().get("response");
  log.warn("Query: " + req.getParamString() + "; " + ex.getMessage());
  if (r == null)
    r = new SolrDocumentList();
  r.setNumFound(0);
  rb.rsp.add("response", r);
  if (rb.isDebug()) { // <- added
    NamedList debug = new NamedList();
    debug.add("explain", new NamedList());
    rb.rsp.add("debug", debug);
  }
  rb.rsp.getResponseHeader().add("partialResults", Boolean.TRUE);
} finally {
  SolrQueryTimeoutImpl.reset();
{code}
Also, although I confess I don't entirely understand the flow, shouldn't this same change be made in {{MoreLikeThisHandler.handleRequestBody()}} too? NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param Key: SOLR-6623 URL: https://issues.apache.org/jira/browse/SOLR-6623 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Anshum Gupta Attachments: SOLR-6623.patch I'm not sure if this is an existing bug, or something new caused by changes in SOLR-5986, but it just popped up in jenkins today... 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1844 Revision: 1631656 {noformat} [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestDistributedGrouping -Dtests.method=testDistribSearch -Dtests.seed=E9460FA0973F6672 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=Indian/Mayotte -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 55.9s | TestDistributedGrouping.testDistribSearch [junit4] Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: java.lang.NullPointerException [junit4] at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45) [junit4] at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708) [junit4] at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691) [junit4] at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337) [junit4] at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136) [junit4] at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202) [junit4] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [junit4] at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137) [junit4] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [junit4] at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) [junit4] at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229) [junit4] at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) [junit4] at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301) [junit4] at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077) [junit4] at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) [junit4] at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [junit4] at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) [junit4] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [junit4] at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [junit4] at org.eclipse.jetty.server.Server.handle(Server.java:368) [junit4] at
[jira] [Commented] (SOLR-6266) Couchbase plug-in for Solr
[ https://issues.apache.org/jira/browse/SOLR-6266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175485#comment-14175485 ] Kwan-I Lee commented on SOLR-6266: -- Karol, Thanks for the reply. I also tried automatic XDCR configuration in Couchbase server by using the APIs they provide. The problem I encountered is they don't have API to switch XDCR protocol (at least I didn't find it on their website.) By default the protocol is set to version 2 upon creation, while we need version 1 to make the plugin work. I'll try to talk with Couchbase people to see if there's any way to make it. Kwan Couchbase plug-in for Solr -- Key: SOLR-6266 URL: https://issues.apache.org/jira/browse/SOLR-6266 Project: Solr Issue Type: New Feature Reporter: Varun Assignee: Joel Bernstein Attachments: solr-couchbase-plugin-0.0.3-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5-SNAPSHOT.tar.gz, solr-couchbase-plugin-0.0.5.1-SNAPSHOT.tar.gz, solr-couchbase-plugin.tar.gz, solr-couchbase-plugin.tar.gz It would be great if users could connect Couchbase and Solr so that updates to Couchbase can automatically flow to Solr. Couchbase provides some very nice API's which allow applications to mimic the behavior of a Couchbase server so that it can receive updates via Couchbase's normal cross data center replication (XDCR). One possible design for this is to create a CouchbaseLoader that extends ContentStreamLoader. This new loader would embed the couchbase api's that listen for incoming updates from couchbase, then marshal the couchbase updates into the normal Solr update process. Instead of marshaling couchbase updates into the normal Solr update process, we could also embed a SolrJ client to relay the request through the http interfaces. This may be necessary if we have to handle mapping couchbase buckets to Solr collections on the Solr side. 
[jira] [Commented] (SOLR-6510) select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector
[ https://issues.apache.org/jira/browse/SOLR-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175514#comment-14175514 ] David Smiley commented on SOLR-6510: I looked into this a bit, applying Christine's patch. It appears that there may be a bug in MultiDocValues (a DocValues view over multiple atomic segments). If there is no value, you *don't* get -1 for the ord, and hence the test complains. The BytesRef shows to be the empty string. I'll dig into this more later. select?collapse=... - fix NPE in Collapsing(FieldValue|Score)Collector -- Key: SOLR-6510 URL: https://issues.apache.org/jira/browse/SOLR-6510 Project: Solr Issue Type: Bug Affects Versions: 4.8.1 Reporter: Christine Poerschke Priority: Minor Fix For: 4.10.2 Affects branch_4x but not trunk, collapse field must be docValues=true and shard empty (or with nothing indexed for the field?).
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b34) - Build # 11313 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11313/ Java: 64bit/jdk1.9.0-ea-b34 -XX:+UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest Error Message: ERROR: SolrIndexSearcher opens=2 closes=1 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=2 closes=1 at __randomizedtesting.SeedInfo.seed([8D28B58E16B9BCB]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:440) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:187) at sun.reflect.GeneratedMethodAccessor33.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest Error Message: 17 threads leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest: 1) Thread[id=8742, name=HashSessionScavenger-214, state=TIMED_WAITING, group=TGRP-HttpPartitionTest] at java.lang.Object.wait(Native Method) at java.util.TimerThread.mainLoop(Timer.java:552) at java.util.TimerThread.run(Timer.java:505)2) Thread[id=8746, name=qtp828319888-8746 Acceptor1 SelectChannelConnector@127.0.0.1:60803, state=RUNNABLE, group=TGRP-HttpPartitionTest] at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) at org.eclipse.jetty.server.nio.SelectChannelConnector.accept(SelectChannelConnector.java:109) at org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:938) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543) at java.lang.Thread.run(Thread.java:745)3) Thread[id=8754, name=TEST-HttpPartitionTest.testDistribSearch-seed#[8D28B58E16B9BCB]-EventThread, state=WAITING, group=TGRP-HttpPartitionTest] at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494) 4) Thread[id=8757, name=searcherExecutor-4070-thread-1, state=WAITING, group=TGRP-HttpPartitionTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at
[jira] [Commented] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
[ https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175563#comment-14175563 ] Anshum Gupta commented on SOLR-6623: Thanks for looking at it Steve. You're right about adding the debug and explain conditionally (for now), but from my understanding, I don't think we need this to also be a part of the MoreLikeThisHandler because of the way it works. Ideally, we shouldn't need to do any of this: things should either be taken care of when shards.tolerant = true, or the exception should just be sent all the way to the user. But as I mentioned, I've kept that for another issue, i.e. SOLR-6616. NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param Key: SOLR-6623 URL: https://issues.apache.org/jira/browse/SOLR-6623 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Anshum Gupta Attachments: SOLR-6623.patch I'm not sure if this is an existing bug, or something new caused by changes in SOLR-5986, but it just popped up in jenkins today... 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1844 Revision: 1631656 {noformat} [junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestDistributedGrouping -Dtests.method=testDistribSearch -Dtests.seed=E9460FA0973F6672 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=Indian/Mayotte -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 55.9s | TestDistributedGrouping.testDistribSearch [junit4] Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: java.lang.NullPointerException [junit4] at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45) [junit4] at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708) [junit4] at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691) [junit4] at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337) [junit4] at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136) [junit4] at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408) [junit4] at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202) [junit4] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [junit4] at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137) [junit4] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419) [junit4] at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455) [junit4] at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229) [junit4] at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) [junit4] at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301) [junit4] at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077) [junit4] at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) [junit4] at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) [junit4] at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) [junit4] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) [junit4] at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116) [junit4] at org.eclipse.jetty.server.Server.handle(Server.java:368) [junit4] at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489) [junit4] at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942) [junit4] at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004) [junit4] at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640) [junit4] at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235) [junit4]
[jira] [Commented] (SOLR-6449) Add first class support for Real Time Get in Solrj
[ https://issues.apache.org/jira/browse/SOLR-6449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175587#comment-14175587 ] Steve Davids commented on SOLR-6449: With the current way of accessing Real Time Get in SolrJ, you need to do something along the lines of: {code} SolrDocument doc = solrServer.query(params("qt", "/get", "id", id)).getResponse().get("doc"); {code} It would be convenient to provide a native method of (or similar variant): {code} SolrDocument doc = solrServer.get(id); {code} Add first class support for Real Time Get in Solrj -- Key: SOLR-6449 URL: https://issues.apache.org/jira/browse/SOLR-6449 Project: Solr Issue Type: Improvement Components: clients - java Reporter: Shalin Shekhar Mangar Labels: difficulty-medium, impact-medium Fix For: 5.0 Any request handler can be queried by Solrj using a custom param map and the qt parameter, but I think /get should get first-class support in the java client.
[jira] [Commented] (SOLR-6350) Percentiles in StatsComponent
[ https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175591#comment-14175591 ] Xu Zhang commented on SOLR-6350: We should probably expose the compression parameter to users. The T-Digest algorithm uses the compression parameter to control the trade-off between the size of the T-Digest and the accuracy with which quantiles are estimated. Users should be able to pick the trade-off themselves.

Percentiles in StatsComponent
-
Key: SOLR-6350
URL: https://issues.apache.org/jira/browse/SOLR-6350
Project: Solr
Issue Type: Sub-task
Reporter: Hoss Man
Attachments: SOLR-6350-xu.patch, SOLR-6350-xu.patch

Add an option to compute user-specified percentiles when computing stats. Example...
{noformat}
stats.field={!percentiles='1,2,98,99,99.999'}price
{noformat}
[jira] [Comment Edited] (SOLR-6350) Percentiles in StatsComponent
[ https://issues.apache.org/jira/browse/SOLR-6350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175591#comment-14175591 ] Xu Zhang edited comment on SOLR-6350 at 10/17/14 10:02 PM: --- We should probably expose the compression parameter to users. The T-Digest algorithm uses the compression parameter to control the trade-off between the size of the T-Digest and the accuracy with which quantiles are estimated. Users should be able to pick the trade-off themselves. And the AVL t-digest implementation is probably better.

was (Author: simplebread): Probably we should give compression parameter to users. The T-Digest algorithm use compression parameter to control the trade-off between the size of the T-Digest and the accuracy which quantiles are estimated. Users should be able to pick the trade-off by themselves.

Percentiles in StatsComponent
-
Key: SOLR-6350
URL: https://issues.apache.org/jira/browse/SOLR-6350
Project: Solr
Issue Type: Sub-task
Reporter: Hoss Man
Attachments: SOLR-6350-xu.patch, SOLR-6350-xu.patch

Add an option to compute user-specified percentiles when computing stats. Example...
{noformat}
stats.field={!percentiles='1,2,98,99,99.999'}price
{noformat}
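For reference, the percentiles being requested are order statistics that a t-digest approximates: a larger compression parameter yields a bigger digest with tighter quantile estimates. A self-contained sketch of the exact (nearest-rank) computation the digest would be standing in for — the class and method names here are illustrative, not from the patch:

```java
import java.util.Arrays;

public class ExactPercentiles {
    // Nearest-rank percentile over an exact sorted copy of the values.
    // A t-digest trades this exact O(n log n) answer for a small sketch
    // whose accuracy is governed by its compression parameter.
    static double[] percentiles(double[] values, double[] ps) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double[] out = new double[ps.length];
        for (int i = 0; i < ps.length; i++) {
            // 1-indexed nearest rank: ceil(p/100 * n), clamped to [1, n]
            int rank = (int) Math.ceil(ps[i] / 100.0 * sorted.length);
            rank = Math.min(Math.max(rank, 1), sorted.length);
            out[i] = sorted[rank - 1];
        }
        return out;
    }

    public static void main(String[] args) {
        double[] price = new double[100];
        for (int i = 0; i < price.length; i++) price[i] = i + 1; // 1..100
        // the percentile list from stats.field={!percentiles='1,50,99'}price
        System.out.println(Arrays.toString(percentiles(price, new double[]{1, 50, 99})));
        // prints: [1.0, 50.0, 99.0]
    }
}
```

Exposing compression to the user then amounts to letting them choose how far the sketch's answers may drift from these exact values.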
[jira] [Updated] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
[ https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-6623: --- Attachment: SOLR-6623.patch

Updated patch that conditionally adds elements to the response on catching the ExitingReaderException.

NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
Key: SOLR-6623
URL: https://issues.apache.org/jira/browse/SOLR-6623
Project: Solr
Issue Type: Bug
Reporter: Hoss Man
Assignee: Anshum Gupta
Attachments: SOLR-6623.patch, SOLR-6623.patch

I'm not sure if this is an existing bug, or something new caused by the changes in SOLR-5986, but it just popped up in jenkins today... http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1844 Revision: 1631656
{noformat}
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestDistributedGrouping -Dtests.method=testDistribSearch -Dtests.seed=E9460FA0973F6672 -Dtests.slow=true -Dtests.locale=cs -Dtests.timezone=Indian/Mayotte -Dtests.file.encoding=ISO-8859-1
[junit4] ERROR 55.9s | TestDistributedGrouping.testDistribSearch
[junit4] Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: java.lang.NullPointerException
[junit4]   at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45)
[junit4]   at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708)
[junit4]   at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691)
[junit4]   at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337)
[junit4]   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136)
[junit4]   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
[junit4]   at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773)
[junit4]   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408)
[junit4]   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
[junit4]   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]   at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
[junit4]   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
[junit4]   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
[junit4]   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
[junit4]   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
[junit4]   at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
[junit4]   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
[junit4]   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
[junit4]   at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
[junit4]   at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
[junit4]   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
[junit4]   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
[junit4]   at org.eclipse.jetty.server.Server.handle(Server.java:368)
[junit4]   at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
[junit4]   at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
[junit4]   at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
[junit4]   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
[junit4]   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
[junit4]   at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
[junit4]   at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
[junit4]   at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
[junit4]   at
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11470 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11470/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
ERROR: SolrZkClient opens=20 closes=19

Stack Trace:
java.lang.AssertionError: ERROR: SolrZkClient opens=20 closes=19
at __randomizedtesting.SeedInfo.seed([3C8147AC723F39AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:455)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:188)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
1) Thread[id=1166, name=zkCallback-187-thread-1, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2) Thread[id=1163, name=TEST-CollectionsAPIDistributedZkTest.testDistribSearch-seed#[3C8147AC723F39AB]-EventThread, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
3) Thread[id=1162, name=TEST-CollectionsAPIDistributedZkTest.testDistribSearch-seed#[3C8147AC723F39AB]-SendThread(127.0.0.1:56209), state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at java.lang.Thread.sleep(Native Method)
at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at
[jira] [Commented] (SOLR-6623) NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param
[ https://issues.apache.org/jira/browse/SOLR-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175626#comment-14175626 ] Steve Rowe commented on SOLR-6623: -- +1

NPE in StoredFieldsShardResponseProcessor possible when using TIME_ALLOWED param Key: SOLR-6623 URL: https://issues.apache.org/jira/browse/SOLR-6623 Project: Solr Issue Type: Bug Reporter: Hoss Man Assignee: Anshum Gupta Attachments: SOLR-6623.patch, SOLR-6623.patch
[jira] [Updated] (SOLR-6631) DistributedQueue spinning on calling zookeeper getChildren()
[ https://issues.apache.org/jira/browse/SOLR-6631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jessica Cheng Mallet updated SOLR-6631: --- Description: The change from SOLR-6336 introduced a bug where now I'm stuck in a loop making getChildren() requests to zookeeper, with this thread dump:
{quote}
Thread-51 [WAITING] CPU time: 1d 15h 0m 57s
java.lang.Object.wait()
org.apache.zookeeper.ClientCnxn.submitRequest(RequestHeader, Record, Record, ZooKeeper$WatchRegistration)
org.apache.zookeeper.ZooKeeper.getChildren(String, Watcher)
org.apache.solr.common.cloud.SolrZkClient$6.execute() (2 recursive calls)
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkOperation)
org.apache.solr.common.cloud.SolrZkClient.getChildren(String, Watcher, boolean)
org.apache.solr.cloud.DistributedQueue.orderedChildren(Watcher)
org.apache.solr.cloud.DistributedQueue.getChildren(long)
org.apache.solr.cloud.DistributedQueue.peek(long)
org.apache.solr.cloud.DistributedQueue.peek(boolean)
org.apache.solr.cloud.Overseer$ClusterStateUpdater.run()
java.lang.Thread.run()
{quote}
Looking at the code, I think the issue is that LatchChildWatcher#process always sets the event to its member variable event, regardless of its type. The problem is that once the member event is set, await no longer waits. In this state, the while loop in getChildren(long), when called with wait being Long.MAX_VALUE, will loop back and NOT wait at await because event != null, but it still will not get any children.
{quote}
while (true) \{
  if (!children.isEmpty()) break;
  watcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait);
  if (watcher.getWatchedEvent() != null) \{
    children = orderedChildren(null);
  \}
  if (wait != Long.MAX_VALUE) break;
\}
{quote}
I think the fix would be to only set the event in the watcher if the type is not None.
was: The change from SOLR-6336 introduced a bug where now I'm stuck in a loop making getChildren() request to zookeeper with this thread dump: {quote} Thread-51 [WAITING] CPU time: 1d 15h 0m 57s java.lang.Object.wait() org.apache.zookeeper.ClientCnxn.submitRequest(RequestHeader, Record, Record, ZooKeeper$WatchRegistration) org.apache.zookeeper.ZooKeeper.getChildren(String, Watcher) org.apache.solr.common.cloud.SolrZkClient$6.execute()2 recursive calls org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkOperation) org.apache.solr.common.cloud.SolrZkClient.getChildren(String, Watcher, boolean) org.apache.solr.cloud.DistributedQueue.orderedChildren(Watcher) org.apache.solr.cloud.DistributedQueue.getChildren(long) org.apache.solr.cloud.DistributedQueue.peek(long) org.apache.solr.cloud.DistributedQueue.peek(boolean) org.apache.solr.cloud.Overseer$ClusterStateUpdater.run() java.lang.Thread.run() {quote} Looking at the code, I think the issue is that LatchChildWatcher#process always sets the event to its member variable event, regardless of its type, but the problem is that once the member event is set, the await no longer waits. In this state, the while loop in getChildren(long), when called with wait being Integer.MAX_VALUE will loop back, NOT wait at await because event != null, but then it still will not get any children. {quote} while (true) \{ if (!children.isEmpty()) break; watcher.await(wait == Long.MAX_VALUE ? DEFAULT_TIMEOUT : wait); if (watcher.getWatchedEvent() != null) \{ children = orderedChildren(null); \} if (wait != Long.MAX_VALUE) break; \} {quote} I think the fix would be to only set the event in the watcher if the type is a NodeChildrenChanged. 
DistributedQueue spinning on calling zookeeper getChildren() Key: SOLR-6631 URL: https://issues.apache.org/jira/browse/SOLR-6631 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Jessica Cheng Mallet Labels: solrcloud
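The fix described above — latch a watched event only when its type is an actual change, not a connection-state None notification — can be sketched with a stdlib-only latch watcher. The EventType enum below is a stand-in for ZooKeeper's Watcher.Event.EventType, and the class is an illustration of the proposed behavior, not the actual Solr code:

```java
public class LatchWatcherSketch {
    // Stand-in for ZooKeeper's Watcher.Event.EventType.
    enum EventType { None, NodeChildrenChanged }

    static class LatchChildWatcher {
        private final Object lock = new Object();
        private EventType event;

        // Proposed behavior: connection-state notifications (type None) are
        // ignored, so a latched event always means the children changed and
        // await() keeps waiting instead of returning immediately forever.
        void process(EventType type) {
            synchronized (lock) {
                if (type != EventType.None) {
                    event = type;
                    lock.notifyAll();
                }
            }
        }

        EventType getWatchedEvent() {
            synchronized (lock) { return event; }
        }

        // Wait until an event is latched or the timeout elapses.
        void await(long timeoutMs) throws InterruptedException {
            synchronized (lock) {
                if (event == null) lock.wait(timeoutMs);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LatchChildWatcher w = new LatchChildWatcher();
        w.process(EventType.None);     // ignored: no event latched
        w.await(10);                   // returns via timeout, not a latched event
        System.out.println(w.getWatchedEvent());   // prints: null
        w.process(EventType.NodeChildrenChanged);  // latched
        System.out.println(w.getWatchedEvent());   // prints: NodeChildrenChanged
    }
}
```

With the pre-fix behavior (latching even on None), getWatchedEvent() would be non-null after a connection event, which is exactly what made the getChildren(long) loop spin.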
[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #734: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/734/

No tests ran.

Build Log:
[...truncated 36587 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from sonatype.releases
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-spatial:5.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-spatial:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies]
[jira] [Reopened] (LUCENE-6007) Failed attempt of downloading javax:activation javadoc
[ https://issues.apache.org/jira/browse/LUCENE-6007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man reopened LUCENE-6007: -- it appears that this change has caused a problem when running the license checker against the maven builds? I may be wrong, but it's the only recent change that seems related... https://builds.apache.org/job/Lucene-Solr-Maven-5.x/734/ Revision: 1632628 {noformat} At revision 1632681 ... [java-info] java version 1.7.0_65 [java-info] OpenJDK Runtime Environment (1.7.0_65-b17, Oracle Corporation) [java-info] OpenJDK 64-Bit Server VM (24.65-b04, Oracle Corporation) [java-info] Test args: [] ... [artifact:dependencies] Downloading: org/eclipse/jetty/jetty-start/8.1.10.v20130312/jetty-start-8.1.10.v20130312.jar from repository central at http://repo1.maven.org/maven2 [artifact:dependencies] Transferring 45K from central [licenses] MISSING sha1 checksum file for: /home/jenkins/.m2/repository/org/eclipse/jetty/jetty-start/8.1.10.v20130312/jetty-start-8.1.10.v20130312.jar [licenses] EXPECTED sha1 checksum file : /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/solr/licenses/jetty-start-8.1.10.v20130312.jar.sha1 [licenses] Scanned 47 JAR file(s) for licenses (in 0.54s.), 1 error(s). {noformat} Looks like something involving the way start.jar is named tracked in solr/licenses isn't playing nicely with how the ivy.xml changes affect maven? 
(NOTE: "cd solr; ant resolve check-licenses" doesn't seem to have any problem)

Failed attempt of downloading javax:activation javadoc
--
Key: LUCENE-6007
URL: https://issues.apache.org/jira/browse/LUCENE-6007
Project: Lucene - Core
Issue Type: Bug
Components: general/build
Affects Versions: Trunk
Environment: java version 1.7.0_67 Java(TM) SE Runtime Environment (build 1.7.0_67-b01) Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) Apache Ant(TM) version 1.9.4 compiled on April 29 2014 Microsoft Windows Professional 8.1 (6.3.9600) x64
Reporter: Ilia Sretenskii
Assignee: Steve Rowe
Labels: ant, download, ivy, jar, javadoc, javax, maven
Attachments: LUCENE-6007.patch

I have [checked out the trunk source code|https://wiki.apache.org/lucene-java/HowToContribute]. But then running the following commands fails:
* [ant clean test|https://wiki.apache.org/lucene-java/HowToContribute]
* [ant idea|https://wiki.apache.org/lucene-java/HowtoConfigureIntelliJ]

The failure message says the following:
{noformat}
ivy-fail:
resolve:
[ivy:retrieve]
[ivy:retrieve] :: problems summary ::
[ivy:retrieve] WARNINGS
[ivy:retrieve] [FAILED ] javax.activation#activation;1.1.1!activation.jar(javadoc): (0ms)
[ivy:retrieve] shared: tried
[ivy:retrieve] C:\Users\Ilia\.ivy2\shared\javax.activation\activation\1.1.1\javadocs\activation.jar
[ivy:retrieve] public: tried
[ivy:retrieve] http://repo1.maven.org/maven2/javax/activation/activation/1.1.1/activation-1.1.1-javadoc.jar
[ivy:retrieve] ::
[ivy:retrieve] :: FAILED DOWNLOADS ::
[ivy:retrieve] :: ^ see resolution messages for details ^ ::
[ivy:retrieve] ::
[ivy:retrieve] :: javax.activation#activation;1.1.1!activation.jar(javadoc)
[ivy:retrieve] ::
[ivy:retrieve]
[ivy:retrieve]
[ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS

BUILD FAILED
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\build.xml:61: The following error occurred while executing this line:
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\extra-targets.xml:39: The following error occurred while executing this line:
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\solr\build.xml:209: The following error occurred while executing this line:
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\solr\common-build.xml:440: The following error occurred while executing this line:
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\solr\common-build.xml:496: The following error occurred while executing this line:
C:\Users\Ilia\Documents\IdeaProjects\svn.apache.org\lucene\solr\contrib\contrib-build.xml:52: impossible to resolve dependencies: resolve failed - see output for details

Total time: 53 minutes 19 seconds
{noformat}

There was a javadoc file for [javax:activation:1.1-rev-1|http://repo1.maven.org/maven2/javax/activation/activation/1.1-rev-1/], but none for [javax:activation:1.1.1|http://repo1.maven.org/maven2/javax/activation/activation/1.1.1/], which might be the cause.
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-5.x #734: POMs out of sync
Looks like this might be a glitch in the fix for LUCENE-6007? (disclaimer: I don't really understand the original issue, the fix, or this failure -- I'm just guessing they are related based on the timing)

https://issues.apache.org/jira/browse/LUCENE-6007

: Date: Fri, 17 Oct 2014 23:05:55 + (UTC)
: From: Apache Jenkins Server jenk...@builds.apache.org
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: [JENKINS-MAVEN] Lucene-Solr-Maven-5.x #734: POMs out of sync
:
: Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/734/
:
: No tests ran.
:
: Build Log:
: [...truncated 36587 lines...]
: -validate-maven-dependencies:
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from sonatype.releases
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:5.0.0-SNAPSHOT: checking for updates from maven-restlet
: [artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:5.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[jira] [Commented] (LUCENE-5969) Add Lucene50Codec
[ https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175742#comment-14175742 ]

ASF subversion and git services commented on LUCENE-5969:

Commit 1632706 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1632706 ]

LUCENE-5969: fix tests

Add Lucene50Codec
Key: LUCENE-5969
URL: https://issues.apache.org/jira/browse/LUCENE-5969
Project: Lucene - Core
Issue Type: Improvement
Reporter: Michael McCandless
Fix For: 5.0, Trunk
Attachments: LUCENE-5969.patch, LUCENE-5969.patch, LUCENE-5969_part2.patch

Spinoff from LUCENE-5952:
* Fix .si to write Version as 3 ints, not a String that requires parsing at read time.
* Lucene42TermVectorsFormat should not use the same codecName as Lucene41StoredFieldsFormat

It would also be nice if we had a bumpCodecVersion script so rolling a new codec is not so daunting.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
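The first spinoff item above (storing a Version as three ints rather than a String that must be parsed at read time) can be sketched in isolation. This is not Lucene's actual SegmentInfo format code, just the underlying idea using plain `java.io` streams; the `VersionIO` class and its method names are hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class VersionIO {
    // Write a version as three fixed-width ints (major, minor, bugfix)
    // instead of a dotted string like "4.10.2" that needs parsing on read.
    public static byte[] write(int major, int minor, int bugfix) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(major);
            out.writeInt(minor);
            out.writeInt(bugfix);
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for in-memory streams
        }
    }

    // Read back the three ints; no string parsing or validation required.
    public static int[] read(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            return new int[] { in.readInt(), in.readInt(), in.readInt() };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        int[] v = read(write(4, 10, 2));
        System.out.println(v[0] + "." + v[1] + "." + v[2]); // prints 4.10.2
    }
}
```

The read path becomes three fixed-size reads with no chance of a malformed-string failure, which is the motivation the issue gives for the change.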
[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4925 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4925/

1 tests failed.

REGRESSION: org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:11201/gllb/wr, https://127.0.0.1:11187/gllb/wr, https://127.0.0.1:11195/gllb/wr, https://127.0.0.1:11171/gllb/wr, https://127.0.0.1:11578/gllb/wr]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:11201/gllb/wr, https://127.0.0.1:11187/gllb/wr, https://127.0.0.1:11195/gllb/wr, https://127.0.0.1:11171/gllb/wr, https://127.0.0.1:11578/gllb/wr]
  at __randomizedtesting.SeedInfo.seed([217FCF39A5F8:A09942D4B866C5C4]:0)
  at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
  at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
  at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
  at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1850 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1850/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.

REGRESSION: org.apache.solr.TestDistributedGrouping.testDistribSearch

Error Message:
java.lang.NullPointerException
  at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45)
  at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708)
  at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691)
  at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
  at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
  at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
  at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
  at org.eclipse.jetty.server.Server.handle(Server.java:368)
  at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
  at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
  at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
  at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
  at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
  at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
  at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
  at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
  at java.lang.Thread.run(Thread.java:745)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: java.lang.NullPointerException
  at org.apache.solr.search.grouping.distributed.responseprocessor.StoredFieldsShardResponseProcessor.process(StoredFieldsShardResponseProcessor.java:45)
  at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:708)
  at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:691)
  at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:337)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:136)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1983)
  at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:773)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:408)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
  at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
  at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
  at
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11315 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11315/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.

REGRESSION: org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:48935, http://127.0.0.1:53134, http://127.0.0.1:33827, http://127.0.0.1:60012, http://127.0.0.1:48647]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:48935, http://127.0.0.1:53134, http://127.0.0.1:33827, http://127.0.0.1:60012, http://127.0.0.1:48647]
  at __randomizedtesting.SeedInfo.seed([90604E220507BE82:1186C03A7258DEBE]:0)
  at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
  at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
  at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
  at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
  at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:497)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 651 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/651/

4 tests failed.

REGRESSION: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Could not fully create collection: acollectionafterbaddelete

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Could not fully create collection: acollectionafterbaddelete
  at __randomizedtesting.SeedInfo.seed([5E63E98220B52629:DF85679A57EA4615]:0)
  at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
  at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
  at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:932)
  at org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:203)
  at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:606)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1237: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1237/

No tests ran.

Build Log:
[...truncated 36467 lines...]
-validate-maven-dependencies:
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-test-framework:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from sonatype.releases
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-parent:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-common:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-kuromoji:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-analyzers-phonetic:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-backward-codecs:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-codecs:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-core:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-expressions:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-grouping:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-highlighter:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-join:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-memory:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-misc:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queries:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-queryparser:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from maven-restlet
[artifact:dependencies] [INFO] snapshot org.apache.lucene:lucene-spatial:6.0.0-SNAPSHOT: checking for updates from releases.cloudera.com
[artifact:dependencies]
[jira] [Updated] (SOLR-6307) Atomic update remove does not work for int array or date array
[ https://issues.apache.org/jira/browse/SOLR-6307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anurag Sharma updated SOLR-6307:
Attachment: SOLR-6307.patch

Here is the patch, parsing the Double before Float.

Atomic update remove does not work for int array or date array
Key: SOLR-6307
URL: https://issues.apache.org/jira/browse/SOLR-6307
Project: Solr
Issue Type: Bug
Components: update
Affects Versions: 4.9
Reporter: Kun Xi
Assignee: Noble Paul
Labels: atomic, difficulty-medium, impact-medium
Fix For: 5.0, Trunk
Attachments: SOLR-6307.patch, SOLR-6307.patch, SOLR-6307.patch, SOLR-6307.patch, unitTests-6307.txt

Try to remove an element in the array with curl:
{code}
curl http://localhost:8080/update\?commit\=true -H 'Content-type:application/json' -d '[{"attr_birth_year_is": {"remove": [1960]}, "id": 1098}]'
curl http://localhost:8080/update\?commit\=true -H 'Content-type:application/json' -d '[{"reserved_on_dates_dts": {"remove": ["2014-02-12T12:00:00Z", "2014-07-16T12:00:00Z", "2014-02-15T12:00:00Z", "2014-02-21T12:00:00Z"]}, "id": 1098}]'
{code}
Neither of them works. The set and add operations for int array work. The set, remove, and add operations for string array work.
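The patch note ("parsing the Double before Float") points at a general numeric-parsing pitfall that is easy to demonstrate in isolation. The snippet below is not the actual Solr update code; `floatFirstMatches` is a hypothetical helper showing why value comparisons can fail when a decimal string is parsed as a `float` first and later widened to `double`: most decimal values are not exactly representable, so the widened float differs from the direct double parse.

```java
public class ParseOrder {
    // Parse the same decimal string two ways and compare the results.
    // "Float first": parse as float, then widen to double (precision lost).
    // "Double first": parse directly as double.
    public static boolean floatFirstMatches(String s) {
        double viaFloat = Float.parseFloat(s);    // 24-bit mantissa, widened
        double viaDouble = Double.parseDouble(s); // 53-bit mantissa
        return viaFloat == viaDouble;
    }

    public static void main(String[] args) {
        // 0.1 is not exactly representable, so the float-first path
        // produces a different double than parsing as double directly.
        System.out.println(floatFirstMatches("0.1")); // false
        // 0.5 is exactly representable in both widths, so they agree.
        System.out.println(floatFirstMatches("0.5")); // true
    }
}
```

If a remove operation compares stored values against values parsed the "float first" way, most decimal inputs would silently fail to match, which is consistent with parsing Double first being the fix.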
[jira] [Commented] (SOLR-6511) Fencepost error in LeaderInitiatedRecoveryThread
[ https://issues.apache.org/jira/browse/SOLR-6511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14175862#comment-14175862 ]

Shalin Shekhar Mangar commented on SOLR-6511:

Hi Tim, sorry for the late review, but it'd be more awesome if we put the replica core in addition to the node name inside the createdByNode key (or maybe just coreNodeName). Otherwise, if there is more than one replica for the same shard on the node, we don't know which one created the LIR.

Fencepost error in LeaderInitiatedRecoveryThread
Key: SOLR-6511
URL: https://issues.apache.org/jira/browse/SOLR-6511
Project: Solr
Issue Type: Bug
Reporter: Alan Woodward
Assignee: Timothy Potter
Fix For: 4.10.2, 5.0
Attachments: SOLR-6511.patch, SOLR-6511.patch

At line 106:
{code}
while (continueTrying && ++tries < maxTries) {
{code}
should be
{code}
while (continueTrying && ++tries <= maxTries) {
{code}
This is only a problem when called from DistributedUpdateProcessor, as it can have maxTries set to 1, which means the loop is never actually run.
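The fencepost described in the issue can be checked with a small standalone sketch. This is not the actual LeaderInitiatedRecoveryThread code; `attempts` is a hypothetical helper that counts loop iterations under the buggy (`<`) and fixed (`<=`) conditions.

```java
public class Fencepost {
    // Count how many times the retry loop body runs for a given maxTries,
    // using either the buggy condition (++tries < maxTries) or the fixed
    // one (++tries <= maxTries). continueTrying stays true here so the
    // bound alone decides the count.
    public static int attempts(int maxTries, boolean inclusive) {
        int tries = 0;
        int attempts = 0;
        boolean continueTrying = true;
        while (continueTrying
                && (inclusive ? ++tries <= maxTries : ++tries < maxTries)) {
            attempts++;
        }
        return attempts;
    }

    public static void main(String[] args) {
        // With maxTries == 1, the buggy condition evaluates ++tries < 1,
        // i.e. 1 < 1, which is false on the first test: the body never runs.
        System.out.println(attempts(1, false)); // 0 (buggy: loop skipped)
        // The fix (<=) lets the single permitted attempt actually happen.
        System.out.println(attempts(1, true));  // 1 (fixed)
    }
}
```

This matches the issue description: because the pre-increment happens before the comparison, a strict `<` gives `maxTries - 1` attempts, which is zero in the DistributedUpdateProcessor case.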
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11472 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11472/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseG1GC

1 tests failed.

REGRESSION: org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:35073, https://127.0.0.1:37912, https://127.0.0.1:38847, https://127.0.0.1:37300, https://127.0.0.1:47056]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:35073, https://127.0.0.1:37912, https://127.0.0.1:38847, https://127.0.0.1:37300, https://127.0.0.1:47056]
	at __randomizedtesting.SeedInfo.seed([2A5FC9CCF1B7B193:ABB947D486E8D1AF]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
	at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
	at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
	at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
	at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
	at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
	at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:483)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[jira] [Commented] (SOLR-6547) CloudSolrServer query getqtime Exception
[ https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14175867#comment-14175867 ] Anurag Sharma commented on SOLR-6547:

Hi Hoss,

Thanks for your update. Time is usually returned as a long value. Since getQTime returns an int, it looks like the intent is to return the value in seconds rather than milliseconds. I agree with your suggestion to either change the return type to long or call intValue() when returning from the current method. Is there a way I can reproduce the java.lang.ClassCastException mentioned in the description? Kevin - any suggestion?

Anurag

CloudSolrServer query getqtime Exception

Key: SOLR-6547
URL: https://issues.apache.org/jira/browse/SOLR-6547
Project: Solr
Issue Type: Bug
Components: SolrJ
Affects Versions: 4.10
Reporter: kevin

We are using CloudSolrServer to query, but solrj throws an exception:

java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
	at org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
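The failing cast can be reproduced without a Solr cluster: getQTime casts the "QTime" response-header entry to Integer, which blows up when the server returned it as a Long. The sketch below mimics the header lookup with a plain Map; the class and method names (QTimeCastDemo, getQTimeUnsafe, getQTimeSafe) are illustrative, not part of SolrJ, and the safe variant simply demonstrates the intValue() approach discussed above.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Minimal reproduction of the SOLR-6547 symptom, without SolrJ itself.
 * The response header stores QTime as a Long, while the getter casts
 * the value to Integer.
 */
public class QTimeCastDemo {

    // Mirrors the failing pattern: cast the Object directly to Integer.
    // Throws ClassCastException when the stored value is a Long.
    static int getQTimeUnsafe(Map<String, Object> header) {
        return (Integer) header.get("QTime");
    }

    // Defensive variant along the lines discussed in the comment:
    // go through Number and call intValue(), which accepts Integer or Long.
    static int getQTimeSafe(Map<String, Object> header) {
        return ((Number) header.get("QTime")).intValue();
    }

    public static void main(String[] args) {
        Map<String, Object> header = new HashMap<>();
        header.put("QTime", 42L); // server returned QTime as a Long

        try {
            getQTimeUnsafe(header);
            System.out.println("no exception");
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as in SOLR-6547");
        }
        System.out.println("safe QTime = " + getQTimeSafe(header));
    }
}
```

Note that the cast fails on the boxed type itself: Long and Integer are sibling Number subclasses, so `(Integer) someLongObject` can never succeed, regardless of whether the numeric value fits in an int.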