[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 165 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/165/

3 tests failed.

FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
Error from server at http://127.0.0.1:41187: At least one of the node(s) specified [127.0.0.1:41458_] are not currently active in [], no action taken.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:41187: At least one of the node(s) specified [127.0.0.1:41458_] are not currently active in [], no action taken.
	at __randomizedtesting.SeedInfo.seed([FE0C6C60AB966E30:765853BA056A03C8]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:425)
	at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1006)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at
[JENKINS] Lucene-Solr-repro - Build # 1557 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1557/

[...truncated 28 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/30/consoleText
[repro] Revision: 9481c1f623b77214a2a14ad18efc59fb406ed765
[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line: ant test -Dtestcase=MoveReplicaHDFSTest -Dtests.method=testFailedMove -Dtests.seed=408E7BC14D58FA08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=es-CL -Dtests.timezone=America/Martinique -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] Repro line: ant test -Dtestcase=TestAuthenticationFramework -Dtests.method=testBasics -Dtests.seed=408E7BC14D58FA08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=ar-SA -Dtests.timezone=VST -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] Repro line: ant test -Dtestcase=TestSimLargeCluster -Dtests.method=testNodeLost -Dtests.seed=408E7BC14D58FA08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sl -Dtests.timezone=Africa/Asmera -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 964cc88cee7d62edf03a923e3217809d630af5d5
[repro] git fetch
[repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestSimLargeCluster
[repro]       TestAuthenticationFramework
[repro]       MoveReplicaHDFSTest
[repro] ant compile-test
[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 -Dtests.class="*.TestSimLargeCluster|*.TestAuthenticationFramework|*.MoveReplicaHDFSTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.seed=408E7BC14D58FA08 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt -Dtests.locale=sl -Dtests.timezone=Africa/Asmera -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 12930 lines...]
[repro] Setting last failure code to 256
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestAuthenticationFramework
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro]   1/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout 964cc88cee7d62edf03a923e3217809d630af5d5
[...truncated 2 lines...]
[repro] Exiting with code 256
[...truncated 5 lines...]

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 834 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/834/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.

FAILED:  org.apache.solr.cloud.TestTlogReplica.testKillTlogReplica

Error Message:
Expected collection to be created with 1 shards and 2 replicas
null
Live Nodes: [127.0.0.1:45886_solr, 127.0.0.1:54122_solr]
Last available state: null

Stack Trace:
java.lang.AssertionError: Expected collection to be created with 1 shards and 2 replicas
null
Live Nodes: [127.0.0.1:45886_solr, 127.0.0.1:54122_solr]
Last available state: null
	at __randomizedtesting.SeedInfo.seed([B99844D18E226ED2:66AA80147F81E1DF]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
	at org.apache.solr.cloud.TestTlogReplica.createAndWaitForCollection(TestTlogReplica.java:749)
	at org.apache.solr.cloud.TestTlogReplica.testKillTlogReplica(TestTlogReplica.java:435)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[JENKINS] Lucene-Solr-Tests-master - Build # 2828 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2828/

3 tests failed.

FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:42087/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:36139/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:42087/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:36139/solr/MoveReplicaHDFSTest_failed_coll_true]
	at __randomizedtesting.SeedInfo.seed([666B6F2E8027A150:CCA6BCDC37F47480]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
	at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at
[JENKINS] Lucene-Solr-repro - Build # 1556 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1556/

[...truncated 41 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2827/consoleText
[repro] Revision: 9481c1f623b77214a2a14ad18efc59fb406ed765
[repro] Repro line: ant test -Dtestcase=TestSimTriggerIntegration -Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=hi-IN -Dtests.timezone=Chile/Continental -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] Repro line: ant test -Dtestcase=TestPolicyCloud -Dtests.method=testCreateCollection -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=id -Dtests.timezone=Asia/Dubai -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] Repro line: ant test -Dtestcase=TestPolicyCloud -Dtests.method=testCreateCollectionAddReplica -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=id -Dtests.timezone=Asia/Dubai -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] Repro line: ant test -Dtestcase=TestSimComputePlanAction -Dtests.method=testNodeAdded -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ca -Dtests.timezone=Europe/Prague -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] Repro line: ant test -Dtestcase=MultiThreadedOCPTest -Dtests.method=test -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=en-IE -Dtests.timezone=America/Araguaina -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 964cc88cee7d62edf03a923e3217809d630af5d5
[repro] git fetch
[repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestSimComputePlanAction
[repro]       TestSimTriggerIntegration
[repro]       TestPolicyCloud
[repro]       MultiThreadedOCPTest
[repro] ant compile-test
[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 -Dtests.class="*.TestSimComputePlanAction|*.TestSimTriggerIntegration|*.TestPolicyCloud|*.MultiThreadedOCPTest" -Dtests.showOutput=onerror -Dtests.seed=885FDC6C04856435 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ca -Dtests.timezone=Europe/Prague -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[...truncated 3920 lines...]
[repro] Setting last failure code to 256
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MultiThreadedOCPTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.TestPolicyCloud
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout 964cc88cee7d62edf03a923e3217809d630af5d5
[...truncated 2 lines...]
[repro] Exiting with code 256
[...truncated 5 lines...]
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1136 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1136/

No tests ran.

Build Log:
[...truncated 23269 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
     [java] Processed 2432 links (1984 relative) to 3172 anchors in 246 files
     [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
     [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
    [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
    [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml
resolve:
ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.
-ivy-fail-disallowed-ivy-version:
ivy-fail:
[jira] [Commented] (SOLR-12817) Simply response processing in CreateShardCmd
[ https://issues.apache.org/jira/browse/SOLR-12817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632671#comment-16632671 ]

Cao Manh Dat commented on SOLR-12817:
-------------------------------------

Sorry, but [~ab] and [~noble.paul] will know the answer better than me.

> Simply response processing in CreateShardCmd
> --------------------------------------------
>
>                 Key: SOLR-12817
>                 URL: https://issues.apache.org/jira/browse/SOLR-12817
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Varun Thacker
>            Priority: Major
>
> While working on SOLR-12708, Mano discovered the response-parsing technique used in CreateShardCmd:
> {code:java}
> final NamedList addResult = new NamedList();
> try {
>   ocmh.addReplica(zkStateReader.getClusterState(), addReplicasProps, addResult, () -> {
>     Object addResultFailure = addResult.get("failure");
>     if (addResultFailure != null) {
>       SimpleOrderedMap failure = (SimpleOrderedMap) results.get("failure");
>       if (failure == null) {
>         failure = new SimpleOrderedMap();
>         results.add("failure", failure);
>       }
>       failure.addAll((NamedList) addResultFailure);
>     } else {
>       SimpleOrderedMap success = (SimpleOrderedMap) results.get("success");
>       if (success == null) {
>         success = new SimpleOrderedMap();
>         results.add("success", success);
>       }
>       success.addAll((NamedList) addResult.get("success"));
>     }
>   });
> }{code}
> This code works, as the response can have either a failure or a success. But isn't it the same as doing this?
> {code:java}
> ocmh.addReplica(zkStateReader.getClusterState(), addReplicasProps, results, null);{code}
> Maybe I am missing the motivation here. [~caomanhdat] WDYT? If the usage is needed, then at least I'd want to document the reason in the code for future reference.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
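For readers skimming the thread, the question above is whether the quoted callback does anything beyond merging the per-replica sub-response into the shared results object. A toy model of that merge, using `LinkedHashMap` as a stand-in for Solr's `NamedList`/`SimpleOrderedMap` (an assumption for illustration only; the real types are list-backed and permit duplicate keys), might look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MergeSketch {
    // Toy model of the quoted CreateShardCmd callback. "addResult" is the
    // per-replica sub-response; "results" is the shared command response.
    // The sub-response carries either a "failure" or a "success" section,
    // and that section is merged into the matching section of "results",
    // creating the section first if it does not exist yet.
    static void merge(Map<String, Map<String, Object>> results,
                      Map<String, Map<String, Object>> addResult) {
        String key = addResult.containsKey("failure") ? "failure" : "success";
        results.computeIfAbsent(key, k -> new LinkedHashMap<>())
               .putAll(addResult.get(key));
    }
}
```

Whether this is observably different from handing `results` straight to `addReplica` depends on how `addReplica` writes its response, which is exactly what the comment asks the original authors to confirm.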
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 22940 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22940/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

7 tests failed.

FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.preferReplicaTypesTest

Error Message:

Stack Trace:
java.lang.AssertionError
	at __randomizedtesting.SeedInfo.seed([E5FF44A537856F23:DE15E713501CA22F]:0)
	at org.junit.Assert.fail(Assert.java:92)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertTrue(Assert.java:54)
	at org.apache.solr.client.solrj.impl.CloudSolrClientTest.queryWithPreferReplicaTypes(CloudSolrClientTest.java:957)
	at org.apache.solr.client.solrj.impl.CloudSolrClientTest.preferReplicaTypesTest(CloudSolrClientTest.java:886)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:834)

FAILED:
[jira] [Commented] (SOLR-10981) Allow update to load gzip files
[ https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632649#comment-16632649 ] Andrew Lundgren commented on SOLR-10981: I updated the patch to use: org.apache.http.entity.ContentType.APPLICATION* where applicable. I updated toLowerCase() to use Locale.ROOT, based on the forbidden-apis results. I updated the change and usage to include: "This feature is exclusively for use with {{stream.url}} and {{stream.file}}." Ready for another review! > Allow update to load gzip files > > > Key: SOLR-10981 > URL: https://issues.apache.org/jira/browse/SOLR-10981 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 6.6 >Reporter: Andrew Lundgren >Priority: Major > Labels: patch > Fix For: 7.6, master (8.0) > > Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, > SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch > > > We currently import large CSV files. We store them in gzip files as they > compress at around 80%. > To import them we must gunzip them and then import them. After that we no > longer need the decompressed files. > This patch allows directly opening either URL, or local files that are > gzipped. > For URLs, to determine if the file is gzipped, it will check the content > encoding=="gzip" or if the file ends in ".gz" > For files, if the file ends in ".gz" then it will assume the file is gzipped. > I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
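The detection logic described in the issue (Content-Encoding or a ".gz" suffix for URLs; only the suffix for local files) can be sketched roughly as follows. Class and method names here are illustrative, not the patch's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Locale;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipDetect {
    // Heuristics per the issue description: a URL stream is treated as gzipped
    // when the response's Content-Encoding is "gzip" or the path ends in ".gz";
    // a local file only when its name ends in ".gz".
    static boolean isGzippedUrl(String path, String contentEncoding) {
        return "gzip".equals(contentEncoding)
            || path.toLowerCase(Locale.ROOT).endsWith(".gz");
    }

    static boolean isGzippedFile(String path) {
        return path.toLowerCase(Locale.ROOT).endsWith(".gz");
    }

    // The input stream would then be wrapped transparently before CSV parsing.
    static InputStream maybeDecompress(InputStream in, boolean gzipped) throws IOException {
        return gzipped ? new GZIPInputStream(in) : in;
    }

    public static void main(String[] args) throws IOException {
        // Round-trip a small CSV payload through gzip to show the wrapping.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("id,name\n1,foo\n".getBytes(StandardCharsets.UTF_8));
        }
        InputStream in = maybeDecompress(new ByteArrayInputStream(buf.toByteArray()),
            isGzippedFile("export.csv.gz"));
        // Prints the original CSV payload, decompressed on the fly.
        System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
    }
}
```

Note that the suffix check is lowercased with Locale.ROOT, matching the forbidden-apis fix mentioned in the comment.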
[jira] [Updated] (SOLR-10981) Allow update to load gzip files
[ https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Lundgren updated SOLR-10981: --- Attachment: SOLR-10981.patch > Allow update to load gzip files > > > Key: SOLR-10981 > URL: https://issues.apache.org/jira/browse/SOLR-10981 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 6.6 >Reporter: Andrew Lundgren >Priority: Major > Labels: patch > Fix For: 7.6, master (8.0) > > Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, > SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch > > > We currently import large CSV files. We store them in gzip files as they > compress at around 80%. > To import them we must gunzip them and then import them. After that we no > longer need the decompressed files. > This patch allows directly opening either URL, or local files that are > gzipped. > For URLs, to determine if the file is gzipped, it will check the content > encoding=="gzip" or if the file ends in ".gz" > For files, if the file ends in ".gz" then it will assume the file is gzipped. > I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response
[ https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632608#comment-16632608 ] Anshum Gupta commented on SOLR-12767: - Looks good to me. Here are a few minor (nit-pick?) suggestions: * I know you already have an entry in the Bug Fixes and the Improvements section, but I think this would be a good thing to add to the 'upgrading from' section too? Just as this is a user facing param that we want to remove. * In DistributedUpdateProcessor.java - Can you add a TODO tag there? {code:java} +// Kept for rolling upgrades only. Should be removed in Solr 9{code} > Deprecate min_rf parameter and always include the achieved rf in the response > - > > Key: SOLR-12767 > URL: https://issues.apache.org/jira/browse/SOLR-12767 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tomás Fernández Löbbe >Assignee: Tomás Fernández Löbbe >Priority: Major > Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch, > SOLR-12767.patch, SOLR-12767.patch > > > Currently the {{min_rf}} parameter does two things. > 1. It tells Solr that the user wants to keep track of the achieved > replication factor > 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery > if the achieved replication factor is lower than the {{min_rf}} specified > #2 is intentional and I believe the reason behind it is to prevent replicas > to go into recovery in cases of short hiccups (since the assumption is that > the user is going to retry the request anyway). This is dangerous because if > the user doesn’t retry (or retries a number of times but keeps failing) the > replicas will be permanently inconsistent. Also, since we now retry updates > from leaders to replicas, this behavior has less value, since short temporary > blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2, #1 is still valuable, > but there isn’t much point of making the parameter an integer, the user is > just telling Solr that they want the achieved replication factor, so it could > be a boolean, but I’m thinking we probably don’t even want to expose the > parameter, and just always keep track of it, and include it in the response. > It’s not costly to calculate, so why keep two separate code paths? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12817) Simplify response processing in CreateShardCmd
Varun Thacker created SOLR-12817: Summary: Simplify response processing in CreateShardCmd Key: SOLR-12817 URL: https://issues.apache.org/jira/browse/SOLR-12817 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Varun Thacker While working on SOLR-12708, Mano used the response-parsing technique from CreateShardCmd:
{code:java}
final NamedList addResult = new NamedList();
try {
  ocmh.addReplica(zkStateReader.getClusterState(), addReplicasProps, addResult, () -> {
    Object addResultFailure = addResult.get("failure");
    if (addResultFailure != null) {
      SimpleOrderedMap failure = (SimpleOrderedMap) results.get("failure");
      if (failure == null) {
        failure = new SimpleOrderedMap();
        results.add("failure", failure);
      }
      failure.addAll((NamedList) addResultFailure);
    } else {
      SimpleOrderedMap success = (SimpleOrderedMap) results.get("success");
      if (success == null) {
        success = new SimpleOrderedMap();
        results.add("success", success);
      }
      success.addAll((NamedList) addResult.get("success"));
    }
  });
}{code}
This code works as the response can have either a failure or a success. But isn't it the same as doing this?
{code:java}
ocmh.addReplica(zkStateReader.getClusterState(), addReplicasProps, results, null);{code}
Maybe I am missing the motivation here. [~caomanhdat] WDYT? If the usage is needed then at least I'd want to document the reason in the code for future reference. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12708) Async collection actions should not hide failures
[ https://issues.apache.org/jira/browse/SOLR-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632585#comment-16632585 ] Varun Thacker commented on SOLR-12708: -- {quote}I have to be honest and admit that I copied the full block from {{CreateShardCmd.java}}. I think the code is doing the right thing there. In both branches of the {{if}} the code checks if the main {{results}} has a success/failure node already, and creates it if necessary. Then it adds the corresponding {{addResult}} field into the main one. The only difference is that the failure is checked before the {{if}} block. {quote} I think the usage of this code block is correct in CreateShardCmd but not where we are using it in the patch. Here's why: CreateShardCmd makes one core admin API call, so the response is either a success or a failure; hence the if-else block covers it. In this patch, there are multiple add-replica calls whose responses we are acknowledging, so some replicas can come back with success and some can fail. If there is a failure we will just add the failure response back to the results and not the success ones. The way we process requests and responses is very complicated for some reason, and we should improve it in general. But do you see what I am seeing here? > Async collection actions should not hide failures > - > > Key: SOLR-12708 > URL: https://issues.apache.org/jira/browse/SOLR-12708 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI, Backup/Restore >Affects Versions: 7.4 >Reporter: Mano Kovacs >Assignee: Varun Thacker >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Async collection API may hide failures compared to sync version. 
> [OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744] > structures errors differently in the response, that hides failures from most > evaluators. RestoreCmd did not receive, nor handle async addReplica issues. > Sample create collection sync and async result with invalid solrconfig.xml: > {noformat} > { > "responseHeader":{ > "status":0, > "QTime":32104}, > "failure":{ > "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error > from server at http://localhost:8983/solr: Error CREATEing SolrCore > 'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] > Caused by: The content of elements must consist of well-formed character data > or markup.", > "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error > from server at http://localhost:8983/solr: Error CREATEing SolrCore > 'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] > Caused by: The content of elements must consist of well-formed character data > or markup.", > "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error > from server at http://localhost:8983/solr: Error CREATEing SolrCore > 'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] > Caused by: The content of elements must consist of well-formed character data > or markup.", > "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error > from server at http://localhost:8983/solr: Error CREATEing SolrCore > 'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] > Caused by: The content of elements must consist of well-formed character data > or markup."} > } > {noformat} > vs async: > {noformat} > { > "responseHeader":{ > "status":0, > "QTime":3}, > 
"success":{ > "localhost:8983_solr":{ > "responseHeader":{ > "status":0, > "QTime":12}}, > "localhost:8983_solr":{ > "responseHeader":{ > "status":0, > "QTime":3}}, > "localhost:8983_solr":{ > "responseHeader":{ > "status":0, > "QTime":11}}, > "localhost:8983_solr":{ > "responseHeader":{ > "status":0, > "QTime":12}}}, > "myTaskId2709146382836":{ > "responseHeader":{ > "status":0, > "QTime":1}, > "STATUS":"failed", > "Response":"Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to > create core [name_shard2_replica_n2] Caused by: The content of elements must > consist of well-formed character data or markup."}, > "status":{ > "state":"completed", > "msg":"found [myTaskId] in completed tasks"}} > {noformat} > Proposing adding failure node to the results, keeping backward compatible but > correct result. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail:
[GitHub] lucene-solr pull request #438: SOLR-12593: Remove date parsing functionality...
Github user asfgit closed the pull request at: https://github.com/apache/lucene-solr/pull/438 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12593) Remove date parsing functionality from extraction contrib
[ https://issues.apache.org/jira/browse/SOLR-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632543#comment-16632543 ] ASF subversion and git services commented on SOLR-12593: Commit 964cc88cee7d62edf03a923e3217809d630af5d5 in lucene-solr's branch refs/heads/master from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=964cc88 ] SOLR-12593: remove date parsing from extract contrib * added "ignored_*" to the default configSet * Updated Ref Guide info on Solr Cell to demonstrate usage without using the techproducts configSet Closes #438 > Remove date parsing functionality from extraction contrib > - > > Key: SOLR-12593 > URL: https://issues.apache.org/jira/browse/SOLR-12593 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - Solr Cell (Tika extraction) >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: master (8.0) > > Attachments: SOLR-12593.patch, SOLR-12593.patch > > Time Spent: 3h 50m > Remaining Estimate: 0h > > The date parsing functionality in the extraction contrib is obsoleted by > equivalent functionality in ParseDateFieldUpdateProcessorFactory. It should > be removed. We should add documentation within this part of the ref guide on > how to accomplish the same (and test it). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 22939 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22939/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 20 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([130E2DDFC62B7BB7]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:53) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([130E2DDFC62B7BB7]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:53) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-11) - Build # 808 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/808/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir Error Message: Collection not found: cdcr-cluster1 Stack Trace: org.apache.solr.common.SolrException: Collection not found: cdcr-cluster1 at __randomizedtesting.SeedInfo.seed([8889EDDD6230DCDC:CD521D3F7A1E909E]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957) at org.apache.solr.cloud.cdcr.CdcrTestsUtil.invokeCdcrAction(CdcrTestsUtil.java:83) at org.apache.solr.cloud.cdcr.CdcrTestsUtil.cdcrStart(CdcrTestsUtil.java:60) at org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:78) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[JENKINS-MAVEN] Lucene-Solr-Maven-master #2366: POMs out of sync
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2366/ No tests ran. Build Log: [...truncated 17795 lines...] BUILD FAILED /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:672: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:404: The following error occurred while executing this line: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:650: Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': Error deploying artifact: Failed to transfer file: https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/8.0.0-SNAPSHOT/lucene-solr-grandparent-8.0.0-20180928.195001-235.pom. Return code is: 401 Total time: 6 minutes 33 seconds Build step 'Invoke Ant' marked build as failure Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 30 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/30/ 3 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[http://127.0.0.1:36856/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:35500/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:36856/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:35500/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([408E7BC14D58FA08:EA43A833FA8B2FD8]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at
[jira] [Commented] (SOLR-12816) Don't allow RunUpdateProcessorFactory to be set before DistributedUpdateProcessorFactory
[ https://issues.apache.org/jira/browse/SOLR-12816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632257#comment-16632257 ] Alexandre Rafalovitch commented on SOLR-12816: -- I think the issue was that some URPs can be injected between those two and have a choice to be handled centrally or per-node. Also, I could have sworn we are already injecting one of them if missing. But maybe there is a use-case for the default situation, and then recognizing an explicit one with the extra check. Of course, if the new-style processor attribute is used, they are already default. > Don't allow RunUpdateProcessorFactory to be set before > DistributedUpdateProcessorFactory > > > Key: SOLR-12816 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major > > Here's the problem that came up on a customer call this morning - "My > documents are not getting replicated to the replicas and the doc counts don't > match up" > It was a 3 node cluster. The collection was 1 shard X 3 replicas. > This is a scary situation to be in. We started down the path of debugging > replica types, auto-commits, checking if the {{_version_}} field and > {{id}} fields were defined correctly etc. > > The problem was the user had defined a custom update processor chain and had > RunUpdateProcessorFactory defined before DistributedUpdateProcessorFactory > {code:java} > > > > > {code} > > With this update chain, whichever node you index the document against will be > the only one indexing the document. It will never forward to the other nodes. > So you can index against a node hosting a replica and the leader will never > get this document. > Is there any use-case where having RunUpdateProcessor before > DistributedUpdateProcessor is needed? 
> > Perhaps we could borrow the idea from TRA or make these two update processors > default and remove them from the default configs? > {code:java} > When processing an update for a TRA, Solr initializes its > UpdateRequestProcessor chain as usual, but when DistributedUpdateProcessor > (DUP) initializes, it detects that the update targets a TRA and injects > TimeRoutedUpdateProcessor (TRUP) in front of itself.{code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12816) Don't allow RunUpdateProcessorFactory to be set before DistributedUpdateProcessorFactory
Varun Thacker created SOLR-12816: Summary: Don't allow RunUpdateProcessorFactory to be set before DistributedUpdateProcessorFactory Key: SOLR-12816 URL: https://issues.apache.org/jira/browse/SOLR-12816 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Varun Thacker Here's the problem that came up on a customer call this morning - "My documents are not getting replicated to the replicas and the doc counts don't match up" It was a 3-node cluster. The collection was 1 shard x 3 replicas. This is a scary situation to be in. We started down the path of debugging replica types, auto-commits, checking whether the {{_version_}} and {{id}} fields were defined correctly, etc. The problem was that the user had defined a custom update processor chain with RunUpdateProcessorFactory defined before DistributedUpdateProcessorFactory {code:java} {code} With this update chain, whichever node you index the document against will be the only one indexing the document. It will never forward to the other nodes. So you can index against a node hosting a replica and the leader will never get this document. Is there any use case where having RunUpdateProcessor before DistributedUpdateProcessor is needed? Perhaps we could borrow the idea from TRA or make these two update processors default and remove them from the default configs? {code:java} When processing an update for a TRA, Solr initializes its UpdateRequestProcessor chain as usual, but when DistributedUpdateProcessor (DUP) initializes, it detects that the update targets a TRA and injects TimeRoutedUpdateProcessor (TRUP) in front of itself.{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
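The chain definitions inside the {code} blocks above did not survive the mail archive; as a minimal sketch of the two orderings under discussion (chain names are illustrative, and any custom processors the user had are omitted), the contrast in solrconfig.xml would look roughly like this:

```xml
<!-- Misconfigured shape (sketch): RunUpdateProcessorFactory runs BEFORE
     DistributedUpdateProcessorFactory, so each document is indexed only on
     the node that received it and is never forwarded to leader/replicas. -->
<updateRequestProcessorChain name="broken-chain">
  <processor class="solr.RunUpdateProcessorFactory"/>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<!-- Expected ordering (sketch): distribute the update first, then run the
     local indexing step; RunUpdateProcessorFactory is normally last. -->
<updateRequestProcessorChain name="working-chain">
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```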
[jira] [Created] (SOLR-12815) Implement maxOps limit for IndexSizeTrigger
Andrzej Bialecki created SOLR-12815: Summary: Implement maxOps limit for IndexSizeTrigger Key: SOLR-12815 URL: https://issues.apache.org/jira/browse/SOLR-12815 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki {{IndexSizeTrigger}} can currently generate an unlimited number of requested operations (such as shard splits). Combined with the avalanche effect in SOLR-12730 this may cause a massive spike in the cluster load, and some of these operations may fail. It's probably better to put an arbitrary limit on the maximum number of ops requested in one trigger event. This will delay some of the needed changes (thus leading to thresholds being exceeded to a larger degree) but it will increase the likelihood that all operations succeed. A similar limit already exists in {{SearchRateTrigger}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
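As a rough illustration of the proposed cap (a hypothetical sketch; `capOps` and `maxOps` here are illustrative names, not the actual IndexSizeTrigger API), the idea reduces to truncating the list of requested operations per trigger event and letting a later event pick up the rest:

```java
// Hypothetical sketch of the proposed maxOps cap: emit at most maxOps of the
// requested operations in one trigger event, deferring the remainder. This
// trades some delay (thresholds stay exceeded longer) for a higher chance
// that every issued operation succeeds.
import java.util.ArrayList;
import java.util.List;

public class OpsCap {
    /** Returns at most maxOps operations; maxOps <= 0 means "no limit". */
    public static <T> List<T> capOps(List<T> requestedOps, int maxOps) {
        if (maxOps <= 0 || requestedOps.size() <= maxOps) {
            return requestedOps;
        }
        return new ArrayList<>(requestedOps.subList(0, maxOps));
    }
}
```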
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7540 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7540/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC All tests passed Build Log: [...truncated 14383 lines...] [junit4] JVM J1: stdout was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp\junit4-J1-20180928_163448_020310956040832591741.sysout [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x61a80b2f, pid=13804, tid=2680 [junit4] # [junit4] # JRE version: OpenJDK Runtime Environment (9.0+11) (build 9.0.4+11) [junit4] # Java VM: OpenJDK 64-Bit Server VM (9.0.4+11, mixed mode, tiered, compressed oops, parallel gc, windows-amd64) [junit4] # Problematic frame: [junit4] # V [jvm.dll+0x460b2f] [junit4] # [junit4] # No core dump will be written. Minidumps are not enabled by default on client versions of Windows [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\hs_err_pid13804.log [junit4] # [junit4] # Compiler replay data is saved as: [junit4] # C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\replay_pid13804.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.java.com/bugreport/crash.jsp [junit4] # [junit4] <<< JVM J1: EOF [...truncated 1039 lines...] 
[junit4] ERROR: JVM J1 ended with an exception, command line: C:\Users\jenkins\tools\java\64bit\jdk-9.0.4\bin\java.exe -XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps -ea -esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=652905790F5BCC43 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp -Djava.io.tmpdir=./temp -Dcommon.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene -Dclover.db.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\clover\db -Djava.security.policy=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\solr-tests.policy -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1 -Djunit4.tempDir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dfile.encoding=ISO-8859-1 -Dtests.disableHdfs=true -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dtests.filterstacks=true -Dtests.leaveTemporary=false 
-Dtests.badapples=false -classpath
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2823 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2823/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC 9 tests failed. FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink Error Message: Error from server at https://127.0.0.1:33005: Unable to call distrib softCommit on: https://127.0.0.1:38577/shardSplitWithRule_link_shard1_replica_n1/ Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:33005: Unable to call distrib softCommit on: https://127.0.0.1:38577/shardSplitWithRule_link_shard1_replica_n1/ at __randomizedtesting.SeedInfo.seed([432A4EDBCA95AD5D:4936FB404009CBF8]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitShardWithRule(ShardSplitTest.java:571) at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink(ShardSplitTest.java:550) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[GitHub] lucene-solr issue #451: LUCENE-8496: Add selective indexing to BKD/POINTS co...
Github user nknize commented on the issue: https://github.com/apache/lucene-solr/pull/451 Updated the PR and attached an updated patch to [LUCENE-8496](https://issues.apache.org/jira/browse/LUCENE-8496), along with a patch that modifies `LatLonShape.Triangle` encoding to take advantage of selective indexing. I think this is close. Wouldn't mind another round of review, along with thoughts on whether we should commit this for a Lucene 7.6 release or hold off for 8.0. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632114#comment-16632114 ] Jan Høydahl commented on SOLR-12798: bq. should be what SolrJ *always* does when it's asked to do POST, so URL limits aren't exceeded no matter what gets thrown at it. Sounds like a good way to improve robustness. The /admin/metrics bug that recently surfaced would also benefit if we preferred POST over GET in more places. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Noble Paul on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). 
For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests, because when we do, we get > "pfountz Should not get here!" errors on the Solr side, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
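The dispatch behavior being described reduces to a toy sketch like the one below (the types are hypothetical stand-ins, not the real SolrJ classes; the report is that in SolrJ 7.4 a RequestWriter is effectively always present, so the multipart/content-stream branch is never reached):

```java
// Sketch of the reported dispatch logic: once a RequestWriter exists, it is
// used exclusively to write the request, overriding the stream mechanism.
// RequestWriter and ContentStream here are simplified stand-ins.
public class DispatchSketch {
    interface RequestWriter { String write(); }
    interface ContentStream { String stream(); }

    static String send(RequestWriter writer, ContentStream stream) {
        if (writer != null) {
            return "request-writer:" + writer.write(); // overrides streams entirely
        }
        if (stream != null) {
            return "multipart:" + stream.stream();     // content-stream/multipart path
        }
        return "url-params";                           // fall back to URL parameters
    }
}
```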
[jira] [Resolved] (SOLR-12709) Simulate a 1 bln docs scaling-up scenario
[ https://issues.apache.org/jira/browse/SOLR-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved SOLR-12709. -- Resolution: Fixed Fix Version/s: 7.5 master (8.0) > Simulate a 1 bln docs scaling-up scenario > - > > Key: SOLR-12709 > URL: https://issues.apache.org/jira/browse/SOLR-12709 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Fix For: master (8.0), 7.5 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12709) Simulate a 1 bln docs scaling-up scenario
[ https://issues.apache.org/jira/browse/SOLR-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16632086#comment-16632086 ] ASF subversion and git services commented on SOLR-12709: Commit 6f23bd37fef599085db63395977a9fbd550d9551 in lucene-solr's branch refs/heads/branch_7x from Andrzej Bialecki [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6f23bd3 ] SOLR-12709: Add TestSimExtremeIndexing for testing simulated large indexing jobs. Several important improvements to the simulator. > Simulate a 1 bln docs scaling-up scenario > - > > Key: SOLR-12709 > URL: https://issues.apache.org/jira/browse/SOLR-12709 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11956) Collapsing on undefined field returns 500
[ https://issues.apache.org/jira/browse/SOLR-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved SOLR-11956. - Resolution: Fixed Assignee: David Smiley Fix Version/s: 7.6 > Collapsing on undefined field returns 500 > - > > Key: SOLR-11956 > URL: https://issues.apache.org/jira/browse/SOLR-11956 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: query parsers >Reporter: Munendra S N >Assignee: David Smiley >Priority: Trivial > Fix For: 7.6 > > Attachments: SOLR-11956.patch, SOLR-11956.patch, SOLR-11956.patch > > > When collapsing is specified on an undefined field, the response > returned has status 500 instead of 400. > This is because the SolrException is wrapped in a RuntimeException: > https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java#L377 > Then, in RequestHandlerBase, all RuntimeExceptions are converted to > SolrExceptions with status 500 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22938 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22938/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.api.collections.ShardSplitTest Error Message: ObjectTracker found 6 object(s) that were not released!!! [NIOFSDirectory, NIOFSDirectory, NIOFSDirectory, SolrCore, InternalHttpClient, NIOFSDirectory] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.NIOFSDirectory at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348) at org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:507) at org.apache.solr.core.SolrCore.(SolrCore.java:954) at org.apache.solr.core.SolrCore.(SolrCore.java:869) at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1143) at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1053) at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92) at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360) at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395) at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734) at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:674) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:531) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:132) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762) at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680) at java.lang.Thread.run(Thread.java:748) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.NIOFSDirectory at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348) at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95) at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:768) at org.apache.solr.core.SolrCore.(SolrCore.java:960) at org.apache.solr.core.SolrCore.(SolrCore.java:869) at
[jira] [Commented] (SOLR-11956) Collapsing on undefined field returns 500
[ https://issues.apache.org/jira/browse/SOLR-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631983#comment-16631983 ] Erick Erickson commented on SOLR-11956: --- [~dsmiley] Can we close this one? > Collapsing on undefined field returns 500 > - > > Key: SOLR-11956 > URL: https://issues.apache.org/jira/browse/SOLR-11956 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: query parsers >Reporter: Munendra S N >Priority: Trivial > Attachments: SOLR-11956.patch, SOLR-11956.patch, SOLR-11956.patch > > > When collapsing is specified on an undefined field, the response > returned has status 500 instead of 400. > This is because the SolrException is wrapped in a RuntimeException: > https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java#L377 > Then, in RequestHandlerBase, all RuntimeExceptions are converted to > SolrExceptions with status 500 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11956) Collapsing on undefined field returns 500
[ https://issues.apache.org/jira/browse/SOLR-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631980#comment-16631980 ] Munendra S N commented on SOLR-11956: - This issue has been fixed as part of SOLR-6280 > Collapsing on undefined field returns 500 > - > > Key: SOLR-11956 > URL: https://issues.apache.org/jira/browse/SOLR-11956 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: query parsers >Reporter: Munendra S N >Priority: Trivial > Attachments: SOLR-11956.patch, SOLR-11956.patch, SOLR-11956.patch > > > When collapsing is specified on an undefined field, the response > returned has status 500 instead of 400. > This is because the SolrException is wrapped in a RuntimeException: > https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java#L377 > Then, in RequestHandlerBase, all RuntimeExceptions are converted to > SolrExceptions with status 500 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
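The status-code loss described in the issue can be demonstrated in isolation. In this hedged sketch, `SolrStyleException` and `mapStatus` are simplified stand-ins for `SolrException` and the error mapping in `RequestHandlerBase`, not the actual Solr classes:

```java
// Sketch of the bug: a typed exception carrying a 400 status is wrapped in a
// plain RuntimeException, so the catch-all handler can no longer see the
// intended code and falls back to a generic 500.
public class StatusLoss {
    // Stand-in for SolrException (hypothetical, simplified).
    static class SolrStyleException extends RuntimeException {
        final int code;
        SolrStyleException(int code, String msg) { super(msg); this.code = code; }
    }

    // Stand-in for the handler's error mapping.
    static int mapStatus(RuntimeException e) {
        if (e instanceof SolrStyleException) {
            return ((SolrStyleException) e).code; // preserved client-error status
        }
        return 500;                               // generic server error
    }

    static int statusOf(Runnable r) {
        try { r.run(); return 200; }
        catch (RuntimeException e) { return mapStatus(e); }
    }
}
```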
[jira] [Commented] (SOLR-4382) edismax, qf, multiterm analyzer, wildcard query bug
[ https://issues.apache.org/jira/browse/SOLR-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631979#comment-16631979 ] Munendra S N commented on SOLR-4382: [~iorixxx] In SOLR-5163, support for checking unknown fields in *qf* was added. So, when an unknown field is specified, the request fails with the exception *Undefined field in qf*. If you don't want the request to fail, a catch-all dynamic field can be added to the schema (the solution you suggested) > edismax, qf, multiterm analyzer, wildcard query bug > --- > > Key: SOLR-4382 > URL: https://issues.apache.org/jira/browse/SOLR-4382 > Project: Solr > Issue Type: Bug > Components: query parsers, Schema and Analysis, search >Affects Versions: 4.0, 4.1 >Reporter: Ahmet Arslan >Priority: Major > Attachments: SOLR-4382.patch > > > If I fire a wildcard query o*t*v*h using edismax and add a non-existent field > to the qf parameter, I get this phrase query at the end (with > autoGeneratePhraseQueries="true"): > http://localhost:8983/solr/collection1/select?q=O*t*v*h=xml=on=edismax=sku%20doesNotExit > parsedquery = (+DisjunctionMaxQuery((sku:"o t v h")))/no_coord > parsedquery_toString = +(sku:"o t v h") > Existing field(s) work as expected: > http://localhost:8983/solr/collection1/select?q=O*t*v*h=xml=on=edismax=sku > yields > (+DisjunctionMaxQuery((sku:o*t*v*h)))/no_coord > +(sku:o*t*v*h) > For a workaround I enabled the following dynamic field : > > User list link: http://search-lucene.com/m/zB104LKTRI -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
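The dynamic-field workaround quoted above lost its markup in the archive; a minimal sketch of what such a catch-all schema entry can look like (the "ignored" field type name follows the classic Solr example schema and is an assumption here, not necessarily what the reporter used):

```xml
<!-- Sketch of a catch-all entry: any field name without an explicit
     definition matches "*" and is neither indexed nor stored, so an unknown
     field in qf no longer fails the request outright. -->
<fieldType name="ignored" class="solr.StrField" indexed="false" stored="false" multiValued="true"/>
<dynamicField name="*" type="ignored" multiValued="true"/>
```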
[jira] [Resolved] (SOLR-11644) RealTimeGet not working when router.field is not a uniqueKey
[ https://issues.apache.org/jira/browse/SOLR-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-11644. --- Resolution: Information Provided Hmmm, it's perfectly valid to specify a routing field with a compositeId router, so failing the collection creation isn't correct. Looking again, is this a typo? {{get?id=1044101665}} Given your schema, it should be: {{get?candidate_id=1044101665}} BTW, I know it's been almost a year, but asking questions like this on the user's list first is a better option since it would have had more eyes-on and maybe gotten a response a lot sooner. Also from the ref guide: {quote}Please note that RealTime Get or retrieval by id would also require the parameter _route_ (or shard.keys) to avoid a distributed search. {quote} I'll go ahead and close this then. > RealTimeGet not working when router.field is not an uniqeKey > > > Key: SOLR-11644 > URL: https://issues.apache.org/jira/browse/SOLR-11644 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6.2, 7.1 >Reporter: Jarek Mazgaj >Priority: Major > > I have a schema with following fields: > {code:java} > > > > candidate_id > {code} > A collection was created with following parameters: > * numShards=4 > * replicationFactor=2 > * *router.field=company_id* > When I try to do a Real Time Get with no routing information: > {code:java} > /get?id=1044101665 > {code} > I get an empty response. 
> When I try to add routing information (search returns document for these > values): > {code:java} > /get?id=1044101665&_route_=77493783 > {code} > I get an error: > {code} > org.apache.solr.common.SolrException: Can't find shard 'applicants_shard7' > at > org.apache.solr.handler.component.RealTimeGetComponent.sliceToShards(RealTimeGetComponent.java:888) > at > org.apache.solr.handler.component.RealTimeGetComponent.createSubRequests(RealTimeGetComponent.java:835) > at > org.apache.solr.handler.component.RealTimeGetComponent.distributedProcess(RealTimeGetComponent.java:791) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345) > at > org.apache.solr.handler.RealTimeGetHandler.handleRequestBody(RealTimeGetHandler.java:46) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:720) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
[JENKINS] Lucene-Solr-Tests-master - Build # 2827 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2827/ 5 tests failed. FAILED: org.apache.solr.cloud.MultiThreadedOCPTest.test Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([885FDC6C04856435:BE3B6AA7909CD]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:114) at org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748)
[jira] [Commented] (SOLR-11644) RealTimeGet not working when router.field is not a uniqueKey
[ https://issues.apache.org/jira/browse/SOLR-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631907#comment-16631907 ] Jarek Mazgaj commented on SOLR-11644: - [~erickerickson] you are right. Most likely it was caused by my misunderstanding. It was not clear to me that I cannot combine *router.field* and *router.name=compositeId*. It seemed like a really useful combination for our use case. It is not clearly described here: https://lucene.apache.org/solr/guide/6_6/collections-api.html This could also be validated in code; it would have saved me a lot of time if the CREATE API call had simply failed in this case. > RealTimeGet not working when router.field is not a uniqueKey > > > Key: SOLR-11644 > URL: https://issues.apache.org/jira/browse/SOLR-11644 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6.2, 7.1 >Reporter: Jarek Mazgaj >Priority: Major > > I have a schema with the following fields: > {code:java} > > > > candidate_id > {code} > A collection was created with the following parameters: > * numShards=4 > * replicationFactor=2 > * *router.field=company_id* > When I try to do a Real Time Get with no routing information: > {code:java} > /get?id=1044101665 > {code} > I get an empty response. 
> When I try to add routing information (search returns document for these > values): > {code:java} > /get?id=1044101665&_route_=77493783 > {code} > I get an error: > {code} > org.apache.solr.common.SolrException: Can't find shard 'applicants_shard7' > at > org.apache.solr.handler.component.RealTimeGetComponent.sliceToShards(RealTimeGetComponent.java:888) > at > org.apache.solr.handler.component.RealTimeGetComponent.createSubRequests(RealTimeGetComponent.java:835) > at > org.apache.solr.handler.component.RealTimeGetComponent.distributedProcess(RealTimeGetComponent.java:791) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345) > at > org.apache.solr.handler.RealTimeGetHandler.handleRequestBody(RealTimeGetHandler.java:46) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:720) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303) > at >
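To make the combination discussed in the comment above concrete, here is a hedged shell sketch that assembles the Collections API CREATE call pairing router.name=compositeId with a router.field that is not the uniqueKey — the pairing the commenter suggests the API should reject. The host and collection name are illustrative placeholders, not taken from the original report; the sketch only builds and prints the URL rather than issuing the request.

```shell
# Build (but do not send) the CREATE call for the combination discussed above.
# Host and collection name are illustrative placeholders.
SOLR_HOST="localhost:8983"
CREATE_URL="http://${SOLR_HOST}/solr/admin/collections?action=CREATE&name=applicants&numShards=4&replicationFactor=2&router.name=compositeId&router.field=company_id"
echo "$CREATE_URL"
# To actually issue the request against a running Solr: curl "$CREATE_URL"
```

If the validation the commenter asks for were added, a request like this would fail fast at CREATE time instead of surfacing later as empty /get responses or "Can't find shard" errors.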
[JENKINS] Lucene-Solr-repro - Build # 1553 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1553/ [...truncated 33 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/900/consoleText [repro] Revision: faad36d24358b283bf99109edbdbf6dfb95adf11 [repro] Repro line: ant test -Dtestcase=TestSimTriggerIntegration -Dtests.method=testSearchRate -Dtests.seed=99768FC15580C402 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ko-KR -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 9481c1f623b77214a2a14ad18efc59fb406ed765 [repro] git fetch [repro] git checkout faad36d24358b283bf99109edbdbf6dfb95adf11 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestSimTriggerIntegration [repro] ant compile-test [...truncated 3437 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestSimTriggerIntegration" -Dtests.showOutput=onerror -Dtests.seed=99768FC15580C402 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ko-KR -Dtests.timezone=Pacific/Wallis -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 2283 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration [repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2080 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2080/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp Error Message: {numFound=98480,start=0,docs=[]} expected:<10> but was:<98480> Stack Trace: java.lang.AssertionError: {numFound=98480,start=0,docs=[]} expected:<10> but was:<98480> at __randomizedtesting.SeedInfo.seed([CB9A68785A294BD8:EAC42EDA56079579]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12797 lines...]
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631860#comment-16631860 ] Shawn Heisey commented on SOLR-12798: - bq. How do you suggest we handle binary data that is meant for SolrCell? I would suggest that you don't do this. At all. Tika is prone to OOM and JVM crashes, as [~julienFL] already noted. When this happens in SolrCell, Solr goes down too. So it's strongly recommended for all users to never use SolrCell in production, which in my opinion means that MCF should not be using SolrCell. Tika should be separate, so if it explodes, the Solr server keeps running. That said... I think support for multi-part POST should be first class in SolrJ, and I would even say that sending separate parts for parameters and the actual body should be what SolrJ *always* does when it's asked to do POST, so URL limits aren't exceeded no matter what gets thrown at it. And we need to make sure that multi-part handling on the server side is rock-solid. (I'm not suggesting there's any problems there ... but if any are found, they need attention) It's probably a good idea to support multiple *data* streams as well in SolrJ. This would probably require some changes on the server side, and a separate Jira issue. If MCF creates SolrInputDocument objects, it can put everything there. MCF wouldn't need to be concerned about format (the JSON mentioned earlier), only one POST part is required, URL parameters are not needed, and the standard /update handler can be used, even without a change for this issue. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
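For readers unfamiliar with the mechanism under discussion, the following hedged shell sketch assembles the kind of multipart POST to Solr Cell (/update/extract) whose SolrJ equivalent the reporter says was disabled; curl's -F flag sends multipart/form-data, which keeps large metadata out of the URL. The host, core name, file name, and literal field are illustrative placeholders; the sketch only builds and prints the command rather than running it.

```shell
# Assemble (but do not run) a multipart POST to Solr Cell. The -F flag makes
# curl use multipart/form-data, so document metadata need not fit in the URL.
# Host, core, and file names are illustrative placeholders.
SOLR_URL="http://localhost:8983/solr/mycore/update/extract"
CMD="curl '${SOLR_URL}?literal.id=doc1&commit=true' -F 'myfile=@report.pdf'"
echo "$CMD"
# Against a running Solr with the extract handler enabled: eval "$CMD"
```

This is the HTTP-level shape that a ContentStreamUpdateRequest produced before the RequestWriter change described above; whether SolrJ should always use this shape for POSTs, as suggested in the comment, is a separate design question.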
[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods
[ https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631779#comment-16631779 ] Jason Gerlowski commented on SOLR-12502: bq. I don’t think this is a source of confusion for our users Agreed. I don't think anyone's confused when their method type-ahead shows 10 {{add}} variants. They get it. bq. I feel like we should not keep our APIs in a flux, something that we have done in the past Again, I agree. Your point about API churn, Anshum and Tomas, is well taken. The bar is pretty high to make SolrClient changes worth the user confusion/work. Purely cosmetic changes aren't worth it. But there's more than cosmetic benefits to the brainstormed ideas above. There are a lot of problems that are caused or exacerbated by the current SolrClient interface. It makes it easy for users to do any number of behaviors that the community warns against in the strongest terms: non-batch indexing, client-invoked commits and rollbacks, etc. The support in the SolrClient interface for multiple collections has also caused a lot of problems historically (see my last comment above). And more cosmetically, the method overloading is only partially effective: I regularly see client code that needs to repackage a {{SolrInputDocument[]}} into a List to call {{add}}, for example. These are all more or less related to the SolrClient interface's method overloading or the fixes proposed. Is the disruption of changing the interface worth it for the cosmetic improvements? No. Is the disruption worth it if the change also eases some of the nagging and painful problems SolrJ users run into? Maybe. The answer could still very well be "no", but if we're weighing tradeoffs now, it's worth pointing out the more concrete benefits. 
> Unify and reduce the number of SolrClient#add methods > - > > Key: SOLR-12502 > URL: https://issues.apache.org/jira/browse/SOLR-12502 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Reporter: Varun Thacker >Priority: Major > > On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods, which > can be very confusing to new users. > Also, the UpdateRequest class is public, so if a user is looking for > a custom combination they can always choose to build it by writing a couple of > lines of code. > For 8.0, which might not be very far away, we can improve this situation. > > Quoting David from SOLR-11654: > {quote}Any way I guess we'll leave SolrClient alone. Thanks for your input > Varun. Yes it's a shame there are so many darned overloaded methods... I > think it's a large part due to the optional "collection" parameter which like > doubles the methods! I've been bitten several times writing SolrJ code that > doesn't use the right overloaded version (forgot to specify collection). I > think for 8.0, *either* all SolrClient methods without "collection" can be > removed in favor of insisting you use the overloaded variant accepting a > collection, *or* SolrClient itself could be locked down to one collection at > the time you create it *or* have a CollectionSolrClient interface retrieved > from a SolrClient.withCollection(collection) in which all the operations that > require a SolrClient are on that interface and not SolrClient proper. > Several ideas to consider. > {quote}
[JENKINS] Lucene-Solr-repro - Build # 1552 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1552/ [...truncated 35 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2826/consoleText [repro] Revision: 9481c1f623b77214a2a14ad18efc59fb406ed765 [repro] Repro line: ant test -Dtestcase=TestSimTriggerIntegration -Dtests.method=testEventFromRestoredState -Dtests.seed=C91757E36D30553F -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=de-LU -Dtests.timezone=Africa/Dar_es_Salaam -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=MoveReplicaHDFSTest -Dtests.method=testFailedMove -Dtests.seed=C91757E36D30553F -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=cs-CZ -Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=TestTrackingShardHandlerFactory -Dtests.method=testRequestTracking -Dtests.seed=C91757E36D30553F -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ca-ES -Dtests.timezone=Africa/Asmara -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 9481c1f623b77214a2a14ad18efc59fb406ed765 [repro] git fetch [repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765 [...truncated 1 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] MoveReplicaHDFSTest [repro] TestTrackingShardHandlerFactory [repro] TestSimTriggerIntegration [repro] ant compile-test [...truncated 3424 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 -Dtests.class="*.MoveReplicaHDFSTest|*.TestTrackingShardHandlerFactory|*.TestSimTriggerIntegration" -Dtests.showOutput=onerror -Dtests.seed=C91757E36D30553F -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=cs-CZ -Dtests.timezone=America/Virgin -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 3449 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest [repro] 0/5 failed: org.apache.solr.handler.component.TestTrackingShardHandlerFactory [repro] 1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration [repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765 [...truncated 1 lines...] [repro] Exiting with code 256 [...truncated 6 lines...]
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 856 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/856/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate Error Message: java.util.concurrent.ExecutionException: java.io.IOException: org.apache.solr.api.ApiBag$ExceptionWithErrObject: Error in command payload, errors: [{suspend-trigger={name=.scheduled_maintenance}, errorMessages=[No trigger exists with name: .scheduled_maintenance]}], Stack Trace: java.io.IOException: java.util.concurrent.ExecutionException: java.io.IOException: org.apache.solr.api.ApiBag$ExceptionWithErrObject: Error in command payload, errors: [{suspend-trigger={name=.scheduled_maintenance}, errorMessages=[No trigger exists with name: .scheduled_maintenance]}], at __randomizedtesting.SeedInfo.seed([676836455E0466E1:3A2028CC91C2C0AE]:0) at org.apache.solr.cloud.autoscaling.sim.SimCloudManager.request(SimCloudManager.java:632) at org.apache.solr.cloud.autoscaling.sim.SimCloudManager$1.request(SimCloudManager.java:211) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) at org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.setupTest(TestSimTriggerIntegration.java:129) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:969) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Commented] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
[ https://issues.apache.org/jira/browse/LUCENE-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631756#comment-16631756 ] Robert Muir commented on LUCENE-8516: - It just doesn't seem like it will really improve anything if it takes that tokenizer parameter. Depending on what the tokenizer does, it may still be buggy, just like before. It is still a TokenFilter in disguise; that is clear from the API being wrong (a tokenizer taking a tokenizer as input). > Make WordDelimiterGraphFilter a Tokenizer > - > > Key: LUCENE-8516 > URL: https://issues.apache.org/jira/browse/LUCENE-8516 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8516.patch > > > Being able to split tokens up at arbitrary points in a filter chain, in > effect adding a second round of tokenization, can cause any number of > problems when trying to hold tokenstreams to their contract. The most common > offender here is the WordDelimiterGraphFilter, which can produce broken > offsets in a wide range of situations. > We should make WDGF a Tokenizer in its own right, which should preserve all > the functionality we need, but make reasoning about the resulting tokenstream > much simpler.
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1651 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1651/ 5 tests failed. FAILED: org.apache.lucene.index.TestIndexWriterDelete.testUpdatesOnDiskFull Error Message: Test abandoned because suite timeout was reached. Stack Trace: java.lang.Exception: Test abandoned because suite timeout was reached. at __randomizedtesting.SeedInfo.seed([9F8B417443135BE6]:0) FAILED: junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterDelete Error Message: Suite timeout exceeded (>= 720 msec). Stack Trace: java.lang.Exception: Suite timeout exceeded (>= 720 msec). at __randomizedtesting.SeedInfo.seed([9F8B417443135BE6]:0) FAILED: org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([9EFA06E36489750C:19AD7B6CF5D0098C]:0) at java.util.Arrays.copyOf(Arrays.java:3332) at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) at java.lang.StringBuilder.append(StringBuilder.java:136) at org.apache.lucene.index.CorruptIndexException.(CorruptIndexException.java:62) at org.apache.lucene.index.CorruptIndexException.(CorruptIndexException.java:47) at org.apache.lucene.codecs.CodecUtil.checkFooter(CodecUtil.java:471) at org.apache.lucene.util.bkd.BKDWriter.verifyChecksum(BKDWriter.java:1412) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1835) at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1854) at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1007) at org.apache.lucene.index.RandomCodec$1$1.writeField(RandomCodec.java:139) at org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62) at org.apache.lucene.codecs.PointsWriter.merge(PointsWriter.java:186) at org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:144) at 
org.apache.lucene.codecs.asserting.AssertingPointsFormat$AssertingPointsWriter.merge(AssertingPointsFormat.java:142) at org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201) at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4453) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4075) at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40) at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2178) at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2011) at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1962) at org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:226) at org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:192) at org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:173) at org.apache.lucene.document.BaseLatLonShapeTestCase.testRandomBig(BaseLatLonShapeTestCase.java:149) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp Error Message: {numFound=900725000,start=0,docs=[]} expected:<10> but was:<900725000> Stack Trace: java.lang.AssertionError: {numFound=900725000,start=0,docs=[]} expected:<10> but was:<900725000> at __randomizedtesting.SeedInfo.seed([A588F677BD53BB3B:84D6B0D5B17D659A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22937 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22937/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 2 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:43433/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:35649/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:43433/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:35649/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([25ECB28576F4978B:8F216177C127425B]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
[ https://issues.apache.org/jira/browse/LUCENE-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631705#comment-16631705 ] Alan Woodward commented on LUCENE-8516: --- It's needed at the moment for the concatenation parameters, in that if you're stringing terms back together again then you need to know where to stop. Then again, that's an argument to get rid of concatenization. IME I've seen WDGF used for two purposes: searching for hyphenated or apostrophised words, and searching for IDs or manufacturing part numbers. Concentrating on the second, we could make this tokenizer something like CharTokenizer only instead of only breaking on specific characters, you can also break on transitions. For the first, a simple filter that indexes all subparts of a word without changing offsets (more like a synonym filter) might be the way forward? > Make WordDelimiterGraphFilter a Tokenizer > - > > Key: LUCENE-8516 > URL: https://issues.apache.org/jira/browse/LUCENE-8516 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8516.patch > > > Being able to split tokens up at arbitrary points in a filter chain, in > effect adding a second round of tokenization, can cause any number of > problems when trying to keep tokenstreams to contract. The most common > offender here is the WordDelimiterGraphFilter, which can produce broken > offsets in a wide range of situations. > We should make WDGF a Tokenizer in its own right, which should preserve all > the functionality we need, but make reasoning about the resulting tokenstream > much simpler. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
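The "break on transitions" idea Alan describes can be sketched in a few lines. This is an illustrative plain-Java sketch only, under assumed break rules (letter/digit boundaries and lowercase-to-uppercase transitions); it is not the Lucene API and not the attached patch, just the shape a transition-aware CharTokenizer-style splitter might take for IDs and part numbers:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (plain Java, not Lucene): split an ID-like token
// wherever the character class transitions, which is roughly what a
// transition-aware CharTokenizer variant would do.
class TransitionSplitter {

    /** Returns the subwords of the token, breaking on letter/digit
     *  boundaries and on lowercase-to-uppercase transitions. */
    static List<String> split(String token) {
        List<String> parts = new ArrayList<>();
        int start = 0;
        for (int i = 1; i < token.length(); i++) {
            if (isBreak(token.charAt(i - 1), token.charAt(i))) {
                parts.add(token.substring(start, i));
                start = i;
            }
        }
        if (start < token.length()) {
            parts.add(token.substring(start));
        }
        return parts;
    }

    // A break falls between two chars when their classes transition.
    static boolean isBreak(char previous, char current) {
        if (Character.isLetter(previous) && Character.isDigit(current)) return true;
        if (Character.isDigit(previous) && Character.isLetter(current)) return true;
        return Character.isLowerCase(previous) && Character.isUpperCase(current);
    }
}
```

For example, "PowerShot500" would come out as ["Power", "Shot", "500"]. A real tokenizer would of course track offsets and work on the character stream rather than a String.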
[jira] [Commented] (SOLR-8335) HdfsLockFactory does not allow core to come up after a node was killed
[ https://issues.apache.org/jira/browse/SOLR-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631699#comment-16631699 ] Mano Kovacs commented on SOLR-8335: --- I removed hamcrest from the patch. If I could get a review, that would be greatly appreciated: https://github.com/apache/lucene-solr/pull/459 > HdfsLockFactory does not allow core to come up after a node was killed > -- > > Key: SOLR-8335 > URL: https://issues.apache.org/jira/browse/SOLR-8335 > Project: Solr > Issue Type: Bug >Affects Versions: 5.0, 5.1, 5.2, 5.2.1, 5.3, 5.3.1 >Reporter: Varun Thacker >Assignee: Mark Miller >Priority: Major > Attachments: SOLR-8335.patch > > Time Spent: 10m > Remaining Estimate: 0h > > When using HdfsLockFactory if a node gets killed instead of a graceful > shutdown the write.lock file remains in HDFS . The next time you start the > node the core doesn't load up because of LockObtainFailedException . > I was able to reproduce this in all 5.x versions of Solr . The problem wasn't > there when I tested it in 4.10.4 > Steps to reproduce this on 5.x > 1. Create directory in HDFS : {{bin/hdfs dfs -mkdir /solr}} > 2. Start Solr: {{bin/solr start -Dsolr.directoryFactory=HdfsDirectoryFactory > -Dsolr.lock.type=hdfs -Dsolr.data.dir=hdfs://localhost:9000/solr > -Dsolr.updatelog=hdfs://localhost:9000/solr}} > 3. Create core: {{./bin/solr create -c test -n data_driven}} > 4. Kill solr > 5. The lock file is there in HDFS and is called {{write.lock}} > 6. Start Solr again and you get a stack trace like this: > {code} > 2015-11-23 13:28:04.287 ERROR (coreLoadExecutor-6-thread-1) [ x:test] > o.a.s.c.CoreContainer Error creating core [test]: Index locked for write for > core 'test'. Solr now longer supports forceful unlocking via > 'unlockOnStartup'. Please verify locks manually! > org.apache.solr.common.SolrException: Index locked for write for core 'test'. > Solr now longer supports forceful unlocking via 'unlockOnStartup'. Please > verify locks manually! 
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:820) > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:659) > at org.apache.solr.core.CoreContainer.create(CoreContainer.java:723) > at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443) > at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.lucene.store.LockObtainFailedException: Index locked > for write for core 'test'. Solr now longer supports forceful unlocking via > 'unlockOnStartup'. Please verify locks manually! > at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:528) > at org.apache.solr.core.SolrCore.<init>(SolrCore.java:761) > ... 
9 more > 2015-11-23 13:28:04.289 ERROR (coreContainerWorkExecutor-2-thread-1) [ ] > o.a.s.c.CoreContainer Error waiting for SolrCore to be created > java.util.concurrent.ExecutionException: > org.apache.solr.common.SolrException: Unable to create core [test] > at java.util.concurrent.FutureTask.report(FutureTask.java:122) > at java.util.concurrent.FutureTask.get(FutureTask.java:192) > at org.apache.solr.core.CoreContainer$2.run(CoreContainer.java:472) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:210) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.solr.common.SolrException: Unable to create core [test] > at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737) > at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:443) > at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:434) > ... 5 more > Caused by: org.apache.solr.common.SolrException: Index locked for write for > core 'test'. Solr now longer supports forceful unlocking via > 'unlockOnStartup'. Please verify locks manually! > at org.apache.solr.core.SolrCore.(SolrCore.java:820) > at
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 330 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/330/ 8 tests failed. FAILED: org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([8C6FB86E1DDA5339:6F6BE3EBB6C1977E]:0) at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307) at org.apache.lucene.util.fst.FST.addNode(FST.java:701) at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:126) at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:200) at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:296) at org.apache.lucene.util.fst.Builder.add(Builder.java:400) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finishTerm(MemoryPostingsFormat.java:245) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.access$300(MemoryPostingsFormat.java:104) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$MemoryFieldsConsumer.write(MemoryPostingsFormat.java:386) at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.merge(PerFieldPostingsFormat.java:164) at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:231) at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:116) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4446) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4068) at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40) at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2170) at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5103) at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1612) at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1228) at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:189) at 
org.apache.lucene.index.TestDuelingCodecs.createRandomIndex(TestDuelingCodecs.java:139) at org.apache.lucene.index.TestDuelingCodecsAtNight.testBigEquals(TestDuelingCodecsAtNight.java:34) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) FAILED: org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR Error Message: No live SolrServers available to handle this request Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request at __randomizedtesting.SeedInfo.seed([9216266A93445FCD:C88E1CACEDC4382A]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:460) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.LIROnShardRestartTest.testAllReplicasInLIR(LIROnShardRestartTest.java:175) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at
[jira] [Commented] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
[ https://issues.apache.org/jira/browse/LUCENE-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631648#comment-16631648 ] Robert Muir commented on LUCENE-8516: - {quote} WordDelimiterTokenizer takes a root tokenizer (so you could base it on standard, keyword or whitespace and still get the extra level of tokenization you need) and then applies its extra tokenization on top. {quote} This seems unnecessary. Its already over-configurable as far as how to break itself. > Make WordDelimiterGraphFilter a Tokenizer > - > > Key: LUCENE-8516 > URL: https://issues.apache.org/jira/browse/LUCENE-8516 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8516.patch > > > Being able to split tokens up at arbitrary points in a filter chain, in > effect adding a second round of tokenization, can cause any number of > problems when trying to keep tokenstreams to contract. The most common > offender here is the WordDelimiterGraphFilter, which can produce broken > offsets in a wide range of situations. > We should make WDGF a Tokenizer in its own right, which should preserve all > the functionality we need, but make reasoning about the resulting tokenstream > much simpler. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4848 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4848/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp Error Message: {numFound=998575000,start=0,docs=[]} expected:<10> but was:<998575000> Stack Trace: java.lang.AssertionError: {numFound=998575000,start=0,docs=[]} expected:<10> but was:<998575000> at __randomizedtesting.SeedInfo.seed([D2F9DB8B544F6C67:F3A79D295861B2C6]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED:
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631644#comment-16631644 ] Mikhail Khludnev commented on SOLR-12798: - [~noble.paul] I'm not sure why we need to revamp handlers which expect content stream to manage them to read doc fields. The other concern is that now the request with single content stream works like a mine field, when one adds too long params it blows surprisingly. Always stripping params from POST urls make it way more predictable for users. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. 
> ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631637#comment-16631637 ] Kamuela Lau edited comment on SOLR-12367 at 9/28/18 9:54 AM: - I have attached a patch to address the issue in the description. The message will now say "org.apache.solr.ltr.model.ModelException: Features cannot be null; perhaps check for missing features". To reproduce, follow the same procedure as described in the description. For now, I have not dealt with the ClassCastException that shows up in the comment (i.e. upload features, then execute above) for casting a Long to a Double, however should this be deemed necessary, I can add that to the patch, or create another issue for it. This should also close 12676, and (maybe?) 12676. Any comments would be appreciated. was (Author: kamulau): I have attached a patch to address the issue in the description. The message will now say "org.apache.solr.ltr.model.ModelException: Features cannot be null; perhaps check for missing features". To reproduce, follow the same procedure as described in the description. For now, I have not dealt with the ClassCastException that shows up in the comment (i.e. upload features, then execute above) for casting a Long to a Double, however should this be deemed necessary, I can add that to the patch, or create another issue for it. Any comments would be appreciated. > When adding a model referencing a non-existent feature the error message is > very ambiguous > -- > > Key: SOLR-12367 > URL: https://issues.apache.org/jira/browse/SOLR-12367 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: contrib - LTR >Affects Versions: 7.3.1 >Reporter: Georg Sorst >Priority: Minor > Attachments: SOLR-12367.patch > > > When adding a model that references a non-existent feature a very ambiguous > error message is thrown, something like "Model type does not exist > org.apache.solr.ltr.model.{{LinearModel}}". > > To reproduce, do not add any features and just add a model, for example by > doing this: > > {{curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' > --data-binary '}} > { > {{ "class": "org.apache.solr.ltr.model.LinearModel",}} > {{ "name": "myModel",}} > {{ "features": [ \{"name": "whatever" }],}} > {{ "params": {"weights": {"whatever": 1.0 > {{}' -H 'Content-type:application/json'}} > > The resulting error message "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel" is extremely misleading and cost me > a while to figure out the actual cause.}} > > A more suitable error message should probably indicate the name of the > missing feature that the model is trying to reference. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
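The fix described above boils down to a fail-fast check on the feature list before model resolution. A hedged sketch of that shape (class and method names are hypothetical, and a plain RuntimeException stands in for Solr's ModelException; only the message text comes from the comment):

```java
import java.util.List;

// Hypothetical sketch: validate the feature list up front so a missing
// feature store yields the clear message from the patch instead of the
// misleading "Model type does not exist" error. RuntimeException stands
// in for org.apache.solr.ltr.model.ModelException here.
class ModelStoreSketch {

    // Throws when the model references no resolvable features.
    static void validateFeatures(List<String> features) {
        if (features == null || features.isEmpty()) {
            throw new RuntimeException(
                "Features cannot be null; perhaps check for missing features");
        }
    }
}
```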
[jira] [Commented] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631637#comment-16631637 ] Kamuela Lau commented on SOLR-12367: I have attached a patch to address the issue in the description. The message will now say "org.apache.solr.ltr.model.ModelException: Features cannot be null; perhaps check for missing features". To reproduce, follow the same procedure as described in the description. For now, I have not dealt with the ClassCastException that shows up in the comment (i.e. upload features, then execute above) for casting a Long to a Double, however should this be deemed necessary, I can add that to the patch, or create another issue for it. Any comments would be appreciated. > When adding a model referencing a non-existent feature the error message is > very ambiguous > -- > > Key: SOLR-12367 > URL: https://issues.apache.org/jira/browse/SOLR-12367 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LTR >Affects Versions: 7.3.1 >Reporter: Georg Sorst >Priority: Minor > Attachments: SOLR-12367.patch > > > When adding a model that references a non-existent feature a very ambiguous > error message is thrown, something like "Model type does not exist > org.apache.solr.ltr.model.{{LinearModel}}". 
> > To reproduce, do not add any features and just add a model, for example by > doing this: > > {{curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' > --data-binary '}} > { > {{ "class": "org.apache.solr.ltr.model.LinearModel",}} > {{ "name": "myModel",}} > {{ "features": [ \{"name": "whatever" }],}} > {{ "params": {"weights": {"whatever": 1.0 > {{}' -H 'Content-type:application/json'}} > > The resulting error message "Model type does not exist > {{org.apache.solr.ltr.model.LinearModel" is extremely misleading and cost me > a while to figure out the actual cause.}} > > A more suitable error message should probably indicate the name of the > missing feature that the model is trying to reference. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
[ https://issues.apache.org/jira/browse/LUCENE-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631638#comment-16631638 ] Alan Woodward commented on LUCENE-8516: --- Here's a first stab at a patch, which largely just copies existing WDGF functionality. WordDelimiterTokenizer takes a root tokenizer (so you could base it on standard, keyword or whitespace and still get the extra level of tokenization you need) and then applies its extra tokenization on top. * I've removed the 'english possessive' option as we have an existing filter that will do that * I've kept configuration flags, but this may be an opportunity to make the API easier to use - for example, we could make WordDelimiterIterator an abstract class with an overridable isBreak(int previous, int current) method > Make WordDelimiterGraphFilter a Tokenizer > - > > Key: LUCENE-8516 > URL: https://issues.apache.org/jira/browse/LUCENE-8516 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8516.patch > > > Being able to split tokens up at arbitrary points in a filter chain, in > effect adding a second round of tokenization, can cause any number of > problems when trying to keep tokenstreams to contract. The most common > offender here is the WordDelimiterGraphFilter, which can produce broken > offsets in a wide range of situations. > We should make WDGF a Tokenizer in its own right, which should preserve all > the functionality we need, but make reasoning about the resulting tokenstream > much simpler. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
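The overridable isBreak(int previous, int current) idea from the comment above could look roughly like this. All class and method names besides the isBreak signature are illustrative, not from the attached patch:

```java
// Hypothetical API sketch: an abstract iterator whose subclasses decide
// where breaks fall. Only isBreak(int, int) is taken from the comment;
// everything else is illustrative scaffolding.
abstract class BreakIteratorSketch {

    /** Subclasses decide whether a break falls between two code points. */
    protected abstract boolean isBreak(int previous, int current);

    /** Returns the index of the next break at or after from + 1,
     *  or text.length() if the rest of the text is one subword. */
    int nextBreak(String text, int from) {
        for (int i = from + 1; i < text.length(); i++) {
            if (isBreak(text.charAt(i - 1), text.charAt(i))) {
                return i;
            }
        }
        return text.length();
    }
}

// Example subclass: break wherever alphanumeric-ness changes, e.g. at '-'.
class AlnumBreaks extends BreakIteratorSketch {
    @Override
    protected boolean isBreak(int previous, int current) {
        return Character.isLetterOrDigit(previous) != Character.isLetterOrDigit(current);
    }
}
```

With this shape, the existing flag combinations could become small, testable subclasses instead of bitset configuration.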
[jira] [Updated] (SOLR-12367) When adding a model referencing a non-existent feature the error message is very ambiguous
[ https://issues.apache.org/jira/browse/SOLR-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kamuela Lau updated SOLR-12367: --- Attachment: SOLR-12367.patch
> When adding a model referencing a non-existent feature the error message is very ambiguous
> --
>
> Key: SOLR-12367
> URL: https://issues.apache.org/jira/browse/SOLR-12367
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: contrib - LTR
> Affects Versions: 7.3.1
> Reporter: Georg Sorst
> Priority: Minor
> Attachments: SOLR-12367.patch
>
> When adding a model that references a non-existent feature, a very ambiguous error message is thrown, something like "Model type does not exist org.apache.solr.ltr.model.LinearModel".
>
> To reproduce, do not add any features and just add a model, for example by doing this:
>
> {{curl -XPUT 'http://localhost:8983/solr/gettingstarted/schema/model-store' --data-binary '{
>   "class": "org.apache.solr.ltr.model.LinearModel",
>   "name": "myModel",
>   "features": [ {"name": "whatever"} ],
>   "params": { "weights": { "whatever": 1.0 } }
> }' -H 'Content-type:application/json'}}
>
> The resulting error message "Model type does not exist org.apache.solr.ltr.model.LinearModel" is extremely misleading and it took me a while to figure out the actual cause.
>
> A more suitable error message should probably indicate the name of the missing feature that the model is trying to reference.
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
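The improvement requested in this issue amounts to validating the model's feature references against the feature store up front, and naming the missing feature in the error message. A minimal sketch of that check, with invented names (this is not the actual Solr LTR code):

```java
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the validation SOLR-12367 asks for: before resolving
// the model class, check every feature the model references against the
// feature store and report any missing feature by name. All names invented;
// this is not the actual Solr LTR implementation.
public class ModelValidationSketch {
    /** Returns an error message naming the first missing feature, or null if all exist. */
    public static String validateFeatures(List<String> modelFeatures, Set<String> featureStore) {
        for (String feature : modelFeatures) {
            if (!featureStore.contains(feature)) {
                return "Model references feature '" + feature
                        + "' which does not exist in the feature store";
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Reproduces the report: the model references "whatever" but no features were added.
        System.out.println(validateFeatures(List.of("whatever"), Set.of()));
        // Model references feature 'whatever' which does not exist in the feature store
    }
}
```

An error of this shape would have pointed straight at the missing feature instead of at the (perfectly valid) model class name.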
[jira] [Updated] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
[ https://issues.apache.org/jira/browse/LUCENE-8516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-8516: -- Attachment: LUCENE-8516.patch > Make WordDelimiterGraphFilter a Tokenizer > - > > Key: LUCENE-8516 > URL: https://issues.apache.org/jira/browse/LUCENE-8516 > Project: Lucene - Core > Issue Type: Task >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8516.patch > > > Being able to split tokens up at arbitrary points in a filter chain, in > effect adding a second round of tokenization, can cause any number of > problems when trying to keep tokenstreams to contract. The most common > offender here is the WordDelimiterGraphFilter, which can produce broken > offsets in a wide range of situations. > We should make WDGF a Tokenizer in its own right, which should preserve all > the functionality we need, but make reasoning about the resulting tokenstream > much simpler. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8516) Make WordDelimiterGraphFilter a Tokenizer
Alan Woodward created LUCENE-8516: - Summary: Make WordDelimiterGraphFilter a Tokenizer Key: LUCENE-8516 URL: https://issues.apache.org/jira/browse/LUCENE-8516 Project: Lucene - Core Issue Type: Task Reporter: Alan Woodward Assignee: Alan Woodward Being able to split tokens up at arbitrary points in a filter chain, in effect adding a second round of tokenization, can cause any number of problems when trying to keep tokenstreams to contract. The most common offender here is the WordDelimiterGraphFilter, which can produce broken offsets in a wide range of situations. We should make WDGF a Tokenizer in its own right, which should preserve all the functionality we need, but make reasoning about the resulting tokenstream much simpler. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631608#comment-16631608 ] Julien Massiera commented on SOLR-12798: Assume I have a PDF file which contains an image that can be "OCRized". I have a process that sends the PDF to a Tika server, which extracts the metadata of the PDF file plus the text extracted from the image thanks to Tesseract. At the end of the Tika job, the process retrieves two elements: a list of metadata as an ArrayList and a file containing the text extracted from the image inside the PDF file. Now, to the metadata list I add the ACLs of the PDF file (which are huge), and I need the metadata and the file to be sent as one document to Solr for indexing. What are your recommendations, in terms of code, for doing this in the most efficient way (in terms of memory consumption and performance, of course) using SolrJ? And which handler would you use on the Solr side? I will test it and see if I experience the URL limit issue. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. 
> The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631581#comment-16631581 ] Jan Høydahl commented on SOLR-12798: {quote}here you can see how ManifoldCF accomplish content stream blob with long params {quote} This code is for posting to ExtractingHandler, and contains a limited amount of literal metadata, unless the ACLs are huge, which I suppose they may very well be. And that would warrant the multi-part requirement in itself. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. 
> ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631565#comment-16631565 ] Noble Paul commented on SOLR-12798: --- [~mkhludnev] we will post the data as follows:
{code:java}
{
  "docs": [
    { "params": { "a": "b", "c": "d" }, "payload": "" },
    { "params": { "p": "q", "r": "s" }, "payload": "" }
  ]
}
{code}
On the server side, we unmarshal the params first and then read the payload stream. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). 
For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
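Noble Paul's snippet above proposes bundling each document's params and payload into a single JSON stream, which is what would let SolrJ avoid multipart entirely. A rough client-side sketch of assembling that envelope (plain string building, all names invented; real code would use a proper JSON writer and stream large payloads instead of buffering them):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the single-stream format proposed in the comment:
// each document carries its own params map and payload inside one JSON body,
// so no multipart encoding is needed. Hand-rolled string building for brevity
// only; production code would use a JSON library and stream the payload.
public class SingleStreamSketch {
    /** Minimal JSON string escaping (backslash and double quote only). */
    public static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    /** Serialize one document as {"params":{...},"payload":"..."}. */
    public static String docJson(Map<String, String> params, String payload) {
        StringBuilder sb = new StringBuilder("{\"params\":{");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append(',');
            sb.append('"').append(escape(e.getKey())).append("\":\"")
              .append(escape(e.getValue())).append('"');
            first = false;
        }
        return sb.append("},\"payload\":\"").append(escape(payload)).append("\"}").toString();
    }

    /** Wrap a batch of (params, payload) pairs in the {"docs":[...]} envelope. */
    public static String docsJson(List<Map.Entry<Map<String, String>, String>> docs) {
        StringBuilder sb = new StringBuilder("{\"docs\":[");
        for (int i = 0; i < docs.size(); i++) {
            if (i > 0) sb.append(',');
            sb.append(docJson(docs.get(i).getKey(), docs.get(i).getValue()));
        }
        return sb.append("]}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("a", "b");
        params.put("c", "d");
        System.out.println(docsJson(List.of(Map.entry(params, "extracted text"))));
        // {"docs":[{"params":{"a":"b","c":"d"},"payload":"extracted text"}]}
    }
}
```

The server can then unmarshal the params of each entry first and read the payload afterwards, exactly the order Noble describes, without the client ever touching the URL or multipart encoding.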
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2821 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2821/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncIdRaceCondition Error Message: expected:<9> but was:<8> Stack Trace: java.lang.AssertionError: expected:<9> but was:<8> at __randomizedtesting.SeedInfo.seed([86AE232BCF5DDDF6:D3234EC6A333245D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncIdRaceCondition(CollectionsAPIAsyncDistributedZkTest.java:244) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED:
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631548#comment-16631548 ] Mikhail Khludnev commented on SOLR-12798: - [~janhoy], here you can see how ManifoldCF accomplish content stream blob with long params. https://github.com/apache/manifoldcf/blob/11f8021c22c7fc141d237970b713b197992b5921/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/HttpPoster.java#L1224 You can see particular params attached. I am just replying to your question literally, regardless of my (lack of) understanding of, or opinion on, this design. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. 
> ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631520#comment-16631520 ] Jan Høydahl commented on SOLR-12798: {quote}How we can avoid multipart when we have one big chunk as a content stream and one chunk with huge params? {quote} I have still not seen the usecase for this. Why would there be huge params when you post a binary content stream to SolrCell? The params would come from the metadata inside the binary docs, which are unpacked on the Solr server side? You could of course have large metadata about a PDF sitting in a database on the client and want to post that with the binary doc but as I understand the usecase for MCF, the huge metadata is parsed from the binary doc by Tika on the server side? > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. 
Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631514#comment-16631514 ] Mikhail Khludnev commented on SOLR-12798: - [~noble.paul] not sure I follow. How we can avoid multipart when we have one big chunk as a content stream and one chunk with huge params? > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). 
For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631502#comment-16631502 ] Noble Paul edited comment on SOLR-12798 at 9/28/18 8:05 AM: The ideal fix is to avoid multipart altogether because we control both ends of the communication. was (Author: noble.paul): The ideal fix is to avoid multipart altogether because we support both ends of the communication. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). 
For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631502#comment-16631502 ] Noble Paul commented on SOLR-12798: --- The ideal fix is to avoid multipart altogether because we support both ends of the communication. > Structural changes in SolrJ since version 7.0.0 have effectively disabled > multipart post > > > Key: SOLR-12798 > URL: https://issues.apache.org/jira/browse/SOLR-12798 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ >Affects Versions: 7.4 >Reporter: Karl Wright >Assignee: Karl Wright >Priority: Major > Attachments: HOT Balloon Trip_Ultra HD.jpg, > SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, > SOLR-12798-workaround.patch, no params in url.png, solr-update-request.txt > > > Project ManifoldCF uses SolrJ to post documents to Solr. When upgrading from > SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to > SolrJ's HttpSolrClient class that seemingly disable any use of multipart > post. This is critical because ManifoldCF's documents often contain metadata > in excess of 4K that therefore cannot be stuffed into a URL. > The changes in question seem to have been performed by Paul Noble on > 10/31/2017, with the introduction of the RequestWriter mechanism. Basically, > if a request has a RequestWriter, it is used exclusively to write the > request, and that overrides the stream mechanism completely. I haven't > chased it back to a specific ticket. > ManifoldCF's usage of SolrJ involves the creation of > ContentStreamUpdateRequests for all posts meant for Solr Cell, and the > creation of UpdateRequests for posts not meant for Solr Cell (as well as for > delete and commit requests). For our release cycle that is taking place > right now, we're shipping a modified version of HttpSolrClient that ignores > the RequestWriter when dealing with ContentStreamUpdateRequests. 
We > apparently cannot use multipart for all requests because on the Solr side we > get "pfountz Should not get here!" errors on the Solr side when we do, which > generate HTTP error code 500 responses. That should not happen either, in my > opinion. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud
[ https://issues.apache.org/jira/browse/SOLR-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mine Orange resolved SOLR-12807. Resolution: Fixed Fix Version/s: 6.5.1 6.6 7.0 > out of memory error due to a lot of zk watchers in solr cloud > -- > > Key: SOLR-12807 > URL: https://issues.apache.org/jira/browse/SOLR-12807 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 6.1 >Reporter: Mine Orange >Priority: Major > Fix For: 7.0, 6.6, 6.5.1 > > > Analyzing the dump file, we found a lot of watchers in the childWatches of > ZKWatchManager, nearly 1.8G; the znode of the childWatches is > /overseer/collection-queue-work. We confirmed that this is not caused by > frequent use of the collection API, and the network is normal. > The instance is the overseer leader of a solr cluster and has not been restarted for > more than a year; we suspect that the watchers grow over time. > Our solr version is 6.1 and our zookeeper version is 3.4.9. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud
[ https://issues.apache.org/jira/browse/SOLR-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631479#comment-16631479 ] Mine Orange commented on SOLR-12807: Well, as SOLR-10420 shows, the child-watcher leak in DistributedQueue may eventually have led to the OOM on my instance. I have tested the 6.6 version and compared it with 6.1, focusing on the registration of the child watcher; there is no repeated watcher-registration problem in 6.6.
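The growth pattern described above (each queue peek registering a fresh child watcher that never fires and is never removed, so the watch list on /overseer/collection-queue-work grows without bound) can be sketched with a tiny stand-in registry. This is an illustration of the SOLR-10420 leak pattern only, not actual Solr or ZooKeeper code; all class and method names here are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for ZooKeeper's per-znode child-watch list (illustration only).
public class WatcherLeakSketch {
    static final List<Runnable> childWatches = new ArrayList<>();
    static Runnable outstanding = null;

    // Leaky version: every peek registers a brand-new watcher object,
    // mirroring the pre-fix DistributedQueue behavior described above.
    static void peekLeaky() {
        childWatches.add(() -> { /* would react to a children-changed event */ });
    }

    // Fixed version: reuse a single outstanding watcher until it fires,
    // mirroring the post-SOLR-10420 behavior.
    static void peekFixed() {
        if (outstanding == null) {
            outstanding = () -> { /* would react, then clear `outstanding` */ };
            childWatches.add(outstanding);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) peekLeaky();
        // Watcher count grows linearly with peeks: 10000 after 10000 peeks.
        System.out.println("leaky watcher count = " + childWatches.size());

        childWatches.clear();
        outstanding = null;
        for (int i = 0; i < 10_000; i++) peekFixed();
        // With reuse, the count stays at 1 no matter how often we peek.
        System.out.println("fixed watcher count = " + childWatches.size());
    }
}
```

On an overseer leader that runs for a year without restart, the leaky variant is exactly the shape of heap growth reported above: many small watcher objects accumulating on one znode.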
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 22936 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22936/ Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseSerialGC 5 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:36853/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:38289/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:36853/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:38289/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([216D5058F7F976A8:8BA083AA402AA378]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631430#comment-16631430 ] Mikhail Khludnev commented on SOLR-12798: - OK. It turns out that to trigger passing params as part of a multipart request, one needs to pass at least _two_ _named_ streams. Here [^SOLR-12798-workaround.patch]. [~kwri...@metacarta.com], would you mind evaluating a quick workaround: after the binary payload is added as a content stream, can you add a named do-nothing stream as well, like in the patch below? {code} up.addContentStream(new ContentStreamBase.StringStream("") { { setName("multipart trigger. SOLR-12798"); } }); {code} Regarding the more appropriate fix: should we always pass params as multipart with POST, or should we try to estimate their size and do so only for long ones?
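For scale, a rough sketch of why such params cannot ride in the URL once multipart is disabled: URL-encoding even a few hundred modest metadata parameters blows past Jetty's default 8192-byte URI limit, the same limit behind the "URI is too large >8192" warning quoted in this thread. The parameter names and values below are invented for illustration, not taken from ManifoldCF:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UriLengthDemo {
    // Length of the query string a client would produce if it URL-encoded
    // nParams metadata-style parameters instead of sending them multipart.
    public static int encodedLength(int nParams) {
        StringBuilder query = new StringBuilder("/solr/collection1/update?");
        for (int i = 0; i < nParams; i++) {
            query.append("literal.meta_").append(i).append('=')
                 .append(URLEncoder.encode("some metadata value " + i,
                                           StandardCharsets.UTF_8))
                 .append('&');
        }
        return query.length();
    }

    public static void main(String[] args) {
        // Jetty's default request-line limit, per the ">8192" warning above.
        final int limit = 8192;
        System.out.println("300 params -> " + encodedLength(300)
                + " bytes (limit " + limit + ")");
    }
}
```

Three hundred ~40-byte parameters already exceed the limit, which is why documents carrying metadata "in excess of 4K" fail as soon as SolrJ stops using multipart post.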
[jira] [Updated] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12798: Attachment: SOLR-12798-workaround.patch
[jira] [Comment Edited] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post
[ https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631063#comment-16631063 ] Mikhail Khludnev edited comment on SOLR-12798 at 9/28/18 6:31 AM: -- This is somewhat -scary- interesting; see [^SOLR-12798-reproducer.patch]. If we send just one file, it disables multipart and huge params may go into the URL. After that, the test -*hangs*- fails with the log message {quote} 390718 WARN (qtp1324113116-23) [x:collection1] o.e.j.h.HttpParser URI is too large >8192 HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:55150/solr/collection1: Expected mime type application/octet-stream but got text/html. Bad Message 414 reason: URI Too Long {quote} was (Author: mkhludnev): This is somewhat scary. see [^SOLR-12798-reproducer.patch]. If we sent just one file it disables multipart and huge params might go to URL. After that, test *hangs* with the log message {quote} 390718 WARN (qtp1324113116-23) [x:collection1] o.e.j.h.HttpParser URI is too large >8192 {quote}