[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1982 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1982/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

26 tests failed.

FAILED: org.apache.solr.client.solrj.io.stream.StreamingTest.testRollupStream

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle the request.
        at __randomizedtesting.SeedInfo.seed([1433C42303A5C305:2DE6F666A41A16F]:0)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
        at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
        at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
        at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
        at org.apache.solr.client.solrj.io.stream.StreamingTest.clearCollection(StreamingTest.java:119)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-repro - Build # 1001 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1001/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/266/consoleText [repro] Revision: 5c40fe5906ecc5eabb89bf1a3086dd9121402d61 [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=TestLatLonShapeQueries -Dtests.method=testRandomBig -Dtests.seed=AEC5BDFEA2072157 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=und -Dtests.timezone=Europe/Tiraspol -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=HdfsChaosMonkeySafeLeaderTest -Dtests.seed=17C3484A7E923D01 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar -Dtests.timezone=Africa/Nairobi -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 1bea1da5dc43d3b392c5e363c3ad970e1df6d5fc [repro] git fetch [repro] git checkout 5c40fe5906ecc5eabb89bf1a3086dd9121402d61 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]lucene/sandbox [repro] TestLatLonShapeQueries [repro]solr/core [repro] HdfsChaosMonkeySafeLeaderTest [repro] ant compile-test [...truncated 165 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestLatLonShapeQueries" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=AEC5BDFEA2072157 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=und -Dtests.timezone=Europe/Tiraspol -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 208 lines...] [repro] Setting last failure code to 256 [repro] ant compile-test [...truncated 3256 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.HdfsChaosMonkeySafeLeaderTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=17C3484A7E923D01 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar -Dtests.timezone=Africa/Nairobi -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 8700 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest [repro] 5/5 failed: org.apache.lucene.document.TestLatLonShapeQueries [repro] Re-testing 100% failures at the tip of branch_7x [repro] git fetch [repro] git checkout branch_7x [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 35 lines...] [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]lucene/sandbox [repro] TestLatLonShapeQueries [repro] ant compile-test [...truncated 165 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestLatLonShapeQueries" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=AEC5BDFEA2072157 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=und -Dtests.timezone=Europe/Tiraspol -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 205 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x: [repro] 5/5 failed: org.apache.lucene.document.TestLatLonShapeQueries [repro] Re-testing 100% failures at the tip of branch_7x without a seed [repro] ant clean [...truncated 7 lines...] [repro] Test suites by module: [repro]lucene/sandbox [repro] TestLatLonShapeQueries [repro] ant compile-test [...truncated 165 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestLatLonShapeQueries" -Dtests.showOutput=onerror -Dtests.multiplier=2
[JENKINS] Lucene-Solr-Tests-master - Build # 2611 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2611/

1 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeLostTriggerRestoreState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
        at __randomizedtesting.SeedInfo.seed([7D107492C3FEAF88:56EFA1C95986BA58]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeLostTriggerRestoreState(TestTriggerIntegration.java:324)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.lang.Thread.run(Thread.java:748)

Build Log:
[...truncated 12264 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
   [junit4]   2> Creating dataDir:
[jira] [Commented] (SOLR-12570) OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields because pattern replacement doesn't work correctly
[ https://issues.apache.org/jira/browse/SOLR-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551521#comment-16551521 ]

Koji Sekiguchi commented on SOLR-12570:
---------------------------------------

I posted a patch in LUCENE-8420. It includes the new NER model, which can predict LOCATION in addition to PERSON. I think we can add a test for this once LUCENE-8420 is committed; I haven't tried predicting LOCATION with the new model file yet, though.

> OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields
> because pattern replacement doesn't work correctly
> ----------------------------------------------------------------------
>
>             Key: SOLR-12570
>             URL: https://issues.apache.org/jira/browse/SOLR-12570
>         Project: Solr
>      Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: UpdateRequestProcessors
> Affects Versions: 7.3, 7.3.1, 7.4
>        Reporter: Koji Sekiguchi
>        Priority: Minor
>         Fix For: master (8.0), 7.5
>     Attachments: SOLR-12570.patch
>
> Because of the following code, if resolvedDest is "body_{EntityType}_s" it
> becomes "body_PERSON_s" by replacement; but once it is replaced, the
> placeholder ({EntityType}) is overwritten, so the destination is always
> "body_PERSON_s".
> {code}
> resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType);
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
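Since the root cause is easy to lose in the quoted snippet, here is a minimal, self-contained Java sketch of the SOLR-12570 placeholder bug and the obvious fix. Everything except the `{EntityType}` placeholder and the `body_{EntityType}_s` field-name pattern (which come from the report) is illustrative; this is not Solr's actual OpenNLPExtractNamedEntitiesUpdateProcessor code.

```java
// Demonstrates why reusing the mutated resolvedDest makes every entity type
// map to the first replacement: the placeholder disappears after the first
// substitution, so later replace() calls find nothing to replace.
public class EntityTypePlaceholderDemo {
    static final String ENTITY_TYPE = "{EntityType}";

    public static void main(String[] args) {
        String destTemplate = "body_{EntityType}_s";
        String[] entityTypes = {"PERSON", "LOCATION"};

        // Buggy flow (as in the report): the replaced string overwrites the
        // template, so every iteration after the first is a no-op.
        String resolvedDest = destTemplate;
        for (String entityType : entityTypes) {
            resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType);
            System.out.println(resolvedDest); // body_PERSON_s, then body_PERSON_s again
        }

        // Fix sketch: resolve from the untouched template on every iteration,
        // so each entity type gets its own destination field.
        for (String entityType : entityTypes) {
            String dest = destTemplate.replace(ENTITY_TYPE, entityType);
            System.out.println(dest); // body_PERSON_s, then body_LOCATION_s
        }
    }
}
```

The key point is that `String.replace` returns a new string; once the result is assigned back over the template, the `{EntityType}` marker is gone for good.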
[jira] [Commented] (LUCENE-8420) Upgrade OpenNLP to 1.9.0
[ https://issues.apache.org/jira/browse/LUCENE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551519#comment-16551519 ]

Koji Sekiguchi commented on LUCENE-8420:
----------------------------------------

I created model files for 1.9.0 by executing ant train-test-models under lucene/analysis/opennlp/. As for the training data, I renamed ner_flashman.txt to ner.txt and gave the file a location entity type for SOLR-12570. I deleted opennlp-maxent, which is never used (and I think it's old; the opennlp-tools package includes maxent).

> Upgrade OpenNLP to 1.9.0
> ------------------------
>
>             Key: LUCENE-8420
>             URL: https://issues.apache.org/jira/browse/LUCENE-8420
>         Project: Lucene - Core
>      Issue Type: Task
>      Components: modules/analysis
> Affects Versions: 7.4
>        Reporter: Koji Sekiguchi
>        Priority: Minor
>         Fix For: master (8.0), 7.5
>     Attachments: LUCENE-8420.patch
>
> OpenNLP 1.9.0 generates a new-format model file which 1.8.x cannot read. 1.9.0
> can read the previous format for back-compat.
[jira] [Updated] (LUCENE-8420) Upgrade OpenNLP to 1.9.0
[ https://issues.apache.org/jira/browse/LUCENE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Sekiguchi updated LUCENE-8420:
-----------------------------------

    Attachment: LUCENE-8420.patch

> Upgrade OpenNLP to 1.9.0
> ------------------------
>
>             Key: LUCENE-8420
>             URL: https://issues.apache.org/jira/browse/LUCENE-8420
>         Project: Lucene - Core
>      Issue Type: Task
>      Components: modules/analysis
> Affects Versions: 7.4
>        Reporter: Koji Sekiguchi
>        Priority: Minor
>         Fix For: master (8.0), 7.5
>     Attachments: LUCENE-8420.patch
>
> OpenNLP 1.9.0 generates a new-format model file which 1.8.x cannot read. 1.9.0
> can read the previous format for back-compat.
[JENKINS] Lucene-Solr-repro - Build # 1000 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1000/ [...truncated 37 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2610/consoleText [repro] Revision: f6e9d00b90ac624b05586b225b9cda7eb7ea60ae [repro] Repro line: ant test -Dtestcase=InfixSuggestersTest -Dtests.method=testShutdownDuringBuild -Dtests.seed=26248563215CF7AC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk -Dtests.timezone=Etc/GMT-14 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] Repro line: ant test -Dtestcase=GraphTest -Dtests.seed=17A97EFCD9A55CDD -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it -Dtests.timezone=Africa/Djibouti -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 1bea1da5dc43d3b392c5e363c3ad970e1df6d5fc [repro] git fetch [repro] git checkout f6e9d00b90ac624b05586b225b9cda7eb7ea60ae [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] GraphTest [repro]solr/core [repro] InfixSuggestersTest [repro] ant compile-test [...truncated 2453 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.GraphTest" -Dtests.showOutput=onerror -Dtests.seed=17A97EFCD9A55CDD -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it -Dtests.timezone=Africa/Djibouti -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 68 lines...] [repro] ant compile-test [...truncated 1329 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.InfixSuggestersTest" -Dtests.showOutput=onerror -Dtests.seed=26248563215CF7AC -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk -Dtests.timezone=Etc/GMT-14 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 641 lines...] 
[repro] Setting last failure code to 256
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.io.graph.GraphTest
[repro]   1/5 failed: org.apache.solr.handler.component.InfixSuggestersTest
[repro] git checkout 1bea1da5dc43d3b392c5e363c3ad970e1df6d5fc
[...truncated 2 lines...]
[repro] Exiting with code 256
[...truncated 5 lines...]
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 20 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/20/ 20 tests failed. FAILED: org.apache.lucene.document.TestLatLonShapeQueries.testRandomBig Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([ADBF889458EF963A:2AE8F51BC9B6EABA]:0) at org.apache.lucene.geo.GeoTestUtil.createRegularPolygon(GeoTestUtil.java:325) at org.apache.lucene.geo.GeoTestUtil.nextPolygon(GeoTestUtil.java:398) at org.apache.lucene.document.TestLatLonShapeQueries.doTestRandom(TestLatLonShapeQueries.java:127) at org.apache.lucene.document.TestLatLonShapeQueries.testRandomBig(TestLatLonShapeQueries.java:107) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: Error from server at http://127.0.0.1:45827/e_j/z/collection1: Async exception during distributed update: Error from server at http://127.0.0.1:54976/e_j/z/collection1_shard1_replica_n47: Bad Request request: http://127.0.0.1:54976/e_j/z/collection1_shard1_replica_n47/update?update.chain=distrib-dup-test-chain-explicit=TOLEADER=http%3A%2F%2F127.0.0.1%3A54976%2Fe_j%2Fz%2Fcollection1_shard1_replica_n47%2F=javabin=2 Remote error message: Exception writing document id 61 to the index; possible analysis 
error. Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:45827/e_j/z/collection1: Async exception during distributed update: Error from server at http://127.0.0.1:54976/e_j/z/collection1_shard1_replica_n47: Bad Request request: http://127.0.0.1:54976/e_j/z/collection1_shard1_replica_n47/update?update.chain=distrib-dup-test-chain-explicit=TOLEADER=http%3A%2F%2F127.0.0.1%3A54976%2Fe_j%2Fz%2Fcollection1_shard1_replica_n47%2F=javabin=2 Remote error message: Exception writing document id 61 to the index; possible analysis error. at __randomizedtesting.SeedInfo.seed([5FEC78A489E667C9:D7B8477E271A0A31]:0) at
[jira] [Commented] (SOLR-12477) Return server error(500) for AlreadyClosedException instead of client Errors(400)
[ https://issues.apache.org/jira/browse/SOLR-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551474#comment-16551474 ]

jefferyyuan commented on SOLR-12477:
------------------------------------

Thanks, [~varunthacker]. I made the change as you suggested; please check. Just one exception: corruptLeader may throw RemoteSolrException when called by a test method, so the test code changes accordingly.

> Return server error (500) for AlreadyClosedException instead of client
> errors (400)
> ----------------------------------------------------------------------
>
>             Key: SOLR-12477
>             URL: https://issues.apache.org/jira/browse/SOLR-12477
>         Project: Solr
>      Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: update
> Affects Versions: 7.3.1, master (8.0)
>        Reporter: jefferyyuan
>        Assignee: Varun Thacker
>        Priority: Minor
>          Labels: update
>         Fix For: 7.3.2, master (8.0)
>
>      Time Spent: 10m
>  Remaining Estimate: 0h
>
> In some cases (for example, a corrupt index), addDoc0 throws
> AlreadyClosedException, but the Solr server returns client error 400 to the
> client. This will confuse customers and especially monitoring tools.
> Patch: [https://github.com/apache/lucene-solr/pull/402]
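As a minimal illustration of the point SOLR-12477 makes (a closed or corrupt index is a server-side fault, so it should surface as HTTP 500 rather than a 4xx client error), here is a hedged Java sketch. The exception class and the mapper below are stand-ins for illustration only; they are not Solr's actual error-handling code, and the real AlreadyClosedException lives in org.apache.lucene.store.

```java
// Sketch of classifying exceptions into HTTP status codes: conditions the
// client can fix get 4xx, conditions only the server can fix get 5xx.
public class StatusMappingDemo {
    // Stand-in for org.apache.lucene.store.AlreadyClosedException, which
    // extends IllegalStateException in Lucene.
    static class AlreadyClosedException extends IllegalStateException {
        AlreadyClosedException(String msg) { super(msg); }
    }

    static int statusFor(Throwable t) {
        if (t instanceof AlreadyClosedException) {
            return 500; // server fault: the writer/index is unusable
        }
        if (t instanceof IllegalArgumentException) {
            return 400; // genuine client error, e.g. a malformed document
        }
        return 500;     // default unknown failures to server error
    }

    public static void main(String[] args) {
        System.out.println(statusFor(new AlreadyClosedException("this IndexWriter is closed"))); // 500
        System.out.println(statusFor(new IllegalArgumentException("bad field value")));          // 400
    }
}
```

The design point is simply that the classification happens on the exception type before the response is written, so monitoring tools see 5xx for server-side index failures instead of misleading 4xx codes.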
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4746 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4746/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.DistributedQueueTest.testPeekElements Error Message: expected:<1> but was:<0> Stack Trace: java.lang.AssertionError: expected:<1> but was:<0> at __randomizedtesting.SeedInfo.seed([B06B33959FC26799:4D4589B44FFB3384]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:261) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 1888 lines...] [junit4] JVM J0: stderr was not empty, see:
[jira] [Updated] (LUCENE-8420) Upgrade OpenNLP to 1.9.0
[ https://issues.apache.org/jira/browse/LUCENE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Rowe updated LUCENE-8420:
-------------------------------

    Component/s: modules/analysis

> Upgrade OpenNLP to 1.9.0
> ------------------------
>
>             Key: LUCENE-8420
>             URL: https://issues.apache.org/jira/browse/LUCENE-8420
>         Project: Lucene - Core
>      Issue Type: Task
>      Components: modules/analysis
> Affects Versions: 7.4
>        Reporter: Koji Sekiguchi
>        Priority: Minor
>         Fix For: master (8.0), 7.5
>
> OpenNLP 1.9.0 generates a new-format model file which 1.8.x cannot read. 1.9.0
> can read the previous format for back-compat.
[jira] [Moved] (LUCENE-8420) Upgrade OpenNLP to 1.9.0
[ https://issues.apache.org/jira/browse/LUCENE-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Rowe moved SOLR-12571 to LUCENE-8420:
-------------------------------------------

        Fix Version/s: (was: 7.5)
                       (was: master (8.0))
                       7.5
                       master (8.0)
    Affects Version/s: (was: 7.4)
                       7.4
             Security: (was: Public)
          Component/s: (was: update)
                       (was: contrib - LangId)
                  Key: LUCENE-8420 (was: SOLR-12571)
              Project: Lucene - Core (was: Solr)

> Upgrade OpenNLP to 1.9.0
> ------------------------
>
>             Key: LUCENE-8420
>             URL: https://issues.apache.org/jira/browse/LUCENE-8420
>         Project: Lucene - Core
>      Issue Type: Task
> Affects Versions: 7.4
>        Reporter: Koji Sekiguchi
>        Priority: Minor
>         Fix For: master (8.0), 7.5
>
> OpenNLP 1.9.0 generates a new-format model file which 1.8.x cannot read. 1.9.0
> can read the previous format for back-compat.
[jira] [Commented] (SOLR-12571) Upgrade OpenNLP to 1.9.0
[ https://issues.apache.org/jira/browse/SOLR-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551394#comment-16551394 ] Steve Rowe commented on SOLR-12571: --- +1. The test models should also be regenerated: {{ant train-test-models}} under {{lucene/analysis/opennlp/}}. Also, I'm going to move this from a SOLR issue to a LUCENE issue. > Upgrade OpenNLP to 1.9.0 > > > Key: SOLR-12571 > URL: https://issues.apache.org/jira/browse/SOLR-12571 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - LangId, update >Affects Versions: 7.4 >Reporter: Koji Sekiguchi >Priority: Minor > Fix For: master (8.0), 7.5 > > > OpenNLP 1.9.0 generates new format model file which 1.8.x cannot read. 1.9.0 > can read the previous format for back-compat. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12570) OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields because pattern replacement doesn't work correctly
[ https://issues.apache.org/jira/browse/SOLR-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551389#comment-16551389 ] Steve Rowe commented on SOLR-12570: --- +1 to the patch, good catch! It would be good to have a test for this capability (none there now) - we'd need to generate a test model that predicts multiple entity types; the one test model we have now can only predict {{PERSON}}. > OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields > because pattern replacement doesn't work correctly > - > > Key: SOLR-12570 > URL: https://issues.apache.org/jira/browse/SOLR-12570 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 7.3, 7.3.1, 7.4 >Reporter: Koji Sekiguchi >Priority: Minor > Fix For: master (8.0), 7.5 > > Attachments: SOLR-12570.patch > > > Because of the following code, if resolvedDest is "body_{EntityType}_s", it > becomes "body_PERSON_s" by replacement; but once it is replaced, the > placeholder ({EntityType}) is overwritten, so the destination is always > "body_PERSON_s". > {code} > resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
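The overwrite bug described above can be reproduced outside Solr with plain string handling. The following is a self-contained sketch (the class and method names are hypothetical, not the actual OpenNLPExtractNamedEntitiesUpdateProcessor code) contrasting the buggy in-place replacement with a fix that resolves the template into a fresh local for each entity type:

```java
import java.util.ArrayList;
import java.util.List;

public class EntityDestDemo {
    static final String ENTITY_TYPE = "{EntityType}";

    // Buggy variant: mutates the template variable, so the placeholder is
    // gone after the first replacement and every later entity type is
    // routed to the first destination field.
    static List<String> resolveBuggy(String template, List<String> entityTypes) {
        List<String> dests = new ArrayList<>();
        String resolvedDest = template;
        for (String entityType : entityTypes) {
            resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType);
            dests.add(resolvedDest);
        }
        return dests;
    }

    // Fixed variant: resolve into a fresh local each iteration, leaving the
    // template (and its placeholder) intact for subsequent entity types.
    static List<String> resolveFixed(String template, List<String> entityTypes) {
        List<String> dests = new ArrayList<>();
        for (String entityType : entityTypes) {
            dests.add(template.replace(ENTITY_TYPE, entityType));
        }
        return dests;
    }
}
```

With a template of "body_{EntityType}_s" and entity types PERSON and LOCATION, the buggy variant yields "body_PERSON_s" twice, while the fixed variant yields "body_PERSON_s" and "body_LOCATION_s".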
[jira] [Commented] (SOLR-9394) CDCR: Exception on target site while using deleteById to delete a document
[ https://issues.apache.org/jira/browse/SOLR-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551359#comment-16551359 ] Amrit Sarkar commented on SOLR-9394: Yeah, I should have specified the jira, sorry about that. While building SOLR-11003 (the bidirectional approach), we reconfigured the entries for each tlog element and made sure each field value goes to its rightful fieldType. Though I was {{never able to replicate}} the bug on this jira. > CDCR: Exception on target site while using deleteById to delete a document > -- > > Key: SOLR-9394 > URL: https://issues.apache.org/jira/browse/SOLR-9394 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.1 >Reporter: BHARATH K VENKATARAMANA >Priority: Critical > > Deleting a document on the main site by using deleteById solrj method is > causing the below exception on the target site, even though the document is > deleted correctly on the main site. But if we use deleteByQuery, it works > fine. In the solr schema.xml the unique key is the "id" field and we have it > as long, if we change that to string and then deleteById works. 
> Error stacktrace on the target site SOLR node leader:- > 2016-08-06 08:09:21.091 ERROR (qtp472654579-2699) [c:collection s:shard1 > r:core_node3 x:collection] o.a.s.h.RequestHandlerBase > org.apache.solr.common.SolrException: Invalid Number: ^A^@^@^@^@^@^L^K0W > at > org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:537) > at > org.apache.solr.update.DeleteUpdateCommand.getIndexedId(DeleteUpdateCommand.java:65) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionDelete(DistributedUpdateProcessor.java:1495) > at > org.apache.solr.update.processor.CdcrUpdateProcessor.versionDelete(CdcrUpdateProcessor.java:85) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:1154) > at > org.apache.solr.handler.loader.JavabinLoader.delete(JavabinLoader.java:151) > at > org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:112) > at > org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54) > at > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at >
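For illustration only, here is a minimal sketch of the failure mode in the report above: a numeric uniqueKey field expects the delete-by-id value as a readable decimal number, so raw serialized bytes trip the "Invalid Number" path, while a string uniqueKey (or deleteByQuery) side-steps the numeric parse entirely. The class and method below are stand-ins, not Solr's actual TrieField code:

```java
public class DeleteByIdDemo {
    // Stand-in for a TrieField-style readableToIndexed: the external value
    // must be a parseable decimal number, or the delete fails.
    static long readableToIndexed(String externalVal) {
        try {
            return Long.parseLong(externalVal);
        } catch (NumberFormatException e) {
            // Mirrors the "Invalid Number: ..." SolrException in the trace.
            throw new RuntimeException("Invalid Number: " + externalVal, e);
        }
    }

    // Returns true if a delete-by-id with this value would get past the
    // numeric-field parse, false if it would raise "Invalid Number".
    static boolean deleteSucceeds(String idValue) {
        try {
            readableToIndexed(idValue);
            return true;
        } catch (RuntimeException e) {
            return false;
        }
    }
}
```

A readable value like "42" parses fine, but a string reconstructed from raw binary bytes (like the "^A^@^@^@..." garbage in the stack trace) does not, which matches the reported behavior: deleteById fails with a long uniqueKey yet works once the field is a string.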
[jira] [Commented] (SOLR-9394) CDCR: Exception on target site while using deleteById to delete a document
[ https://issues.apache.org/jira/browse/SOLR-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551318#comment-16551318 ] Varun Thacker commented on SOLR-9394: - {quote} as I believe (from jenkins) this has been fixed in later versions. {quote} Was this a bug? Which Jira was it addressed in? Just closing out this issue without saying whether it was an issue, or which Jira ( link ) fixed it, makes it impossible for others to follow what happened here. > CDCR: Exception on target site while using deleteById to delete a document > -- > > Key: SOLR-9394 > URL: https://issues.apache.org/jira/browse/SOLR-9394 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.1 >Reporter: BHARATH K VENKATARAMANA >Priority: Critical > > Deleting a document on the main site by using deleteById solrj method is > causing the below exception on the target site, even though the document is > deleted correctly on the main site. But if we use deleteByQuery, it works > fine. In the solr schema.xml the unique key is the "id" field and we have it > as long, if we change that to string and then deleteById works. 
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551298#comment-16551298 ] Robert Muir commented on LUCENE-8415: - I guess what I mean is, RAMDirectory is different because it lives in a bubble. On the other hand FSDirectory shares a world with other applications (maybe Java, maybe not) and maybe even other computers in more ridiculous setups. So Java code in the Directory isn't really up to the task of enforcing these contracts; we need OS help. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
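The two contracts being formalized in LUCENE-8415 can be sketched with a minimal in-memory structure. The class and method names below are assumptions for illustration, not Lucene's actual Directory API: files are write-once, and a file must not be opened for reading before its writer has been closed.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WriteOnceDir {
    private final Map<String, byte[]> files = new HashMap<>();
    private final Set<String> openForWrite = new HashSet<>();

    // Write-once: a name may be created at most once, ever.
    public void createOutput(String name) {
        if (files.containsKey(name) || openForWrite.contains(name)) {
            throw new IllegalStateException("write-once violated: " + name);
        }
        openForWrite.add(name);
    }

    public void closeOutput(String name, byte[] content) {
        if (!openForWrite.remove(name)) {
            throw new IllegalStateException("not open for write: " + name);
        }
        files.put(name, content);
    }

    // No reads-before-write-completed: reading a file still open for
    // writing is a contract violation.
    public byte[] openInput(String name) {
        if (openForWrite.contains(name)) {
            throw new IllegalStateException("still open for write: " + name);
        }
        byte[] b = files.get(name);
        if (b == null) throw new IllegalStateException("no such file: " + name);
        return b;
    }

    // Small probe helper: does the action violate a contract?
    public static boolean violates(Runnable action) {
        try { action.run(); return false; }
        catch (IllegalStateException e) { return true; }
    }
}
```

As the comment above notes, a Java-level guard like this only works inside one process's bubble (the RAMDirectory case); for FSDirectory, where other applications or machines share the same files, enforcement ultimately needs help from the OS.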
[jira] [Assigned] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter
[ https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-12572: Assignee: Varun Thacker > Reuse fieldvalues computed while sorting at writing in ExportWriter > --- > > Key: SOLR-12572 > URL: https://issues.apache.org/jira/browse/SOLR-12572 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Assignee: Varun Thacker >Priority: Minor > Attachments: SOLR-12572.patch > > > --- to be updated -- -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9394) CDCR: Exception on target site while using deleteById to delete a document
[ https://issues.apache.org/jira/browse/SOLR-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson resolved SOLR-9394. -- Resolution: Duplicate Fixed as part of other JIRAs. > CDCR: Exception on target site while using deleteById to delete a document > -- > > Key: SOLR-9394 > URL: https://issues.apache.org/jira/browse/SOLR-9394 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.1 >Reporter: BHARATH K VENKATARAMANA >Priority: Critical > > Deleting a document on the main site by using deleteById solrj method is > causing the below exception on the target site, even though the document is > deleted correctly on the main site. But if we use deleteByQuery, it works > fine. In the solr schema.xml the unique key is the "id" field and we have it > as long, if we change that to string and then deleteById works.
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7430 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7430/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelExecutorStream Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([D04E347EE5F89CF4:6D594167DCD4A1A9]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelExecutorStream(StreamDecoratorTest.java:3605) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest Error Message: 6 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) Thread[id=3727,
[jira] [Commented] (SOLR-9394) CDCR: Exception on target site while using deleteById to delete a document
[ https://issues.apache.org/jira/browse/SOLR-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551280#comment-16551280 ] Amrit Sarkar commented on SOLR-9394: Hi Erick, +1 from me on closing this, as I believe (from Jenkins) this has been fixed in later versions. > CDCR: Exception on target site while using deleteById to delete a document > -- > > Key: SOLR-9394 > URL: https://issues.apache.org/jira/browse/SOLR-9394 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.1 >Reporter: BHARATH K VENKATARAMANA >Priority: Critical > > Deleting a document on the main site by using deleteById solrj method is > causing the below exception on the target site, even though the document is > deleted correctly on the main site. But if we use deleteByQuery, it works > fine. In the solr schema.xml the unique key is the "id" field and we have it > as long, if we change that to string and then deleteById works. 
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22491 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22491/ Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger Error Message: expected:<3> but was:<2> Stack Trace: java.lang.AssertionError: expected:<3> but was:<2> at __randomizedtesting.SeedInfo.seed([7EB654C5A4670BF3:1D7D62473DA878DE]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:112) at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:65) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log:
[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage
[ https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551265#comment-16551265 ] ASF subversion and git services commented on SOLR-12028: Commit b086323f5c46698ed407ec9af1e1f080f76155ac in lucene-solr's branch refs/heads/branch_7x from Erick Erickson [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b086323 ] SOLR-12028: BadApple and AwaitsFix annotations usage (cherry picked from commit 1bea1da) > BadApple and AwaitsFix annotations usage > > > Key: SOLR-12028 > URL: https://issues.apache.org/jira/browse/SOLR-12028 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, > SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch > > > There's a long discussion of this topic at SOLR-12016. Here's a summary: > - BadApple annotations are used for tests that intermittently fail, say < 30% > of the time. Tests that fail more often should be moved to AwaitsFix. This is, > of course, a judgement call > - AwaitsFix annotations are used for tests that, for some reason, the problem > can't be fixed immediately. Likely reasons are third-party dependencies, > extreme difficulty tracking down, dependency on another JIRA etc. > Jenkins jobs will typically run with BadApple disabled to cut down on noise. > Periodically Jenkins jobs will be run with BadApples enabled so BadApple > tests won't be lost and reports can be generated. Tests that run with > BadApples disabled that fail require _immediate_ attention. > The default for developers is that BadApple is enabled. > If you are working on one of these tests and cannot get the test to fail > locally, it is perfectly acceptable to comment the annotation out. You should > let the dev list know that this is deliberate. 
> This JIRA is a placeholder for BadApple tests to point to between the times > they're identified as BadApple and they're either fixed or changed to > AwaitsFix or assigned their own JIRA. > I've assigned this to myself to track so I don't lose track of it. No one > person will fix all of these issues, this will be an ongoing technical debt > cleanup effort. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage
[ https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551242#comment-16551242 ] ASF subversion and git services commented on SOLR-12028: Commit 1bea1da5dc43d3b392c5e363c3ad970e1df6d5fc in lucene-solr's branch refs/heads/master from Erick Erickson [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1bea1da ] SOLR-12028: BadApple and AwaitsFix annotations usage > BadApple and AwaitsFix annotations usage > > > Key: SOLR-12028 > URL: https://issues.apache.org/jira/browse/SOLR-12028 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, > SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch > > > There's a long discussion of this topic at SOLR-12016. Here's a summary: > - BadApple annotations are used for tests that intermittently fail, say < 30% > of the time. Tests that fail more often should be moved to AwaitsFix. This is, > of course, a judgement call > - AwaitsFix annotations are used for tests that, for some reason, the problem > can't be fixed immediately. Likely reasons are third-party dependencies, > extreme difficulty tracking down, dependency on another JIRA etc. > Jenkins jobs will typically run with BadApple disabled to cut down on noise. > Periodically Jenkins jobs will be run with BadApples enabled so BadApple > tests won't be lost and reports can be generated. Tests that run with > BadApples disabled that fail require _immediate_ attention. > The default for developers is that BadApple is enabled. > If you are working on one of these tests and cannot get the test to fail > locally, it is perfectly acceptable to comment the annotation out. You should > let the dev list know that this is deliberate. 
> This JIRA is a placeholder for BadApple tests to point to between the times > they're identified as BadApple and they're either fixed or changed to > AwaitsFix or assigned their own JIRA. > I've assigned this to myself to track so I don't lose track of it. No one > person will fix all of these issues, this will be an ongoing technical debt > cleanup effort.
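The annotation policy described above can be illustrated with a small, self-contained Java sketch. The two annotations below are local stand-ins that merely mirror the shape of LuceneTestCase.BadApple and LuceneTestCase.AwaitsFix (the real ones take a bugUrl element); the test class and the SOLR-XXXXX issue key are hypothetical.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Local stand-ins for LuceneTestCase.BadApple / LuceneTestCase.AwaitsFix,
// declared here only so this sketch compiles on its own.
@Retention(RetentionPolicy.RUNTIME)
@interface BadApple { String bugUrl(); }

@Retention(RetentionPolicy.RUNTIME)
@interface AwaitsFix { String bugUrl(); }

class FlakyTests {
  // Fails intermittently (say < 30% of runs): BadApple, pointing at the tracking JIRA.
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
  public void testSometimesFlaky() {}

  // Known problem that cannot be fixed right now: AwaitsFix with its own (hypothetical) JIRA.
  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-XXXXX")
  public void testKnownBroken() {}
}
```

Whether BadApple-annotated tests actually run is toggled by the tests.badapples system property, which is visible in the repro lines elsewhere in this digest.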
[JENKINS] Lucene-Solr-repro - Build # 999 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/999/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/100/consoleText [repro] Revision: 9d3cc1e16fd0aa8c49855691134046323ac57e52 [repro] Repro line: ant test -Dtestcase=MoveReplicaHDFSTest -Dtests.method=testFailedMove -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PE -Dtests.timezone=Arctic/Longyearbyen -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=TestTriggerIntegration -Dtests.method=testListeners -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-JO -Dtests.timezone=Europe/Chisinau -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=IndexSizeTriggerTest -Dtests.method=testSplitIntegration -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=is-IS -Dtests.timezone=SystemV/CST6CDT -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=testSplitMixedReplicaTypes -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-CH -Dtests.timezone=Australia/NSW -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=testSplitWithChaosMonkey -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-CH -Dtests.timezone=Australia/NSW -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=test -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-CH -Dtests.timezone=Australia/NSW -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 
[repro] Repro line: ant test -Dtestcase=HdfsAutoAddReplicasIntegrationTest -Dtests.method=testSimple -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=de-DE -Dtests.timezone=Africa/Blantyre -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: c152caeb238da36ccbcea86d3050c0b976508efb [repro] git fetch [repro] git checkout 9d3cc1e16fd0aa8c49855691134046323ac57e52 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] MoveReplicaHDFSTest [repro] ShardSplitTest [repro] HdfsAutoAddReplicasIntegrationTest [repro] TestTriggerIntegration [repro] IndexSizeTriggerTest [repro] ant compile-test [...truncated 3301 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 -Dtests.class="*.MoveReplicaHDFSTest|*.ShardSplitTest|*.HdfsAutoAddReplicasIntegrationTest|*.TestTriggerIntegration|*.IndexSizeTriggerTest" -Dtests.showOutput=onerror -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PE -Dtests.timezone=Arctic/Longyearbyen -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 150687 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 1/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest [repro] 1/5 failed: org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest [repro] 2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [repro] 3/5 failed: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration [repro] 5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest [repro] Re-testing 100% failures at the tip of master [repro] git fetch [repro] git checkout master [...truncated 4 lines...] [repro] git merge --ff-only [...truncated 102 lines...] 
[repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] ShardSplitTest [repro] ant compile-test [...truncated 3301 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.ShardSplitTest" -Dtests.showOutput=onerror -Dtests.seed=2BD282460AF3AD87 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr-CH -Dtests.timezone=Australia/NSW -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 106226 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master: [repro] 5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest [repro] Re-testing 100% failures at the tip of master without a seed [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro]
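The [repro] job above is driven by the "Repro line:" entries it finds in a Jenkins console log; each one is an ant command that re-runs a failing test with its exact seed, locale, and timezone. As a rough sketch of extracting those commands for local use (the ReproLines class and its regex are my own illustration, not part of the build system):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: collect the "Repro line: ant test ..." commands from a Jenkins
// console log. Assumes the flattened log format shown above, where each
// command runs until the next "[repro]" marker.
class ReproLines {
  static List<String> extract(String consoleText) {
    List<String> commands = new ArrayList<>();
    Matcher m = Pattern.compile("Repro line: (ant test [^\\[]+)").matcher(consoleText);
    while (m.find()) {
      commands.add(m.group(1).trim());
    }
    return commands;
  }
}
```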
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551227#comment-16551227 ] Dawid Weiss commented on LUCENE-8415: - Yep, I agree. Doesn't make sense to make FS impls. slower just to enforce it, it's enough that we run tests that capture it early. I'll work on it. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that.
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551224#comment-16551224 ] Robert Muir commented on LUCENE-8415: - I don't think such stuff belongs in the directory. We should be leaning on the operating system for such guarantees. I know you've been looking at RAMDirectory, but it's really an atypical case/wildcard. MockDirectoryWrapper/mockfs stuff should have all the assertions we can throw at it. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that.
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551188#comment-16551188 ] Dawid Weiss commented on LUCENE-8415: - Oh, I absolutely agree. The question is: should we try to enforce it at runtime, in the code of each directory, or only verify that we don't do it in the tests (in MockDirectoryWrapper)? > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that.
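The trade-off being discussed, enforcing the contract at runtime in every Directory versus only asserting it in the mock test wrappers, can be sketched with a toy model. Nothing below is Lucene code; the class and method names are illustrative stand-ins for the kind of checks a MockDirectoryWrapper-style class could perform in tests:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the two contract checks under discussion: a file name may be
// created only once (write-once), and a file may not be opened for reading
// while it is still open for writing (no reads before write completed).
class ContractCheckingDir {
  private final Set<String> created = new HashSet<>();
  private final Set<String> openForWrite = new HashSet<>();

  void createOutput(String name) {
    if (!created.add(name)) {
      throw new IllegalStateException("write-once violated: " + name);
    }
    openForWrite.add(name);
  }

  void closeOutput(String name) {
    openForWrite.remove(name);
  }

  void openInput(String name) {
    if (openForWrite.contains(name)) {
      throw new IllegalStateException("read before write completed: " + name);
    }
    if (!created.contains(name)) {
      throw new IllegalStateException("no such file: " + name);
    }
  }
}
```

Putting the bookkeeping in a test-only wrapper keeps real FS directory implementations free of the extra tracking cost, which is the point both commenters agree on above.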
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551163#comment-16551163 ] Varun Thacker commented on SOLR-11598: -- Thanks Amrit for the patch and testing, and for catching a bug which was causing slowdowns in the earlier patches! I'll commit it to branch_7x on Monday > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) >
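The stack trace above originates from a hard-coded bound in ExportWriter.getSortDoc. A simplified stand-in for that kind of guard (not the actual Solr source; the class and method names below are mine) shows why any fifth sort field was rejected before the fix:

```java
import java.io.IOException;

// Simplified stand-in for the check that produced the
// "A max of 4 sorts can be specified" error quoted above.
class SortLimitCheck {
  static void checkSortFields(String[] sortFields, int maxSorts) throws IOException {
    if (sortFields.length > maxSorts) {
      throw new IOException("A max of " + maxSorts + " sorts can be specified");
    }
  }
}
```

The commit referenced in this thread (9d9c3a0 on master) raises that limit so the export writer supports more than 4 sort fields.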
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551161#comment-16551161 ] ASF subversion and git services commented on SOLR-11598: Commit 9d9c3a0cd87832980a4745ec96fb2cd1216dcb4e in lucene-solr's branch refs/heads/master from [~varunthacker] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9d9c3a0 ] SOLR-11598: Support more than 4 sort fields in the export writer > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at >
[jira] [Commented] (SOLR-6823) Improve extensibility of DistributedUpdateProcessor regarding version processing
[ https://issues.apache.org/jira/browse/SOLR-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551157#comment-16551157 ] Erick Erickson commented on SOLR-6823: -- [~sarkaramr...@gmail.com] Any opinion on this? > Improve extensibility of DistributedUpdateProcessor regarding version > processing > > > Key: SOLR-6823 > URL: https://issues.apache.org/jira/browse/SOLR-6823 > Project: Solr > Issue Type: Improvement > Components: SolrCloud, update >Affects Versions: 6.0 >Reporter: Renaud Delbru >Priority: Major > Attachments: SOLR-6823.patch > > > As described in SOLR-6462, > {quote} > doDeleteByQuery() is structured differently than processAdd() and > processDelete() in DistributedUpdateProcessor. We refactored > doDeleteByQuery() by extracting a portion of its code into a helper method > versionDeleteByQuery() which is then overriden in the CdcrUpdateProcessor. > This way doDeleteByQuery() is structurally similar to the other two cases and > we are able to keep the CDCR logic completely separated. > {quote} > This issue provides a patch for the DistributedUpdateProcessor for trunk.
[jira] [Commented] (SOLR-9394) CDCR: Exception on target site while using deleteById to delete a document
[ https://issues.apache.org/jira/browse/SOLR-9394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16551156#comment-16551156 ] Erick Erickson commented on SOLR-9394: -- [~sarkaramr...@gmail.com][~varunthacker] Should we close this? > CDCR: Exception on target site while using deleteById to delete a document > -- > > Key: SOLR-9394 > URL: https://issues.apache.org/jira/browse/SOLR-9394 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 6.1 >Reporter: BHARATH K VENKATARAMANA >Priority: Critical > > Deleting a document on the main site by using deleteById solrj method is > causing the below exception on the target site, even though the document is > deleted correctly on the main site. But if we use deleteByQuery, it works > fine. In the solr schema.xml the unique key is the "id" field and we have it > as long, if we change that to string and then deleteById works. > Error stacktrace on the target site SOLR node leader:- > 2016-08-06 08:09:21.091 ERROR (qtp472654579-2699) [c:collection s:shard1 > r:core_node3 x:collection] o.a.s.h.RequestHandlerBase > org.apache.solr.common.SolrException: Invalid Number: ^A^@^@^@^@^@^L^K0W > at > org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:537) > at > org.apache.solr.update.DeleteUpdateCommand.getIndexedId(DeleteUpdateCommand.java:65) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.versionDelete(DistributedUpdateProcessor.java:1495) > at > org.apache.solr.update.processor.CdcrUpdateProcessor.versionDelete(CdcrUpdateProcessor.java:85) > at > org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:1154) > at > org.apache.solr.handler.loader.JavabinLoader.delete(JavabinLoader.java:151) > at > org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:112) > at > 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:54) > at > org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97) > at > org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:69) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036) > at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at >
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 735 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/735/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove Error Message: No live SolrServers available to handle this request:[https://127.0.0.1:38724/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:59375/solr/MoveReplicaHDFSTest_failed_coll_true] Stack Trace: org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:38724/solr/MoveReplicaHDFSTest_failed_coll_true, https://127.0.0.1:59375/solr/MoveReplicaHDFSTest_failed_coll_true] at __randomizedtesting.SeedInfo.seed([7551A846818CF05E:DF9C7BB4365F258E]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942) at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551147#comment-16551147 ] Robert Muir commented on LUCENE-8415: - by this i mean, use atomic rename for all index files not just segments_N. then nobody can be reading from them until they are complete. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
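The write-then-atomic-rename idea above can be sketched with plain java.nio: write under a temporary name, then publish via an atomic move, so a reader can never open the final name before the file is complete. This is an illustrative sketch only (the class, method, and file names are invented here), not Lucene's actual Directory code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class AtomicPublish {
    // Write bytes to a temp name, then atomically rename into place.
    // Until the move completes, the final name does not exist for readers.
    public static Path publish(Path dir, String finalName, byte[] data) throws IOException {
        Path tmp = dir.resolve(finalName + ".tmp");
        Path dst = dir.resolve(finalName);
        Files.write(tmp, data, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE);
        Files.move(tmp, dst, StandardCopyOption.ATOMIC_MOVE);
        return dst;
    }
}
```

Note that ATOMIC_MOVE only holds within one filesystem, which is the normal case for an index directory.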
[jira] [Resolved] (SOLR-12311) Suggester is not getting built on all replicas when "suggest.build=true" is issued
[ https://issues.apache.org/jira/browse/SOLR-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-12311. -- Resolution: Duplicate > Suggester is not getting built on all replicas when "suggest.build=true" is > issued > -- > > Key: SOLR-12311 > URL: https://issues.apache.org/jira/browse/SOLR-12311 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Suggester >Affects Versions: 7.3 >Reporter: Kannan Ranganathan >Priority: Critical > > The suggester dictionary is not getting built in all the replicas when a > "suggest.build=true" is issued. It is getting built only on the replica that > the first "suggest.build=true" query hits. Further queries that use the > suggest component get only partial suggest results when the replicas where > the dictionary is not built are hit. > This can be reproduced with the sample "techproducts" collection, > # Create the "techproducts" collection with 2 shards and 2 replicas. > # The default suggest component "mySuggester" has "buildOnStartup"=false > # Send in this query to build the suggester and query it, > "http://localhost:8983/solr/techproducts/suggest?suggest.build=true&suggest.dictionary=mySuggester&suggest.q=elec" > . You will see 4 suggestions. > # Hit this query, without the "suggest.build=true" parameter multiple times > and sometimes you will see 4 suggestions and in other times only 2 > suggestions > "http://localhost:8983/solr/techproducts/suggest?suggest.dictionary=mySuggester&suggest.q=elec" > # When the above query in Step 4 is sent with "distrib=false" to each of the > replicas, we can see that some replicas does not return any results. > # When the logs are analyzed, we can see that the first time we send a query > with "suggest.build=true", the suggest dictionary is built only on the > replica that the distributed query hits and not the other ones. 
> Expected behaviour: > With one "suggest.build=true" query, the dictionary should be built on all > replicas, so that further queries can get all the suggestions.
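Until the distributed build behaves as expected, one common workaround is to send the build request to each replica core directly with distrib=false so every core builds its own dictionary. The sketch below just constructs those per-replica URLs; the core base URLs and the dictionary name "mySuggester" are assumptions taken from the report, not a SolrJ API:

```java
import java.util.ArrayList;
import java.util.List;

public class SuggestBuildUrls {
    // Build one non-distributed suggest.build URL per replica core base URL.
    public static List<String> perReplicaBuildUrls(List<String> coreBaseUrls) {
        List<String> urls = new ArrayList<>();
        for (String base : coreBaseUrls) {
            urls.add(base + "/suggest?suggest.build=true&suggest.dictionary=mySuggester&distrib=false");
        }
        return urls;
    }
}
```

Each returned URL would then be fetched once (e.g. with curl) so that no replica is left with an unbuilt dictionary.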
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551141#comment-16551141 ] Dawid Weiss commented on LUCENE-8415: - bq. you could also do no bookkeeping and simply pay the cost of more renames, right? Err, I don't follow? My idea was to ensure a file open for output cannot be open for input (until the output is closed). This proves quite extensive code-wise to enforce, so I think it'll be better to leave it up to the implementation to decide (Directory.openInput may throw an exception on an input that is still being written to). I'll update the patch and submit for review (on Monday, unfortunately -- I'm away for the weekend). On a positive note, removing segment* exceptions from the mock classes didn't break anything after beasting overnight. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that.
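The proposed contract (a file still open for output cannot be opened for input, and openInput may throw on an in-progress write) can be illustrated with a minimal bookkeeping sketch. The class and method names below are invented for illustration; this is not Lucene's Directory API:

```java
import java.util.HashSet;
import java.util.Set;

public class WriteOnceTracker {
    private final Set<String> openForWrite = new HashSet<>();
    private final Set<String> completed = new HashSet<>();

    // Write-once: a name may be created at most once, ever.
    public void createOutput(String name) {
        if (openForWrite.contains(name) || completed.contains(name)) {
            throw new IllegalStateException("write-once violated: " + name);
        }
        openForWrite.add(name);
    }

    public void closeOutput(String name) {
        if (!openForWrite.remove(name)) {
            throw new IllegalStateException("not open for write: " + name);
        }
        completed.add(name);
    }

    // No reads before the write completes: reject in-progress files.
    public void openInput(String name) {
        if (openForWrite.contains(name)) {
            throw new IllegalStateException("still being written: " + name);
        }
        if (!completed.contains(name)) {
            throw new IllegalStateException("no such file: " + name);
        }
    }
}
```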
[jira] [Updated] (SOLR-12164) Ref Guide: Redesign HTML version landing page
[ https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett updated SOLR-12164: - Attachment: SOLR-12164.patch > Ref Guide: Redesign HTML version landing page > - > > Key: SOLR-12164 > URL: https://issues.apache.org/jira/browse/SOLR-12164 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: NewLandingPageBottom.png, NewLandingPageMid.png, > NewLandingPageTop.png, PDF-intro.png, SOLR-12164.patch > > > We've had the same first page of the Ref Guide for a long time, and it's > probably fine as far as it goes, but that isn't very far. It's effectively a > wall of text. > Since we've got the ability to work with an online presentation, and we have > some tools available already in use (BootstrapJS, etc.), we can do some new > things. > I've got a couple ideas I was playing with a few months ago. I'll dust those > off and attach some screenshots here + a patch or two. These will, of course, > work for the PDF so I'll include something to show that too (it can also be > snazzier).
[jira] [Commented] (SOLR-12164) Ref Guide: Redesign HTML version landing page
[ https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551117#comment-16551117 ] Cassandra Targett commented on SOLR-12164: -- I've attached a patch for the changes I'd like to make if anyone would like to check it out. I'm still not 100% satisfied with the descriptions, but can tweak those in the future when/if better ideas come to me. > Ref Guide: Redesign HTML version landing page > - > > Key: SOLR-12164 > URL: https://issues.apache.org/jira/browse/SOLR-12164 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: NewLandingPageBottom.png, NewLandingPageMid.png, > NewLandingPageTop.png, PDF-intro.png > > > We've had the same first page of the Ref Guide for a long time, and it's > probably fine as far as it goes, but that isn't very far. It's effectively a > wall of text. > Since we've got the ability to work with an online presentation, and we have > some tools available already in use (BootstrapJS, etc.), we can do some new > things. > I've got a couple ideas I was playing with a few months ago. I'll dust those > off and attach some screenshots here + a patch or two. These will, of course, > work for the PDF so I'll include something to show that too (it can also be > snazzier).
[jira] [Commented] (SOLR-12489) Restore collection does not respect user specified replicationFactor
[ https://issues.apache.org/jira/browse/SOLR-12489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551107#comment-16551107 ] Varun Thacker commented on SOLR-12489: -- Thanks Steve! I'll look into it today > Restore collection does not respect user specified replicationFactor > > > Key: SOLR-12489 > URL: https://issues.apache.org/jira/browse/SOLR-12489 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Major > Labels: Backup/Restore > Fix For: master (8.0), 7.5 > > Attachments: SOLR-12489.patch > > > When restoring a collection we can pass in the replicationFactor > However while restoring the collection we don't make use of this param and > end up using whatever is present as the nrtReplicas key in the state.json > > {code:java} > int numNrtReplicas = getInt(message, NRT_REPLICAS, > backupCollectionState.getNumNrtReplicas(), 0); > if (numNrtReplicas == 0) { > numNrtReplicas = getInt(message, REPLICATION_FACTOR, > backupCollectionState.getReplicationFactor(), 0); > }{code} > The tests didn't catch this as the create collection call from SolrJ sets > nrtReplicas = replicationFactor and then we never restore with a different > replicationFactor
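The quoted snippet folds the backup's nrtReplicas into the default before the user's replicationFactor is ever consulted, so an explicit replicationFactor on the restore request is ignored. The precedence that the issue describes as intended can be sketched with plain ints standing in for the message/state lookups (this is an illustration, not the actual SOLR-12489 patch):

```java
public class RestoreReplicas {
    // userNrt / userRf: values from the restore request, 0 if unspecified.
    // backupNrt / backupRf: values recorded in the backed-up collection state.
    public static int resolveNumNrtReplicas(int userNrt, int userRf, int backupNrt, int backupRf) {
        if (userNrt > 0) return userNrt;     // explicit nrtReplicas wins
        if (userRf > 0) return userRf;       // then explicit replicationFactor
        if (backupNrt > 0) return backupNrt; // then the backup's nrtReplicas
        return backupRf;                     // last resort: backup's replicationFactor
    }
}
```

The key difference from the buggy order is that the backup state is only a fallback, never a value that masks a user-supplied parameter.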
[jira] [Commented] (SOLR-12477) Return server error(500) for AlreadyClosedException instead of client Errors(400)
[ https://issues.apache.org/jira/browse/SOLR-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551101#comment-16551101 ] Varun Thacker commented on SOLR-12477: -- Hi Jeffery, Patch looks good to me! Perhaps we could also assert that the exception thrown at [https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/cloud/LeaderTragicEventTest.java#L135] is also an AlreadyClosedException . We could use SolrException.getRootCause to check if it's AlreadyClosedException ? > Return server error(500) for AlreadyClosedException instead of client > Errors(400) > - > > Key: SOLR-12477 > URL: https://issues.apache.org/jira/browse/SOLR-12477 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: update >Affects Versions: 7.3.1, master (8.0) >Reporter: jefferyyuan >Assignee: Varun Thacker >Priority: Minor > Labels: update > Fix For: 7.3.2, master (8.0) > > Time Spent: 10m > Remaining Estimate: 0h > > In some cases(for example: corrupt index), addDoc0 throws > AlreadyClosedException, but solr server returns client error 400 to client > This will confuse customers and especially monitoring tool. > Patch: [https://github.com/apache/lucene-solr/pull/402]
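The suggested assertion amounts to walking the cause chain to its deepest throwable and checking its type. A generic root-cause helper (a sketch of the idea, not SolrException.getRootCause itself) looks like:

```java
public class RootCause {
    // Follow getCause() links to the deepest throwable,
    // guarding against a throwable that lists itself as its own cause.
    public static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }
}
```

In a test this would be used as something like `assertTrue(rootCause(thrown) instanceof AlreadyClosedException)`.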
[jira] [Assigned] (SOLR-12477) Return server error(500) for AlreadyClosedException instead of client Errors(400)
[ https://issues.apache.org/jira/browse/SOLR-12477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-12477: Assignee: Varun Thacker > Return server error(500) for AlreadyClosedException instead of client > Errors(400) > - > > Key: SOLR-12477 > URL: https://issues.apache.org/jira/browse/SOLR-12477 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: update >Affects Versions: 7.3.1, master (8.0) >Reporter: jefferyyuan >Assignee: Varun Thacker >Priority: Minor > Labels: update > Fix For: 7.3.2, master (8.0) > > Time Spent: 10m > Remaining Estimate: 0h > > In some cases(for example: corrupt index), addDoc0 throws > AlreadyClosedException, but solr server returns client error 400 to client > This will confuse customers and especially monitoring tool. > Patch: [https://github.com/apache/lucene-solr/pull/402]
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551087#comment-16551087 ] Steve Rowe commented on LUCENE-2562: I created https://issues.apache.org/jira/browse/LEGAL-396 to ask for an exception to allow Lucene to depend on OpenJFX. > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do.
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22490 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22490/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC 2 tests failed. FAILED: org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild Error Message: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException Stack Trace: java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException at __randomizedtesting.SeedInfo.seed([3B3F6FF3F9FE2BD6:E4B20D4CC7977EB4]:0) at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191) at org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Caused by: junit.framework.AssertionFailedError: Unexpected wrapped
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551036#comment-16551036 ] Varun Thacker commented on SOLR-11598: -- Final patch! Pre-commit passes. Plan on committing this after running another round of tests > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at >
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-11598: - Attachment: SOLR-11598.patch > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at >
[jira] [Comment Edited] (SOLR-12546) CSVResponseWriter doesnt return non-stored field even when docValues is enabled, when no explicit fl specified
[ https://issues.apache.org/jira/browse/SOLR-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550962#comment-16550962 ] Ganesh Sethuraman edited comment on SOLR-12546 at 7/20/18 5:24 PM: --- I see this problem happen, irrespective of whether we fl=* or not. UPDATE: But if we explicitly provide the fl, with individual field names, it provides the data. It is true that for both fl=* or no fl, it does not work was (Author: ganeshmailbox): I see this problem happen, irrespective of whether we fl=* or not. > CSVResponseWriter doesnt return non-stored field even when docValues is > enabled, when no explicit fl specified > -- > > Key: SOLR-12546 > URL: https://issues.apache.org/jira/browse/SOLR-12546 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Response Writers >Affects Versions: 7.2.1 >Reporter: Karthik S >Priority: Major > Fix For: 7.2.2 > > > As part of this Jira SOLR-2970, CSVResponseWriter doesn't return fields > whose stored attribute set to false, but doesnt consider docvalues. > > Causing fields whose stored=false and docValues =true are not returned when > no explicit fl are specified. Behavior must be same as of json/xml response > writer.. 
> > Eg: > - Created collection with below fields > type="string"/> > type="int" stored="false"/> > type="plong" stored="false"/> > > precisionStep="0"/> > > > > - Added few documents > contentid,testint,testlong > id,1,56 > id2,2,66 > > - http://machine:port/solr/testdocvalue/select?q=*:*&wt=json > [\{"contentid":"id","_version_":1605281886069850112, > "timestamp":"2018-07-06T22:28:25.335Z","testint":1, > "testlong":56}, > { > "contentid":"id2","_version_":1605281886075092992, > "timestamp":"2018-07-06T22:28:25.335Z","testint":2, > "testlong":66}] > > - http://machine:port/solr/testdocvalue/select?q=*:*&wt=csv > "_version_",contentid,timestamp1605281886069850112,id,2018-07-06T22:28:25.335Z1605281886075092992,id2,2018-07-06T22:28:25.335Z > >
[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods
[ https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16551015#comment-16551015 ] Jason Gerlowski commented on SOLR-12502: The {{add}} methods are especially egregious, but it's worth pointing out that the problem is endemic to most of the {{SolrClient}} method groups. As of 7.4, we have:
- 10 {{add}} methods (5 w/ collection, 5 w/o)
- 4 {{addBean}} methods (2 w/ collection, 2 w/o)
- 6 {{addBeans}} methods (3 w/ collection, 3 w/o)
- 6 {{commit}} methods (3 w/ collection, 3 w/o)
- 8 {{deleteById}} methods (4 w/ collection, 4 w/o)
- 4 {{deleteByQuery}} methods (2 w/ collection, 2 w/o)
- 8 {{getById}} methods (4 w/ collection, 4 w/o)
- 6 {{optimize}} methods (3 w/ collection, 3 w/o)
- 4 {{query}} methods (2 w/ collection, 2 w/o)
Any solution we decide on for 8.0 would ideally also be applied to those other method groups. That's more work, unfortunately, but it'd keep the API coherent, which is probably best for our users. Of the options David suggested back on SOLR-11654, I don't have any strong opinions. I like the idea of Option 2 (locking SolrClient to a collection at creation time)...it would clean up the interface, and we could remove some complexity internal to the SolrClient impls as well. But I don't have a good feel for how common it is today for users to reuse a single client across collections. Forcing users to go from 1 to numCollections clients could be expensive for CloudSolrClient too, with it connecting to ZK. Maybe others with more experience could chime in on that? If those are issues, option (1) or (3) is probably best. > Unify and reduce the number of SolrClient#add methods > - > > Key: SOLR-12502 > URL: https://issues.apache.org/jira/browse/SOLR-12502 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: SolrJ >Reporter: Varun Thacker >Priority: Major > > On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which > can be very confusing to new users. > Also the UpdateRequest class is public so that means if a user is looking for > a custom combination they can always choose to do so by writing a couple of > lines of code. > For 8.0 which might not be very far away we can improve this situation > > Quoting David from SOLR-11654 > {quote}Any way I guess we'll leave SolrClient alone. Thanks for your input > Varun. Yes it's a shame there are so many darned overloaded methods... I > think it's a large part due to the optional "collection" parameter which like > doubles the methods! I've been bitten several times writing SolrJ code that > doesn't use the right overloaded version (forgot to specify collection). I > think for 8.0, *either* all SolrClient methods without "collection" can be > removed in favor of insisting you use the overloaded variant accepting a > collection, *or* SolrClient itself could be locked down to one collection at > the time you create it *or* have a CollectionSolrClient interface retrieved > from a SolrClient.withCollection(collection) in which all the operations that > require a SolrClient are on that interface and not SolrClient proper. > Several ideas to consider. > {quote} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
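The collection-bound-client idea discussed above (David's options 2 and 3) can be sketched without committing to real SolrJ types. Everything below (SolrishClient, CollectionView, DummyClient, and the "collection/update:doc" string) is hypothetical and only illustrates the API shape: the collection is captured once, so the per-call "collection" overloads disappear.

```java
import java.util.ArrayList;
import java.util.List;

public class CollectionClientSketch {
    /** Stand-in for SolrClient; only the shape of the idea matters. */
    public interface SolrishClient {
        CollectionView withCollection(String collection);
    }

    /** All operations that need a collection live here, not on the client. */
    public interface CollectionView {
        String add(String doc);
    }

    public static class DummyClient implements SolrishClient {
        public final List<String> sent = new ArrayList<>();

        public CollectionView withCollection(String collection) {
            // The view closes over the collection once; callers can no
            // longer forget to pass it on each request.
            return doc -> {
                String update = collection + "/update:" + doc;
                sent.add(update);
                return update;
            };
        }
    }

    public static void main(String[] args) {
        DummyClient client = new DummyClient();
        CollectionView techproducts = client.withCollection("techproducts");
        techproducts.add("{id:1}");
        techproducts.add("{id:2}");
        System.out.println(client.sent);
    }
}
```

One underlying client can still serve several collections (call withCollection once per collection), which sidesteps the one-client-per-collection cost that worries the comment above for CloudSolrClient.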
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1981 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1981/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild Error Message: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException Stack Trace: java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: Unexpected wrapped exception type, expected CoreIsClosedException at __randomizedtesting.SeedInfo.seed([51F6D73E1712274B:8E7BB581297B7229]:0) at java.util.concurrent.FutureTask.report(FutureTask.java:122) at java.util.concurrent.FutureTask.get(FutureTask.java:192) at org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: junit.framework.AssertionFailedError: Unexpected
[jira] [Assigned] (LUCENE-8408) Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY
[ https://issues.apache.org/jira/browse/LUCENE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley reassigned LUCENE-8408: Resolution: Fixed Assignee: David Smiley Fix Version/s: 7.5 > Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY > > > Key: LUCENE-8408 > URL: https://issues.apache.org/jira/browse/LUCENE-8408 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael Braun >Assignee: David Smiley >Priority: Trivial > Fix For: 7.5 > > Attachments: LUCENE-8408.patch > > > At the top of TokenStreamFromTermVector: > {code} > //This attribute factory uses less memory when captureState() is called. > public static final AttributeFactory ATTRIBUTE_FACTORY = > AttributeFactory.getStaticImplementation( > AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, > PackedTokenAttributeImpl.class); > {code} > This is the default if super() was called with no-args from the constructor, > so I believe this can go away. CC [~dsmiley] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12545) CSVResponseWriter doesn't return non-stored field even when docValues is enabled [with no fl specified]
[ https://issues.apache.org/jira/browse/SOLR-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550965#comment-16550965 ] Ganesh Sethuraman commented on SOLR-12545: -- Posted it in SOLR-12546 > CSVResponseWriter doesn't return non-stored field even when docValues is > enabled [with no fl specified] > --- > > Key: SOLR-12545 > URL: https://issues.apache.org/jira/browse/SOLR-12545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Response Writers >Affects Versions: 7.2 >Reporter: Karthik S >Priority: Minor > > As part of SOLR-2970, CSVResponseWriter doesn't return fields > whose stored attribute is set to false, but it doesn't consider the docValues > attribute. > > This causes fields with stored=false, docValues=true to not be returned when no > explicit fl fields are specified for wt=csv. > Behavior must be the same as the json/xml response writers. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8418) LatLonShapeBoundingBoxQuery failure in Polygon with Hole
[ https://issues.apache.org/jira/browse/LUCENE-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550967#comment-16550967 ] ASF subversion and git services commented on LUCENE-8418: - Commit 540839d0d237e38872079e37c7819c4a9b7c8bd2 in lucene-solr's branch refs/heads/branch_7x from [~nknize] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=540839d ] LUCENE-8418: Mute LatLonShape polygonWithHole test until fix is applied > LatLonShapeBoundingBoxQuery failure in Polygon with Hole > > > Key: LUCENE-8418 > URL: https://issues.apache.org/jira/browse/LUCENE-8418 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize >Assignee: Nicholas Knize >Priority: Major > > Found the following test failure while testing with a random polygon with > hole: > {code} > 07:13:46[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestLatLonShape -Dtests.method=testBasicIntersects > -Dtests.seed=A8704FF5E1106095 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=ar -Dtests.timezone=Europe/Amsterdam -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > 07:13:46[junit4] FAILURE 0.48s J0 | TestLatLonShape.testBasicIntersects > <<< > 07:13:46[junit4]> Throwable #1: java.lang.AssertionError: > expected:<0> but was:<1> > 07:13:46[junit4]> at > __randomizedtesting.SeedInfo.seed([A8704FF5E1106095:9F0DBC00DD87C3EB]:0) > 07:13:46[junit4]> at > org.apache.lucene.document.TestLatLonShape.testBasicIntersects(TestLatLonShape.java:113) > 07:13:46[junit4]> at java.lang.Thread.run(Thread.java:748) > 07:13:46[junit4] 2> NOTE: leaving temporary files on disk at: > /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/sandbox/test/J0/temp/lucene.document.TestLatLonShape_A8704FF5E1106095-001 > 07:13:46[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): > {}, docValues:{}, maxPointsInLeafNode=140, maxMBSortInHeap=7.774833175701376, > sim=RandomSimilarity(queryNorm=false): {}, locale=ar, > timezone=Europe/Amsterdam > 
07:13:46[junit4] 2> NOTE: Linux 3.16.0-4-amd64 amd64/Oracle Corporation > 1.8.0_171 (64-bit)/cpus=16,threads=1,free=302653784,total=449314816 > 07:13:46[junit4] 2> NOTE: All tests run in this JVM: [TestLatLonShape] > 07:13:46[junit4] Completed [18/24 (1!)] on J0 in 21.09s, 3 tests, 1 > failure, 1 skipped <<< FAILURES! > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8418) LatLonShapeBoundingBoxQuery failure in Polygon with Hole
[ https://issues.apache.org/jira/browse/LUCENE-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550964#comment-16550964 ] ASF subversion and git services commented on LUCENE-8418: - Commit 509561bf2a9effe4fce19551c9ec037975cf9c02 in lucene-solr's branch refs/heads/master from [~nknize] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=509561b ] LUCENE-8418: Mute LatLonShape polygonWithHole test until fix is applied > LatLonShapeBoundingBoxQuery failure in Polygon with Hole > > > Key: LUCENE-8418 > URL: https://issues.apache.org/jira/browse/LUCENE-8418 > Project: Lucene - Core > Issue Type: Bug >Reporter: Nicholas Knize >Assignee: Nicholas Knize >Priority: Major > > Found the following test failure while testing with a random polygon with > hole: > {code} > 07:13:46[junit4] 2> NOTE: reproduce with: ant test > -Dtestcase=TestLatLonShape -Dtests.method=testBasicIntersects > -Dtests.seed=A8704FF5E1106095 -Dtests.slow=true -Dtests.badapples=true > -Dtests.locale=ar -Dtests.timezone=Europe/Amsterdam -Dtests.asserts=true > -Dtests.file.encoding=ISO-8859-1 > 07:13:46[junit4] FAILURE 0.48s J0 | TestLatLonShape.testBasicIntersects > <<< > 07:13:46[junit4]> Throwable #1: java.lang.AssertionError: > expected:<0> but was:<1> > 07:13:46[junit4]> at > __randomizedtesting.SeedInfo.seed([A8704FF5E1106095:9F0DBC00DD87C3EB]:0) > 07:13:46[junit4]> at > org.apache.lucene.document.TestLatLonShape.testBasicIntersects(TestLatLonShape.java:113) > 07:13:46[junit4]> at java.lang.Thread.run(Thread.java:748) > 07:13:46[junit4] 2> NOTE: leaving temporary files on disk at: > /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/sandbox/test/J0/temp/lucene.document.TestLatLonShape_A8704FF5E1106095-001 > 07:13:46[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): > {}, docValues:{}, maxPointsInLeafNode=140, maxMBSortInHeap=7.774833175701376, > sim=RandomSimilarity(queryNorm=false): {}, locale=ar, > timezone=Europe/Amsterdam > 
07:13:46[junit4] 2> NOTE: Linux 3.16.0-4-amd64 amd64/Oracle Corporation > 1.8.0_171 (64-bit)/cpus=16,threads=1,free=302653784,total=449314816 > 07:13:46[junit4] 2> NOTE: All tests run in this JVM: [TestLatLonShape] > 07:13:46[junit4] Completed [18/24 (1!)] on J0 in 21.09s, 3 tests, 1 > failure, 1 skipped <<< FAILURES! > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8408) Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY
[ https://issues.apache.org/jira/browse/LUCENE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550963#comment-16550963 ] ASF subversion and git services commented on LUCENE-8408: - Commit df662a318d69f3eb629abe8ac95cfcc703077eb8 in lucene-solr's branch refs/heads/branch_7x from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df662a3 ] LUCENE-8408: Highlighter: Remove obsolete private AttributeFactory instance (cherry picked from commit 20a7ee9) > Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY > > > Key: LUCENE-8408 > URL: https://issues.apache.org/jira/browse/LUCENE-8408 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael Braun >Priority: Trivial > Attachments: LUCENE-8408.patch > > > At the top of TokenStreamFromTermVector: > {code} > //This attribute factory uses less memory when captureState() is called. > public static final AttributeFactory ATTRIBUTE_FACTORY = > AttributeFactory.getStaticImplementation( > AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, > PackedTokenAttributeImpl.class); > {code} > This is the default if super() was called with no-args from the constructor, > so I believe this can go away. CC [~dsmiley] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12546) CSVResponseWriter doesnt return non-stored field even when docValues is enabled, when no explicit fl specified
[ https://issues.apache.org/jira/browse/SOLR-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550962#comment-16550962 ] Ganesh Sethuraman commented on SOLR-12546: -- I see this problem happen irrespective of whether we use fl=* or not. > CSVResponseWriter doesn't return non-stored field even when docValues is > enabled, when no explicit fl specified > -- > > Key: SOLR-12546 > URL: https://issues.apache.org/jira/browse/SOLR-12546 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Response Writers >Affects Versions: 7.2.1 >Reporter: Karthik S >Priority: Major > Fix For: 7.2.2 > > > As part of SOLR-2970, CSVResponseWriter doesn't return fields > whose stored attribute is set to false, but it doesn't consider docValues. > > This causes fields with stored=false and docValues=true to not be returned when > no explicit fl is specified. Behavior must be the same as the json/xml response > writers. > > E.g.: > - Created collection with below fields > type="string"/> > type="int" stored="false"/> > type="plong" stored="false"/> > > precisionStep="0"/> > > > > - Added a few documents > contentid,testint,testlong > id,1,56 > id2,2,66 > > - http://machine:port/solr/testdocvalue/select?q=*:*&wt=json > [{"contentid":"id","_version_":1605281886069850112, > "timestamp":"2018-07-06T22:28:25.335Z","testint":1, > "testlong":56}, > { > "contentid":"id2","_version_":1605281886075092992, > "timestamp":"2018-07-06T22:28:25.335Z","testint":2, > "testlong":66}] > > - http://machine:port/solr/testdocvalue/select?q=*:*&wt=csv
> "_version_",contentid,timestamp
> 1605281886069850112,id,2018-07-06T22:28:25.335Z
> 1605281886075092992,id2,2018-07-06T22:28:25.335Z
> > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12545) CSVResponseWriter doesn't return non-stored field even when docValues is enabled [with no fl specified]
[ https://issues.apache.org/jira/browse/SOLR-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550954#comment-16550954 ] Steve Rowe commented on SOLR-12545: --- [~ganeshmailbox]: please post your comment on SOLR-12546; this issue was closed as a duplicate of it, and your comment will likely not be noticed by people working on the open issue. > CSVResponseWriter doesn't return non-stored field even when docValues is > enabled [with no fl specified] > --- > > Key: SOLR-12545 > URL: https://issues.apache.org/jira/browse/SOLR-12545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Response Writers >Affects Versions: 7.2 >Reporter: Karthik S >Priority: Minor > > As part of SOLR-2970, CSVResponseWriter doesn't return fields > whose stored attribute is set to false, but it doesn't consider the docValues > attribute. > > This causes fields with stored=false, docValues=true to not be returned when no > explicit fl fields are specified for wt=csv. > Behavior must be the same as the json/xml response writers. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8408) Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY
[ https://issues.apache.org/jira/browse/LUCENE-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550956#comment-16550956 ] ASF subversion and git services commented on LUCENE-8408: - Commit 20a7ee9e11f42915161a7d12857e2565040a131d in lucene-solr's branch refs/heads/master from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=20a7ee9 ] LUCENE-8408: Highlighter: Remove obsolete private AttributeFactory instance > Code cleanup - TokenStreamFromTermVector - ATTRIBUTE_FACTORY > > > Key: LUCENE-8408 > URL: https://issues.apache.org/jira/browse/LUCENE-8408 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Michael Braun >Priority: Trivial > Attachments: LUCENE-8408.patch > > > At the top of TokenStreamFromTermVector: > {code} > //This attribute factory uses less memory when captureState() is called. > public static final AttributeFactory ATTRIBUTE_FACTORY = > AttributeFactory.getStaticImplementation( > AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, > PackedTokenAttributeImpl.class); > {code} > This is the default if super() was called with no-args from the constructor, > so I believe this can go away. CC [~dsmiley] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1589 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1589/ 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.LeaderTragicEventTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [TransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.update.TransactionLog at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.update.TransactionLog.(TransactionLog.java:188) at org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:467) at org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1323) at org.apache.solr.update.UpdateLog.add(UpdateLog.java:571) at org.apache.solr.update.UpdateLog.add(UpdateLog.java:551) at org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:345) at org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:283) at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:233) at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67) at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55) at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:950) at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1168) at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:633) at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103) at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188) at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144) at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130) at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276) at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256) at org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178) at org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195) at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109) at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97) at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:674) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at
[jira] [Commented] (SOLR-12545) CSVResponseWriter doesn't return non-stored field even when docValues is enabled [with no fl specified]
[ https://issues.apache.org/jira/browse/SOLR-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550950#comment-16550950 ] Ganesh Sethuraman commented on SOLR-12545: -- I see this problem happen irrespective of whether we use fl=* or not. > CSVResponseWriter doesn't return non-stored field even when docValues is > enabled [with no fl specified] > --- > > Key: SOLR-12545 > URL: https://issues.apache.org/jira/browse/SOLR-12545 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Response Writers >Affects Versions: 7.2 >Reporter: Karthik S >Priority: Minor > > As part of SOLR-2970, CSVResponseWriter doesn't return fields > whose stored attribute is set to false, but it doesn't consider the docValues > attribute. > > This causes fields with stored=false, docValues=true to not be returned when no > explicit fl fields are specified for wt=csv. > Behavior must be the same as the json/xml response writers. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8419) Return token unchanged for pathological Stempel tokens
Trey Jones created LUCENE-8419: -- Summary: Return token unchanged for pathological Stempel tokens Key: LUCENE-8419 URL: https://issues.apache.org/jira/browse/LUCENE-8419 Project: Lucene - Core Issue Type: New Feature Components: modules/analysis Reporter: Trey Jones Attachments: dotc.txt, dotdotc.txt, twoletter.txt In the aggregate, Stempel does a good job, but certain tokens get stemmed pathologically, conflating completely unrelated words in the search index. Depending on the scoring function, documents returned may have no form of the word that was in the query, only unrelated forms (see the ć examples below). It's probably not possible to fix the stemmer, and it's probably not possible to catch _every_ error, but catching and ignoring certain large classes of errors would greatly improve precision, and doing it in the stemmer would prevent the losses to recall that happen when these errors are cleaned up outside the stemmer. An obvious example is that numbers ending in 1 have the last two digits replaced with ć, so 12341 is stemmed as 123ć. Numbers ending in 31 have the last four digits replaced with ć, so 12331 is stemmed as 1ć. Mixed letters and numbers are treated the same: abc123451 is stemmed as abc1234ć, and abc1231 is stemmed as abcć. *Proposed solution:* any token that ends in a number should not be stemmed; it should just be returned unchanged. One-letter stems from the set [a-zńć] are generally useless and often absurd. ć is the worst offender by far (it's the ending of the infinitive form of verbs). 
All of these tokens (found on Polish Wikipedia/Wiktionary) get stemmed to ć: * acque Adrien aguas Águas Alainem Alandh Amores Ansoe Arau asinaio aŭdas audyt Awiwie Ayres Baby badż Baina Bains Balue Baon baque Barbola Bazy Beau beim Beroe Betz Blaue blenda bleue Blizzard boor Boruca Boym Brodła Brogi Bronksie Brydż Budgie Budiafa bujny Buon Buot Button Caan Cains Canoe Canona caon Celu Charl Chloe ciag Cioma Cmdr Conseil Conso Cotton Cramp Creel Cuyk cyan czcią Czermny czto D.III Daws Daxue dazzle decy Defoe Dereń Detroit digue Dior Ditton Dojlido dosei douk DRaaS drag drau Dudacy dudas Dutton Duty Dziób eayd Edwy Edyp eiro Eltz Emain erar ESaaS faan Fetz figurar Fitz foam Frau Fugue GAAB gaan Gabirol Gaon gasue Gaup Geol GeoMIP Getz gigue Ginny Gioią Girl Goam Gołymin Gosei Götz grasso Grodnie Gula Guroo gyan HAAB Haan Heim Héroe Hitz Hoam Hohenho Hosei Huon Hutton Huub hyaina Iberii inkuby Inoue Issue ITaaS Iudas Izmaile Jaan Jaws jedyn Jews jira Josepho Jost Josue Judas Kaan Kaleido Karoo Katz Kazue Kehoe khayag kiwa Kiwu Klaas kmdr Kokei Konoe kozer kpią Kringle ksiezyce Któż Kutz L231 L331 Laan Lalli Laon Laws łebka Leroo Liban Ligue Liro Lisoli Logue Loja Londyn Lubomyr Luque Lutz Lytton łzawy Maan mains Mainy malpaco Mammal mandag MBaaS meeki Merl Metz MIDAS middag Miras mmol modą moins Monty Moryń motz mróż Mutz Müzesi MVaaS Naam nabrzeża Nadab Nadala Nalewki Nd:YAG neol News Nieszawa Nimue Nyam ÖAAB oblał oddala okala Olień opar oppi Orioł Osioł osoagi Osyki Otóż Output Oxalido pasmową Patton Pearl Peau peoplk Petz poar Pobrzeża poecie Pogue Pono posagi posł Praha Pringle probie progi Prońko Prosper prwdę Psioł Pułka Putz QDTOE Quien Qwest radża raga Rains reht Reich Retz Revue Right RITZ Roam Rogue Roque rosii RU31 Rutki Ryan SAAB saasso salue Sampaio Satz Sears Sekisho semo Setton Sgan Siloe Sitz Skopje Slot Šmarje Smrkci Soar sopo sozinho springa Steel Stip Straz Strip Suez sukuby Sumach Surgucie Sutton svasso Szosą szto Tadas Taira tęczy Teodorą teol 
Tisii Tisza Toluca Tomoe Toque TPMŻ Traiana Trask Traue Tulyag Tuque Turinga Undas Uniw usque Vague Value Venue Vidas Vogue Voor W331 Waringa weht Weich Weija Wheel widmem WKAG worku Wotton Wryk Wschowie wsiach wsiami Wybrzeża wydala Wyraz XLIII XVIII XXIII Yaski yeol YONO Yorki zakręcie Zijab zipo. Four-character tokens ending in 31 (like 2,31 9,31 1031 1131 7431 8331 a331) also all get stemmed to ć. Below are examples of other tokens (from Polish Wikipedia/Wiktionary) that get stemmed to one-letter tokens in [a-zńć]. Note that i, o, u, w, and z are stop words, and so don't show up in the list. * a: a, addo, adygea, jhwh, also * b: b, bdrm, barr, bebek, berr, bounty, bures, burr, berm, birm * c: alzira, c, carr, county, haight, hermas, kidoń, paich, pieter, połóż, radoń, soest, tatort, voight, zaba, biegną, pokaż, wskaż, zoisyt * d: award, d, dlek, deeb * e: e, eddy, eloi * f: f, farr, firm * g: g, geagea, grunty, gwdy, gyro, górą * h: h * i: inre, isro * j: j, judo * k: k, kgtj, kpzr, karr, kerr, ksok * l: l, leeb, loeb * m: m, magazyn, marr, mayor, merr, mnsi, murr, mgły, najmu * n: johnowi, n * o: obzr, offy * p: p, pace, paoli, parr, pasji, pawełek, pyro, pirsy, plmb * q: q * r: r, rite, rrek * s: s, sarr, site, sowie, szok * t:
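The proposed guard can be sketched in plain Java. This is an illustrative sketch only, not the actual Stempel API; the class and method names are hypothetical:

```java
// Hypothetical sketch of the proposed guard (not the real Stempel code):
// skip stemming for tokens ending in a digit, and reject one-letter stems
// from [a-zńć] so the caller can fall back to the original token.
public class StempelGuard {

    // Per the proposal: any token ending in a digit is returned unchanged.
    static boolean shouldSkipStemming(String token) {
        return !token.isEmpty() && Character.isDigit(token.charAt(token.length() - 1));
    }

    // One-letter stems in [a-z], plus Polish ń and ć, are considered useless;
    // the stemmer would discard such a result and keep the input token.
    static boolean isUselessStem(String stem) {
        if (stem.length() != 1) return false;
        char c = stem.charAt(0);
        return (c >= 'a' && c <= 'z') || c == 'ń' || c == 'ć';
    }
}
```

With this guard, 12341 and abc1231 would pass through unstemmed, and a stem of ć would be rejected in favor of the original token.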
[jira] [Created] (LUCENE-8418) LatLonShapeBoundingBoxQuery failure in Polygon with Hole
Nicholas Knize created LUCENE-8418: -- Summary: LatLonShapeBoundingBoxQuery failure in Polygon with Hole Key: LUCENE-8418 URL: https://issues.apache.org/jira/browse/LUCENE-8418 Project: Lucene - Core Issue Type: Bug Reporter: Nicholas Knize Assignee: Nicholas Knize Found the following test failure while testing with a random polygon with hole: {code} 07:13:46[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestLatLonShape -Dtests.method=testBasicIntersects -Dtests.seed=A8704FF5E1106095 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar -Dtests.timezone=Europe/Amsterdam -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 07:13:46[junit4] FAILURE 0.48s J0 | TestLatLonShape.testBasicIntersects <<< 07:13:46[junit4]> Throwable #1: java.lang.AssertionError: expected:<0> but was:<1> 07:13:46[junit4]> at __randomizedtesting.SeedInfo.seed([A8704FF5E1106095:9F0DBC00DD87C3EB]:0) 07:13:46[junit4]> at org.apache.lucene.document.TestLatLonShape.testBasicIntersects(TestLatLonShape.java:113) 07:13:46[junit4]> at java.lang.Thread.run(Thread.java:748) 07:13:46[junit4] 2> NOTE: leaving temporary files on disk at: /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/sandbox/test/J0/temp/lucene.document.TestLatLonShape_A8704FF5E1106095-001 07:13:46[junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {}, docValues:{}, maxPointsInLeafNode=140, maxMBSortInHeap=7.774833175701376, sim=RandomSimilarity(queryNorm=false): {}, locale=ar, timezone=Europe/Amsterdam 07:13:46[junit4] 2> NOTE: Linux 3.16.0-4-amd64 amd64/Oracle Corporation 1.8.0_171 (64-bit)/cpus=16,threads=1,free=302653784,total=449314816 07:13:46[junit4] 2> NOTE: All tests run in this JVM: [TestLatLonShape] 07:13:46[junit4] Completed [18/24 (1!)] on J0 in 21.09s, 3 tests, 1 failure, 1 skipped <<< FAILURES! {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
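As background on the failing case: containment for a polygon with a hole can be expressed with the even-odd (ray-casting) rule. This is an illustrative sketch only, not Lucene's actual tessellation-based implementation; a point inside the hole must be reported as outside, which is exactly the kind of case such a random test can trip on:

```java
// Illustrative sketch (not Lucene's LatLonShape code): a point is inside a
// polygon-with-hole iff it is inside the outer ring and not inside the hole,
// using the standard even-odd ray-casting test.
public class PolyWithHole {

    // Classic PNPOLY even-odd test over one ring (lats = y, lons = x).
    static boolean inRing(double[] lats, double[] lons, double lat, double lon) {
        boolean in = false;
        for (int i = 0, j = lats.length - 1; i < lats.length; j = i++) {
            if ((lats[i] > lat) != (lats[j] > lat)
                    && lon < (lons[j] - lons[i]) * (lat - lats[i]) / (lats[j] - lats[i]) + lons[i]) {
                in = !in;  // crossing an edge toggles inside/outside
            }
        }
        return in;
    }

    // Contained in the shape = inside the outer ring AND outside the hole.
    static boolean contains(double[][] outer, double[][] hole, double lat, double lon) {
        return inRing(outer[0], outer[1], lat, lon) && !inRing(hole[0], hole[1], lat, lon);
    }
}
```

For example, with a 10x10 outer square and a 4..6 hole, the point (5, 5) sits in the hole and must not count as an intersection hit.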
[jira] [Commented] (SOLR-12509) Improve SplitShardCmd performance and reliability
[ https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550905#comment-16550905 ] Andrzej Bialecki commented on SOLR-12509: -- Thanks Shalin for the review! I attached a new patch that fixes these issues. > Improve SplitShardCmd performance and reliability > - > > Key: SOLR-12509 > URL: https://issues.apache.org/jira/browse/SOLR-12509 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-12509.patch, SOLR-12509.patch > > > {{SplitShardCmd}} is currently quite complex. > Shard splitting occurs on active shards, which are still being updated, so > the splitting has to involve several carefully orchestrated steps, making > sure that new sub-shard placeholders are properly created and visible, and > then also applying buffered updates to the split leaders and performing > recovery on sub-shard replicas. > This process could be simplified in cases where collections are not actively > being updated or can tolerate a little downtime - we could put the shard > "offline", ie. disable writing while the splitting is in progress (in order > to avoid users' confusion we should disable writing to the whole collection). > The actual index splitting could perhaps be improved to use > {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by > hard-linking existing index segments, and then applying deletes to the > documents that don't belong in a sub-shard. However, the resulting index > slices that replicas would have to pull would be the same size as the whole > shard. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
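The hard-link idea mentioned in the issue description can be sketched with plain java.nio. This is not the {{HardLinkCopyDirectoryWrapper}} code itself, and the helper name is hypothetical; it only illustrates why linking is cheap compared to copying segment bytes:

```java
// Sketch of the hard-link copy idea (plain java.nio, not the Lucene class):
// link each index file into the target directory instead of copying bytes,
// falling back to a real copy when the filesystem refuses the link.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class HardLinkCopy {
    static void linkOrCopy(Path source, Path target) throws IOException {
        try {
            Files.createLink(target, source);  // O(1): both paths share one inode
        } catch (UnsupportedOperationException | IOException e) {
            Files.copy(source, target);        // fallback: byte-for-byte copy
        }
    }
}
```

Note the caveat from the issue still applies: the linked "copy" is the full shard, so replicas pulling a sub-shard slice would fetch the whole index size until deletes are merged away.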
[jira] [Updated] (SOLR-12509) Improve SplitShardCmd performance and reliability
[ https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-12509: - Attachment: SOLR-12509.patch > Improve SplitShardCmd performance and reliability > - > > Key: SOLR-12509 > URL: https://issues.apache.org/jira/browse/SOLR-12509 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-12509.patch, SOLR-12509.patch > > > {{SplitShardCmd}} is currently quite complex. > Shard splitting occurs on active shards, which are still being updated, so > the splitting has to involve several carefully orchestrated steps, making > sure that new sub-shard placeholders are properly created and visible, and > then also applying buffered updates to the split leaders and performing > recovery on sub-shard replicas. > This process could be simplified in cases where collections are not actively > being updated or can tolerate a little downtime - we could put the shard > "offline", ie. disable writing while the splitting is in progress (in order > to avoid users' confusion we should disable writing to the whole collection). > The actual index splitting could perhaps be improved to use > {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by > hard-linking existing index segments, and then applying deletes to the > documents that don't belong in a sub-shard. However, the resulting index > slices that replicas would have to pull would be the same size as the whole > shard. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550895#comment-16550895 ] Tomoko Uchida commented on LUCENE-2562: --- [~steve_rowe] Thank you! Hope there will be good news. > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550885#comment-16550885 ] Steve Rowe commented on LUCENE-2562: bq. We cannot link to GPL so this is a bummer. I don't think JavaFX, unless shipped with the JDK is a viable option for Apache projects. I think we should ask ASF legal how to proceed? bq. As you know, in the ASF legal page, GNU GPL including GNU Classpath exceptions is listed in the Category X list (honestly, I read this terms yesterday and not prepared to handle such legal matters.) https://www.apache.org/legal/resolved.html#category-x bq. I'm afraid that I cannot ask to and negotiate with ASF on my own about complicated matters with my limited knowledge about licensing and English vocabulary. I will make an INFRA JIRA and ask if an exception can be made for OpenJFX as a dependency (vs. bundled with the JRE, which is allowed). > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. 
> While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past work or two. There is still a *lot* to do. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter
[ https://issues.apache.org/jira/browse/SOLR-12572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12572: Attachment: SOLR-12572.patch > Reuse fieldvalues computed while sorting at writing in ExportWriter > --- > > Key: SOLR-12572 > URL: https://issues.apache.org/jira/browse/SOLR-12572 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Minor > Attachments: SOLR-12572.patch > > > --- to be updated -- -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12572) Reuse fieldvalues computed while sorting at writing in ExportWriter
Amrit Sarkar created SOLR-12572: --- Summary: Reuse fieldvalues computed while sorting at writing in ExportWriter Key: SOLR-12572 URL: https://issues.apache.org/jira/browse/SOLR-12572 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: streaming expressions Reporter: Amrit Sarkar --- to be updated -- -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library
[ https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski resolved SOLR-9542. --- Resolution: Fixed > Kerberos delegation tokens requires missing Jackson library > --- > > Key: SOLR-9542 > URL: https://issues.apache.org/jira/browse/SOLR-9542 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Priority: Major > Fix For: 6.3 > > Attachments: SOLR-9542.patch > > > GET, RENEW or CANCEL operations for the delegation tokens support requires > the Solr server to have old jackson added as a dependency. > Steps to reproduce the problem: > 1) Configure Solr to use delegation tokens > 2) Start Solr > 3) Use a SolrJ application to get a delegation token. > The server throws the following: > {code} > java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279) > at > org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514) > at > org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123) > at > org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265) > at > org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
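A general pattern for surfacing this class of failure earlier (a hedged sketch, not Solr's actual code) is to probe for the optional class up front and report a clear configuration error, instead of letting a NoClassDefFoundError escape from deep inside the authentication filter chain:

```java
// Sketch: detect a missing optional dependency (here the legacy Jackson 1.x
// ObjectMapper) at startup via reflection, so a clear error can be raised.
public class DependencyCheck {
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;  // dependency jar is not on the classpath
        }
    }
}
```

A plugin could call `isPresent("org.codehaus.jackson.map.ObjectMapper")` during init and fail fast with an actionable message naming the missing jar.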
[jira] [Reopened] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library
[ https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski reopened SOLR-9542: --- > Kerberos delegation tokens requires missing Jackson library > --- > > Key: SOLR-9542 > URL: https://issues.apache.org/jira/browse/SOLR-9542 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Priority: Major > Fix For: 6.3 > > Attachments: SOLR-9542.patch > > > GET, RENEW or CANCEL operations for the delegation tokens support requires > the Solr server to have old jackson added as a dependency. > Steps to reproduce the problem: > 1) Configure Solr to use delegation tokens > 2) Start Solr > 3) Use a SolrJ application to get a delegation token. > The server throws the following: > {code} > java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279) > at > org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514) > at > org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123) > at > org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265) > at > org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: 
dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9542) Kerberos delegation tokens requires missing Jackson library
[ https://issues.apache.org/jira/browse/SOLR-9542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Gerlowski resolved SOLR-9542. --- Resolution: Won't Fix Fix Version/s: 6.3 > Kerberos delegation tokens requires missing Jackson library > --- > > Key: SOLR-9542 > URL: https://issues.apache.org/jira/browse/SOLR-9542 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Priority: Major > Fix For: 6.3 > > Attachments: SOLR-9542.patch > > > GET, RENEW or CANCEL operations for the delegation tokens support requires > the Solr server to have old jackson added as a dependency. > Steps to reproduce the problem: > 1) Configure Solr to use delegation tokens > 2) Start Solr > 3) Use a SolrJ application to get a delegation token. > The server throws the following: > {code} > java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper > at > org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.managementOperation(DelegationTokenAuthenticationHandler.java:279) > at > org.apache.solr.security.KerberosPlugin$RequestContinuesRecorderAuthenticationHandler.managementOperation(KerberosPlugin.java:566) > at > org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:514) > at > org.apache.solr.security.DelegationTokenKerberosFilter.doFilter(DelegationTokenKerberosFilter.java:123) > at > org.apache.solr.security.KerberosPlugin.doAuthenticate(KerberosPlugin.java:265) > at > org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:318) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668) > at > 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:518) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246) > at > org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156) > at > org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654) > at > org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by 
Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms
[ https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550867#comment-16550867 ] Alessandro Benedetti commented on SOLR-12243: - Hi community, if this bug-fix is not of interest could we have an explanation why and have this Jira issue closed ? Thanks > Edismax missing phrase queries when phrases contain multiterm synonyms > -- > > Key: SOLR-12243 > URL: https://issues.apache.org/jira/browse/SOLR-12243 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.1 > Environment: RHEL, MacOS X > Do not believe this is environment-specific. >Reporter: Elizabeth Haubert >Priority: Major > Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, > SOLR-12243.patch > > Time Spent: 10m > Remaining Estimate: 0h > > synonyms.txt: > allergic, hypersensitive > aspirin, acetylsalicylic acid > dog, canine, canis familiris, k 9 > rat, rattus > request handler: > > > > edismax > 0.4 > title^100 > title~20^5000 > title~11 > title~22^1000 > text > > 3-1 6-3 930% > *:* > 25 > > > Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin" against the > above list will not be generated. > "allergic reaction dog" will generate pf2: "allergic reaction", but not > pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction > dog" > "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin > dose" or pf3:"aspirin dose ?" > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11578) Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a more accurate representation of the cluster
[ https://issues.apache.org/jira/browse/SOLR-11578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550830#comment-16550830 ] Cassandra Targett commented on SOLR-11578: -- [~rohitcse], [~erickerickson], there are a number of changes to the Ref Guide ({{cloud-screens.adoc}}) that could have been made with this commit: updated screenshots, a description of the new type labels, and clean up of a section that discusses the Dump screen that no longer exists (not related to this change I think). Are these updates planned for a later time but before 7.5 is released? > Solr 7 Admin UI (Cloud > Graph) should reflect the Replica type to give a > more accurate representation of the cluster > - > > Key: SOLR-11578 > URL: https://issues.apache.org/jira/browse/SOLR-11578 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 7.0, 7.1 >Reporter: Rohit >Assignee: Erick Erickson >Priority: Minor > Fix For: 7.5 > > Attachments: NRT_Tooltip.png, OnFirefox.png, OnSafari.png, > SOLR-11578.patch, SOLR-11578.patch, SOLR-11578.patch, SOLR-11578.patch, > Screen Shot-2.png, Screenshot-1.png, TLOG_Tooltip.png, Updated Graph.png, > Updated Legend.png, Updated Radial Graph.png, jquery-ui.min.css, > jquery-ui.min.js, jquery-ui.structure.min.css, replica_info.png > > > New replica types were introduced in Solr 7. > 1. The Solr Admin UI --> Cloud --> Graph mode should be updated to reflect > the new replica types (NRT, TLOG, PULL) > 2. It will give a better overview of the cluster as well as help in > troubleshooting and diagnosing issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8417) Expose Stempel stopword filter
Trey Jones created LUCENE-8417: -- Summary: Expose Stempel stopword filter Key: LUCENE-8417 URL: https://issues.apache.org/jira/browse/LUCENE-8417 Project: Lucene - Core Issue Type: New Feature Components: modules/analysis Reporter: Trey Jones Stempel (lucene-solr/lucene/analysis/stempel/) internally uses a stopword list. The stemmer is exposed as "polish_stem" but the stopword list is not exposed. If someone wants to unpack the Stempel analyzer to customize it, they have to go find the stopword list on their own and recreate it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22489 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22489/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 14917 lines...] [junit4] JVM J2: stdout was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp/junit4-J2-20180720_135752_8435709298792307581876.sysout [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # SIGSEGV (0xb) at pc=0x7f3dab5b2409, pid=25682, tid=25717 [junit4] # [junit4] # JRE version: OpenJDK Runtime Environment (10.0.1+10) (build 10.0.1+10) [junit4] # Java VM: OpenJDK 64-Bit Server VM (10.0.1+10, mixed mode, tiered, compressed oops, serial gc, linux-amd64) [junit4] # Problematic frame: [junit4] # V [libjvm.so+0xc48409] PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone .part.40]+0x619 [junit4] # [junit4] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/hs_err_pid25682.log [junit4] # [junit4] # Compiler replay data is saved as: [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/replay_pid25682.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.java.com/bugreport/crash.jsp [junit4] # [junit4] <<< JVM J2: EOF [...truncated 313 lines...] 
[junit4] ERROR: JVM J2 ended with an exception, command line: /home/jenkins/tools/java/64bit/jdk-10.0.1/bin/java -XX:+UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea -esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=DA183E6804219490 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp -Djava.io.tmpdir=./temp -Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene -Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db -Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/solr-tests.policy -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-master-Linux -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false -classpath
[jira] [Created] (LUCENE-8416) Add tokenized version of o.o. to Stempel stopwords
Trey Jones created LUCENE-8416: -- Summary: Add tokenized version of o.o. to Stempel stopwords Key: LUCENE-8416 URL: https://issues.apache.org/jira/browse/LUCENE-8416 Project: Lucene - Core Issue Type: Improvement Components: modules/analysis Reporter: Trey Jones The Stempel stopword list ( lucene-solr/lucene/analysis/stempel/src/resources/org/apache/lucene/analysis/pl/stopwords.txt ) contains "o.o." which is a good stopword (it's part of the abbreviation for "limited liability company", which is "[sp. z o.o.|https://en.wiktionary.org/wiki/sp._z_o.o.]"). However, the standard tokenizer changes "o.o." to "o.o" so the stopword filter has no effect. Add "o.o" to the stopword list. (It's probably okay to leave "o.o." in the list, though, in case a different tokenizer is used.) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
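The mismatch can be reproduced without Lucene itself. The sketch below is plain Java with a hypothetical stand-in `tokenize` method that mimics only the relevant behavior of the standard tokenizer (stripping a trailing period); it shows why the existing "o.o." entry never fires and why adding "o.o" would:

```java
import java.util.Set;

public class StopwordMismatch {
    // Hypothetical stand-in for the standard tokenizer's handling of a
    // trailing period (not the real Lucene tokenizer code).
    static String tokenize(String input) {
        return input.endsWith(".") ? input.substring(0, input.length() - 1) : input;
    }

    public static void main(String[] args) {
        Set<String> stopwords = Set.of("o.o.");        // current list entry
        String token = tokenize("o.o.");               // tokenizer emits "o.o"
        System.out.println(stopwords.contains(token)); // false -> filter is a no-op

        Set<String> fixed = Set.of("o.o.", "o.o");     // proposed: add "o.o" too
        System.out.println(fixed.contains(token));     // true
    }
}
```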
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11598: Attachment: (was: SOLR-11598.patch) > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at >
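For readers unfamiliar with the error above: the export handler rejects requests with more than four sort fields up front. A hypothetical, simplified form of such a guard (the real `ExportWriter.getSortDoc` code differs) and the failure a 10-dimension rollup would hit:

```java
import java.io.IOException;

public class SortLimitSketch {
    // Simplified, illustrative guard; not the actual ExportWriter source.
    static void checkSortCount(int numSorts, int maxSorts) throws IOException {
        if (numSorts > maxSorts) {
            throw new IOException("A max of " + maxSorts + " sorts can be specified");
        }
    }

    public static void main(String[] args) throws IOException {
        checkSortCount(4, 4);          // four sort fields: accepted
        try {
            checkSortCount(10, 4);     // the reporter's 10-dimensional rollup
        } catch (IOException e) {
            System.out.println(e.getMessage()); // A max of 4 sorts can be specified
        }
    }
}
```

Raising the limit to 10 (or making it configurable, as the patches propose) amounts to changing or removing this hard-coded bound.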
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11598: Attachment: SOLR-11598.patch > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop >Assignee: Varun Thacker >Priority: Major > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, SOLR-11598.patch, > streaming-export reports.xlsx > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at >
[jira] [Created] (SOLR-12571) Upgrade OpenNLP to 1.9.0
Koji Sekiguchi created SOLR-12571: - Summary: Upgrade OpenNLP to 1.9.0 Key: SOLR-12571 URL: https://issues.apache.org/jira/browse/SOLR-12571 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: contrib - LangId, update Affects Versions: 7.4 Reporter: Koji Sekiguchi Fix For: master (8.0), 7.5 OpenNLP 1.9.0 generates model files in a new format which 1.8.x cannot read; 1.9.0 can read the previous format for back-compat. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12570) OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields because pattern replacement doesn't work correctly
[ https://issues.apache.org/jira/browse/SOLR-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Koji Sekiguchi updated SOLR-12570: -- Attachment: SOLR-12570.patch > OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields > because pattern replacement doesn't work correctly > - > > Key: SOLR-12570 > URL: https://issues.apache.org/jira/browse/SOLR-12570 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 7.3, 7.3.1, 7.4 >Reporter: Koji Sekiguchi >Priority: Minor > Fix For: master (8.0), 7.5 > > Attachments: SOLR-12570.patch > > > Because of the following code, if resolvedDest is "body_{EntityType}_s" and > becomes "body_PERSON_s" by replacement, but once it is replaced, as > placeholder ({EntityType}) is overwritten, the destination is always > "body_PERSON_s". > {code} > resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12570) OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields because pattern replacement doesn't work correctly
Koji Sekiguchi created SOLR-12570: - Summary: OpenNLPExtractNamedEntitiesUpdateProcessor cannot support multi fields because pattern replacement doesn't work correctly Key: SOLR-12570 URL: https://issues.apache.org/jira/browse/SOLR-12570 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: UpdateRequestProcessors Affects Versions: 7.4, 7.3.1, 7.3 Reporter: Koji Sekiguchi Fix For: master (8.0), 7.5 Because of the following code, if resolvedDest is "body_{EntityType}_s" it becomes "body_PERSON_s" by replacement; but once it is replaced, the placeholder ({EntityType}) is overwritten, so the destination is always "body_PERSON_s" for every subsequent entity type. {code} resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType); {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
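A minimal standalone reproduction of the described behavior (the field names and the two-entity loop are hypothetical stand-ins for processing multiple extracted entity types):

```java
public class PlaceholderBug {
    static final String ENTITY_TYPE = "{EntityType}";

    public static void main(String[] args) {
        // Buggy pattern from the issue: the template variable itself is
        // overwritten on the first pass, so LOCATION reuses PERSON's field.
        String resolvedDest = "body_{EntityType}_s";
        for (String entityType : new String[] {"PERSON", "LOCATION"}) {
            resolvedDest = resolvedDest.replace(ENTITY_TYPE, entityType);
            System.out.println(resolvedDest); // body_PERSON_s, then body_PERSON_s again
        }

        // One possible fix: resolve into a fresh local, leaving the template intact.
        String template = "body_{EntityType}_s";
        for (String entityType : new String[] {"PERSON", "LOCATION"}) {
            String dest = template.replace(ENTITY_TYPE, entityType);
            System.out.println(dest); // body_PERSON_s, then body_LOCATION_s
        }
    }
}
```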
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550740#comment-16550740 ] Robert Muir commented on LUCENE-8415: - you could also do no bookkeeping and simply pay the cost of more renames, right? Currently only the segments_N is written atomically like this. But writing to a temp file and then renaming at the end is pretty easy to understand, lots of applications do it. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
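The temp-file-plus-rename pattern described in the comment can be sketched with plain java.nio (illustrative only; this is not the Lucene Directory API, and the method and file names are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicWrite {
    // Write the full contents to a temp file in the same directory, then
    // rename it into place in one step, so readers never observe a
    // partially written file.
    static void writeAtomically(Path target, byte[] contents) throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), "pending", ".tmp");
        Files.write(tmp, contents);
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("atomic-demo");
        Path target = dir.resolve("segments_1"); // hypothetical file name
        writeAtomically(target, "commit data".getBytes());
        System.out.println(Files.readAllBytes(target).length);
    }
}
```

The trade-off Robert points out: doing this for every file means paying a rename per file but requires no extra bookkeeping, whereas currently only segments_N is written this way.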
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 264 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/264/ No tests ran. Build Log: [...truncated 23001 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2228 links (1783 relative) to 3003 anchors in 229 files [echo] Validated Links & Anchors via: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes -dist-keys: [get] Getting: http://home.apache.org/keys/group/lucene.asc [get] To: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked [untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml
[jira] [Commented] (SOLR-12305) Making buffering tlog bounded for faster recovery
[ https://issues.apache.org/jira/browse/SOLR-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550682#comment-16550682 ] Shalin Shekhar Mangar commented on SOLR-12305: -- +1 LGTM > Making buffering tlog bounded for faster recovery > - > > Key: SOLR-12305 > URL: https://issues.apache.org/jira/browse/SOLR-12305 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12305.patch > > > The current recovery process has 2 main problems (pointed out by > [~shalinmangar] ) which make it may never finish. > # The replay updates process is too slow, we do it in a single-thread > fashion. Therefore if the more updates get appended at a faster rate, the > replay process will be never finished > # The buffering tlog is unbounded, we keep adding more entries to buffering > tlog and waiting for them to get replayed. If we have a way to reduce the > number of updates in buffering tlog, even when replay process is slow it will > eventually finish. > I come up with a solution for the second problem which is described on this > link: > [https://docs.google.com/document/d/14DCkYRvYnQmactyWek3nYtUVdpu_CVIA4ZBTfQigjlU/edit?usp=sharing] > In short, the document presents a modification for current recovery process > (section 3: algorithm) and also proof the correctness of the modification > (section 1 and 2). There are some pros in this approach > * Making buffering tlog bounded. > * It will automatically throttle updates from the leader, imagine this case > ** We have a shard with a leader and a replica. When leader sends replica an > update. > ** If the replica is healthy, the leader will have to wait for the replica > to finish process that updates before return to users. 
Let's call the total > time for an update is T0 > ** If the replica is recovering, in the current code, the replica will only > append that update to its buffering tlog (which is much faster than > indexing), so the total time for an update is T1 < T0. Therefore the rate of > incoming updates will be higher in this case. > ** In above design, T1 will be subequal to T0. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
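The throttling effect of a bounded buffer can be illustrated with a toy example (plain java.util.concurrent, not Solr's tlog code): once the buffer is full, the sender can make no further progress until the replay side drains an entry.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedBufferThrottle {
    public static void main(String[] args) {
        // A bound of 4 stands in for the proposed cap on the buffering tlog.
        BlockingQueue<String> bufferedUpdates = new ArrayBlockingQueue<>(4);
        for (int i = 0; i < 6; i++) {
            // offer() returns false instead of blocking, which makes the bound
            // visible here; the real design would block (throttle) the sender.
            boolean accepted = bufferedUpdates.offer("update-" + i);
            System.out.println("update-" + i + " accepted: " + accepted);
        }
        // Only the first 4 were buffered; further sends must wait for replay.
        System.out.println(bufferedUpdates.size());
    }
}
```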
[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+22) - Build # 702 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/702/ Java: 64bit/jdk-11-ea+22 -XX:+UseCompressedOops -XX:+UseParallelGC 18 tests failed. FAILED: org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations Error Message: IOException occured when talking to server at: https://127.0.0.1:1/solr Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:1/solr at __randomizedtesting.SeedInfo.seed([327FBA7C8FBF41C2:C2956980509C31A8]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.TestSkipOverseerOperations.testSkipLeaderOperations(TestSkipOverseerOperations.java:71) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-12509) Improve SplitShardCmd performance and reliability
[ https://issues.apache.org/jira/browse/SOLR-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550662#comment-16550662 ] Shalin Shekhar Mangar commented on SOLR-12509: -- Awesome speedups! A few minor issues: # SolrIndexSplitter.findDocsToDelete uses the wrong key to lookup inside the synchronized block -- {{docsToDelete.get(readerContext.ord);}} # There is a new {{DefaultSolrCoreState.getIndexWriterLock}} method which isn't used anywhere? # Typo {{changepostd}} in {{ReplicaMutator}} # We should rename {{index.split}} to follow the {{index.}} convention otherwise dangling "index.split" directories won't be cleaned up by {{DirectoryFactory.cleanupOldIndexDirectories}} > Improve SplitShardCmd performance and reliability > - > > Key: SOLR-12509 > URL: https://issues.apache.org/jira/browse/SOLR-12509 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-12509.patch > > > {{SplitShardCmd}} is currently quite complex. > Shard splitting occurs on active shards, which are still being updated, so > the splitting has to involve several carefully orchestrated steps, making > sure that new sub-shard placeholders are properly created and visible, and > then also applying buffered updates to the split leaders and performing > recovery on sub-shard replicas. > This process could be simplified in cases where collections are not actively > being updated or can tolerate a little downtime - we could put the shard > "offline", ie. disable writing while the splitting is in progress (in order > to avoid users' confusion we should disable writing to the whole collection). 
> The actual index splitting could perhaps be improved to use > {{HardLinkCopyDirectoryWrapper}} for creating a copy of the index by > hard-linking existing index segments, and then applying deletes to the > documents that don't belong in a sub-shard. However, the resulting index > slices that replicas would have to pull would be the same size as the whole > shard. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
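The hard-link approach mentioned above can be sketched with plain java.nio. This is a hypothetical illustration of the idea behind {{HardLinkCopyDirectoryWrapper}} — linking each index file into a new directory so that no bytes are copied — not Solr's actual splitting code; the class name and file names are invented.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of hard-link index copying: each segment file in the
// source directory is hard-linked (not byte-copied) into the target, so the
// "copy" is nearly instant and uses no extra disk space.
public class HardLinkCopy {

    // Link every regular file from src into dst; fall back to a real copy
    // if the filesystem refuses the hard link.
    public static void linkAll(Path src, Path dst) throws IOException {
        Files.createDirectories(dst);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(src)) {
            for (Path f : files) {
                if (!Files.isRegularFile(f)) {
                    continue;
                }
                Path target = dst.resolve(f.getFileName());
                try {
                    Files.createLink(target, f); // no data is copied here
                } catch (UnsupportedOperationException | IOException e) {
                    Files.copy(f, target);       // fallback: plain copy
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("index");
        Files.write(src.resolve("_0.cfs"), new byte[] {1, 2, 3});
        Path dst = src.resolveSibling(src.getFileName() + ".split");
        linkAll(src, dst);
        // Both paths point at the same inode when hard links are supported.
        System.out.println(Files.isSameFile(src.resolve("_0.cfs"), dst.resolve("_0.cfs")));
    }
}
```

As the description notes, deletes would then be applied to the linked copy, but the resulting slice still occupies the full shard size until merges rewrite the segments.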
[jira] [Commented] (SOLR-12305) Making buffering tlog bounded for faster recovery
[ https://issues.apache.org/jira/browse/SOLR-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550651#comment-16550651 ] Cao Manh Dat commented on SOLR-12305: - I will commit the patch soon if there is no objection. > Making buffering tlog bounded for faster recovery > - > > Key: SOLR-12305 > URL: https://issues.apache.org/jira/browse/SOLR-12305 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12305.patch > > > The current recovery process has 2 main problems (pointed out by > [~shalinmangar] ) which make it may never finish. > # The replay updates process is too slow, we do it in a single-thread > fashion. Therefore if the more updates get appended at a faster rate, the > replay process will be never finished > # The buffering tlog is unbounded, we keep adding more entries to buffering > tlog and waiting for them to get replayed. If we have a way to reduce the > number of updates in buffering tlog, even when replay process is slow it will > eventually finish. > I come up with a solution for the second problem which is described on this > link: > [https://docs.google.com/document/d/14DCkYRvYnQmactyWek3nYtUVdpu_CVIA4ZBTfQigjlU/edit?usp=sharing] > In short, the document presents a modification for current recovery process > (section 3: algorithm) and also proof the correctness of the modification > (section 1 and 2). There are some pros in this approach > * Making buffering tlog bounded. > * It will automatically throttle updates from the leader, imagine this case > ** We have a shard with a leader and a replica. When leader sends replica an > update. > ** If the replica is healthy, the leader will have to wait for the replica > to finish process that updates before return to users. 
Let's call the total > time for an update is T0 > ** If the replica is recovering, in the current code, the replica will only > append that update to its buffering tlog (which is much faster than > indexing), so the total time for an update is T1 < T0. Therefore the rate of > incoming updates will be higher in this case. > ** In above design, T1 will be subequal to T0. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module
[ https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550630#comment-16550630 ] Tomoko Uchida commented on LUCENE-2562: --- Let me add one more thing: I'm afraid I cannot approach and negotiate with the ASF on my own about complicated matters, given my limited knowledge of licensing and English vocabulary. But if you think there is a chance to proceed, please guide me :) > Make Luke a Lucene/Solr Module > -- > > Key: LUCENE-2562 > URL: https://issues.apache.org/jira/browse/LUCENE-2562 > Project: Lucene - Core > Issue Type: Task >Reporter: Mark Miller >Priority: Major > Labels: gsoc2014 > Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, > LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, > LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, Luke-ALE-3.png, > Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, luke-javafx2.png, > luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, lukeALE-documents.png > > Time Spent: 10m > Remaining Estimate: 0h > > see > "RE: Luke - in need of maintainer": > http://markmail.org/message/m4gsto7giltvrpuf > "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q > I think it would be great if there was a version of Luke that always worked > with trunk - and it would also be great if it was easier to match Luke jars > with Lucene versions. > While I'd like to get GWT Luke into the mix as well, I think the easiest > starting point is to straight port Luke to another UI toolkit before > abstracting out DTO objects that both GWT Luke and Pivot Luke could share. > I've started slowly converting Luke's use of thinlet to Apache Pivot. I > haven't/don't have a lot of time for this at the moment, but I've plugged > away here and there over the past week or two. There is still a *lot* to do. 
[jira] [Commented] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550614#comment-16550614 ] Dawid Weiss commented on LUCENE-8415: - Ensuring {{testReadFileOpenForWrites}} works would require moving some bookkeeping to Directory classes (and IndexOutput implementations). A concurrent hash map of open outputs and an update on IndexOutput.close(), essentially. We have a few options. Make it a contractual requirement (then we have to implement this bookkeeping for true filesystems, since they allow readers over a writer for the same process). Make this an assertion-mode-only check (implement the bookkeeping, but don't run it except for assertion-enabled runs). Finally, make no checks at all, but give Directory implementations the contractual liberty to throw AccessDeniedException in {{openInput}} if a file is still open. The offending directory implementations right now are:
{code}
- org.apache.lucene.store.TestTrackingDirectoryWrapper.testReadFileOpenForWrites
- org.apache.lucene.store.TestMmapDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestSimpleFSDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestNRTCachingDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestFileSwitchDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestRAMDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestNIOFSDirectory.testReadFileOpenForWrites
- org.apache.lucene.store.TestMultiMMap.testReadFileOpenForWrites
- org.apache.lucene.store.TestFilterDirectory.testReadFileOpenForWrites
{code}
> Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining
Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
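The bookkeeping described in the comment above — a concurrent hash map of open outputs, cleared on IndexOutput.close() — could look roughly like this. {{OpenTrackingDirectory}} and its method shapes are illustrative stand-ins, not Lucene's actual Directory/IndexOutput API.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative stand-in for the proposed bookkeeping: track file names that
// currently have an open output, and refuse to open them for reading.
public class OpenTrackingDirectory {

    private final Set<String> openForWrite = ConcurrentHashMap.newKeySet();

    // Register the file as open-for-write; the returned handle plays the
    // role of IndexOutput, whose close() removes the entry again.
    public AutoCloseable createOutput(String name) throws IOException {
        if (!openForWrite.add(name)) {
            throw new IOException("already open for writing: " + name);
        }
        return () -> openForWrite.remove(name);
    }

    // The "contractual liberty" option from the comment: openInput may
    // throw AccessDeniedException while the writer is still open.
    public void openInput(String name) throws IOException {
        if (openForWrite.contains(name)) {
            throw new AccessDeniedException(name, null, "still open for writing");
        }
        // ... a real implementation would open and return an input here ...
    }
}
```

The assertion-mode-only variant from the comment would keep the same map but consult it only when assertions are enabled.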
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 266 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/266/ 2 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.update.HdfsTransactionLog at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132) at org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:76) at org.apache.solr.update.HdfsUpdateLog.ensureLog(HdfsUpdateLog.java:335) at org.apache.solr.update.UpdateLog.deleteByQuery(UpdateLog.java:659) at org.apache.solr.update.DirectUpdateHandler2.deleteByQuery(DirectUpdateHandler2.java:526) at org.apache.solr.update.processor.RunUpdateProcessor.processDelete(RunUpdateProcessorFactory.java:78) at org.apache.solr.update.processor.UpdateRequestProcessor.processDelete(UpdateRequestProcessor.java:59) at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalDelete(DistributedUpdateProcessor.java:956) at org.apache.solr.update.processor.DistributedUpdateProcessor.versionDeleteByQuery(DistributedUpdateProcessor.java:1688) at org.apache.solr.update.processor.DistributedUpdateProcessor.doDeleteByQuery(DistributedUpdateProcessor.java:1577) at org.apache.solr.update.processor.DistributedUpdateProcessor.processDelete(DistributedUpdateProcessor.java:1362) at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processDelete(LogUpdateProcessorFactory.java:124) at org.apache.solr.handler.loader.JavabinLoader.delete(JavabinLoader.java:158) at org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:114) at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55) at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97) at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:674) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:531) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) at
[jira] [Commented] (SOLR-12305) Making buffering tlog bounded for faster recovery
[ https://issues.apache.org/jira/browse/SOLR-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550588#comment-16550588 ] Cao Manh Dat commented on SOLR-12305: - Attached a patch for this ticket. The theory proving the correctness is quite long, but the change itself is quite minimal (thanks to SOLR-9922). In short, the change is: "When a replica is replaying its buffered updates and receives an update that contains a full document (an atomic update or a newly indexed document), instead of writing the update to the buffer tlog, write the update directly to the index (and tlog) normally." > Making buffering tlog bounded for faster recovery > - > > Key: SOLR-12305 > URL: https://issues.apache.org/jira/browse/SOLR-12305 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12305.patch > > > The current recovery process has 2 main problems (pointed out by > [~shalinmangar] ) which make it may never finish. > # The replay updates process is too slow, we do it in a single-thread > fashion. Therefore if the more updates get appended at a faster rate, the > replay process will be never finished > # The buffering tlog is unbounded, we keep adding more entries to buffering > tlog and waiting for them to get replayed. If we have a way to reduce the > number of updates in buffering tlog, even when replay process is slow it will > eventually finish. > I come up with a solution for the second problem which is described on this > link: > [https://docs.google.com/document/d/14DCkYRvYnQmactyWek3nYtUVdpu_CVIA4ZBTfQigjlU/edit?usp=sharing] > In short, the document presents a modification for current recovery process > (section 3: algorithm) and also proof the correctness of the modification > (section 1 and 2). There are some pros in this approach > * Making buffering tlog bounded. 
> * It will automatically throttle updates from the leader, imagine this case > ** We have a shard with a leader and a replica. When leader sends replica an > update. > ** If the replica is healthy, the leader will have to wait for the replica > to finish process that updates before return to users. Let's call the total > time for an update is T0 > ** If the replica is recovering, in the current code, the replica will only > append that update to its buffering tlog (which is much faster than > indexing), so the total time for an update is T1 < T0. Therefore the rate of > incoming updates will be higher in this case. > ** In above design, T1 will be subequal to T0. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
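The rule from the patch comment above — during replay, an update carrying a full document bypasses the buffer tlog — can be sketched as follows. {{Update}}, {{bufferTlog}}, and {{index}} are made-up stand-ins for Solr's internals; the point is only the write-through decision that keeps the buffer bounded.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the bounded-buffer rule: during replay, an update that
// carries a full document is applied directly instead of being buffered,
// so the buffer tlog only grows with updates that must wait for replay.
public class BoundedBuffering {

    // fullDocument == true models a self-contained update (an atomic update
    // result or a freshly indexed document), per the comment above.
    record Update(String id, boolean fullDocument) {}

    final Deque<Update> bufferTlog = new ArrayDeque<>();
    final Deque<Update> index = new ArrayDeque<>();

    void onUpdateDuringReplay(Update u) {
        if (u.fullDocument()) {
            index.add(u);      // write-through: no new buffer entry
        } else {
            bufferTlog.add(u); // must wait for replay to catch up
        }
    }
}
```

Because the direct write costs as much as normal indexing, the leader again waits roughly T0 per update, which is the throttling effect described in the bullet points above.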
[jira] [Updated] (SOLR-12305) Making buffering tlog bounded for faster recovery
[ https://issues.apache.org/jira/browse/SOLR-12305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-12305: Attachment: SOLR-12305.patch > Making buffering tlog bounded for faster recovery > - > > Key: SOLR-12305 > URL: https://issues.apache.org/jira/browse/SOLR-12305 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12305.patch > > > The current recovery process has 2 main problems (pointed out by > [~shalinmangar] ) which make it may never finish. > # The replay updates process is too slow, we do it in a single-thread > fashion. Therefore if the more updates get appended at a faster rate, the > replay process will be never finished > # The buffering tlog is unbounded, we keep adding more entries to buffering > tlog and waiting for them to get replayed. If we have a way to reduce the > number of updates in buffering tlog, even when replay process is slow it will > eventually finish. > I come up with a solution for the second problem which is described on this > link: > [https://docs.google.com/document/d/14DCkYRvYnQmactyWek3nYtUVdpu_CVIA4ZBTfQigjlU/edit?usp=sharing] > In short, the document presents a modification for current recovery process > (section 3: algorithm) and also proof the correctness of the modification > (section 1 and 2). There are some pros in this approach > * Making buffering tlog bounded. > * It will automatically throttle updates from the leader, imagine this case > ** We have a shard with a leader and a replica. When leader sends replica an > update. > ** If the replica is healthy, the leader will have to wait for the replica > to finish process that updates before return to users. 
Let's call the total > time for an update is T0 > ** If the replica is recovering, in the current code, the replica will only > append that update to its buffering tlog (which is much faster than > indexing), so the total time for an update is T1 < T0. Therefore the rate of > incoming updates will be higher in this case. > ** In above design, T1 will be subequal to T0. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2355 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2355/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir Error Message: Captured an uncaught exception in thread: Thread[id=13843, name=cdcr-replicator-4532-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=13843, name=cdcr-replicator-4532-thread-1, state=RUNNABLE, group=TGRP-CdcrBidirectionalTest] at __randomizedtesting.SeedInfo.seed([C4853456D03F4887:815EC4B4C81104C5]:0) Caused by: java.lang.AssertionError: 1606499373076709376 != 1606499372623724544 at __randomizedtesting.SeedInfo.seed([C4853456D03F4887]:0) at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611) at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125) at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 13462 lines...] 
[junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest [junit4] 2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_C4853456D03F4887-001/init-core-data-001 [junit4] 2> 1050607 WARN (SUITE-CdcrBidirectionalTest-seed#[C4853456D03F4887]-worker) [] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=10 numCloses=10 [junit4] 2> 1050608 INFO (SUITE-CdcrBidirectionalTest-seed#[C4853456D03F4887]-worker) [] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=true [junit4] 2> 1050609 INFO (SUITE-CdcrBidirectionalTest-seed#[C4853456D03F4887]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN) [junit4] 2> 1050610 INFO (SUITE-CdcrBidirectionalTest-seed#[C4853456D03F4887]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> 1050612 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[C4853456D03F4887]) [] o.a.s.SolrTestCaseJ4 ###Starting testBiDir [junit4] 2> 1050612 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[C4853456D03F4887]) [] o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in /home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_C4853456D03F4887-001/cdcr-cluster2-001 [junit4] 2> 1050613 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[C4853456D03F4887]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 1050613 INFO (Thread-2579) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 1050613 INFO (Thread-2579) [] o.a.s.c.ZkTestServer Starting server [junit4] 2> 1050634 ERROR (Thread-2579) [] o.a.z.s.ZooKeeperServer ZKShutdownHandler is not registered, so ZooKeeper server won't take any action on ERROR or SHUTDOWN server state changes 
[junit4] 2> 1050713 INFO (TEST-CdcrBidirectionalTest.testBiDir-seed#[C4853456D03F4887]) [] o.a.s.c.ZkTestServer start zk server on port:33301 [junit4] 2> 1050716 INFO (zkConnectionManagerCallback-5216-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 1050718 WARN (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] o.a.z.s.NIOServerCnxn Unable to read additional data from client sessionid 0x1005c2b436b, likely client has closed socket [junit4] 2> 1050721 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11 [junit4] 2> 1050723 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.session DefaultSessionIdManager workerName=node0 [junit4] 2> 1050723 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.session No SessionScavenger set, using defaults [junit4] 2> 1050723 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.session node0 Scavenging every 60ms [junit4] 2> 1050723 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.h.ContextHandler Started o.e.j.s.ServletContextHandler@651f99{/solr,null,AVAILABLE} [junit4] 2> 1050723 INFO (jetty-launcher-5213-thread-1) [] o.e.j.s.AbstractConnector Started
[jira] [Updated] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-8415: Description: Created a PR here for early review. https://github.com/apache/lucene-solr/pull/424 I changed: * the wording in Directory documentation to be a bit more formalized about what rules a Directory should obey (and users expect). * modified the test framework to verify the above in mock classes. Currently a number of Directory implementations fail the {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Created a PR here for early review. > https://github.com/apache/lucene-solr/pull/424 > I changed: > * the wording in Directory documentation to be a bit more formalized about > what rules a Directory should obey (and users expect). > * modified the test framework to verify the above in mock classes. > Currently a number of Directory implementations fail the > {{testReadFileOpenForWrites}} test that I added, so I'll keep working on that. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #424: LUCENE-8415: Clean up Directory contracts (wr...
GitHub user dweiss opened a pull request: https://github.com/apache/lucene-solr/pull/424 LUCENE-8415: Clean up Directory contracts (write-once, no reads-before-write-completed) You can merge this pull request into a Git repository by running: $ git pull https://github.com/dweiss/lucene-solr LUCENE-8415 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/424.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #424 commit 4bd76f9f646d8796980fb4e8d2db744e27f4bc11 Author: Dawid Weiss Date: 2018-07-19T21:14:29Z Initial removal of test assertions. commit 386b9ceab13e9982cd2dd4d5fb279cd35580ec42 Author: Dawid Weiss Date: 2018-07-20T08:55:26Z Directory documentation updated. commit 5e404eedbdc4bff7dd329069d88bc02f5cf5818c Author: Dawid Weiss Date: 2018-07-20T09:06:50Z Merge branch 'master' into LUCENE-8415 commit 87eaf89a63d10d0c55c184ebf01279646fb87f4f Author: Dawid Weiss Date: 2018-07-20T09:22:42Z Add contract check for open-before-writes-close. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-8415: Attachment: (was: LUCENE-8415.patch) > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node
[ https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16550546#comment-16550546 ] Shalin Shekhar Mangar commented on SOLR-11990: -- More changes have been pushed to the branch: # Support for suggestions API are in place # WITH_COLLECTION is not a global tag and similarly the violations/suggestions are also per node # New {{TestPolicy.testWithCollectionSuggestions}} to test suggestions I think this is ready. I'll wait for a final review by [~noble.paul] before merging to master. > Make it possible to co-locate replicas of multiple collections together in a > node > - > > Key: SOLR-11990 > URL: https://issues.apache.org/jira/browse/SOLR-11990 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, > SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch > > > It is necessary to co-locate replicas of different collection together in a > node when cross-collection joins are performed. > while creating a collection specify the parameter > {{withCollection=other-collection-name}} . This ensure that Solr always > ensure that atleast one replica of {{other-collection}} is present with this > collection replicas > This requires changing create collection, create shard and add replica APIs > as well because we want a replica of collection A to be created first before > a replica of collection B is created so that join queries etc are always > possible. > Some caveats to this implementation: > # The {{other-collection}} should only have a single shard named "shard1" > # Any replica of {{other-collection}} created by this feature will be of NRT > type > Removing the above caveats can be a goal of other issues. 
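For illustration, a Collections API CREATE call using the {{withCollection}} parameter described above might be assembled like this. The host and collection names are invented for the example; only the parameter name comes from the ticket.

```java
// Builds a (hypothetical) URL for creating a collection whose replicas must
// be co-located with at least one replica of another collection.
public class WithCollectionUrl {

    public static String createUrl(String host, String name, String other) {
        return "http://" + host + "/solr/admin/collections?action=CREATE"
                + "&name=" + name
                + "&numShards=1"
                + "&withCollection=" + other; // co-location constraint
    }

    public static void main(String[] args) {
        System.out.println(createUrl("localhost:8983", "orders", "sku_meta"));
    }
}
```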
[jira] [Updated] (LUCENE-8415) Clean up Directory contracts (write-once, no reads-before-write-completed)
[ https://issues.apache.org/jira/browse/LUCENE-8415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-8415: Attachment: LUCENE-8415.patch > Clean up Directory contracts (write-once, no reads-before-write-completed) > -- > > Key: LUCENE-8415 > URL: https://issues.apache.org/jira/browse/LUCENE-8415 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Minor > Attachments: LUCENE-8415.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node
[ https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-11990: - Description: It is necessary to co-locate replicas of different collection together in a node when cross-collection joins are performed. while creating a collection specify the parameter {{withCollection=other-collection-name}} . This ensure that Solr always ensure that atleast one replica of {{other-collection}} is present with this collection replicas This requires changing create collection, create shard and add replica APIs as well because we want a replica of collection A to be created first before a replica of collection B is created so that join queries etc are always possible. Some caveats to this implementation: # The {{other-collection}} should only have a single shard named "shard1" # Any replica of {{other-collection}} created by this feature will be of NRT type Removing the above caveats can be a goal of other issues. was: It is necessary to co-locate replicas of different collection together in a node when cross-collection joins are performed. while creating a collection specify the parameter {{withCollection=other-collection-name}} . This ensure that Solr always ensure that atleast one replica of {{other-collection}} is present with this collection replicas This requires changing create collection, create shard and add replica APIs as well because we want a replica of collection A to be created first before a replica of collection B is created so that join queries etc are always possible. > Make it possible to co-locate replicas of multiple collections together in a > node > - > > Key: SOLR-11990 > URL: https://issues.apache.org/jira/browse/SOLR-11990 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. 
Issues are Public) > Components: AutoScaling, SolrCloud >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Major > Fix For: master (8.0), 7.5 > > Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, > SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch > > > It is necessary to co-locate replicas of different collection together in a > node when cross-collection joins are performed. > while creating a collection specify the parameter > {{withCollection=other-collection-name}} . This ensure that Solr always > ensure that atleast one replica of {{other-collection}} is present with this > collection replicas > This requires changing create collection, create shard and add replica APIs > as well because we want a replica of collection A to be created first before > a replica of collection B is created so that join queries etc are always > possible. > Some caveats to this implementation: > # The {{other-collection}} should only have a single shard named "shard1" > # Any replica of {{other-collection}} created by this feature will be of NRT > type > Removing the above caveats can be a goal of other issues. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node
[ https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shalin Shekhar Mangar updated SOLR-11990:
-----------------------------------------
    Description: 
It is necessary to co-locate replicas of different collections together on a node when cross-collection joins are performed. While creating a collection, specify the parameter {{withCollection=other-collection-name}}. This ensures that Solr always keeps at least one replica of {{other-collection}} alongside this collection's replicas.

This requires changing the create collection, create shard and add replica APIs as well, because we want a replica of collection A to be created before a replica of collection B so that join queries etc. are always possible.

  was:
It is necessary to co-locate replicas of different collections together on a node when cross-collection joins are performed. While creating a collection, specify the parameter {{withCollection=other-collection-name}}. This ensures that Solr always keeps at least one replica of {{other-cllection}} alongside this collection's replicas.

This requires changing the create collection, create shard and add replica APIs as well, because we want a replica of collection A to be created before a replica of collection B so that join queries etc. are always possible.


> Make it possible to co-locate replicas of multiple collections together in a node
> ---------------------------------------------------------------------------------
>
>                 Key: SOLR-11990
>                 URL: https://issues.apache.org/jira/browse/SOLR-11990
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: AutoScaling, SolrCloud
>            Reporter: Shalin Shekhar Mangar
>            Assignee: Shalin Shekhar Mangar
>            Priority: Major
>             Fix For: master (8.0), 7.5
>
>         Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch
>
>
> It is necessary to co-locate replicas of different collections together on a node when cross-collection joins are performed.
> While creating a collection, specify the parameter {{withCollection=other-collection-name}}. This ensures that Solr always keeps at least one replica of {{other-collection}} alongside this collection's replicas.
> This requires changing the create collection, create shard and add replica APIs as well, because we want a replica of collection A to be created before a replica of collection B so that join queries etc. are always possible.
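To make the workflow above concrete: the {{withCollection}} parameter is passed on the Collections API CREATE call. The following is a minimal sketch of how such a request might be composed; the host, port, and collection names are hypothetical, and only the {{withCollection}} parameter itself comes from this issue:

```python
from urllib.parse import urlencode

def create_collection_url(base_url, name, num_shards, with_collection=None):
    """Compose a Collections API CREATE request URL.

    When with_collection is given, Solr is asked to co-locate at least one
    replica of that collection with every replica of the new collection,
    as described in SOLR-11990.
    """
    params = {"action": "CREATE", "name": name, "numShards": num_shards}
    if with_collection is not None:
        params["withCollection"] = with_collection
    return base_url + "/admin/collections?" + urlencode(params)

# Hypothetical usage: create "products" co-located with "other-collection-name"
url = create_collection_url("http://localhost:8983/solr", "products", 2,
                            with_collection="other-collection-name")
print(url)
```

Note that per the caveats in the issue, the target of {{withCollection}} must have a single shard named "shard1", and replicas created on its behalf are NRT.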
[jira] [Commented] (LUCENE-8306) Allow iteration over the term positions of a Match
[ https://issues.apache.org/jira/browse/LUCENE-8306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16550491#comment-16550491 ]

Jim Ferenczi commented on LUCENE-8306:
--------------------------------------
+1, thanks [~romseygeek], the patch looks good.

> Allow iteration over the term positions of a Match
> --------------------------------------------------
>
>                 Key: LUCENE-8306
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8306
>             Project: Lucene - Core
>          Issue Type: New Feature
>            Reporter: Alan Woodward
>            Assignee: Alan Woodward
>            Priority: Major
>         Attachments: LUCENE-8306.patch, LUCENE-8306.patch, LUCENE-8306.patch, LUCENE-8306.patch
>
>
> For multi-term queries such as phrase queries, the matches API currently just returns information about the span of the whole match. It would be useful to also expose information about the matching terms within the phrase. The same would apply to Spans and Interval queries.
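The distinction LUCENE-8306 draws can be illustrated outside Lucene. The sketch below is not the Lucene API; it is a plain-Python model of a phrase match over token positions, showing the difference between reporting only the whole-match span (what the matches API did before) and also iterating the position of each matching term (what the patch adds):

```python
def phrase_matches(tokens, phrase):
    """Find each occurrence of `phrase` in `tokens`, reporting both the
    whole-match span and the position of every matching term inside it.

    "span" alone models the pre-patch matches API output; "terms" models
    the per-term position iteration the patch enables.
    """
    n = len(phrase)
    results = []
    for start in range(len(tokens) - n + 1):
        if tokens[start:start + n] == phrase:
            results.append({
                "span": (start, start + n - 1),           # whole-match span
                "terms": [(phrase[i], start + i)          # per-term positions
                          for i in range(n)],
            })
    return results

doc = "the quick brown fox saw another quick brown fox".split()
for m in phrase_matches(doc, ["quick", "brown", "fox"]):
    print(m["span"], m["terms"])
```

For the sample document this reports two matches, at spans (1, 3) and (6, 8), each listing its three terms with their individual positions.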