[JENKINS-EA] Lucene-Solr-http2-Windows (64bit/jdk-12-ea+12) - Build # 7 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Windows/7/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.

FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:62959_solr, 127.0.0.1:62962_solr, 127.0.0.1:62963_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"http://127.0.0.1:62968/solr", "node_name":"127.0.0.1:62968_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"http://127.0.0.1:62968/solr", "node_name":"127.0.0.1:62968_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"http://127.0.0.1:62959/solr", "node_name":"127.0.0.1:62959_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:62959_solr, 127.0.0.1:62962_solr, 127.0.0.1:62963_solr] Last available state: DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/11)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"raceDeleteReplica_true_shard1_replica_n1", "base_url":"http://127.0.0.1:62968/solr", "node_name":"127.0.0.1:62968_solr", "state":"down", "type":"NRT", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_true_shard1_replica_n5", "base_url":"http://127.0.0.1:62968/solr", "node_name":"127.0.0.1:62968_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"raceDeleteReplica_true_shard1_replica_n2", "base_url":"http://127.0.0.1:62959/solr", "node_name":"127.0.0.1:62959_solr", "state":"down", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"}
    at __randomizedtesting.SeedInfo.seed([84AE4421C0350CF7:EEB825F1A8C7463D]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
    at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
    at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at
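The assertion above is raised by SolrCloudTestCase.waitForState when the collection never reaches the expected state before a timeout. The underlying pattern is a simple predicate poll against a deadline; the sketch below illustrates it with made-up names (WaitForState, waitFor) and is not Solr's actual implementation:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal sketch of a wait-for-state helper: poll a predicate until it
// holds or the deadline passes. Illustrative only, not Solr's API.
public class WaitForState {
    public static boolean waitFor(Supplier<Boolean> predicate,
                                  long timeout, TimeUnit unit) throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (predicate.get()) {
                return true;       // desired state reached
            }
            Thread.sleep(100);     // back off before re-checking
        }
        return false;              // timed out; the caller fails the test
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // A predicate that becomes true after ~300 ms, standing in for
        // "a new active leader has been elected".
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 300,
                             5, TimeUnit.SECONDS);
        System.out.println(ok);
    }
}
```

A test built on this pattern reports the last observed cluster state when the predicate never becomes true, which is exactly the "Expected new active leader ... Last available state" dump above.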
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23282 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23282/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

81 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetTest

Error Message:
ObjectTracker found 8 object(s) that were not released!!! [ZkStateReader, ZkStateReader, ZkStateReader, SolrZkClient, ZkStateReader, SolrZkClient, SolrZkClient, SolrZkClient]

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.ZkStateReader
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:328)
    at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342)
    at org.apache.solr.client.solrj.impl.SolrClientCloudManager.<init>(SolrClientCloudManager.java:69)
    at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:708)
    at org.apache.solr.core.CoreContainer.createMetricsHistoryHandler(CoreContainer.java:765)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:586)
    at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:253)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:173)
    at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:741)
    at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1477)
    at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1542)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1186)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1023)
    at org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:473)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:344)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:507)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:446)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:414)
    at org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443)
    at org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
    at java.base/java.lang.Thread.run(Thread.java:844)

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.ZkStateReader
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:328)
    at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342)
    at org.apache.solr.client.solrj.impl.SolrClientCloudManager.<init>(SolrClientCloudManager.java:69)
    at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:708)
    at org.apache.solr.core.CoreContainer.createMetricsHistoryHandler(CoreContainer.java:765)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:586)
    at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:253)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:173)
    at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:741)
    at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1477)
    at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1542)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1186)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1023)
    at org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:473)
    at
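The "ObjectTracker found 8 object(s) that were not released" failures come from Solr's test-time leak checker: each ZkStateReader and SolrZkClient registers itself on construction and deregisters when closed, and anything still registered at suite teardown is reported along with its allocation stack trace. A simplified, hypothetical stand-in for that idea (this is not the real ObjectReleaseTracker class):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified release tracker: record each tracked object together with an
// exception capturing its allocation site, remove it on release, and let
// the test framework report whatever is still registered at teardown.
public class ReleaseTracker {
    private static final Map<Object, Exception> TRACKED = new ConcurrentHashMap<>();

    public static void track(Object o) {
        // The exception records where the object was created, so a leak
        // report can print the allocation stack trace, as in the log above.
        TRACKED.put(o, new Exception("allocation site"));
    }

    public static void release(Object o) {
        TRACKED.remove(o);
    }

    public static int unreleasedCount() {
        return TRACKED.size();
    }

    public static void main(String[] args) {
        Object zkClient = new Object();        // stands in for a SolrZkClient
        track(zkClient);
        System.out.println(unreleasedCount()); // 1: would fail the suite
        release(zkClient);
        System.out.println(unreleasedCount()); // 0: clean shutdown
    }
}
```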
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705680#comment-16705680 ]

ASF subversion and git services commented on SOLR-12801:

Commit 3f8b3b20a22f95f2824f585c6742a248e85bc5f7 in lucene-solr's branch refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3f8b3b2 ]

SOLR-12801: Fix thread leak in test.

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
> Issue Type: Task
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Critical
> Time Spent: 1h
> Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the continued addition of new flakey tests, and the continued addition of flakiness to existing tests.
> Lots more to come.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705679#comment-16705679 ]

ASF subversion and git services commented on SOLR-12801:

Commit 9b0b9032e2571b3a37aef93d823161b1b934381e in lucene-solr's branch refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b0b903 ]

SOLR-12801: Fix thread leak in test.
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705675#comment-16705675 ]

ASF subversion and git services commented on SOLR-12801:

Commit 0e6b29c3e19fd86d7fa4cef805c1e797935cc46a in lucene-solr's branch refs/heads/branch_7x from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0e6b29c ]

SOLR-12801: Wait for collections properly.
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705674#comment-16705674 ]

ASF subversion and git services commented on SOLR-12801:

Commit 79d7efe811a4e9e672d1a55e8b30683a927c1f4d in lucene-solr's branch refs/heads/branch_7x from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=79d7efe ]

SOLR-12801: Wait for executor to finish shutdown.
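The "Wait for executor to finish shutdown" commit addresses a common source of reported thread leaks: ExecutorService.shutdown() only stops new submissions and returns immediately, so worker threads can still be alive when a leak checker runs at teardown. Blocking on awaitTermination(), and escalating to shutdownNow() on timeout, makes teardown deterministic. A minimal sketch of the pattern (illustrative only; Solr wraps similar logic in its ExecutorUtil helpers):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Two-phase executor shutdown: stop accepting tasks, wait for in-flight
// work to finish, and interrupt stragglers if the wait times out.
public class ShutdownAndWait {
    public static void shutdownAndAwait(ExecutorService pool) throws InterruptedException {
        pool.shutdown();                                   // no new tasks
        if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
            pool.shutdownNow();                            // interrupt stragglers
            pool.awaitTermination(30, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> { /* some background work */ });
        shutdownAndAwait(pool);
        System.out.println(pool.isTerminated());
    }
}
```

Returning before isTerminated() is true is exactly what lets a suite-scope thread-leak check observe the pool's worker threads.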
[jira] [Commented] (SOLR-13027) Harden LeaderTragicEventTest.
[ https://issues.apache.org/jira/browse/SOLR-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705671#comment-16705671 ]

ASF subversion and git services commented on SOLR-13027:

Commit 33c40a8da40677f43ea377ca0cb2a1def8649c52 in lucene-solr's branch refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=33c40a8 ]

SOLR-13027: Change retries to work across JVM impls properly by looking for an IOException.

> Harden LeaderTragicEventTest.
>
> Key: SOLR-13027
> URL: https://issues.apache.org/jira/browse/SOLR-13027
> Project: Solr
> Issue Type: Sub-task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Assignee: Mark Miller
> Priority: Major
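The SOLR-13027 commit message describes making retry detection portable across JVM implementations: rather than matching a JVM-specific exception type or message string, walk the cause chain and retry only when an IOException appears anywhere in it. A sketch of that idea with hypothetical names (causedByIOException, withRetries), not the actual test change:

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Retry helper that treats a failure as retriable only if an IOException
// is found somewhere in the exception's cause chain, which behaves the
// same regardless of which JVM-specific wrapper exception is on top.
public class RetryOnIOException {
    public static boolean causedByIOException(Throwable t) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (c instanceof IOException) return true;
        }
        return false;
    }

    public static <T> T withRetries(Callable<T> action, int maxRetries) throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                if (attempt >= maxRetries || !causedByIOException(e)) {
                    throw e;   // not retriable, or out of attempts
                }
            }
        }
    }
}
```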
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705668#comment-16705668 ]

ASF subversion and git services commented on SOLR-13030:

Commit d8f482f5fbae50a4d2ed1758a835a90768f5b279 in lucene-solr's branch refs/heads/master from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d8f482f ]

SOLR-13030: Update executor usage to work correctly with Java 11 and update Mockito & HttpComponents to work with Java 11 and fix get remote info retry to work across jvms better.

> MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
>
> Key: SOLR-13030
> URL: https://issues.apache.org/jira/browse/SOLR-13030
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Hoss Man
> Assignee: Mark Miller
> Priority: Major
>
> {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor collectService}} as well as a {{private final SolrRrdBackendFactory factory}} (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far as I can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} on shutdown.- after changes in 75b1831967982, which move this close() call to a {{ForkJoinPool}} in {{CoreContainer.shutdown()}}, it seems to be leaking threads.
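The bug report describes a Closeable handler whose close() method is responsible for shutting down its own ScheduledThreadPoolExecutor; if no owner ever calls close() during container shutdown, the scheduled threads outlive the handler and leak. A simplified, hypothetical illustration of that pattern (PollingHandler stands in for MetricsHistoryHandler and is not the real class):

```java
import java.io.Closeable;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// A handler that owns a scheduled executor and relies on close() to stop
// it. If close() is never invoked, the scheduled thread keeps running and
// a suite-teardown thread-leak checker flags it.
public class PollingHandler implements Closeable {
    private final ScheduledThreadPoolExecutor collectService =
            new ScheduledThreadPoolExecutor(1);

    public PollingHandler() {
        collectService.scheduleAtFixedRate(
                () -> { /* collect metrics */ }, 0, 60, TimeUnit.SECONDS);
    }

    public boolean isTerminated() {
        return collectService.isTerminated();
    }

    @Override
    public void close() {
        // Without this, the scheduled thread outlives the handler.
        collectService.shutdownNow();
        try {
            collectService.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With this shape, the owning container can manage the handler via try-with-resources or an explicit close() in its own shutdown path; the leak appears precisely when neither happens.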
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705677#comment-16705677 ]

Mark Miller commented on SOLR-13030:

[~hossman], I think I addressed most of that and it was related to Java > 8. There is still a problem with SSL on Java 12 though. Otherwise I think things are working now.
[jira] [Commented] (SOLR-13027) Harden LeaderTragicEventTest.
[ https://issues.apache.org/jira/browse/SOLR-13027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705676#comment-16705676 ]

ASF subversion and git services commented on SOLR-13027:

Commit 2968669c5362e12d4c946fd6b624ad9c60731720 in lucene-solr's branch refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2968669 ]

SOLR-13027: Change retries to work across JVM impls properly by looking for an IOException.
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705673#comment-16705673 ]

ASF subversion and git services commented on SOLR-13030:

Commit a47b64000b227aae57636877b26f2aa2bc16e1f5 in lucene-solr's branch refs/heads/branch_7x from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a47b640 ]

SOLR-13030: Update executor usage to work correctly with Java 11 and update Mockito & HttpComponents to work with Java 11 and fix get remote info retry to work across jvms better.

# Conflicts:
# solr/CHANGES.txt
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705669#comment-16705669 ]

ASF subversion and git services commented on SOLR-12801:

Commit a3ec5b5fdfa59197fb8a36a29cc158b69835afd8 in lucene-solr's branch refs/heads/master from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a3ec5b5 ]

SOLR-12801: Wait for executor to finish shutdown.
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705670#comment-16705670 ]

ASF subversion and git services commented on SOLR-12801:

Commit 7f88bfa11234a2ad4c688d131c94db574dc6e516 in lucene-solr's branch refs/heads/master from [~markrmil...@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7f88bfa ]

SOLR-12801: Wait for collections properly.
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3163 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3163/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

78 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.analytics.facet.QueryFacetTest

Error Message:
ObjectTracker found 8 object(s) that were not released!!! [ZkStateReader, SolrZkClient, SolrZkClient, ZkStateReader, SolrZkClient, SolrZkClient, ZkStateReader, ZkStateReader]

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.ZkStateReader
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:328)
    at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342)
    at org.apache.solr.client.solrj.impl.SolrClientCloudManager.<init>(SolrClientCloudManager.java:69)
    at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:710)
    at org.apache.solr.core.CoreContainer.createMetricsHistoryHandler(CoreContainer.java:760)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:582)
    at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:253)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:173)
    at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:741)
    at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1477)
    at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1542)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1186)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1023)
    at org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:473)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:344)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
    at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:507)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:446)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:414)
    at org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443)
    at org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:834)

org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.SolrZkClient
    at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
    at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:203)
    at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:126)
    at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)
    at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:306)
    at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342)
    at org.apache.solr.client.solrj.impl.SolrClientCloudManager.<init>(SolrClientCloudManager.java:69)
    at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:710)
    at org.apache.solr.core.CoreContainer.createMetricsHistoryHandler(CoreContainer.java:760)
    at org.apache.solr.core.CoreContainer.load(CoreContainer.java:582)
    at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:253)
    at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:173)
    at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139)
    at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:741)
    at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1477)
    at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1542)
    at org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1186)
    at
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7641 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7641/ Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC 8 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 1) Thread[id=1649, name=TRA-preemptive-creation-441-thread-1-processing-n:127.0.0.1:55004_solr x:myalias_2017-10-28_shard1_replica_n1 c:myalias_2017-10-28 s:shard1 r:core_node2, state=TIMED_WAITING, group=TGRP-TimeRoutedAliasUpdateProcessorTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.apache.solr.cloud.OverseerTaskQueue$LatchWatcher.await(OverseerTaskQueue.java:158) at org.apache.solr.cloud.OverseerTaskQueue.offer(OverseerTaskQueue.java:208) at org.apache.solr.handler.admin.CollectionsHandler.sendToOCPQueue(CollectionsHandler.java:363) at org.apache.solr.handler.admin.CollectionsHandler.sendToOCPQueue(CollectionsHandler.java:308) at org.apache.solr.cloud.api.collections.MaintainRoutedAliasCmd.remoteInvoke(MaintainRoutedAliasCmd.java:91) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.createNextCollection(TimeRoutedAliasUpdateProcessor.java:359) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.lambda$createCollectionsIfRequired$0(TimeRoutedAliasUpdateProcessor.java:228) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor$$Lambda$521/410232415.run(Unknown Source) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.lambda$preemptiveAsync$1(TimeRoutedAliasUpdateProcessor.java:253) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor$$Lambda$522/658288700.run(Unknown Source) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$12/1334084098.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest: 1) Thread[id=1649, name=TRA-preemptive-creation-441-thread-1-processing-n:127.0.0.1:55004_solr x:myalias_2017-10-28_shard1_replica_n1 c:myalias_2017-10-28 s:shard1 r:core_node2, state=TIMED_WAITING, group=TGRP-TimeRoutedAliasUpdateProcessorTest] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163) at org.apache.solr.cloud.OverseerTaskQueue$LatchWatcher.await(OverseerTaskQueue.java:158) at org.apache.solr.cloud.OverseerTaskQueue.offer(OverseerTaskQueue.java:208) at org.apache.solr.handler.admin.CollectionsHandler.sendToOCPQueue(CollectionsHandler.java:363) at org.apache.solr.handler.admin.CollectionsHandler.sendToOCPQueue(CollectionsHandler.java:308) at org.apache.solr.cloud.api.collections.MaintainRoutedAliasCmd.remoteInvoke(MaintainRoutedAliasCmd.java:91) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.createNextCollection(TimeRoutedAliasUpdateProcessor.java:359) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.lambda$createCollectionsIfRequired$0(TimeRoutedAliasUpdateProcessor.java:228) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor$$Lambda$521/410232415.run(Unknown Source) at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor.lambda$preemptiveAsync$1(TimeRoutedAliasUpdateProcessor.java:253) at org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessor$$Lambda$522/658288700.run(Unknown Source) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$12/1334084098.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([C3EC920833C32D9F]:0) FAILED:
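The leaked thread above is a pooled, non-daemon executor worker (`TRA-preemptive-creation-*`) still parked inside `OverseerTaskQueue.offer` when the suite ended. The contract the randomized-testing thread-leak detector enforces — whoever creates the pool must shut it down before scope exit — can be sketched in Python (the Solr code is Java; the function name here is illustrative, not from the codebase):

```python
from concurrent.futures import ThreadPoolExecutor

def run_preemptive_task(task):
    """Run a background task on a dedicated pool, then shut the pool down.

    Skipping shutdown(wait=True) is the analogue of the leaked
    TRA-preemptive-creation thread: the non-daemon worker outlives the test.
    """
    pool = ThreadPoolExecutor(max_workers=1,
                              thread_name_prefix="TRA-preemptive-creation")
    try:
        return pool.submit(task).result()
    finally:
        # Join the worker before leaving scope so no thread leaks.
        pool.shutdown(wait=True)
```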
[JENKINS-EA] Lucene-Solr-7.6-Linux (64bit/jdk-12-ea+12) - Build # 47 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/47/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC 51 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.preferLocalShardsTest Error Message: Response was not received from shards on a single node Stack Trace: java.lang.AssertionError: Response was not received from shards on a single node at __randomizedtesting.SeedInfo.seed([66C6DB229104E07:FAA1F50BD7C7D9FD]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.queryWithShardsPreferenceRules(CloudSolrClientTest.java:475) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.preferLocalShardsTest(CloudSolrClientTest.java:425) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED:
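The failing assertion ("Response was not received from shards on a single node") verifies local-shard routing: every shard that answered the query should live on the same node. A hedged sketch of such a check, using only URL parsing (the real test inspects the `shards.info` section of the Solr response):

```python
from urllib.parse import urlparse

def served_by_single_node(shard_urls):
    """True when every shard URL in a response points at one host:port pair."""
    nodes = {(urlparse(u).hostname, urlparse(u).port) for u in shard_urls}
    return len(nodes) == 1
```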
[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk-10.0.1) - Build # 18 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/18/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 25510 lines...] check-licenses: [echo] License check under: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene [licenses] MISSING sha1 checksum file for: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/replicator/lib/jetty-continuation-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/licenses/jetty-continuation-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/replicator/lib/jetty-http-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/licenses/jetty-http-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/replicator/lib/jetty-io-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/licenses/jetty-io-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/replicator/lib/jetty-server-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/licenses/jetty-server-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/replicator/lib/jetty-servlet-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/licenses/jetty-servlet-9.4.14.v20181114.jar.sha1 [...truncated 3 lines...] 
BUILD FAILED /home/jenkins/workspace/Lucene-Solr-http2-Linux/build.xml:633: The following error occurred while executing this line: /home/jenkins/workspace/Lucene-Solr-http2-Linux/build.xml:117: The following error occurred while executing this line: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build.xml:90: The following error occurred while executing this line: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/tools/custom-tasks.xml:62: License check failed. Check the logs. If you recently modified ivy-versions.properties or any module's ivy.xml, make sure you run "ant clean-jars jar-checksums" before running precommit. Total time: 77 minutes 47 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
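The license check above fails because each bundled jar must have a matching `.sha1` file under `lucene/licenses/`; as the log says, `ant clean-jars jar-checksums` regenerates them. A sketch of what one of those checksum files contains (file layout inferred from the log paths, not taken from the build script):

```python
import hashlib
from pathlib import Path

def write_sha1_checksum(jar_path: Path, licenses_dir: Path) -> Path:
    """Write licenses_dir/<jar name>.sha1 holding the jar's SHA-1 hex digest."""
    digest = hashlib.sha1(jar_path.read_bytes()).hexdigest()
    checksum_file = licenses_dir / (jar_path.name + ".sha1")
    checksum_file.write_text(digest + "\n")
    return checksum_file
```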
[JENKINS] Lucene-Solr-http2-Solaris (64bit/jdk1.8.0) - Build # 4 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Solaris/4/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1026 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery Error Message: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path Stack Trace: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:297) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:289) at java.security.AccessController.doPrivileged(Native Method) at org.conscrypt.NativeLibraryLoader.loadLibraryFromHelperClassloader(NativeLibraryLoader.java:289) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:262) at org.conscrypt.NativeLibraryLoader.load(NativeLibraryLoader.java:162) at org.conscrypt.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:106) at org.conscrypt.NativeCryptoJni.init(NativeCryptoJni.java:50) at org.conscrypt.NativeCrypto.(NativeCrypto.java:65) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:60) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:53) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:49) at org.apache.solr.SolrTestCaseJ4.(SolrTestCaseJ4.java:201) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:621) Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 11 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni in java.library.path ... 24 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 11 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt in java.library.path ... 24 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 
11 more FAILED: org.apache.solr.schema.TestICUCollationFieldDocValues.initializationError Error Message: org.apache.solr.SolrTestCaseJ4 Stack Trace: java.lang.NoClassDefFoundError: org.apache.solr.SolrTestCaseJ4 at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) at java.lang.Class.getDeclaredMethods(Class.java:1975) at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:215) at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:212) at
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 940 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/940/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration Error Message: Path /autoscaling/nodeLost/127.0.0.1:65068_solr exists Stack Trace: java.lang.AssertionError: Path /autoscaling/nodeLost/127.0.0.1:65068_solr exists at __randomizedtesting.SeedInfo.seed([9CCB7815F5D34784:8471F019FBE68A6B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration(NodeMarkersRegistrationTest.java:152) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration Error Message: Path
[jira] [Updated] (SOLR-13016) Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long
[ https://issues.apache.org/jira/browse/SOLR-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-13016: -- Attachment: SOLR-13016.patch > Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long > - > > Key: SOLR-13016 > URL: https://issues.apache.org/jira/browse/SOLR-13016 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Attachments: SOLR-13016.patch, SOLR-13016.patch > > > When rules have computed values such as "#EQUAL" or "#ALL" , it takes too > long to compute. The problem is these values are computed too many times and > as the no:of nodes increase it almost takes forever. These values don't > change and they can be cached -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13016) Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long
[ https://issues.apache.org/jira/browse/SOLR-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-13016: -- Attachment: (was: SOLR-13016.patch) > Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long > - > > Key: SOLR-13016 > URL: https://issues.apache.org/jira/browse/SOLR-13016 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Attachments: SOLR-13016.patch, SOLR-13016.patch > > > When rules have computed values such as "#EQUAL" or "#ALL" , it takes too > long to compute. The problem is these values are computed too many times and > as the no:of nodes increase it almost takes forever. These values don't > change and they can be cached
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705574#comment-16705574 ] Mark Miller commented on SOLR-13030: I see some permissions issues with Java 11 I have to deal with. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shutdown a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}) but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
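The issue description boils down to a Closeable that owns a scheduled executor: if nothing calls `close()`, the pool's threads leak. A Python analogue of that close-shuts-down-the-pool contract (the class name mirrors the Java handler; the methods here are illustrative, not Solr's API):

```python
from concurrent.futures import ThreadPoolExecutor

class MetricsHistoryHandlerSketch:
    """A Closeable-style handler that owns a worker pool.

    If nothing calls close(), the pool's non-daemon thread outlives the
    owner -- the leak described in the issue.
    """

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=1,
                                        thread_name_prefix="MetricsHistoryHandler")

    def collect(self, fn):
        return self._pool.submit(fn)

    def close(self):
        # The call the bug report says goes missing on shutdown.
        self._pool.shutdown(wait=True)
```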
[jira] [Updated] (SOLR-13016) Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long
[ https://issues.apache.org/jira/browse/SOLR-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-13016: -- Attachment: SOLR-13016.patch > Computing suggestions when policy have "#EQUAL" or "#ALL" rules take too long > - > > Key: SOLR-13016 > URL: https://issues.apache.org/jira/browse/SOLR-13016 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Attachments: SOLR-13016.patch, SOLR-13016.patch > > > When rules have computed values such as "#EQUAL" or "#ALL" , it takes too > long to compute. The problem is these values are computed too many times and > as the no:of nodes increase it almost takes forever. These values don't > change and they can be cached
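The fix described above — the "#EQUAL"/"#ALL" values don't change during a suggestion computation, so compute them once and cache — is plain memoization. A minimal sketch (names hypothetical, not Solr's policy engine):

```python
from functools import lru_cache

def make_cached_computer(compute):
    """Wrap a computation whose result is stable for a given rule value,
    so repeated suggestion passes reuse the first result per rule."""
    @lru_cache(maxsize=None)
    def cached(rule):
        return compute(rule)
    return cached
```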
[jira] [Resolved] (LUCENE-8518) TestIndexWriter#testSoftUpdateDocuments fails in Elastic CI
[ https://issues.apache.org/jira/browse/LUCENE-8518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nhat Nguyen resolved LUCENE-8518. - Resolution: Fixed Lucene Fields: (was: New,Patch Available) Fixed in https://github.com/apache/lucene-solr/commit/f7fa25069e16caeca1a8bed184dab7ed0c35545f > TestIndexWriter#testSoftUpdateDocuments fails in Elastic CI > --- > > Key: LUCENE-8518 > URL: https://issues.apache.org/jira/browse/LUCENE-8518 > Project: Lucene - Core > Issue Type: Test >Affects Versions: 7.5.1, master (8.0) >Reporter: Nhat Nguyen >Priority: Minor > > {noformat} > [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter > -Dtests.method=testSoftUpdateDocuments -Dtests.seed=AB4BD5EBBD599E71 > -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PE > -Dtests.timezone=Asia/Baghdad -Dtests.asserts=true -Dtests.file.encoding=UTF8 > [junit4] FAILURE 0.02s J0 | TestIndexWriter.testSoftUpdateDocuments <<< > [junit4] > Throwable #1: java.lang.AssertionError: expected:<0> but was:<2> > [junit4] > at > __randomizedtesting.SeedInfo.seed([AB4BD5EBBD599E71:6E82C3196539C4C6]:0) > [junit4] > at > org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3171) > [junit4] > at java.lang.Thread.run(Thread.java:748) > [junit4] 2> NOTE: leaving temporary files on disk at: > /var/lib/jenkins/workspace/apache+lucene-solr+branch_7x/lucene/build/core/test/J0/temp/lucene.index.TestIndexWriter_AB4BD5EBBD599E71-001 > [junit4] 2> NOTE: test params are: codec=Lucene70, > sim=RandomSimilarity(queryNorm=true): {=IB LL-D2, c=DFR I(ne)L2, string=DFR > I(F)L2, body=DFR I(n)BZ(0.3), content=IB SPL-D3(800.0), str=DFR > I(ne)L3(800.0), tvtest=DFR I(F)B2, field=IB LL-LZ(0.3), content4=DFR > I(ne)BZ(0.3), str2=DFR I(n)L2, content1=IB LL-L1, binary=DFI(ChiSquared), > id=DFR I(n)L1, myfield=DFR GBZ(0.3)}, locale=es-PE, timezone=Asia/Baghdad > [junit4] 2> NOTE: Linux 3.10.0-693.el7.x86_64 amd64/Oracle Corporation > 1.8.0_181 
(64-bit)/cpus=16,threads=1,free=221813776,total=323485696 > [junit4] 2> NOTE: All tests run in this JVM: [TestDoubleRange, > TestSortedSetSortField, TestSortedSetDocValues, TestCloseableThreadLocal, > TestFloatRange, TestIndexWriterFromReader, TestSingleInstanceLockFactory, > TestSparseFixedBitDocIdSet, TestSpanExplanations, > TestUsageTrackingFilterCachingPolicy, TestByteBuffersDirectory, > TestGraphTokenizers, TestNorms, TestSimpleSearchEquivalence, > TestIntArrayDocIdSet, TestWordlistLoader, TestNumericUtils, TestInfoStream, > TestSegmentMerger, TestCheckIndex, TestTopDocsMerge, TestRAMDirectory, > TestArrayUtil, TestStressIndexing2, TestAutomaton, TestIndexWriterUnicode, > TestLucene70NormsFormat, TestAssertions, TestMultiPhraseEnum, > TestPackedTokenAttributeImpl, TestAllFilesHaveChecksumFooter, > TestFastCompressionMode, TestBM25Similarity, > TestControlledRealTimeReopenThread, TestLatLonPointDistanceSort, > TermInSetQueryTest, TestIndexWriterDeleteByQuery, TestReadOnlyIndex, > TestVirtualMethod, TestQueryBuilder, TestDirectPacked, TestMultiMMap, > TestFilterMergePolicy, TestScoreCachingWrappingScorer, > TestGrowableByteArrayDataOutput, TestSearcherManager, TestDocsAndPositions, > TestLogMergePolicy, Test2BNumericDocValues, TestNormsFieldExistsQuery, > TestIndexWriter] > [junit4] Completed [145/486 (1!)] on J0 in 4.27s, 89 tests, 1 failure, 1 > skipped <<< FAILURES! > [junit4] > {noformat} > I think this test failed because we calculated "numDeletedDocs" from maxDocs > and numDocs from two different snapshots.
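The closing diagnosis — `numDeletedDocs` was calculated from `maxDocs` and `numDocs` taken from two different snapshots — is a classic torn-read bug. A toy Python sketch of the fix (read both counters from one point-in-time snapshot; a lock stands in for Lucene's reader snapshot, and the class is illustrative):

```python
import threading

class IndexCountersSketch:
    """Toy maxDoc/numDocs counters guarded by one lock."""

    def __init__(self, max_doc, num_docs):
        self._lock = threading.Lock()
        self._max_doc = max_doc
        self._num_docs = num_docs

    def add_docs(self, n):
        with self._lock:
            self._max_doc += n
            self._num_docs += n

    def snapshot(self):
        # Read both counters atomically: a deleted-docs count derived
        # from this pair can never mix two points in time.
        with self._lock:
            return self._max_doc, self._num_docs

def num_deleted_docs(counters):
    max_doc, num_docs = counters.snapshot()
    return max_doc - num_docs
```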
[jira] [Updated] (SOLR-12839) add a 'prelim_sort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-12839: Summary: add a 'prelim_sort' option to JSON faceting (was: add a 'resort' option to JSON faceting) > add a 'prelim_sort' option to JSON faceting > --- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Fix For: master (8.0), 7.7 > > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > along side the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "Re-Sort" the top N=limit (or maybe > N=limit+overrequest ?) using a more expensive function like skg()
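The proposed `prelim_sort` is a two-phase ranking: sort all buckets by a cheap key, keep the top `limit + overrequest`, then re-sort only those survivors by the expensive key (e.g. `skg()`). A sketch under those assumptions (plain dicts stand in for facet buckets):

```python
def facet_top_buckets(buckets, cheap_key, expensive_key, limit, overrequest=0):
    """Phase 1: rank all buckets by the cheap key, keep limit+overrequest.
    Phase 2: re-rank only those survivors by the expensive key."""
    survivors = sorted(buckets, key=cheap_key, reverse=True)[:limit + overrequest]
    return sorted(survivors, key=expensive_key, reverse=True)[:limit]
```

Only the survivors of phase 1 ever pay the expensive-key cost, which is the whole point of the option.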
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705517#comment-16705517 ] Hoss Man commented on SOLR-12932: - bq. Skimming the logs from jenkins.thetaphi.de #23277, the suite failures mostly seem to fall into 2 types... I completely overlooked this initially (because i was focused on the "fails") but in that jenkins.thetaphi.de #23277 build (which used java10) it seems that every test that didn't have a suite level failure was just skipping all of the test methods. From the #23277 logs... {noformat} ... [junit4] Completed [2/837 (1!)] on J2 in 16.95s, 9 tests, 2 errors <<< FAILURES! [junit4] [junit4] Suite: org.apache.solr.DistributedIntervalFacetingTest [junit4] Completed [3/837 (1!)] on J2 in 0.00s, 1 test, 1 skipped [junit4] [junit4] Suite: org.apache.solr.rest.schema.TestDynamicFieldCollectionResource [junit4] Completed [4/837 (1!)] on J2 in 0.00s, 5 tests, 5 skipped [junit4] [junit4] Suite: org.apache.solr.index.UninvertDocValuesMergePolicyTest [junit4] Completed [5/837 (1!)] on J2 in 0.00s, 2 tests, 2 skipped [junit4] [junit4] Suite: org.apache.solr.core.TestCodecSupport [junit4] Completed [6/837 (1!)] on J2 in 0.00s, 8 tests, 8 skipped [junit4] [junit4] Suite: org.apache.solr.update.PeerSyncWithLeaderAndIndexFingerprintCachingTest [junit4] Completed [7/837 (1!)] on J2 in 0.00s, 1 test, 1 skipped [junit4] [junit4] Suite: org.apache.solr.request.RegexBytesRefFilterTest [junit4] Completed [8/837 (1!)] on J2 in 0.00s, 1 test, 1 skipped [junit4] [junit4] Suite: org.apache.solr.legacy.TestNumericRangeQuery32 [junit4] Completed [9/837 (1!)] on J2 in 0.00s, 18 tests, 18 skipped ... {noformat} ...that last {{TestNumericRangeQuery32}} subclasses LuceneTestCase directly, and doesn't use any solr code or test-framework code ... so it shouldn't really be possible it's inheriting any assumption/annotation that would cause it to be skipped ...
so WTF i see similar behavior myself when i run with java11 locally (fewer suite level failures, but what doesn't fail says it's being skipped) but can't wrap my head around what might be causing this... {noformat} hossman@tray:~/lucene/dev/solr/core [master] $ ant test -Dtests.seed=A28FAA62F9DF8AD7 ... BUILD FAILED /home/hossman/lucene/dev/solr/common-build.xml:558: The following error occurred while executing this line: /home/hossman/lucene/dev/lucene/common-build.xml:1567: The following error occurred while executing this line: /home/hossman/lucene/dev/lucene/common-build.xml:1092: There were test failures: 837 suites (7 ignored), 4121 tests, 7 suite-level errors, 3954 ignored (128 assumptions) [seed: A28FAA62F9DF8AD7] Total time: 1 minute 36 seconds {noformat} ... i can't reproduce this "java11 skips almost every test" wonkiness if i rollback to 81c092d8262a68dfda3994e790f2e1f3fdf275e2 (last master commit before 75b1831967982) > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it.
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705489#comment-16705489 ] ASF subversion and git services commented on SOLR-13030: Commit a01d0d9ef9cb3d4bbb5a725f9e2aa8dfd9064f95 in lucene-solr's branch refs/heads/master from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a01d0d9 ] SOLR-13030: Close MetricsHistoryHandler inline. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shutdown a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}) but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
[jira] [Comment Edited] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705503#comment-16705503 ] Mark Miller edited comment on SOLR-13030 at 12/1/18 12:17 AM: -- I'm not sure if it's as simple as that, but for now I'll just close it inline and see what happens since I've never seen this leak in my local runs. was (Author: markrmil...@gmail.com): I'm not sure if it's as simple as that, but for now I'll just close it inline and what happens since I've never seen this leak in my local runs. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 962 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/962/ Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC 91 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField Error Message: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationField: 1) Thread[id=19, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationField] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9/java.lang.Thread.run(Thread.java:844)2) Thread[id=18, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationField] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationField: 1) Thread[id=19, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationField] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9/java.lang.Thread.run(Thread.java:844) 2) Thread[id=18, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationField] at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at 
java.base@9/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9/java.lang.Thread.run(Thread.java:844) at __randomizedtesting.SeedInfo.seed([F7E1F766900F70FA]:0) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField Error Message: There are still zombie threads that couldn't be terminated:1) Thread[id=19,
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705503#comment-16705503 ] Mark Miller commented on SOLR-13030: I'm not sure if it's as simple as that, but for now I'll just close it inline and see what happens since I've never seen this leak in my local runs. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705490#comment-16705490 ] ASF subversion and git services commented on SOLR-13030: Commit 2f6bdb5940ba34386b60ab5868df98eb8cf94b2a in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2f6bdb5 ] SOLR-13030: Close MetricsHistoryHandler inline. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3162 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3162/ Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseG1GC 121 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationFieldDocValues Error Message: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationFieldDocValues: 1) Thread[id=21, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldDocValues] at java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) at java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11/java.lang.Thread.run(Thread.java:834)2) Thread[id=20, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldDocValues] at java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11/java.lang.Thread.run(Thread.java:834) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationFieldDocValues: 1) Thread[id=21, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldDocValues] at java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) at java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11/java.lang.Thread.run(Thread.java:834) 2) Thread[id=20, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldDocValues] at java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at java.base@11/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2123) at 
java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1182) at java.base@11/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:899) at java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054) at java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114) at java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base@11/java.lang.Thread.run(Thread.java:834) at __randomizedtesting.SeedInfo.seed([F3B9A5A514DE7E79]:0) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationFieldDocValues Error
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705407#comment-16705407 ] Hoss Man commented on SOLR-12932: - bq. Thread Leaks ... I'm pretty sure there is a bug in the lifecycle of MetricsHistoryHandler ... created SOLR-13030 > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it.
[jira] [Assigned] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
[ https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man reassigned SOLR-13030: --- Assignee: Mark Miller (was: Andrzej Bialecki ) Description: {{MetricsHistoryHandler}} implements {{Closeable}} and depends on it's {{close()}} method to shutdown a {{private ScheduledThreadPoolExecutor collectService}} as well as a {{private final SolrRrdBackendFactory factory}} (which maintains it's own {{private ScheduledThreadPoolExecutor}} but -as far as i can tell nothing seems to every "close" this {{MetricsHistoryHandler}} on shutdown.- after changes in 75b1831967982 which move this close() call to a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking threads. (was: {{MetricsHistoryHandler}} implements {{Closeable}} and depends on it's {{close()}} method to shutdown a {{private ScheduledThreadPoolExecutor collectService}} as well as a {{private final SolrRrdBackendFactory factory}} (which maintains it's own {{private ScheduledThreadPoolExecutor}} but as far as i can tell nothing seems to every "close" this {{MetricsHistoryHandler}} on shutdown.) Digging in, i just realized the code for closing the MetricsHistoryHandler was changed heavily by [~markrmil...@gmail.com]'s 75b1831967982 commit in SOLR-12932, and now happens as part of a {{customThreadPool.submit()}} in CoreContainer.shutdown -- which is why i missed seeing it the first time i looked. > MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? > > > Key: SOLR-13030 > URL: https://issues.apache.org/jira/browse/SOLR-13030 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) >Reporter: Hoss Man >Assignee: Mark Miller >Priority: Major > > {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its > {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor > collectService}} as well as a {{private final SolrRrdBackendFactory factory}} > (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far > as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} > on shutdown.- after changes in 75b1831967982 which move this close() call to > a {{ForkJoinPool}} in {{CoreContainer.shutdown()}} it seems to be leaking > threads.
[jira] [Created] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
Hoss Man created SOLR-13030: --- Summary: MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks? Key: SOLR-13030 URL: https://issues.apache.org/jira/browse/SOLR-13030 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Hoss Man Assignee: Andrzej Bialecki {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor collectService}} as well as a {{private final SolrRrdBackendFactory factory}} (which maintains its own {{private ScheduledThreadPoolExecutor}}), but as far as i can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} on shutdown.
[JENKINS] Lucene-Solr-http2-Solaris (64bit/jdk1.8.0) - Build # 3 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Solaris/3/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1026 tests failed. FAILED: org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.initializationError Error Message: org.apache.solr.SolrTestCaseJ4 Stack Trace: java.lang.NoClassDefFoundError: org.apache.solr.SolrTestCaseJ4 at java.lang.Class.getDeclaredMethods0(Native Method) at java.lang.Class.privateGetDeclaredMethods(Class.java:2701) at java.lang.Class.getDeclaredMethods(Class.java:1975) at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:215) at com.carrotsearch.randomizedtesting.ClassModel$3.members(ClassModel.java:212) at com.carrotsearch.randomizedtesting.ClassModel$ModelBuilder.build(ClassModel.java:85) at com.carrotsearch.randomizedtesting.ClassModel.methodsModel(ClassModel.java:224) at com.carrotsearch.randomizedtesting.ClassModel.(ClassModel.java:207) at com.carrotsearch.randomizedtesting.RandomizedRunner.(RandomizedRunner.java:323) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.junit.internal.builders.AnnotatedBuilder.buildRunner(AnnotatedBuilder.java:31) at org.junit.internal.builders.AnnotatedBuilder.runnerForClass(AnnotatedBuilder.java:24) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.execute(SlaveMain.java:258) at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain.main(SlaveMain.java:394) at com.carrotsearch.ant.tasks.junit4.slave.SlaveMainSafe.main(SlaveMainSafe.java:13) FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationFieldDocValues Error Message: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path Stack Trace: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:297) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:289) at java.security.AccessController.doPrivileged(Native Method) at org.conscrypt.NativeLibraryLoader.loadLibraryFromHelperClassloader(NativeLibraryLoader.java:289) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:262) at org.conscrypt.NativeLibraryLoader.load(NativeLibraryLoader.java:162) at org.conscrypt.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:106) at org.conscrypt.NativeCryptoJni.init(NativeCryptoJni.java:50) at org.conscrypt.NativeCrypto.(NativeCrypto.java:65) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:60) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:53) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:49) at org.apache.solr.SolrTestCaseJ4.(SolrTestCaseJ4.java:201) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:621) Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-sunos-x86_64 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at
[JENKINS-EA] Lucene-Solr-7.6-Linux (64bit/jdk-12-ea+12) - Build # 46 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/46/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC 35 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([9B850DC5C8A5A828]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:53) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835) FAILED: junit.framework.TestSuite.org.apache.solr.servlet.HttpSolrCallGetCoreTest Error Message: Could not find collection:collection1 Stack Trace: java.lang.AssertionError: Could not find collection:collection1 at __randomizedtesting.SeedInfo.seed([9B850DC5C8A5A828]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155) at org.apache.solr.servlet.HttpSolrCallGetCoreTest.setupCluster(HttpSolrCallGetCoreTest.java:53) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705390#comment-16705390 ] Hoss Man commented on SOLR-12839: - {quote}my main objection was going to be the name "approximate" which highly suggests that an estimate is fine. "prelim_sort" seems fine. {quote} +1 ... i hadn't considered that. I'm glad i disliked it for other reasons > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Fix For: master (8.0), 7.7 > > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > alongside the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "Re-Sort" the top N=limit (or maybe > N=limit+overrequest ?) using a more expensive function like skg()
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705386#comment-16705386 ] ASF subversion and git services commented on SOLR-12839: Commit 5dc988f5eeff78464d852f54ce7f06a801dcbfee in lucene-solr's branch refs/heads/master from Chris Hostetter [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5dc988f ] SOLR-12839: JSON 'terms' Faceting now supports a 'prelim_sort' option to use when initially selecting the top ranking buckets, prior to the final 'sort' option used after refinement. > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > alongside the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "Re-Sort" the top N=limit (or maybe > N=limit+overrequest ?) using a more expensive function like skg()
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705385#comment-16705385 ] ASF subversion and git services commented on SOLR-12839: Commit 5763d8b194009cdf602e983d56115be9480f69b8 in lucene-solr's branch refs/heads/branch_7x from Chris Hostetter [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5763d8b ] SOLR-12839: JSON 'terms' Faceting now supports a 'prelim_sort' option to use when initially selecting the top ranking buckets, prior to the final 'sort' option used after refinement. (cherry picked from commit 5dc988f5eeff78464d852f54ce7f06a801dcbfee) Conflicts: solr/CHANGES.txt > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > alongside the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "Re-Sort" the top N=limit (or maybe > N=limit+overrequest ?) using a more expensive function like skg()
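Based on the commit message, a JSON Facet request using the new option might look roughly like the sketch below. The field name, limit, and the {{$fore}}/{{$back}} relatedness parameters are illustrative assumptions, not taken from the source:

```json
{
  "top_cats": {
    "type": "terms",
    "field": "cat_s",
    "limit": 5,
    "prelim_sort": "count desc",
    "sort": "skg desc",
    "facet": {
      "skg": "relatedness($fore, $back)"
    }
  }
}
```

The cheap {{prelim_sort}} (here, a simple count ordering) selects which buckets are initially collected and refined; the more expensive {{sort}} (here, a relatedness statistic) is applied only afterward — mirroring how {{rerank}} re-scores just the top N documents, as described in the quoted SOLR-9480 discussion.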
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705359#comment-16705359 ] Mark Miller commented on SOLR-12932: Or maybe it is JVM version related - I'll look at that angle as well. > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12260) edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false
[ https://issues.apache.org/jira/browse/SOLR-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705285#comment-16705285 ] Elizabeth Haubert edited comment on SOLR-12260 at 11/30/18 9:13 PM: Reduced the scope to the case where phrases SOW=false. Consider the case where there is a multi-synonym with query-time expansion: {code:java} anaphylaxis, allergic reaction{code} And a query "cat anaphylaxis aspirin": http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=cat%20anapyhlaxis%20aspirin Then this will generate 3 single-term clauses, * ((text:cat) * ((text:"allergic reaction" text:anaphylaxis)) * (text:aspirin)) and 2 pf2 clauses: * ((spanNear([text:cat, spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis])], 0, true)) * (spanNear([spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis]), text:aspirin], 0, true))) If we search for the multi-term synonym "cat allergic reaction aspirin" http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=text=cat%20allergic%20reaction%20aspirin the base query generated is the same: * ((text:cat) * ((text:anaphylaxis text:"allergic reaction")) * (text:aspirin)) But the pf2 clauses are quite different: * ((text:"cat allergic") * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:"reaction aspirin")) Aside from having two very different phrase boosts for the same base query, the consequences of this in the second case are fairly ugly for relevance as compared to the first: 1. The single term "anaphylaxis" will be boosted at the pf2 value in addition to the q value 2. 
Phrases containing "cat anaphylaxis" or "anaphylaxis aspirin" will not be considered for pf2 If the user puts the multi-term synonym "allergic reaction" in quotes, the results get even worse, because the term is removed from the list of clauses considered as input for pf/pf2/pf3: The base query stays the same: * ((text:cat) * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:aspirin)) But now "allergic reaction" and "anaphylaxis" are removed entirely from the phrase clauses: * (text:"cat aspirin") What is the history around removing phrase clauses from consideration as input to pf/pf2/pf3? was (Author: ehaubert): Reduced the scope to the case where phrases SOW=false. Consider the case where there is a multi-synonym with query-time expansion: {code:java} anaphylaxis, allergic reaction{code} And a query "cat anaphylaxis aspirin": http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=cat%20anapyhlaxis%20aspirin Then this will generate 3 single-term clauses, * ((text:cat) * ((text:"allergic reaction" text:anaphylaxis)) * (text:aspirin)) and 2 pf2 clauses: * ((spanNear([text:cat, spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis])], 0, true)) * (spanNear([spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis]), text:aspirin], 0, true))) If we search for the multi-term synonym "cat allergic reaction aspirin" http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=text=cat%20allergic%20reaction%20aspirin the base query generated is the same: * ((text:cat) * ((text:anaphylaxis text:"allergic reaction")) * (text:aspirin)) But the pf2 clauses are quite different: * ((text:"cat allergic") * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:"reaction aspirin")) Aside from having two very different phrase boosts for the same base query, the consequences of this in the second case are fairly ugly for relevance as compared to 
the first: 1. The single term "anaphylaxis" will be boosted at the pf2 value in addition to the q value 2. Phrases containing "cat anaphylaxis" or "anaphylaxis aspirin" will not be considered for pf2 If the user puts multi-term synonym "allergic reaction" is put in quotes, the results get even worse, because the term is removed from the list of clauses considered as input for pf/pf2/pf3: The base query stays the same: * ((text:cat) * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:dog)) But now "allergic reaction" and "anaphylaxis" are removed entirely from the phrase clauses: * (text:"cat dog") What is the history around removing phrase clauses from consideration as input to pf/pf2/pf3? > edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false > -- > > Key: SOLR-12260 > URL: https://issues.apache.org/jira/browse/SOLR-12260 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >
[jira] [Commented] (SOLR-12260) edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false
[ https://issues.apache.org/jira/browse/SOLR-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705285#comment-16705285 ] Elizabeth Haubert commented on SOLR-12260: -- Reduced the scope to the case where phrases SOW=false. Consider the case where there is a multi-synonym with query-time expansion: {code:java} anaphylaxis, allergic reaction{code} And a query "cat anaphylaxis aspirin": http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=cat%20anapyhlaxis%20aspirin Then this will generate 3 single-term clauses, * ((text:cat) * ((text:"allergic reaction" text:anaphylaxis)) * (text:aspirin)) and 2 pf2 clauses: * ((spanNear([text:cat, spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis])], 0, true)) * (spanNear([spanOr([spanNear([text:allergic, text:reaction], 0, true), text:anaphylaxis]), text:aspirin], 0, true))) If we search for the multi-term synonym "cat allergic reaction aspirin" http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text=text=cat%20allergic%20reaction%20aspirin the base query generated is the same: * ((text:cat) * ((text:anaphylaxis text:"allergic reaction")) * (text:aspirin)) But the pf2 clauses are quite different: * ((text:"cat allergic") * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:"reaction aspirin")) Aside from having two very different phrase boosts for the same base query, the consequences of this in the second case are fairly ugly for relevance as compared to the first: 1. The single term "anaphylaxis" will be boosted at the pf2 value in addition to the q value 2. 
Phrases containing "cat anaphylaxis" or "anaphylaxis aspirin" will not be considered for pf2 If the user puts the multi-term synonym "allergic reaction" in quotes, the results get even worse, because the term is removed from the list of clauses considered as input for pf/pf2/pf3: The base query stays the same: * ((text:cat) * (spanOr([text:anaphylaxis, spanNear([text:allergic, text:reaction], 0, true)])) * (text:dog)) But now "allergic reaction" and "anaphylaxis" are removed entirely from the phrase clauses: * (text:"cat dog") What is the history around removing phrase clauses from consideration as input to pf/pf2/pf3? > edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false > -- > > Key: SOLR-12260 > URL: https://issues.apache.org/jira/browse/SOLR-12260 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Reporter: Elizabeth Haubert >Priority: Major > > Phrase queries are currently built only on bareword clauses, which causes > unexpected behavior for queries with mixed quoted and bareword terms: > q:cat "allergic reaction" dog > will flag "allergic reaction" as a phrase, and so will include it in none of > pf/pf2/pf3 > pf or pf2 will be generated as "cat dog". > At a minimum, it would be nice if phrases would be applied as stand-alone > entities to pf2/pf3, if they contain the appropriate number of terms. But I > think the work that has been done to accommodate graph queries should also be > able to handle these phrase terms following the pattern of: > spanNear[text:cat, spanNear(text:allergic, text:reaction, 0, true), text:dog] > > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
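The asymmetry described in the comment follows from pf2 shingles being built over adjacent surface tokens, with no awareness of a multi-token synonym spanning a pair boundary. A toy sketch of naive bigram shingling (plain Java, not the actual edismax implementation) shows how "allergic reaction" lands split across two boosted pairs:

```java
import java.util.ArrayList;
import java.util.List;

public class Pf2Sketch {
    // Naive pf2-style shingling: pair up adjacent surface tokens,
    // with no awareness of multi-term synonyms spanning a boundary.
    static List<String> bigrams(List<String> tokens) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + 1 < tokens.size(); i++) {
            out.add(tokens.get(i) + " " + tokens.get(i + 1));
        }
        return out;
    }

    public static void main(String[] args) {
        // "cat allergic reaction aspirin": the synonym "allergic reaction"
        // straddles two bigrams instead of acting as one unit.
        System.out.println(bigrams(List.of("cat", "allergic", "reaction", "aspirin")));
        // [cat allergic, allergic reaction, reaction aspirin]
    }
}
```

The middle pair happens to equal the synonym, but the outer pairs ("cat allergic", "reaction aspirin") are the fragments the comment calls out; a graph-aware approach would instead pair whole synonym units ("cat anaphylaxis", "anaphylaxis aspirin").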
[JENKINS] Lucene-Solr-http2-Linux (32bit/jdk1.8.0_172) - Build # 17 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/17/ Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC 1026 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery Error Message: no conscrypt_openjdk_jni-linux-x86 in java.library.path Stack Trace: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-linux-x86 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:297) at org.conscrypt.NativeLibraryLoader$1.run(NativeLibraryLoader.java:289) at java.security.AccessController.doPrivileged(Native Method) at org.conscrypt.NativeLibraryLoader.loadLibraryFromHelperClassloader(NativeLibraryLoader.java:289) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:262) at org.conscrypt.NativeLibraryLoader.load(NativeLibraryLoader.java:162) at org.conscrypt.NativeLibraryLoader.loadFirstAvailable(NativeLibraryLoader.java:106) at org.conscrypt.NativeCryptoJni.init(NativeCryptoJni.java:50) at org.conscrypt.NativeCrypto.(NativeCrypto.java:65) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:60) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:53) at org.conscrypt.OpenSSLProvider.(OpenSSLProvider.java:49) at org.apache.solr.SolrTestCaseJ4.(SolrTestCaseJ4.java:201) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:348) at com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:621) 
Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-linux-x86 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 11 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni in java.library.path ... 24 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 11 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt in java.library.path ... 24 more Suppressed: java.lang.UnsatisfiedLinkError: no conscrypt in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at org.conscrypt.NativeLibraryLoader.loadLibraryFromCurrentClassloader(NativeLibraryLoader.java:318) at org.conscrypt.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:273) ... 
11 more FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationField Error Message: no conscrypt_openjdk_jni-linux-x86 in java.library.path Stack Trace: java.lang.UnsatisfiedLinkError: no conscrypt_openjdk_jni-linux-x86 in java.library.path at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1867) at java.lang.Runtime.loadLibrary0(Runtime.java:870) at java.lang.System.loadLibrary(System.java:1122) at org.conscrypt.NativeLibraryUtil.loadLibrary(NativeLibraryUtil.java:54) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
[jira] [Updated] (SOLR-12260) edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false
[ https://issues.apache.org/jira/browse/SOLR-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elizabeth Haubert updated SOLR-12260: - Summary: edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false (was: edismax: Include phrase clauses as terms in pf/pf2/pf3 ) > edismax: Include phrase clauses as terms in pf/pf2/pf3 when SOW=false > -- > > Key: SOLR-12260 > URL: https://issues.apache.org/jira/browse/SOLR-12260 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Reporter: Elizabeth Haubert >Priority: Major > > Phrase queries are currently built only on bareword clauses, which causes > unexpected behavior for queries with mixed quoted and bareword terms: > q:cat "allergic reaction" dog > will flag "allergic reaction" as a phrase, and so will include it in none of > pf/pf2/pf3 > pf or pf2 will be generated as "cat dog". > At a minimum, it would be nice if phrases would be applied as stand-alone > entities to pf2/pf3, if they contain the appropriate number of terms. But I > think the work that has been done to accommodate graph queries should also be > able to handle these phrase terms following the pattern of: > spanNear[text:cat, spanNear(text:allergic, text:reaction, 0, true), text:dog] > > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9961) RestoreCore needs the option to download files in parallel.
[ https://issues.apache.org/jira/browse/SOLR-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705257#comment-16705257 ] Tim Owen commented on SOLR-9961: We considered using this patch locally, but found that our real problem was slow HDFS restores caused by an undersized copy buffer. See SOLR-13029 for our change to alleviate that. Since we had lots of collections to restore, we restored those in parallel instead of parallelising the per-file downloads. But the buffer patch made each file restore about 10x faster, with a 256kB buffer instead of 4kB. > RestoreCore needs the option to download files in parallel. > --- > > Key: SOLR-9961 > URL: https://issues.apache.org/jira/browse/SOLR-9961 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore >Affects Versions: 6.2.1 >Reporter: Timothy Potter >Priority: Major > Attachments: SOLR-9961.patch, SOLR-9961.patch > > > My backup to cloud storage (Google cloud storage in this case, but I think > this is a general problem) takes 8 minutes ... the restore of the same core > takes hours. The restore loop in RestoreCore is serial and doesn't allow me > to parallelize the expensive part of this operation (the IO from the remote > cloud storage service). We need the option to parallelize the download (like > distcp). > Also, I tried downloading the same directory using gsutil and it was very > fast, like 2 minutes. So I know it's not the pipe that's limiting perf here. > Here's a very rough patch that does the parallelization. We may also want to > consider a two-step approach: 1) download in parallel to a temp dir, 2) > perform all of the checksum validation against the local temp dir. That > will save round trips to the remote cloud storage. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
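Either route — parallelising per-file downloads as this issue proposes, or restoring whole collections concurrently as the comment describes — boils down to fanning the copy work out over a thread pool. A minimal, self-contained sketch (the downloadFile task is a hypothetical stand-in for the real per-file copy, not RestoreCore's API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRestoreSketch {
    // Hypothetical per-file task; in a real restore this would copy one
    // index file from the remote backup repository to the local directory.
    static String downloadFile(String name) {
        return name + ":done";
    }

    // Submit every file to a fixed pool, then wait on the futures in order
    // so any download failure surfaces before the restore is declared done.
    static List<String> restoreAll(List<String> files, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String f : files) {
                futures.add(pool.submit(() -> downloadFile(f)));
            }
            List<String> results = new ArrayList<>();
            for (Future<String> fut : futures) {
                try {
                    results.add(fut.get()); // propagate any task failure
                } catch (InterruptedException | ExecutionException e) {
                    throw new RuntimeException(e);
                }
            }
            return results;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(restoreAll(List.of("_0.cfs", "_0.si", "segments_1"), 3));
    }
}
```

The two-step variant suggested in the issue (download everything to a temp dir, then validate checksums locally) fits the same shape: the tasks write to the temp dir, and validation runs after all futures complete.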
[jira] [Updated] (SOLR-13029) Allow HDFS buffer size to be configured
[ https://issues.apache.org/jira/browse/SOLR-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tim Owen updated SOLR-13029: Attachment: SOLR-13029.patch > Allow HDFS buffer size to be configured > --- > > Key: SOLR-13029 > URL: https://issues.apache.org/jira/browse/SOLR-13029 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Backup/Restore, hdfs >Affects Versions: 7.5, master (8.0) >Reporter: Tim Owen >Priority: Major > Attachments: SOLR-13029.patch > > > There's a default hardcoded buffer size setting of 4096 in the HDFS code > which means in particular that restoring a backup from HDFS takes a long > time. Copying multi-GB files from HDFS using a buffer as small as 4096 bytes > is very inefficient. We changed this in our local build used in production to > 256kB and saw a 10x speed improvement when restoring a backup. Attached patch > simply makes this size configurable using a command line setting, much like > several other buffer size values. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13029) Allow HDFS buffer size to be configured
Tim Owen created SOLR-13029: --- Summary: Allow HDFS buffer size to be configured Key: SOLR-13029 URL: https://issues.apache.org/jira/browse/SOLR-13029 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: Backup/Restore, hdfs Affects Versions: 7.5, master (8.0) Reporter: Tim Owen There's a default hardcoded buffer size setting of 4096 in the HDFS code which means in particular that restoring a backup from HDFS takes a long time. Copying multi-GB files from HDFS using a buffer as small as 4096 bytes is very inefficient. We changed this in our local build used in production to 256kB and saw a 10x speed improvement when restoring a backup. Attached patch simply makes this size configurable using a command line setting, much like several other buffer size values. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
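The patch's approach — making the copy buffer size configurable rather than hardcoding 4096 — can be sketched as below. The property name `copy.buffer.size` is illustrative only, not the actual setting the patch introduces; the point is that the per-read chunk size, not the pipe, dominates multi-GB copies:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

public class BufferedCopySketch {
    // Copy loop with a caller-chosen buffer size; a 4096-byte buffer means
    // ~256x more read() round trips than a 256kB buffer for the same file.
    static long copy(InputStream in, OutputStream out, int bufferSize) {
        byte[] buffer = new byte[bufferSize];
        long total = 0;
        int n;
        try {
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
                total += n;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical system property; defaults to 256kB here.
        int bufferSize = Integer.getInteger("copy.buffer.size", 262144);
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        long copied = copy(new ByteArrayInputStream(new byte[1_000_000]), sink, bufferSize);
        System.out.println(copied); // 1000000
    }
}
```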
[jira] [Comment Edited] (LUCENE-8572) StringIndexOutOfBoundsException in parser/EscapeQuerySyntaxImpl.java
[ https://issues.apache.org/jira/browse/LUCENE-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705249#comment-16705249 ] Namgyu Kim edited comment on LUCENE-8572 at 11/30/18 8:37 PM: -- Hi, [~romseygeek], [~thetaphi]. I checked the issue and found that it could be a logical problem. First, I think it's not a Locale problem, but a replace algorithm(replaceIgnoreCase) itself. When you see the escapeWhiteChar(), it calls the replaceIgnoreCase() internally. (escapeTerm() -> escapeWhiteChar() -> replaceIgnoreCase()) {code:java} private static CharSequence replaceIgnoreCase(CharSequence string, CharSequence sequence1, CharSequence escapeChar, Locale locale) { // string = "İpone " [304, 112, 111, 110, 101, 32], size = 6 ... while (start < count) { // Convert by toLowerCase as follows. // string = "i'̇pone " [105, 775, 112, 111, 110, 101, 32], size = 7 // firstIndex will be set 6. if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first, start)) == -1) break; boolean found = true; ... if (found) { // In this line, String.toString() will only have a range of 0 to 5. // So here we get a StringIndexOutOfBoundsException. result.append(string.toString().substring(copyStart, firstIndex)); ... } else { start = firstIndex + 1; } } ... } {code} Solving this may not be a big problem. But what do you think about using {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Use String's replace method. buffer = buffer.toString().replace(escapableWhiteChars[i], "\\"); } return buffer; } {code} instead of {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Stay current method. buffer = replaceIgnoreCase(buffer, escapableWhiteChars[i].toLowerCase(locale), "\\", locale); } return buffer; } {code} in the escapeWhiteChar method? 
was (Author: danmuzi): Hi, [~romseygeek], [~thetaphi]. I checked the issue and it could be a logical problem. First, I think it's not a Locale problem, but a replace algorithm(replaceIgnoreCase) itself. When you see the escapeWhiteChar(), it calls the replaceIgnoreCase() internally. (escapeTerm() -> escapeWhiteChar() -> replaceIgnoreCase()) {code:java} private static CharSequence replaceIgnoreCase(CharSequence string, CharSequence sequence1, CharSequence escapeChar, Locale locale) { // string = "İpone " [304, 112, 111, 110, 101, 32], size = 6 ... while (start < count) { // Convert by toLowerCase as follows. // string = "i'̇pone " [105, 775, 112, 111, 110, 101, 32], size = 7 // firstIndex will be set 6. if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first, start)) == -1) break; boolean found = true; ... if (found) { // In this line, String.toString() will only have a range of 0 to 5. // So here we get a StringIndexOutOfBoundsException. result.append(string.toString().substring(copyStart, firstIndex)); ... } else { start = firstIndex + 1; } } ... } {code} Solving this may not be a big problem. But what do you think about using {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Use String's replace method. buffer = buffer.toString().replace(escapableWhiteChars[i], "\\"); } return buffer; } {code} instead of {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Stay current method. buffer = replaceIgnoreCase(buffer, escapableWhiteChars[i].toLowerCase(locale), "\\", locale); } return buffer; } {code} in the escapeWhiteChar method? 
> StringIndexOutOfBoundsException in parser/EscapeQuerySyntaxImpl.java > > > Key: LUCENE-8572 > URL: https://issues.apache.org/jira/browse/LUCENE-8572 > Project: Lucene - Core > Issue Type: Bug > Components: core/queryparser >Affects Versions: 6.3 >Reporter: Octavian Mocanu >Priority: Major > > With "lucene-queryparser-6.3.0", specifically in > "org/apache/lucene/queryparser/flexible/standard/parser/EscapeQuerySyntaxImpl.java" > > when escaping strings containing extended unicode chars, and with a locale > distinct from that of the character set the string uses, the process fails, > with a
[jira] [Commented] (LUCENE-8572) StringIndexOutOfBoundsException in parser/EscapeQuerySyntaxImpl.java
[ https://issues.apache.org/jira/browse/LUCENE-8572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705249#comment-16705249 ] Namgyu Kim commented on LUCENE-8572: Hi, [~romseygeek], [~thetaphi]. I checked the issue and it could be a logical problem. First, I think it's not a Locale problem, but a replace algorithm(replaceIgnoreCase) itself. When you see the escapeWhiteChar(), it calls the replaceIgnoreCase() internally. (escapeTerm() -> escapeWhiteChar() -> replaceIgnoreCase()) {code:java} private static CharSequence replaceIgnoreCase(CharSequence string, CharSequence sequence1, CharSequence escapeChar, Locale locale) { // string = "İpone " [304, 112, 111, 110, 101, 32], size = 6 ... while (start < count) { // Convert by toLowerCase as follows. // string = "i'̇pone " [105, 775, 112, 111, 110, 101, 32], size = 7 // firstIndex will be set 6. if ((firstIndex = string.toString().toLowerCase(locale).indexOf(first, start)) == -1) break; boolean found = true; ... if (found) { // In this line, String.toString() will only have a range of 0 to 5. // So here we get a StringIndexOutOfBoundsException. result.append(string.toString().substring(copyStart, firstIndex)); ... } else { start = firstIndex + 1; } } ... } {code} Solving this may not be a big problem. But what do you think about using {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Use String's replace method. buffer = buffer.toString().replace(escapableWhiteChars[i], "\\"); } return buffer; } {code} instead of {code:java} public static final CharSequence escapeWhiteChar(CharSequence str, Locale locale) { ... for (int i = 0; i < escapableWhiteChars.length; i++) { // Stay current method. buffer = replaceIgnoreCase(buffer, escapableWhiteChars[i].toLowerCase(locale), "\\", locale); } return buffer; } {code} in the escapeWhiteChar method? 
> StringIndexOutOfBoundsException in parser/EscapeQuerySyntaxImpl.java > > > Key: LUCENE-8572 > URL: https://issues.apache.org/jira/browse/LUCENE-8572 > Project: Lucene - Core > Issue Type: Bug > Components: core/queryparser >Affects Versions: 6.3 >Reporter: Octavian Mocanu >Priority: Major > > With "lucene-queryparser-6.3.0", specifically in > "org/apache/lucene/queryparser/flexible/standard/parser/EscapeQuerySyntaxImpl.java" > > when escaping strings containing extended unicode chars, and with a locale > distinct from that of the character set the string uses, the process fails, > with a "java.lang.StringIndexOutOfBoundsException". > > The reason is that the comparison is done by previously converting all of the > characters of the string to lower case chars, and by doing this, the original > string size isn't anymore the same, but less, as of the transformed one, so > that executing > > org/apache/lucene/queryparser/flexible/standard/parser/EscapeQuerySyntaxImpl.java:89 > fails with a java.lang.StringIndexOutOfBoundsException. > I wonder whether the transformation to lower case is really needed when > treating the escape chars, since by avoiding it, the error may be avoided. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
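The length mismatch at the heart of this bug is easy to reproduce outside Lucene: lowercasing U+0130 (LATIN CAPITAL LETTER I WITH DOT ABOVE) in a non-Turkish locale yields two chars, so an index found in the lowercased copy can overrun the original string, exactly as replaceIgnoreCase does:

```java
import java.util.Locale;

public class DottedIDemo {
    public static void main(String[] args) {
        // "İpone " — the string from the comment above, length 6.
        String original = "\u0130pone ";
        // Outside a Turkish locale, U+0130 lowercases to "i" + U+0307
        // (combining dot above): the string grows by one char, so indexes
        // computed against the lowercased copy are invalid in the original.
        String lowered = original.toLowerCase(Locale.ROOT);
        System.out.println(original.length()); // 6
        System.out.println(lowered.length());  // 7
    }
}
```

This also illustrates why Namgyu's suggestion works: String.replace compares chars of the original directly, so no case-folded copy with a different length is ever indexed.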
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23280 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23280/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC 126 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.facet.ValueFacetTest Error Message: ObjectTracker found 8 object(s) that were not released!!! [SolrZkClient, ZkStateReader, SolrZkClient, SolrZkClient, ZkStateReader, ZkStateReader, SolrZkClient, ZkStateReader] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.SolrZkClient at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:203) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:126) at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116) at org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:306) at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160) at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342) at org.apache.solr.client.solrj.impl.SolrClientCloudManager.(SolrClientCloudManager.java:69) at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:708) at org.apache.solr.core.CoreContainer.createMetricsHistoryHandler(CoreContainer.java:765) at org.apache.solr.core.CoreContainer.load(CoreContainer.java:586) at org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:253) at org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:173) at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:139) at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:741) at org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1477) at org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1542) at 
org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1186) at org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1023) at org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:473) at org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:344) at org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69) at org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:507) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:446) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:414) at org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:443) at org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:272) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:835) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.cloud.ZkStateReader at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.common.cloud.ZkStateReader.(ZkStateReader.java:328) at org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160) at org.apache.solr.client.solrj.impl.CloudSolrClient.getZkStateReader(CloudSolrClient.java:342) at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.(SolrClientCloudManager.java:69) at org.apache.solr.cloud.ZkController.getSolrCloudManager(ZkController.java:708) at org.apache.solr.cloud.Overseer.getSolrCloudManager(Overseer.java:590) at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.(OverseerCollectionMessageHandler.java:238) at org.apache.solr.cloud.OverseerCollectionConfigSetProcessor.getOverseerMessageHandlerSelector(OverseerCollectionConfigSetProcessor.java:87) at org.apache.solr.cloud.OverseerCollectionConfigSetProcessor.(OverseerCollectionConfigSetProcessor.java:70) at org.apache.solr.cloud.OverseerCollectionConfigSetProcessor.(OverseerCollectionConfigSetProcessor.java:41) at org.apache.solr.cloud.Overseer.start(Overseer.java:561) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:739) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:171) at
[jira] [Commented] (SOLR-4362) edismax, phrase query with slop, pf parameter
[ https://issues.apache.org/jira/browse/SOLR-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705244#comment-16705244 ] Elizabeth Haubert commented on SOLR-4362: - This is still an issue on the 7_6 branch; the underlying problem is related to https://issues.apache.org/jira/browse/SOLR-12260?filter=-2, where the list of "normal clauses" used for generating pf/pf2/pf3 is not correct. Note that it is necessary to have a pf2 set for the problem to occur. Using the example query: {code:java} http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text="phrase query"~10 term {code} It will produce: {code:java} +((text:"phrase query"~10) (text:term)) (text:"10 term") {code} When Solr parses the original query string, it generates 3 clauses: * "phrase query" * ~10 * term In the course of [parseOriginalQuery|https://github.com/apache/lucene-solr/blob/5c4ab188eb09ad3215f461523b9873037803ed7e/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L384], "phrase query" is identified as a phrase, and ~10 is correctly identified as the slop associated with that phrase, and removed from clauses in the context of parseOriginalQuery. That change is not picked up when control flow returns back to parse(). So when addPhraseQueries goes to look for the shingles to glue together, it starts with * "phrase query" * ~10 * term Rejects "phrase query" as a candidate for shingling because it is a phrase, and is left with * ~10 * term It then generates a pf2 clause "10 term", and tacks that on to the query. If the underlying field tokenization strips off punctuation, "10 term" phrases will match accordingly. Updated the test in the unit test to use the same schema field as testPfPs(). 
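The bookkeeping problem described above can be sketched in a few lines. This is a hypothetical illustration, not the actual ExtendedDismaxQParser code: the function and clause names are invented, but the shape of the bug is the same, the slop token is pruned in a result that the caller never picks up.

```python
# Hypothetical sketch of the clause-bookkeeping bug described above.
# These are illustrative names, not real Solr code: the pruning done by
# parse_original_query() is dropped, so the shingling step still sees "~10".

def parse_original_query(clauses):
    """Fold a slop marker (~N) into the preceding phrase and return a
    pruned clause list; the caller below discards this result."""
    pruned = []
    for i, clause in enumerate(clauses):
        is_slop = clause.startswith("~") and i > 0 and clauses[i - 1].startswith('"')
        if not is_slop:
            pruned.append(clause)
    return pruned

def add_phrase_queries(clauses):
    """Shingle adjacent non-phrase clauses into pf2 pairs."""
    terms = [c.lstrip("~") for c in clauses if not c.startswith('"')]
    return ['"%s %s"' % (a, b) for a, b in zip(terms, terms[1:])]

clauses = ['"phrase query"', "~10", "term"]
parse_original_query(clauses)        # pruned list is dropped, mirroring the bug
print(add_phrase_queries(clauses))   # bogus pf2 shingle: ['"10 term"']
```

If the pruned list were actually used, `add_phrase_queries(parse_original_query(clauses))` would be left with only `term` and produce no spurious "10 term" pair.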
> edismax, phrase query with slop, pf parameter > - > > Key: SOLR-4362 > URL: https://issues.apache.org/jira/browse/SOLR-4362 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.1 >Reporter: Ahmet Arslan >Priority: Major > Labels: edismax, pf > Attachments: SOLR-4362.patch, SOLR-4362.patch > > > When sloppy phrase query (plus additional term) is used with edismax, slop > value is search against fields that are supplied with pf parameter. > Example : With this url ="phrase query"~10 term=text=text document > having "10 term" in its text field is boosted. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-4362) edismax, phrase query with slop, pf parameter
[ https://issues.apache.org/jira/browse/SOLR-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705244#comment-16705244 ] Elizabeth Haubert edited comment on SOLR-4362 at 11/30/18 8:28 PM: --- This is still an issue on the 7_6 branch; the underlying problem is related to https://issues.apache.org/jira/browse/SOLR-12260?filter=-2, where the list of "normal clauses" used for generating pf/pf2/pf3 is not correct. Note that it is necessary to have a pf2 set for the problem to occur. Using the example query: {code:java} http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text="phrase query"~10 term {code} It will produce: {code:java} +((text:"phrase query"~10) (text:term)) (text:"10 term") {code} When Solr parses the original query string, it generates 3 clauses: * "phrase query" * ~10 * term In the course of [parseOriginalQuery|https://github.com/apache/lucene-solr/blob/5c4ab188eb09ad3215f461523b9873037803ed7e/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L384], "phrase query" is identified as a phrase, and ~10 is correctly identified as the slop associated with that phrase, and removed from clauses in the context of parseOriginalQuery. That change is not picked up when control flow returns back to parse(). So when addPhraseQueries goes to look for the shingles to glue together, it starts with * "phrase query" * ~10 * term Rejects "phrase query" as a candidate for shingling because it is a phrase, and is left with * ~10 * term It then generates a pf2 clause "10 term", and tacks that on to the query. If the underlying field tokenization strips off punctuation, "10 term" phrases will match accordingly. Updated the original unit test to use the same schema field as testPfPs(). 
was (Author: ehaubert): This is still an issue on the 7_6 branch; the underlying problem is related to https://issues.apache.org/jira/browse/SOLR-12260?filter=-2, where the list of "normal clauses" used for generating pf/pf2/pf3 is not correct. Note that it is necessary to have a pf2 set for the problem to occur. Using the example query: {code:java} http://localhost:8983/solr/new_core/select?debugQuery=on=edismax=text="phrase query"~10 term {code} It will produce: {code:java} +((text:"phrase query"~10) (text:term)) (text:"10 term") {code} When Solr parses the original query string, it generates 3 clauses: * "phrase query" * ~10 * term In the course of [parseOriginalQuery|https://github.com/apache/lucene-solr/blob/5c4ab188eb09ad3215f461523b9873037803ed7e/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L384], "phrase query" is identified as a phrase, and ~10 is correctly identified as the slop associated with that phrase, and removed from clauses in the context of parseOriginalQuery. That change is not picked up when control flow returns back to parse(). So when addPhraseQueries goes to look for the shingles to glue together, it starts with * "phrase query" * ~10 * term Rejects "phrase query" as a candidate for shingling because it is a phrase, and is left with * ~10 * term It then generates a pf2 clause "10 term", and tacks that on to the query. If the underlying field tokenization strips off punctuation, "10 term" phrases will match accordingly. Updated the test in the unit test to use the same schema field as testPfPs(). 
[jira] [Updated] (SOLR-4362) edismax, phrase query with slop, pf parameter
[ https://issues.apache.org/jira/browse/SOLR-4362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elizabeth Haubert updated SOLR-4362: Attachment: SOLR-4362.patch
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 387 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/387/ No tests ran. Build Log: [...truncated 23452 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2443 links (1994 relative) to 3213 anchors in 247 files [echo] Validated Links & Anchors via: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked [untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.7.0.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
-ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure:
[jira] [Commented] (SOLR-8237) Invalid parsing with solr edismax operators
[ https://issues.apache.org/jira/browse/SOLR-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705205#comment-16705205 ] Erick Erickson commented on SOLR-8237: -- One of the committers has to close it, Alexander R. just did it. > Invalid parsing with solr edismax operators > --- > > Key: SOLR-8237 > URL: https://issues.apache.org/jira/browse/SOLR-8237 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.8.1 > Environment: Windows 2008 R2 - Apache TomCat 7 >Reporter: Mahmoud Almokadem >Priority: Critical > Labels: edismax > > Using edismax as the parser we got the undesirable parsed queries and > results. The following is two different cases with strange behavior: > Searching with these parameters > "mm":"2", > "df":"TotalField", > "debug":"true", > "indent":"true", > "fl":"Title", > "start":"0", > "q.op":"AND", > "fq":"", > "rows":"10", > "wt":"json" > and the query is > "q":"+(public libraries)", > Retrieve 502 documents with these parsed query > "rawquerystring":"+(public libraries)", > "querystring":"+(public libraries)", > "parsedquery":"(+(+(DisjunctionMaxQuery((Title:public^200.0 | > TotalField:public^0.1)) DisjunctionMaxQuery((Title:libraries^200.0 | > TotalField:libraries^0.1)/no_coord", > "parsedquery_toString":"+(+((Title:public^200.0 | TotalField:public^0.1) > (Title:libraries^200.0 | TotalField:libraries^0.1)))" > and if the query is > "q":" (public libraries) " > then it retrieves 8 documents with these parsed query > "rawquerystring":" (public libraries) ", > "querystring":" (public libraries) ", > "parsedquery":"(+((DisjunctionMaxQuery((Title:public^200.0 | > TotalField:public^0.1)) DisjunctionMaxQuery((Title:libraries^200.0 | > TotalField:libraries^0.1)))~2))/no_coord", > "parsedquery_toString":"+(((Title:public^200.0 | TotalField:public^0.1) > (Title:libraries^200.0 | TotalField:libraries^0.1))~2)" > So the results of adding "+" to get all tokens before the parenthesis > retrieve more results 
than removing it. > Request Handler > > explicit > 10 > TotalField > AND > edismax > Title^200 TotalField^1
[jira] [Comment Edited] (SOLR-8237) Invalid parsing with solr edismax operators
[ https://issues.apache.org/jira/browse/SOLR-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705205#comment-16705205 ] Erick Erickson edited comment on SOLR-8237 at 11/30/18 7:42 PM: One of the committers has to close it, Alexander R. just did it. Thanks for the prompt! was (Author: erickerickson): One of the committers has to close it, Alexander R. just did it.
[JENKINS] Lucene-Solr-http2-MacOSX (64bit/jdk1.8.0) - Build # 4 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-MacOSX/4/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 25358 lines...] check-licenses: [echo] License check under: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene [licenses] MISSING sha1 checksum file for: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/replicator/lib/jetty-continuation-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/licenses/jetty-continuation-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/replicator/lib/jetty-http-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/licenses/jetty-http-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/replicator/lib/jetty-io-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/licenses/jetty-io-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/replicator/lib/jetty-server-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/licenses/jetty-server-9.4.14.v20181114.jar.sha1 [licenses] MISSING sha1 checksum file for: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/replicator/lib/jetty-servlet-9.4.14.v20181114.jar [licenses] EXPECTED sha1 checksum file : /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/licenses/jetty-servlet-9.4.14.v20181114.jar.sha1 [...truncated 3 lines...] 
BUILD FAILED /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/build.xml:633: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/build.xml:117: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/build.xml:90: The following error occurred while executing this line: /Users/jenkins/workspace/Lucene-Solr-http2-MacOSX/lucene/tools/custom-tasks.xml:62: License check failed. Check the logs. If you recently modified ivy-versions.properties or any module's ivy.xml, make sure you run "ant clean-jars jar-checksums" before running precommit. Total time: 118 minutes 15 seconds Build step 'Invoke Ant' marked build as failure Archiving artifacts Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 [WARNINGS] Skipping publisher since build result is FAILURE Recording test results Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 Setting ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-8237) Invalid parsing with solr edismax operators
[ https://issues.apache.org/jira/browse/SOLR-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexandre Rafalovitch resolved SOLR-8237. - Resolution: Implemented
[jira] [Commented] (SOLR-8237) Invalid parsing with solr edismax operators
[ https://issues.apache.org/jira/browse/SOLR-8237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705198#comment-16705198 ] Elizabeth Haubert commented on SOLR-8237: - [~erickerickson], how do old tickets get closed? I believe this got fixed automagically with the SOW changes. Unless there is something special about the default field, a vanilla solr.TextField produces: {code:java} q:"+(public libraries)" Produces: PhraseQuery(text:"public libraries") text:"public libraries" vs "q":"(public libraries)" Produces PhraseQuery(text:"public libraries") text:"public libraries" {code} Dropping the quotes gives: {code:java} q:public libraries text:public text:libraries text:public text:libraries vs q:+(public libraries) text:public text:libraries text:public text:libraries {code}
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3161 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3161/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseParallelGC 30 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest Error Message: Could not find collection : AutoscalingHistoryHandlerTest_collection Stack Trace: org.apache.solr.common.SolrException: Could not find collection : AutoscalingHistoryHandlerTest_collection at __randomizedtesting.SeedInfo.seed([29411FC7ED9A516A]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) FAILED: junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest Error Message: Could not find collection : AutoscalingHistoryHandlerTest_collection Stack Trace: org.apache.solr.common.SolrException: Could not find collection : AutoscalingHistoryHandlerTest_collection at __randomizedtesting.SeedInfo.seed([29411FC7ED9A516A]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705178#comment-16705178 ] Yonik Seeley commented on SOLR-12839: - Yeah, I think this is OK - my main objection was going to be the name "approximate", which strongly suggests that an estimate is fine. "prelim_sort" seems fine. > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: Facet Module > Reporter: Hoss Man > Assignee: Hoss Man > Priority: Major > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discussed in SOLR-9480 ... > bq. Similar to how the {{rerank}} request param allows people to collect & score documents using a "cheap" query, and then re-score the top N using a more expensive query, I think it would be handy if JSON Facets supported a {{resort}} option that could be used on any FacetRequestSorted instance right along side the {{sort}} param, using the same JSON syntax, so that clients could have Solr internally sort all the facet buckets by something simple (like count) and then "Re-Sort" the top N=limit (or maybe N=limit+overrequest?) using a more expensive function like skg()
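For a sense of the intended usage, a facet request using the option under discussion might look like the sketch below. This is illustrative only: the field names, the `$fore`/`$back` parameters, and the `skg` stat name are assumptions, and the final syntax is whatever the committed patch defines. The idea is that buckets are first collected and trimmed by the cheap `prelim_sort`, and only the surviving top buckets are re-sorted by the expensive relatedness function.

```json
{
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "limit": 10,
      "prelim_sort": "count desc",
      "sort": { "skg": "desc" },
      "facet": { "skg": "relatedness($fore,$back)" }
    }
  }
}
```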
[jira] [Updated] (LUCENE-8582) Set parent class of DutchAnalyzer to StopwordAnalyzerBase
[ https://issues.apache.org/jira/browse/LUCENE-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namgyu Kim updated LUCENE-8582: --- Attachment: LUCENE-8582.patch > Set parent class of DutchAnalyzer to StopwordAnalyzerBase > - > > Key: LUCENE-8582 > URL: https://issues.apache.org/jira/browse/LUCENE-8582 > Project: Lucene - Core > Issue Type: Task > Components: modules/analysis >Reporter: Namgyu Kim >Priority: Major > Attachments: LUCENE-8582.patch > > > Currently the parent class of DutchAnalyzer is *Analyzer*. > And I saw the comment > {code:java} > // TODO: extend StopwordAnalyzerBase > {code} > in DutchAnalyzer. > > So I changed the code as follows. > {code:java} > public final class DutchAnalyzer extends StopwordAnalyzerBase { > ... > > // This instance is no longer necessary. > // private final CharArraySet stoptable; > > public DutchAnalyzer(CharArraySet stopwords, CharArraySet > stemExclusionTable, CharArrayMap stemOverrideDict) { > super(stopwords); // Use StopwordAnalyzerBase's constructor to set > stopwords. > ... > } > ... > @Override > protected TokenStreamComponents createComponents(String fieldName) { > ... > result = new StopFilter(result, stopwords); // Use StopwordAnalyzerBase's > instance > ... > } > ... > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8582) Set parent class of DutchAnalyzer to StopwordAnalyzerBase
[ https://issues.apache.org/jira/browse/LUCENE-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Namgyu Kim updated LUCENE-8582: --- Description: Currently the parent class of DutchAnalyzer is *Analyzer*. And I saw the comment {code:java} // TODO: extend StopwordAnalyzerBase {code} in DutchAnalyzer. So I changed the code as follows. {code:java} public final class DutchAnalyzer extends StopwordAnalyzerBase { ... // This instance is no longer necessary. // private final CharArraySet stoptable; public DutchAnalyzer(CharArraySet stopwords, CharArraySet stemExclusionTable, CharArrayMap stemOverrideDict) { super(stopwords); // Use StopwordAnalyzerBase's constructor to set stopwords. ... } ... @Override protected TokenStreamComponents createComponents(String fieldName) { ... result = new StopFilter(result, stopwords); // Use StopwordAnalyzerBase's instance ... } ... } {code} was: Currently the parent class of DutchAnalyzer is *Analyzer*. And I saw the comment {code:java} // TODO: extend StopwordAnalyzerBase {code} in DutchAnalyzer. So I changed the code as follows. {code:java} public final class DutchAnalyzer extends StopwordAnalyzerBase { ... // This instance is no longer necessary. // private final CharArraySet stoptable; public DutchAnalyzer(CharArraySet stopwords, CharArraySet stemExclusionTable, Ch1arArrayMap stemOverrideDict) { super(stopwords); // Use StopwordAnalyzerBase's constructor to set stopwords. ... } ... @Override protected TokenStreamComponents createComponents(String fieldName) { ... result = new StopFilter(result, stopwords); // Use StopwordAnalyzerBase's instance ... } ... } {code} > Set parent class of DutchAnalyzer to StopwordAnalyzerBase > - > > Key: LUCENE-8582 > URL: https://issues.apache.org/jira/browse/LUCENE-8582 > Project: Lucene - Core > Issue Type: Task > Components: modules/analysis >Reporter: Namgyu Kim >Priority: Major > Attachments: LUCENE-8582.patch > > > Currently the parent class of DutchAnalyzer is *Analyzer*. 
> And I saw the comment > {code:java} > // TODO: extend StopwordAnalyzerBase > {code} > in DutchAnalyzer. > > So I changed the code as follows. > {code:java} > public final class DutchAnalyzer extends StopwordAnalyzerBase { > ... > > // This instance is no longer necessary. > // private final CharArraySet stoptable; > > public DutchAnalyzer(CharArraySet stopwords, CharArraySet > stemExclusionTable, CharArrayMap stemOverrideDict) { > super(stopwords); // Use StopwordAnalyzerBase's constructor to set > stopwords. > ... > } > ... > @Override > protected TokenStreamComponents createComponents(String fieldName) { > ... > result = new StopFilter(result, stopwords); // Use StopwordAnalyzerBase's > instance > ... > } > ... > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8582) Set parent class of DutchAnalyzer to StopwordAnalyzerBase
Namgyu Kim created LUCENE-8582: -- Summary: Set parent class of DutchAnalyzer to StopwordAnalyzerBase Key: LUCENE-8582 URL: https://issues.apache.org/jira/browse/LUCENE-8582 Project: Lucene - Core Issue Type: Task Components: modules/analysis Reporter: Namgyu Kim Currently the parent class of DutchAnalyzer is *Analyzer*. And I saw the comment {code:java} // TODO: extend StopwordAnalyzerBase {code} in DutchAnalyzer. So I changed the code as follows. {code:java} public final class DutchAnalyzer extends StopwordAnalyzerBase { ... // This instance is no longer necessary. // private final CharArraySet stoptable; public DutchAnalyzer(CharArraySet stopwords, CharArraySet stemExclusionTable, CharArrayMap stemOverrideDict) { super(stopwords); // Use StopwordAnalyzerBase's constructor to set stopwords. ... } ... @Override protected TokenStreamComponents createComponents(String fieldName) { ... result = new StopFilter(result, stopwords); // Use StopwordAnalyzerBase's instance ... } ... } {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-http2 - Build # 4 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-http2/4/ 1 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at https://127.0.0.1:40119/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/update (Powered by Jetty // 9.4.14.v20181114) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:40119/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/update (Powered by Jetty // 9.4.14.v20181114) at __randomizedtesting.SeedInfo.seed([B7C2E3A2DF4AE988:7575DFCADC0A19F0]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
[jira] [Commented] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705147#comment-16705147 ] Cassandra Targett commented on SOLR-13023: -- I've pushed changes to the Blob Store API page in the Ref Guide to remove the language stating deletes are usual REST DELETE commands, and added a new section explaining that deletes can be done using either delete by query or delete by id as one does with regular indexed documents. In response to a blob store-related issue raised in the linked issue SOLR-12436, I also added a sentence or two about the default size allowed for blobs with a pointer to how one could change that default if necessary. I'll resolve this now, but please reopen if there are more suggestions for how docs can be improved even more here. > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Priority: Major > Fix For: 7.6, master (8.0) > > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones. 
> > {code:java} > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/' --data-binary '{"delete": > {"id":"jtds131"}}' > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/ndm_test/update' --data-binary '{ > "delete": { "id":"jtds131/1" } > }' > curl -X DELETE -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/blob/jtds131' {code} > > I just want to delete any of the following: > > {code:java} > curl https://solr.cloud.statcan.ca/solr/.system/blob\?omitHeader\=true > (docker-for-desktop/default) > { > "response":{"numFound":20,"start":0,"docs":[ > { > "id":"jtds131/7", > "md5":"dd0041fbc51b53613db675520bb1521e", > "blobName":"jtds131", > "version":7, > "timestamp":"2018-11-29T15:16:57.079Z", > "size":35}, > { > "id":"jtds131/6", > "md5":"e45bd303ff39f7daa7a57941b1ff0b85", > "blobName":"jtds131", > "version":6, > "timestamp":"2018-11-29T15:13:20.375Z", > "size":34}, > { > "id":"jtds131/5", > "md5":"4f2e52b6058950bf92a61ff64f747091", > "blobName":"jtds131", > "version":5, > "timestamp":"2018-11-29T15:12:17.120Z", > "size":36}, > { > "id":"jtds131/4", > "md5":"36cf4127af583553605f69c728bff016", > "blobName":"jtds131", > "version":4, > "timestamp":"2018-11-29T15:11:22.136Z", > "size":30}, > { > "id":"jtds131/3", > "md5":"3db86deb7b7f6b1fa1b9fa81b690e68c", > "blobName":"jtds131", > "version":3, > "timestamp":"2018-11-29T15:09:38.075Z", > "size":28}, > { > "id":"ojdbc/3", > "md5":"1ce62f6f367c356671791e4b29506196", > "blobName":"ojdbc", > "version":3, > "timestamp":"2018-11-29T14:45:47.213Z", > "size":24}, > { > "id":"jtds131/2", > "md5":"cac877bfcbd4cc5d4adc04497df81ab", > "blobName":"jtds131", > "version":2, > "timestamp":"2018-11-29T15:08:01.273Z", > "size":25}, > { > "id":"ojdbc/2", > "md5":"a65fc757841c108339744156717b403f", > "blobName":"ojdbc", > "version":2, > "timestamp":"2018-11-29T14:33:22.048Z", > "size":24}, > { > "id":"ndm_test/2", > 
"md5":"83c579317606eca7d06418cf83a2c0d0", > "blobName":"ndm_test", > "version":2, > "timestamp":"2018-11-29T15:45:24.267Z", > "size":26}, > { > "id":"jtds131/1", > "md5":"499eb508a21e312d3bb9aec337eb38b7", > "blobName":"jtds131", > "version":1, > "timestamp":"2018-11-28T20:11:44.533Z", > "size":317816}] > }}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
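Per the documentation change described in the comment above, blob docs in the .system collection are removed with an ordinary delete-by-id (or delete-by-query) update command against the collection's /update endpoint, not a REST-style DELETE. A minimal sketch of constructing such a request follows; the host is hypothetical (localhost on the default port), and the blob id is taken from the example listing above.

```python
import json

# Hypothetical Solr base URL; deletes go through the normal update handler
# of the .system collection, with commit=true so the deletion is visible.
solr_base = "http://localhost:8983/solr"
update_url = f"{solr_base}/.system/update?commit=true"

# Blob documents are regular indexed documents whose ids have the form
# "<blobName>/<version>", so delete-by-id works on a single version.
payload = json.dumps({"delete": {"id": "jtds131/1"}})

# Equivalent shell invocation (not executed here):
#   curl -X POST -H 'Content-Type: application/json' \
#        "$update_url" --data-binary "$payload"
print(update_url)
print(payload)
```

A delete-by-query body such as `{"delete": {"query": "blobName:jtds131"}}` would remove every version of the blob in one request.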
[jira] [Assigned] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett reassigned SOLR-13023: Assignee: Cassandra Targett > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Assignee: Cassandra Targett >Priority: Major > Fix For: 7.6, master (8.0) > > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones. > > {code:java} > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/' --data-binary '{"delete": > {"id":"jtds131"}}' > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/ndm_test/update' --data-binary '{ > "delete": { "id":"jtds131/1" } > }' > curl -X DELETE -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/blob/jtds131' {code} > > I just want to delete any of the following: > > {code:java} > curl https://solr.cloud.statcan.ca/solr/.system/blob\?omitHeader\=true > (docker-for-desktop/default) > { > "response":{"numFound":20,"start":0,"docs":[ > { > "id":"jtds131/7", > "md5":"dd0041fbc51b53613db675520bb1521e", > "blobName":"jtds131", > "version":7, > "timestamp":"2018-11-29T15:16:57.079Z", > "size":35}, > { > "id":"jtds131/6", > "md5":"e45bd303ff39f7daa7a57941b1ff0b85", > "blobName":"jtds131", > "version":6, > "timestamp":"2018-11-29T15:13:20.375Z", > "size":34}, > { > "id":"jtds131/5", > "md5":"4f2e52b6058950bf92a61ff64f747091", > "blobName":"jtds131", > "version":5, > 
"timestamp":"2018-11-29T15:12:17.120Z", > "size":36}, > { > "id":"jtds131/4", > "md5":"36cf4127af583553605f69c728bff016", > "blobName":"jtds131", > "version":4, > "timestamp":"2018-11-29T15:11:22.136Z", > "size":30}, > { > "id":"jtds131/3", > "md5":"3db86deb7b7f6b1fa1b9fa81b690e68c", > "blobName":"jtds131", > "version":3, > "timestamp":"2018-11-29T15:09:38.075Z", > "size":28}, > { > "id":"ojdbc/3", > "md5":"1ce62f6f367c356671791e4b29506196", > "blobName":"ojdbc", > "version":3, > "timestamp":"2018-11-29T14:45:47.213Z", > "size":24}, > { > "id":"jtds131/2", > "md5":"cac877bfcbd4cc5d4adc04497df81ab", > "blobName":"jtds131", > "version":2, > "timestamp":"2018-11-29T15:08:01.273Z", > "size":25}, > { > "id":"ojdbc/2", > "md5":"a65fc757841c108339744156717b403f", > "blobName":"ojdbc", > "version":2, > "timestamp":"2018-11-29T14:33:22.048Z", > "size":24}, > { > "id":"ndm_test/2", > "md5":"83c579317606eca7d06418cf83a2c0d0", > "blobName":"ndm_test", > "version":2, > "timestamp":"2018-11-29T15:45:24.267Z", > "size":26}, > { > "id":"jtds131/1", > "md5":"499eb508a21e312d3bb9aec337eb38b7", > "blobName":"jtds131", > "version":1, > "timestamp":"2018-11-28T20:11:44.533Z", > "size":317816}] > }}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett resolved SOLR-13023. -- Resolution: Fixed Fix Version/s: master (8.0) 7.6 > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Priority: Major > Fix For: 7.6, master (8.0) > > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones. > > {code:java} > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/' --data-binary '{"delete": > {"id":"jtds131"}}' > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/ndm_test/update' --data-binary '{ > "delete": { "id":"jtds131/1" } > }' > curl -X DELETE -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/blob/jtds131' {code} > > I just want to delete any of the following: > > {code:java} > curl https://solr.cloud.statcan.ca/solr/.system/blob\?omitHeader\=true > (docker-for-desktop/default) > { > "response":{"numFound":20,"start":0,"docs":[ > { > "id":"jtds131/7", > "md5":"dd0041fbc51b53613db675520bb1521e", > "blobName":"jtds131", > "version":7, > "timestamp":"2018-11-29T15:16:57.079Z", > "size":35}, > { > "id":"jtds131/6", > "md5":"e45bd303ff39f7daa7a57941b1ff0b85", > "blobName":"jtds131", > "version":6, > "timestamp":"2018-11-29T15:13:20.375Z", > "size":34}, > { > "id":"jtds131/5", > "md5":"4f2e52b6058950bf92a61ff64f747091", > "blobName":"jtds131", > "version":5, > 
"timestamp":"2018-11-29T15:12:17.120Z", > "size":36}, > { > "id":"jtds131/4", > "md5":"36cf4127af583553605f69c728bff016", > "blobName":"jtds131", > "version":4, > "timestamp":"2018-11-29T15:11:22.136Z", > "size":30}, > { > "id":"jtds131/3", > "md5":"3db86deb7b7f6b1fa1b9fa81b690e68c", > "blobName":"jtds131", > "version":3, > "timestamp":"2018-11-29T15:09:38.075Z", > "size":28}, > { > "id":"ojdbc/3", > "md5":"1ce62f6f367c356671791e4b29506196", > "blobName":"ojdbc", > "version":3, > "timestamp":"2018-11-29T14:45:47.213Z", > "size":24}, > { > "id":"jtds131/2", > "md5":"cac877bfcbd4cc5d4adc04497df81ab", > "blobName":"jtds131", > "version":2, > "timestamp":"2018-11-29T15:08:01.273Z", > "size":25}, > { > "id":"ojdbc/2", > "md5":"a65fc757841c108339744156717b403f", > "blobName":"ojdbc", > "version":2, > "timestamp":"2018-11-29T14:33:22.048Z", > "size":24}, > { > "id":"ndm_test/2", > "md5":"83c579317606eca7d06418cf83a2c0d0", > "blobName":"ndm_test", > "version":2, > "timestamp":"2018-11-29T15:45:24.267Z", > "size":26}, > { > "id":"jtds131/1", > "md5":"499eb508a21e312d3bb9aec337eb38b7", > "blobName":"jtds131", > "version":1, > "timestamp":"2018-11-28T20:11:44.533Z", > "size":317816}] > }}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705142#comment-16705142 ] Mark Miller commented on SOLR-12932: Yeah, I've seen the leak emails - most likely related to reducing leak timeouts drastically - these must be some slow envs because I never saw this locally across 3 machines under a variety of loads, but I'll look at these leaks soon. > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705133#comment-16705133 ] ASF subversion and git services commented on SOLR-13023: Commit d30d6b89f3152df3fd671aaae5f18919346dd106 in lucene-solr's branch refs/heads/branch_7_6 from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d30d6b8 ] SOLR-13023: Ref Guide: Add section to blob store docs on how to delete blobs > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Priority: Major > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones. 
> > {code:java} > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/' --data-binary '{"delete": > {"id":"jtds131"}}' > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/ndm_test/update' --data-binary '{ > "delete": { "id":"jtds131/1" } > }' > curl -X DELETE -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/blob/jtds131' {code} > > I just want to delete any of the following: > > {code:java} > curl https://solr.cloud.statcan.ca/solr/.system/blob\?omitHeader\=true > (docker-for-desktop/default) > { > "response":{"numFound":20,"start":0,"docs":[ > { > "id":"jtds131/7", > "md5":"dd0041fbc51b53613db675520bb1521e", > "blobName":"jtds131", > "version":7, > "timestamp":"2018-11-29T15:16:57.079Z", > "size":35}, > { > "id":"jtds131/6", > "md5":"e45bd303ff39f7daa7a57941b1ff0b85", > "blobName":"jtds131", > "version":6, > "timestamp":"2018-11-29T15:13:20.375Z", > "size":34}, > { > "id":"jtds131/5", > "md5":"4f2e52b6058950bf92a61ff64f747091", > "blobName":"jtds131", > "version":5, > "timestamp":"2018-11-29T15:12:17.120Z", > "size":36}, > { > "id":"jtds131/4", > "md5":"36cf4127af583553605f69c728bff016", > "blobName":"jtds131", > "version":4, > "timestamp":"2018-11-29T15:11:22.136Z", > "size":30}, > { > "id":"jtds131/3", > "md5":"3db86deb7b7f6b1fa1b9fa81b690e68c", > "blobName":"jtds131", > "version":3, > "timestamp":"2018-11-29T15:09:38.075Z", > "size":28}, > { > "id":"ojdbc/3", > "md5":"1ce62f6f367c356671791e4b29506196", > "blobName":"ojdbc", > "version":3, > "timestamp":"2018-11-29T14:45:47.213Z", > "size":24}, > { > "id":"jtds131/2", > "md5":"cac877bfcbd4cc5d4adc04497df81ab", > "blobName":"jtds131", > "version":2, > "timestamp":"2018-11-29T15:08:01.273Z", > "size":25}, > { > "id":"ojdbc/2", > "md5":"a65fc757841c108339744156717b403f", > "blobName":"ojdbc", > "version":2, > "timestamp":"2018-11-29T14:33:22.048Z", > "size":24}, > { > "id":"ndm_test/2", > 
"md5":"83c579317606eca7d06418cf83a2c0d0", > "blobName":"ndm_test", > "version":2, > "timestamp":"2018-11-29T15:45:24.267Z", > "size":26}, > { > "id":"jtds131/1", > "md5":"499eb508a21e312d3bb9aec337eb38b7", > "blobName":"jtds131", > "version":1, > "timestamp":"2018-11-28T20:11:44.533Z", > "size":317816}] > }}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705126#comment-16705126 ] Hoss Man commented on SOLR-12932: - Although it doesn't seem like there has been an apache jenkins build of master since mark's commit 75b1831967982, there have been 3 jenkins.thetaphi.de builds of master, and 1 jenkins.sarowe.net build of master... * jenkins.sarowe.net ** prior to mark's commit, the most recent master build had 2 test failures *** [http://fucit.org/solr-jenkins-reports/job-data/sarowe/Lucene-Solr-tests-master/19410/] ** The single build since mark's commit had no failures... *** [http://fucit.org/solr-jenkins-reports/job-data/sarowe/Lucene-Solr-tests-master/19411/] * jenkins.thetaphi.de ** prior to mark's commit, the most recent master build coincidently succeeded w/o any test failures. *** [http://fucit.org/solr-jenkins-reports/job-data/thetaphi/Lucene-Solr-master-Linux/23276/] ** All 3 builds of master since that commit, have had 100+ suite level failures – although it should be noted they all used diff JVMs *** [http://fucit.org/solr-jenkins-reports/job-data/thetaphi/Lucene-Solr-master-Linux/23277/] *** [http://fucit.org/solr-jenkins-reports/job-data/thetaphi/Lucene-Solr-master-Linux/23278/] *** [http://fucit.org/solr-jenkins-reports/job-data/thetaphi/Lucene-Solr-master-Linux/23279/] Skimming the logs from jenkins.thetaphi.de #23277, the suite failures mostly seem to fall into 2 types... # Thread Leaks ** *** SolrRrdBackendFactory-* and MetricsHistoryHandler-* threads seem to frequently leak in "pairs" – ie: it's very common to see 2 threads leaked w/one of each – but also 4 threads leaked 2 of each, etc... 
*** there were other threads leaked in some tests to various degrees, not enough for a pattern to be obvious # Object Leaks ** *** Notably instances of [ZkStateReader, SolrZkClient] – also typically in pairs (if 2 objects leaked, one of each; if 24 objects leaked, 12 of each) *** there were a handful of object leak failures that sometimes included other objects: in particular some large lists w/multiple SolrCore objects being leaked and other objects you'd expect to see hanging off of a SolrCore Hypothesis: I know mark has mentioned cleaning up a lot of "sleep" and "wait" type logic in tests, I'm guessing that in doing this it's exposed some "shutdown" logic bugs that in the past weren't as obvious because "slow" jenkins machines were waiting longer for other things due to hardcoded "waits" and were getting lucky that the threads/objects were being cleaned up in a timely manner. > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16705128#comment-16705128 ] ASF subversion and git services commented on SOLR-13023: Commit 90d0604ef31b243453da465a93491df70571131f in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=90d0604 ] SOLR-13023: Ref Guide: Add section to blob store docs on how to delete blobs > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Priority: Major > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones. 
> > {code:java} > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/' --data-binary '{"delete": > {"id":"jtds131"}}' > curl -X POST -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/ndm_test/update' --data-binary '{ > "delete": { "id":"jtds131/1" } > }' > curl -X DELETE -H 'Content-Type: application/json' > 'https://solr.cloud.statcan.ca/solr/.system/blob/jtds131' {code} > > I just want to delete any of the following: > > {code:java} > curl https://solr.cloud.statcan.ca/solr/.system/blob\?omitHeader\=true > (docker-for-desktop/default) > { > "response":{"numFound":20,"start":0,"docs":[ > { > "id":"jtds131/7", > "md5":"dd0041fbc51b53613db675520bb1521e", > "blobName":"jtds131", > "version":7, > "timestamp":"2018-11-29T15:16:57.079Z", > "size":35}, > { > "id":"jtds131/6", > "md5":"e45bd303ff39f7daa7a57941b1ff0b85", > "blobName":"jtds131", > "version":6, > "timestamp":"2018-11-29T15:13:20.375Z", > "size":34}, > { > "id":"jtds131/5", > "md5":"4f2e52b6058950bf92a61ff64f747091", > "blobName":"jtds131", > "version":5, > "timestamp":"2018-11-29T15:12:17.120Z", > "size":36}, > { > "id":"jtds131/4", > "md5":"36cf4127af583553605f69c728bff016", > "blobName":"jtds131", > "version":4, > "timestamp":"2018-11-29T15:11:22.136Z", > "size":30}, > { > "id":"jtds131/3", > "md5":"3db86deb7b7f6b1fa1b9fa81b690e68c", > "blobName":"jtds131", > "version":3, > "timestamp":"2018-11-29T15:09:38.075Z", > "size":28}, > { > "id":"ojdbc/3", > "md5":"1ce62f6f367c356671791e4b29506196", > "blobName":"ojdbc", > "version":3, > "timestamp":"2018-11-29T14:45:47.213Z", > "size":24}, > { > "id":"jtds131/2", > "md5":"cac877bfcbd4cc5d4adc04497df81ab", > "blobName":"jtds131", > "version":2, > "timestamp":"2018-11-29T15:08:01.273Z", > "size":25}, > { > "id":"ojdbc/2", > "md5":"a65fc757841c108339744156717b403f", > "blobName":"ojdbc", > "version":2, > "timestamp":"2018-11-29T14:33:22.048Z", > "size":24}, > { > "id":"ndm_test/2", > 
"md5":"83c579317606eca7d06418cf83a2c0d0", > "blobName":"ndm_test", > "version":2, > "timestamp":"2018-11-29T15:45:24.267Z", > "size":26}, > { > "id":"jtds131/1", > "md5":"499eb508a21e312d3bb9aec337eb38b7", > "blobName":"jtds131", > "version":1, > "timestamp":"2018-11-28T20:11:44.533Z", > "size":317816}] > }}{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
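Since `.system` is an ordinary Solr collection, the fix committed above documents deleting a blob with a standard delete-by-id update against `.system/update`, where the id is `blobName/version`. A minimal sketch; the host/port are placeholder assumptions and the blob id is taken from the listing above:

```shell
# Delete-by-id payload for one blob version; the id "jtds131/1"
# (blobName/version) comes from the listing above.
payload='{"delete": {"id": "jtds131/1"}}'

# Against a live cluster this would be sent as (host/port are assumptions):
#   curl -X POST -H 'Content-Type: application/json' \
#        'http://localhost:8983/solr/.system/update?commit=true' \
#        --data-binary "$payload"
echo "$payload"
```

Note that re-uploading a blob under the same `blobName` creates a new version rather than replacing the old one, which is why the listing above keeps growing; each stale version must be deleted by its own `blobName/version` id.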
[jira] [Commented] (SOLR-13023) No clear guidance on how to delete jar from .system collection
[ https://issues.apache.org/jira/browse/SOLR-13023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705125#comment-16705125 ] ASF subversion and git services commented on SOLR-13023: Commit 5c4ab188eb09ad3215f461523b9873037803ed7e in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c4ab18 ] SOLR-13023: Ref Guide: Add section to blob store docs on how to delete blobs > No clear guidance on how to delete jar from .system collection > -- > > Key: SOLR-13023 > URL: https://issues.apache.org/jira/browse/SOLR-13023 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: blobstore, documentation >Affects Versions: 7.5 >Reporter: William >Priority: Major > > There is no clear guidance on how to actually delete a JAR that was placed in > the .system collection. I have literally tried every command under the sun > even the ones provided in support. Can someone please tell me how to actually > do a proper delete. All I can seem to do is just add new ones.
[jira] [Commented] (SOLR-12804) Remove static modifier from Overseer queue access.
[ https://issues.apache.org/jira/browse/SOLR-12804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705100#comment-16705100 ] ASF subversion and git services commented on SOLR-12804: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Remove static modifier from Overseer queue access. > -- > > Key: SOLR-12804 > URL: https://issues.apache.org/jira/browse/SOLR-12804 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > This is kind of a lazy anti-pattern in general, but also does not go well > with mocking.
[jira] [Commented] (SOLR-12923) The new AutoScaling tests are way too flaky and need special attention.
[ https://issues.apache.org/jira/browse/SOLR-12923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705104#comment-16705104 ] ASF subversion and git services commented on SOLR-12923: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > The new AutoScaling tests are way too flaky and need special attention. > -- > > Key: SOLR-12923 > URL: https://issues.apache.org/jira/browse/SOLR-12923 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > > I've already done some work here (not posted yet). We need to address this, > these tests are too new to fail so often and easily. > I want to add beasting to precommit (LUCENE-8545) to help prevent tests that > fail so easily from being committed.
[jira] [Commented] (SOLR-12896) Introduce more checks for shutdown and closed to improve clean close and shutdown.
[ https://issues.apache.org/jira/browse/SOLR-12896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705101#comment-16705101 ] ASF subversion and git services commented on SOLR-12896: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Introduce more checks for shutdown and closed to improve clean close and > shutdown. > -- > > Key: SOLR-12896 > URL: https://issues.apache.org/jira/browse/SOLR-12896 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major >
[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.
[ https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705105#comment-16705105 ] ASF subversion and git services commented on SOLR-12932: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > ant test (without badapples=false) should pass easily for developers. > - > > Key: SOLR-12932 > URL: https://issues.apache.org/jira/browse/SOLR-12932 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > If we fix the tests we will end up here anyway, but we can shortcut this. > Once I get my first patch in, anyone who mentions a test that fails locally > for them at any time (not jenkins), I will fix it.
[jira] [Commented] (SOLR-12898) Replace cluster state polling with ZkStateReader#waitFor.
[ https://issues.apache.org/jira/browse/SOLR-12898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705103#comment-16705103 ] ASF subversion and git services commented on SOLR-12898: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Replace cluster state polling with ZkStateReader#waitFor. > - > > Key: SOLR-12898 > URL: https://issues.apache.org/jira/browse/SOLR-12898 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > I've done a lot of this on the starburst branch and then again did some in a > related test issue. This issue is to finish comprehensively and pull > everything from starburst around this in.
[jira] [Commented] (SOLR-12933) Fix SolrCloud distributed commit.
[ https://issues.apache.org/jira/browse/SOLR-12933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705106#comment-16705106 ] ASF subversion and git services commented on SOLR-12933: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Fix SolrCloud distributed commit. > - > > Key: SOLR-12933 > URL: https://issues.apache.org/jira/browse/SOLR-12933 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major > > While working on the starburst branch, I found that our commit stuff is still > whackey - it's just hard to tell depending on binary vs xml and what you are > doing. > We should pull in my fix for this.
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705099#comment-16705099 ] ASF subversion and git services commented on SOLR-12801: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > Time Spent: 1h > Remaining Estimate: 0h > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flaky tests, and the continued addition of > flakiness to existing tests. > Lots more to come.
[jira] [Commented] (SOLR-12897) Introduce AlreadyClosedException to clean up silly close / shutdown logging.
[ https://issues.apache.org/jira/browse/SOLR-12897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705102#comment-16705102 ] ASF subversion and git services commented on SOLR-12897: Commit eb652b84edf441d8369f5188cdd5e3ae2b151434 in lucene-solr's branch refs/heads/branch_7x from markrmiller [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eb652b8 ] SOLR-12801: Make massive improvements to the tests. SOLR-12804: Remove static modifier from Overseer queue access. SOLR-12896: Introduce more checks for shutdown and closed to improve clean close and shutdown. (Partial) SOLR-12897: Introduce AlreadyClosedException to clean up silly close / shutdown logging. (Partial) SOLR-12898: Replace cluster state polling with ZkStateReader#waitFor. (Partial) SOLR-12923: The new AutoScaling tests are way too flaky and need special attention. (Partial) SOLR-12932: ant test (without badapples=false) should pass easily for developers. (Partial) SOLR-12933: Fix SolrCloud distributed commit. > Introduce AlreadyClosedException to clean up silly close / shutdown logging. > > > Key: SOLR-12897 > URL: https://issues.apache.org/jira/browse/SOLR-12897 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Major >
[jira] [Commented] (LUCENE-8575) Improve toString() in SegmentInfo
[ https://issues.apache.org/jira/browse/LUCENE-8575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705082#comment-16705082 ] Namgyu Kim commented on LUCENE-8575: Thanks for applying my code, [~jpountz] :D > Improve toString() in SegmentInfo > - > > Key: LUCENE-8575 > URL: https://issues.apache.org/jira/browse/LUCENE-8575 > Project: Lucene - Core > Issue Type: Improvement > Components: core/index >Reporter: Namgyu Kim >Priority: Major > Fix For: master (8.0), 7.7 > > Attachments: LUCENE-8575.patch, LUCENE-8575.patch > > > I saw the following code in SegmentInfo class. > {code:java} > // TODO: we could append toString of attributes() here? > {code} > Of course, we can. > > So I wrote a code for that part. > {code:java} > public String toString(int delCount) { > StringBuilder s = new StringBuilder(); > s.append(name).append('(').append(version == null ? "?" : > version).append(')').append(':'); > char cfs = getUseCompoundFile() ? 'c' : 'C'; > s.append(cfs); > s.append(maxDoc); > if (delCount != 0) { > s.append('/').append(delCount); > } > if (indexSort != null) { > s.append(":[indexSort="); > s.append(indexSort); > s.append(']'); > } > // New Code > if (!diagnostics.isEmpty()) { > s.append(":[diagnostics="); > for (Map.Entry<String, String> entry : diagnostics.entrySet()) > > s.append("<").append(entry.getKey()).append(",").append(entry.getValue()).append(">,"); > s.setLength(s.length() - 1); > s.append(']'); > } > // New Code > if (!attributes.isEmpty()) { > s.append(":[attributes="); > for (Map.Entry<String, String> entry : attributes.entrySet()) > > s.append("<").append(entry.getKey()).append(",").append(entry.getValue()).append(">,"); > s.setLength(s.length() - 1); > s.append(']'); > } > return s.toString(); > } > {code} >
[jira] [Commented] (SOLR-13021) You cannot safely create and delete the same collection.
[ https://issues.apache.org/jira/browse/SOLR-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705061#comment-16705061 ] Erick Erickson commented on SOLR-13021: --- I'm going to take a whack at writing a failing test case here, will post results. > You cannot safely create and delete the same collection. > > > Key: SOLR-13021 > URL: https://issues.apache.org/jira/browse/SOLR-13021 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Priority: Major >
[JENKINS] Lucene-Solr-7.6-Linux (32bit/jdk1.8.0_172) - Build # 45 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.6-Linux/45/ Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting Error Message: Error from server at https://127.0.0.1:33735/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update. Reason: Can not find: /solr/collection1_shard2_replica_n2/update (Powered by Jetty:// 9.4.11.v20180605) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:33735/solr/collection1_shard2_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n2/update HTTP ERROR 404 Problem accessing /solr/collection1_shard2_replica_n2/update.
Reason: Can not find: /solr/collection1_shard2_replica_n2/update (Powered by Jetty:// 9.4.11.v20180605) at __randomizedtesting.SeedInfo.seed([443F34C03C0F65D7:868808A83F4F95AF]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) at
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at
[jira] [Created] (SOLR-13028) Harden AutoAddReplicasPlanActionTest#testSimple
Mark Miller created SOLR-13028: -- Summary: Harden AutoAddReplicasPlanActionTest#testSimple Key: SOLR-13028 URL: https://issues.apache.org/jira/browse/SOLR-13028 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Mark Miller
[jira] [Commented] (SOLR-12839) add a 'resort' option to JSON faceting
[ https://issues.apache.org/jira/browse/SOLR-12839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16705018#comment-16705018 ] Lucene/Solr QA commented on SOLR-12839: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate ref guide {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 57s{color} | {color:green} core in the patch passed.
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12839 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12950083/SOLR-12839.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns validaterefguide | | uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 0491623 | | ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 | | Default Java | 1.8.0_191 | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/237/testReport/ | | modules | C: solr/core solr/solr-ref-guide U: solr | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/237/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > add a 'resort' option to JSON faceting > -- > > Key: SOLR-12839 > URL: https://issues.apache.org/jira/browse/SOLR-12839 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: Facet Module >Reporter: Hoss Man >Assignee: Hoss Man >Priority: Major > Attachments: SOLR-12839.patch, SOLR-12839.patch, SOLR-12839.patch, > SOLR-12839.patch > > > As discusssed in SOLR-9480 ... > bq. 
Similar to how the {{rerank}} request param allows people to collect & > score documents using a "cheap" query, and then re-score the top N using a > more expensive query, I think it would be handy if JSON Facets supported a > {{resort}} option that could be used on any FacetRequestSorted instance right > alongside the {{sort}} param, using the same JSON syntax, so that clients > could have Solr internally sort all the facet buckets by something simple > (like count) and then "Re-Sort" the top N=limit (or maybe > N=limit+overrequest?) using a more expensive function like skg()
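To make the quoted proposal concrete, a hypothetical request body is sketched below. The `resort` key and its exact shape are assumptions inferred from the description above (the syntax that eventually ships may differ), and the collection and field names are invented:

```shell
# Hypothetical JSON Facet request: buckets are first sorted cheaply by count,
# and the top "limit" buckets would then be re-sorted by a more expensive
# function. The "resort" key is an assumption sketched from the proposal,
# not a committed API.
request='{
  "top_terms": {
    "type": "terms",
    "field": "cat",
    "limit": 10,
    "sort": "count desc",
    "resort": "skg desc"
  }
}'

# Against a live cluster this would be sent as (host/collection are placeholders):
#   curl 'http://localhost:8983/solr/techproducts/query?q=*:*' -d "json.facet=$request"

# Sanity-check that the sketched request is well-formed JSON.
echo "$request" | python3 -c 'import json, sys; json.load(sys.stdin); print("valid JSON")'
```

The design intent mirrors {{rerank}}: the cheap sort bounds the candidate set, so the expensive function (such as skg) only runs on limit (or limit+overrequest) buckets instead of all of them.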
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 391 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/391/ 2 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:38012_solr, 127.0.0.1:44874_solr, 127.0.0.1:45953_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"80000000-7fffffff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n2", "base_url":"http://127.0.0.1:46063/solr", "node_name":"127.0.0.1:46063_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"http://127.0.0.1:46063/solr", "node_name":"127.0.0.1:46063_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:38012_solr, 127.0.0.1:44874_solr, 127.0.0.1:45953_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"80000000-7fffffff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n2", "base_url":"http://127.0.0.1:46063/solr", "node_name":"127.0.0.1:46063_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"http://127.0.0.1:46063/solr", "node_name":"127.0.0.1:46063_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2",
"tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([4D1A02C4F54048B0:270C63149DB2027A]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at
[jira] [Created] (SOLR-13027) Harden LeaderTragicEventTest.
Mark Miller created SOLR-13027: -- Summary: Harden LeaderTragicEventTest. Key: SOLR-13027 URL: https://issues.apache.org/jira/browse/SOLR-13027 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: Tests Reporter: Mark Miller Assignee: Mark Miller -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-http2-Linux (64bit/jdk-10.0.1) - Build # 16 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-http2-Linux/16/ Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC All tests passed Build Log: [...truncated 2022 lines...] [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/core/test/temp/junit4-J0-20181130_142404_1795101520348327525575.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J0: EOF [...truncated 5 lines...] [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/core/test/temp/junit4-J2-20181130_142404_1795168491909191264849.syserr [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J2: EOF [...truncated 3 lines...] [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/core/test/temp/junit4-J1-20181130_142404_17915246524211610513313.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J1: EOF [...truncated 295 lines...] [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/test-framework/test/temp/junit4-J1-20181130_143146_3141505387654788677187.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. 
[junit4] <<< JVM J1: EOF [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/test-framework/test/temp/junit4-J0-20181130_143146_3148689613174761428278.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J0: EOF [...truncated 12 lines...] [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/test-framework/test/temp/junit4-J2-20181130_143146_31418259531309150790379.syserr [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J2: EOF [...truncated 1080 lines...] [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20181130_143302_1309321753471515852876.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J0: EOF [...truncated 3 lines...] [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20181130_143302_13214698514026411192129.syserr [junit4] >>> JVM J2 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J2: EOF [...truncated 3 lines...] 
[junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20181130_143302_1326535509510245099412.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J1: EOF [...truncated 261 lines...] [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20181130_143434_16118275509580651521808.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J1: EOF [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-http2-Linux/lucene/build/analysis/icu/test/temp/junit4-J0-20181130_143434_1614436739526320347328.syserr [junit4] >>> JVM J0
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 904 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/904/ Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 9 tests failed. FAILED: org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery Error Message: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:55351_solr, 127.0.0.1:55352_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"https://127.0.0.1:55352/solr", "node_name":"127.0.0.1:55352_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"https://127.0.0.1:55351/solr", "node_name":"127.0.0.1:55351_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:55351_solr, 127.0.0.1:55352_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/7)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"https://127.0.0.1:55352/solr", "node_name":"127.0.0.1:55352_solr", "state":"down", "type":"NRT", "force_set_state":"false"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"https://127.0.0.1:55351/solr", "node_name":"127.0.0.1:55351_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([85F52ED219B35581:D5A0B6D14092E39C]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:106) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) 
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at
[jira] [Assigned] (LUCENE-8581) Change LatLonShape encoding to use 4 BYTES Per Dimension
[ https://issues.apache.org/jira/browse/LUCENE-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize reassigned LUCENE-8581: -- Assignee: Ignacio Vera > Change LatLonShape encoding to use 4 BYTES Per Dimension > > > Key: LUCENE-8581 > URL: https://issues.apache.org/jira/browse/LUCENE-8581 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Nicholas Knize >Assignee: Ignacio Vera >Priority: Major > > {{LatLonShape}} tessellated triangles currently use a relatively naive > encoding with the first four dimensions as the bounding box of the triangle > and the last three dimensions as the vertices of the triangle. To encode the > {{x,y}} vertices in the last three dimensions requires {{bytesPerDim}} to be > set to 8, with 4 bytes for the x & y axis, respectively. We can reduce > {{bytesPerDim}} to 4 by encoding the index(es) of the vertices shared by the > bounding box along with the orientation of the triangle. This also opens the > door for supporting {{CONTAINS}} queries.
[jira] [Created] (LUCENE-8581) Change LatLonShape encoding to use 4 BYTES Per Dimension
Nicholas Knize created LUCENE-8581: -- Summary: Change LatLonShape encoding to use 4 BYTES Per Dimension Key: LUCENE-8581 URL: https://issues.apache.org/jira/browse/LUCENE-8581 Project: Lucene - Core Issue Type: New Feature Reporter: Nicholas Knize {{LatLonShape}} tessellated triangles currently use a relatively naive encoding with the first four dimensions as the bounding box of the triangle and the last three dimensions as the vertices of the triangle. To encode the {{x,y}} vertices in the last three dimensions requires {{bytesPerDim}} to be set to 8, with 4 bytes for the x & y axis, respectively. We can reduce {{bytesPerDim}} to 4 by encoding the index(es) of the vertices shared by the bounding box along with the orientation of the triangle. This also opens the door for supporting {{CONTAINS}} queries.
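The proposal in LUCENE-8581 is, at heart, a bit-packing trick: a triangle vertex that coincides with a bounding-box corner can be referenced by a 2-bit corner index (plus an orientation bit) instead of being stored as full coordinates. A minimal, self-contained sketch of that idea follows; the class and method names are invented for illustration and are not Lucene's actual LatLonShape code:

```java
// Hypothetical sketch of the shared-vertex encoding idea from LUCENE-8581.
// Each packed int carries: bits 0-1 = bbox-corner index of vertex A,
// bits 2-3 = bbox-corner index of vertex B, bit 4 = triangle orientation.
public class TrianglePacking {

    static int pack(int aCorner, int bCorner, boolean ccw) {
        return (aCorner & 0x3) | ((bCorner & 0x3) << 2) | ((ccw ? 1 : 0) << 4);
    }

    static int unpackACorner(int bits) { return bits & 0x3; }
    static int unpackBCorner(int bits) { return (bits >> 2) & 0x3; }
    static boolean unpackCcw(int bits) { return ((bits >> 4) & 1) != 0; }

    public static void main(String[] args) {
        int bits = pack(2, 1, true);
        // Round-trips: the corner indices and orientation survive packing.
        System.out.println(unpackACorner(bits) + " " + unpackBCorner(bits) + " " + unpackCcw(bits));
    }
}
```

A few bits of metadata plus the bounding box are enough to recover the shared vertices, which is what would let bytesPerDim drop from 8 to 4.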
[jira] [Commented] (SOLR-13026) Admin UI - dataimport status has green bar even when import fails
[ https://issues.apache.org/jira/browse/SOLR-13026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704801#comment-16704801 ] Shawn Heisey commented on SOLR-13026: - Some issues related to problems with computer-based parsing of the DIH status: SOLR-2728 SOLR-2729 SOLR-3319 SOLR-3689 SOLR-4241 > Admin UI - dataimport status has green bar even when import fails > - > > Key: SOLR-13026 > URL: https://issues.apache.org/jira/browse/SOLR-13026 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: Admin UI >Affects Versions: 7.5.0 > Environment: For screenshot, Solr 7.5.0 on Windows. > The production setup is a patched 7.1.0 version on Linux. >Reporter: Shawn Heisey >Priority: Major > Attachments: DIH-failed-UI-green.png > > > In the admin UI, the dataimport status screen is showing a green status bar > even when an import fails to run. The error that occurred in attached > screenshot was a connection problem -- in this case the database didn't > exist. I have seen this in production when a URL for SQL Server is > incorrect. The raw status output clearly shows "Full import failed". > I believe that the status should show in a different color, probably red. > There is an icon of a green check mark in the status also. For those who are > color blind, that should change to an icon with an X in it for a visual > indicator not related to color. > I am painfully aware of how terrible the DIH status output is. It is great > for human readability, but extremely difficult for a computer to understand.
[jira] [Created] (SOLR-13026) Admin UI - dataimport status has green bar even when import fails
Shawn Heisey created SOLR-13026: --- Summary: Admin UI - dataimport status has green bar even when import fails Key: SOLR-13026 URL: https://issues.apache.org/jira/browse/SOLR-13026 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Admin UI Affects Versions: 7.5.0 Environment: For screenshot, Solr 7.5.0 on Windows. The production setup is a patched 7.1.0 version on Linux. Reporter: Shawn Heisey Attachments: DIH-failed-UI-green.png In the admin UI, the dataimport status screen is showing a green status bar even when an import fails to run. The error that occurred in attached screenshot was a connection problem -- in this case the database didn't exist. I have seen this in production when a URL for SQL Server is incorrect. The raw status output clearly shows "Full import failed". I believe that the status should show in a different color, probably red. There is an icon of a green check mark in the status also. For those who are color blind, that should change to an icon with an X in it for a visual indicator not related to color. I am painfully aware of how terrible the DIH status output is. It is great for human readability, but extremely difficult for a computer to understand.
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23279 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23279/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 109 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.schema.TestICUCollationFieldOptions Error Message: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationFieldOptions: 1) Thread[id=21, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldOptions] at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.4/java.lang.Thread.run(Thread.java:844)2) Thread[id=20, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldOptions] at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at 
java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.4/java.lang.Thread.run(Thread.java:844) Stack Trace: com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.schema.TestICUCollationFieldOptions: 1) Thread[id=21, name=MetricsHistoryHandler-8-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldOptions] at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.4/java.lang.Thread.run(Thread.java:844) 2) Thread[id=20, name=SolrRrdBackendFactory-7-thread-1, state=TIMED_WAITING, group=TGRP-TestICUCollationFieldOptions] at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native Method) at java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234) at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131) at java.base@9.0.4/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.base@9.0.4/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) at java.base@9.0.4/java.lang.Thread.run(Thread.java:844) at
[jira] [Commented] (LUCENE-7875) Rename or move most of MultiFields
[ https://issues.apache.org/jira/browse/LUCENE-7875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704791#comment-16704791 ] ASF subversion and git services commented on LUCENE-7875: - Commit 04916239337f4e1435e70ba78bb174c019f9f925 in lucene-solr's branch refs/heads/master from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0491623 ] LUCENE-7875: CHANGES.txt: moved to API changes > Rename or move most of MultiFields > -- > > Key: LUCENE-7875 > URL: https://issues.apache.org/jira/browse/LUCENE-7875 > Project: Lucene - Core > Issue Type: Improvement >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-7875.patch, LUCENE-7875.patch, LUCENE-7875.patch, > LUCENE-7875.patch > > > MultiFields.java has a bunch of static methods that provide a single > LeafReader's view over a bunch of things. > This goes to MultiBits (which will become public): > * {{Bits getLiveDocs(IndexReader reader)}} > These go to FieldInfos: > * {{FieldInfos getMergedFieldInfos(IndexReader reader)}} > * {{Collection getIndexedFields(IndexReader reader)}} > These go to MultiTerms: > * Terms getTerms(IndexReader r, String field) > * {{PostingsEnum getTermPostingsEnum(IndexReader r, String field, BytesRef > term)}} which is renamed a little
[JENKINS] Lucene-Solr-repro - Build # 2051 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/2051/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.6/14/consoleText [repro] Revision: aef5a75e1377af41da26dd314b1c0496672d268c [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=IndexSizeTriggerTest -Dtests.method=testSplitIntegration -Dtests.seed=6AD9937FED14CE69 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt -Dtests.locale=ru -Dtests.timezone=Africa/Casablanca -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.method=test -Dtests.seed=6AD9937FED14CE69 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Europe/Zaporozhye -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=HdfsRestartWhileUpdatingTest -Dtests.seed=6AD9937FED14CE69 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Europe/Zaporozhye -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: cf016f8987e804bcd858a2a414eacdf1b3c54cf5 [repro] git fetch [repro] git checkout aef5a75e1377af41da26dd314b1c0496672d268c [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] 
[repro] Test suites by module: [repro] solr/core [repro] HdfsRestartWhileUpdatingTest [repro] IndexSizeTriggerTest [repro] ant compile-test [...truncated 3580 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.HdfsRestartWhileUpdatingTest|*.IndexSizeTriggerTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt -Dtests.seed=6AD9937FED14CE69 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.6/test-data/enwiki.random.lines.txt -Dtests.locale=vi-VN -Dtests.timezone=Europe/Zaporozhye -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 19482 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest [repro] 2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [repro] git checkout cf016f8987e804bcd858a2a414eacdf1b3c54cf5 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...]
[JENKINS] Lucene-Solr-Tests-7.x - Build # 1045 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1045/ 2 tests failed. FAILED: org.apache.solr.prometheus.exporter.SolrExporterTest.testExecute Error Message: Could not load collection from ZK: collection1 Stack Trace: org.apache.solr.common.SolrException: Could not load collection from ZK: collection1 at __randomizedtesting.SeedInfo.seed([F88A16C1E5A829DB:C93A04CCF75FD65C]:0) at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1321) at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:737) at org.apache.solr.common.cloud.ClusterState$CollectionRef.get(ClusterState.java:386) at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1209) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:849) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.prometheus.exporter.SolrExporterTest.testExecute(SolrExporterTest.java:71) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at
[jira] [Commented] (SOLR-13014) URI Too Long with large streaming expressions in SolrJ
[ https://issues.apache.org/jira/browse/SOLR-13014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16704753#comment-16704753 ] Jan Høydahl commented on SOLR-13014: We're running fine in production with the current patch, so will aim at committing these GET -> POST changes next week. > URI Too Long with large streaming expressions in SolrJ > -- > > Key: SOLR-13014 > URL: https://issues.apache.org/jira/browse/SOLR-13014 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) > Components: SolrJ, streaming expressions >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Fix For: master (8.0), 7.7 > > Time Spent: 10m > Remaining Estimate: 0h > > For very large expressions (e.g. with a complex search string) we'll hit the > max HTTP GET limit since SolrJ does not enforce POST for all expressions. > This goes at least for {{FacetStream}}, {{StatsStream}} and > {{TimeSeriesStream}}, and I'll link a Pull Request fixing these three. > Here is an example of a stack trace when using TimeSeriesStream with a very > large expression: [https://gist.github.com/ea626cf1ec579daaf253aeb805d1532c] > The fix is simply to use {{new QueryRequest(parameters, > SolrRequest.METHOD.POST);}} to explicitly force POST. > See also solr-user thread > [http://lucene.472066.n3.nabble.com/Streaming-Expressions-GET-vs-POST-td4415044.html]
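The constraint behind SOLR-13014 is the servlet container's request-line limit (commonly on the order of 8 KB by default), which is why long streaming expressions must travel in a POST body. SolrJ's actual fix is the {{new QueryRequest(parameters, SolrRequest.METHOD.POST)}} call quoted above; the stdlib-only sketch below just illustrates the rule of thumb, with made-up names and an assumed 8192-byte threshold (not SolrJ's API):

```java
// Illustrative only: pick an HTTP method based on how long the full
// request URI would be. The class name and threshold are assumptions.
public class MethodChooser {
    static final int MAX_GET_URI = 8192; // common container default

    static String chooseMethod(String baseUrl, String encodedParams) {
        // "?" joins the base URL and the URL-encoded query string.
        int uriLength = baseUrl.length() + 1 + encodedParams.length();
        return uriLength > MAX_GET_URI ? "POST" : "GET";
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr/collection1/stream";
        StringBuilder bigExpr = new StringBuilder("expr=");
        for (int i = 0; i < 3000; i++) bigExpr.append("search(collection1)");
        System.out.println(chooseMethod(base, "q=*:*"));            // small query fits in GET
        System.out.println(chooseMethod(base, bigExpr.toString())); // large expression needs POST
    }
}
```

Forcing POST unconditionally, as the patch does for FacetStream, StatsStream and TimeSeriesStream, avoids even having to guess the container's limit.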