[GitHub] lucene-solr issue #345: LUCENE-8229: Add Weight.matches() method
Github user jpountz commented on the issue:
https://github.com/apache/lucene-solr/pull/345

> instead of directly returning the MatchesIterator

I asked for this change so that you always get a fresh new iterator, even if you already iterated positions on the same field. I agree that making the call fully lazy would be ideal, but I'm not too worried about it either, given that this API is expected to be slow, e.g. you can't use it to run queries.

---
- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 35 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/35/

6 tests failed.

FAILED: org.apache.solr.cloud.AliasIntegrationTest.testModifyPropertiesCAR

Error Message:
Error from server at https://127.0.0.1:41092/solr: Can't modify non-existent alias testModifyPropertiesCAR

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:41092/solr: Can't modify non-existent alias testModifyPropertiesCAR
	at __randomizedtesting.SeedInfo.seed([72759F52BDB780EA:497041864F46F8B1]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.cloud.AliasIntegrationTest.testModifyPropertiesCAR(AliasIntegrationTest.java:266)
	[... reflection and randomizedtesting rule-chain frames elided; trace truncated in the original message ...]
[GitHub] lucene-solr pull request #345: LUCENE-8229: Add Weight.matches() method
Github user romseygeek commented on a diff in the pull request:
https://github.com/apache/lucene-solr/pull/345#discussion_r180669841

--- Diff: lucene/core/src/java/org/apache/lucene/search/DisjunctionMatchesIterator.java ---
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. (Standard Apache License 2.0 header elided.)
+ */
+
+package org.apache.lucene.search;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Objects;
+
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.PostingsEnum;
+import org.apache.lucene.index.Term;
+import org.apache.lucene.index.Terms;
+import org.apache.lucene.index.TermsEnum;
+import org.apache.lucene.util.BytesRef;
+import org.apache.lucene.util.BytesRefIterator;
+import org.apache.lucene.util.PriorityQueue;
+
+/**
+ * A {@link MatchesIterator} that combines matches from a set of sub-iterators
+ *
+ * Matches are sorted by their start positions, and then by their end positions, so that
+ * prefixes sort first. Matches may overlap, or be duplicated if they appear in more
+ * than one of the sub-iterators.
+ */
+final class DisjunctionMatchesIterator implements MatchesIterator {
+
+  /**
+   * Create a {@link DisjunctionMatchesIterator} over a list of terms
+   *
+   * Only terms that have at least one match in the given document will be included
+   */
+  static MatchesIterator fromTerms(LeafReaderContext context, int doc, String field, List<Term> terms) throws IOException {
+    for (Term term : terms) {
+      if (Objects.equals(field, term.field()) == false) {
--- End diff --

Paranoia mainly :) But this is probably better checked in an Objects.requireNonNull - will update.
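The suggestion above — failing fast with an explicit precondition rather than a null-tolerant `Objects.equals` comparison — can be sketched in plain Java. This is a minimal sketch, not the actual Lucene code; the method name and message text are illustrative:

```java
import java.util.List;
import java.util.Objects;

// Illustrative sketch: validate the field argument eagerly with
// Objects.requireNonNull, and reject terms on a different field with a
// descriptive error instead of silently relying on a null-safe comparison.
public class FieldCheck {

    static void checkFields(String field, List<String> termFields) {
        // Fail fast on a null field rather than letting Objects.equals hide it.
        Objects.requireNonNull(field, "field must not be null");
        for (String termField : termFields) {
            if (field.equals(termField) == false) {
                throw new IllegalArgumentException(
                    "Tried to add a term on field '" + termField
                        + "' to an iterator over field '" + field + "'");
            }
        }
    }

    public static void main(String[] args) {
        checkFields("body", List.of("body", "body"));   // passes silently
        try {
            checkFields("body", List.of("title"));      // rejected
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The `requireNonNull` call also documents the contract at the top of the method, so the paranoia is visible to readers rather than buried in a comparison.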
[GitHub] lucene-solr issue #345: LUCENE-8229: Add Weight.matches() method
Github user romseygeek commented on the issue:
https://github.com/apache/lucene-solr/pull/345

bq. I agree that making the call fully lazy would be ideal

This can't be done, unfortunately, as we need to know up-front if there's a match or not. Hence the slightly odd indirection/eager-call two-step. I'll add a comment in the code to make it clearer.
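The eager-check/fresh-iterator two-step can be illustrated with a small self-contained sketch. This is plain Java modelling the pattern, not the actual Lucene code; the names are made up:

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

// Sketch of the "eager check, then fresh iterators" pattern discussed above:
// the match test happens up-front (null means no match in this document),
// while the returned supplier hands out a brand-new position iterator on
// every call, so one consumer's iteration never exhausts another's.
public class TwoStep {

    static Supplier<Iterator<Integer>> matches(List<Integer> positions) {
        if (positions.isEmpty()) {
            return null;                // eager part: no match, say so up-front
        }
        return positions::iterator;     // indirection: a fresh iterator per call
    }

    public static void main(String[] args) {
        Supplier<Iterator<Integer>> m = matches(List.of(1, 4, 9));
        Iterator<Integer> first = m.get();
        first.next();
        first.next();
        // A second call still starts from the beginning.
        System.out.println(m.get().next());              // prints 1
        System.out.println(matches(List.of()) == null);  // prints true
    }
}
```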
[jira] [Updated] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied
[ https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul updated SOLR-12161:
------------------------------
    Attachment: SOLR-12161.patch

> CloudSolrClient with basic auth enabled will update even if no credentials supplied
> -----------------------------------------------------------------------------------
>
>                 Key: SOLR-12161
>                 URL: https://issues.apache.org/jira/browse/SOLR-12161
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: Authentication
>    Affects Versions: 7.3
>            Reporter: Erick Erickson
>            Assignee: Noble Paul
>            Priority: Major
>         Attachments: AuthUpdateTest.java, SOLR-12161.patch, tests.patch
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a cluster with basic authentication set up, I can _still_ add documents to a collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting up the update request to go to each individual shard?

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied
[ https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433565#comment-16433565 ]

Noble Paul commented on SOLR-12161:
-----------------------------------

The fix is as follows: the thread pool has an extra flag to signal whether it is used inside Solr or outside. If it is outside Solr (in SolrJ), then don't set the PKI header.

> CloudSolrClient with basic auth enabled will update even if no credentials supplied
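The shape of that fix can be sketched in plain Java. This is purely illustrative — the method and header names below are made up and are not the actual Solr implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the fix described above: the internal PKI header is
// attached only when the request originates inside the Solr server. A SolrJ
// client with no credentials therefore sends no auth information at all, and
// the server rejects its updates instead of trusting an inter-node header.
public class PkiHeaderSketch {

    static Map<String, String> buildHeaders(boolean insideSolr, String basicAuth) {
        Map<String, String> headers = new HashMap<>();
        if (basicAuth != null) {
            headers.put("Authorization", "Basic " + basicAuth);
        }
        if (insideSolr) {
            // Only trusted server-to-server traffic carries the PKI header.
            headers.put("SolrAuth", "<pki-signature>");
        }
        return headers;
    }

    public static void main(String[] args) {
        // SolrJ (outside Solr), no credentials: no headers, so auth fails as expected.
        System.out.println(buildHeaders(false, null));  // prints {}
    }
}
```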
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1795 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1795/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
	at __randomizedtesting.SeedInfo.seed([F64294E2F6294EBA:3FF7D64CFF4E884F]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue(TestTriggerIntegration.java:654)
	[... reflection and randomizedtesting rule-chain frames elided ...]
	at java.lang.Thread.run(Thread.java:748)

FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
	[... same failure; trace truncated in the original message ...]
[jira] [Commented] (SOLR-12181) Add trigger based on document count
[ https://issues.apache.org/jira/browse/SOLR-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433617#comment-16433617 ]

Lucene/Solr QA commented on SOLR-12181:
---------------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
Prechecks:
| +1 | test4tests               |  0m  0s | The patch appears to include 10 new or modified test files. |
master Compile Tests:
| +1 | compile                  |  1m 18s | master passed |
Patch Compile Tests:
| +1 | compile                  |  2m 52s | the patch passed |
| +1 | javac                    |  2m 52s | the patch passed |
| +1 | Release audit (RAT)      |  2m 41s | the patch passed |
| +1 | Check forbidden APIs     |  2m 34s | the patch passed |
| +1 | Validate source patterns |  2m 34s | the patch passed |
Other Tests:
| -1 | unit                     | 87m  2s | core in the patch failed. |
| +1 | unit                     | 10m 22s | solrj in the patch passed. |
| +1 | unit                     |  1m  9s | test-framework in the patch passed. |
|    | Total                    | 106m 17s | |

Failed junit tests:
- solr.cloud.ZkControllerTest
- solr.cloud.autoscaling.sim.TestTriggerIntegration
- solr.cloud.MultiThreadedOCPTest

|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12181 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918416/SOLR-12181.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 5b250b4 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_152 |
| unit | https://builds.apache.org/job/PreCommit-SOLR-Build/49/artifact/out/patch-unit-solr_core.txt |
| Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/49/testReport/ |
| modules | C: solr/core solr/solrj solr/test-framework U: solr |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/49/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |

This message was automatically generated.

> Add trigger based on document count
> -----------------------------------
>
>                 Key: SOLR-12181
>                 URL: https://issues.apache.org/jira/browse/SOLR-12181
>             Project: Solr
>          Issue Type: Sub-task
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: AutoScaling
>    Affects Versions: master (8.0)
>            Reporter: Andrzej Bialecki
>            Assignee: Andrzej Bialecki
>            Priority: Major
>         Attachments: SOLR-12181.patch
>
> This may turn out to be as simple as using a {{MetricTrigger}} but it's likely this will require some specialization, and we may want to add this type of trigger anyway for convenience.
> The two control actions associated with this trigger will be SPLITSHARD and (yet nonexistent) MERGESHARD.
[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document
[ https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433640#comment-16433640 ]

ASF subversion and git services commented on LUCENE-8229:
---------------------------------------------------------

Commit 502fd4bf12b8860b8eea504a96ad1b49dd52938c in lucene-solr's branch refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=502fd4b ]

LUCENE-8229: Add Weight.matches() to iterate over match positions

> Add a method to Weight to retrieve matches for a single document
> ----------------------------------------------------------------
>
>                 Key: LUCENE-8229
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8229
>             Project: Lucene - Core
>          Issue Type: New Feature
>            Reporter: Alan Woodward
>            Assignee: Alan Woodward
>            Priority: Major
>         Attachments: LUCENE-8229.patch
>
>          Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> The ability to find out exactly what a query has matched on is a fairly frequent feature request, and would also make highlighters much easier to implement. There have been a few attempts at doing this, including adding positions to Scorers, or re-writing queries as Spans, but these all either compromise general performance or involve up-front knowledge of all queries.
> Instead, I propose adding a method to Weight that exposes an iterator over matches in a particular document and field. It should be used in a similar manner to explain() - ie, just for TopDocs, not as part of the scoring loop, which relieves some of the pressure on performance.
[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document
[ https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433641#comment-16433641 ]

ASF subversion and git services commented on LUCENE-8229:
---------------------------------------------------------

Commit 040a9601b1b346391ad37e5a0a4f2f598e72d26e in lucene-solr's branch refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=040a960 ]

LUCENE-8229: Add Weight.matches() to iterate over match positions

> Add a method to Weight to retrieve matches for a single document
[jira] [Resolved] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document
[ https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan Woodward resolved LUCENE-8229.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 7.4

Thanks everyone. I'm really excited about this feature!

> Add a method to Weight to retrieve matches for a single document
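A minimal usage sketch of the feature just resolved, assuming Lucene 7.4 is on the classpath (the field name, document text, and helper method are made up for illustration; `createWeight` uses the 7.x `needsScores` signature):

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.core.WhitespaceAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Matches;
import org.apache.lucene.search.MatchesIterator;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.Weight;
import org.apache.lucene.store.RAMDirectory;

public class MatchesDemo {

    // Index one document and return the start positions at which the given
    // term matches it, via the new Weight.matches() API.
    static List<Integer> matchPositions(String text, String term) throws Exception {
        RAMDirectory dir = new RAMDirectory();
        try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new WhitespaceAnalyzer()))) {
            Document doc = new Document();
            doc.add(new TextField("body", text, Store.NO));
            w.addDocument(doc);
        }
        List<Integer> positions = new ArrayList<>();
        try (DirectoryReader reader = DirectoryReader.open(dir)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query q = new TermQuery(new Term("body", term));
            // Like explain(), matches() is per-document and not meant for the scoring loop.
            Weight weight = searcher.createWeight(searcher.rewrite(q), false, 1f);
            Matches matches = weight.matches(reader.leaves().get(0), 0);
            if (matches != null) {                       // null means no match at all
                MatchesIterator it = matches.getMatches("body");
                if (it != null) {                        // null means no match in this field
                    while (it.next()) {
                        positions.add(it.startPosition());
                    }
                }
            }
        }
        return positions;
    }

    public static void main(String[] args) throws Exception {
        // "fox" is the fourth whitespace token, i.e. position 3.
        System.out.println(matchPositions("the quick brown fox", "fox"));
    }
}
```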
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 553 - Failure!
Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 79, pos 4):
)"}
   ^
java.lang.OutOfMemoryError: Java heap space
[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433777#comment-16433777 ]

Alan Woodward commented on LUCENE-2878:
---------------------------------------

This ticket has changed direction wildly a few times, but always had two basic requirements:
* allow positional queries with a nicer API and a more concrete theoretic footing than Spans
* make it possible to build accurate highlighters

We now have IntervalQuery (LUCENE-8196) that does the first, and Matches (LUCENE-8229) that does the second. There's still work to be done on both (I don't think we're quite ready to nuke Spans, and phrase/span/interval queries don't report their matches yet), but I think we can close this issue out now?

> Allow Scorer to expose positions and payloads aka. nuke spans
> -------------------------------------------------------------
>
>                 Key: LUCENE-2878
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2878
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: core/search
>    Affects Versions: Positions Branch
>            Reporter: Simon Willnauer
>            Assignee: Robert Muir
>            Priority: Major
>              Labels: gsoc2014
>             Fix For: Positions Branch
>
>         Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch,
> LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878_trunk.patch,
> LUCENE-2878_trunk.patch, PosHighlighter.patch, PosHighlighter.patch
>
> Currently we have two somewhat separate types of queries, the ones which can make use of positions (mainly spans) and payloads (spans). Yet Span*Query doesn't really do scoring comparable to what other queries do, and at the end of the day they duplicate a lot of code all over lucene. Span*Queries are also limited to other Span*Query instances, such that you can not use a TermQuery or a BooleanQuery with SpanNear or anything like that.
> Besides the Span*Query limitation, other queries lack a quite interesting feature since they can not score based on term proximity, because scores don't expose any positional information. All those problems bugged me for a while now, so I started working on that using the bulkpostings API. I would have done that first cut on trunk, but TermScorer is working on BlockReaders that do not expose positions, while the one in this branch does. I started adding a new Positions class which users can pull from a scorer; to prevent unnecessary positions enums I added ScorerContext#needsPositions and eventually Scorer#needsPayloads to create the corresponding enum on demand. Yet, currently only TermQuery / TermScorer implements this API and others simply return null instead.
> To show that the API really works, and our BulkPostings work fine too with positions, I cut over TermSpanQuery to use a TermScorer under the hood and nuked TermSpans entirely. A nice side effect of this was that the Position BulkReading implementation got some exercise, which now :) works all with positions, while Payloads for bulkreading are kind of experimental in the patch and only work with the Standard codec.
> So all spans now work on top of TermScorer ( I truly hate spans since today ) including the ones that need Payloads (StandardCodec ONLY)!! I didn't bother to implement the other codecs yet since I want to get feedback on the API and on this first cut before I go on with it. I will upload the corresponding patch in a minute.
> I also had to cut over SpanQuery.getSpans(IR) to SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk first, but after that pain today I need a break first :).
> The patch passes all core tests (org.apache.lucene.search.highlight.HighlighterTest still fails, but I didn't look into the MemoryIndex BulkPostings API yet)
[jira] [Commented] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433796#comment-16433796 ] Adrien Grand commented on LUCENE-2878: -- +1 > Allow Scorer to expose positions and payloads aka. nuke spans > -- > > Key: LUCENE-2878 > URL: https://issues.apache.org/jira/browse/LUCENE-2878 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Affects Versions: Positions Branch >Reporter: Simon Willnauer >Assignee: Robert Muir >Priority: Major > Labels: gsoc2014 > Fix For: Positions Branch > > Attachments: LUCENE-2878-OR.patch, LUCENE-2878-vs-trunk.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878.patch, > LUCENE-2878.patch, LUCENE-2878.patch, LUCENE-2878_trunk.patch, > LUCENE-2878_trunk.patch, PosHighlighter.patch, PosHighlighter.patch > > > Currently we have two somewhat separate types of queries, the one which can > make use of positions (mainly spans) and payloads (spans). Yet Span*Query > doesn't really do scoring comparable to what other queries do and at the end > of the day they are duplicating lot of code all over lucene. Span*Queries are > also limited to other Span*Query instances such that you can not use a > TermQuery or a BooleanQuery with SpanNear or anthing like that. > Beside of the Span*Query limitation other queries lacking a quiet interesting > feature since they can not score based on term proximity since scores doesn't > expose any positional information. 
All those problems bugged me for a while > now, so I started working on this using the bulkpostings API. I would have done > that first cut on trunk, but TermScorer there works on a BlockReader that does not > expose positions, while the one in this branch does. I started adding a new > Positions class which users can pull from a scorer; to prevent unnecessary > positions enums I added ScorerContext#needsPositions and eventually > Scorer#needsPayloads to create the corresponding enum on demand. Yet, > currently only TermQuery / TermScorer implements this API and others simply > return null instead. > To show that the API really works, and that our BulkPostings work fine with > positions too, I cut over TermSpanQuery to use a TermScorer under the hood and > nuked TermSpans entirely. A nice side effect of this was that the Position > BulkReading implementation got some exercise, which now all works with > positions :) while payloads for bulk reading are kind of experimental in the > patch and only work with the Standard codec. > So all spans now work on top of TermScorer (I truly hate spans since today), > including the ones that need payloads (StandardCodec ONLY)!! I didn't bother > to implement the other codecs yet since I want to get feedback on the API and > on this first cut before I go on with it. I will upload the corresponding > patch in a minute. > I also had to cut over SpanQuery.getSpans(IR) to > SpanQuery.getSpans(AtomicReaderContext), which I should probably do on trunk > first, but after that pain today I need a break first :). > The patch passes all core tests > (org.apache.lucene.search.highlight.HighlighterTest still fails, but I didn't > look into the MemoryIndex BulkPostings API yet) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
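The design described above — a consumer declares up front that it needs positions, and the scorer only then materializes a positions enum, returning a fresh one per call (as also discussed on the Weight.matches() pull request) — can be sketched with a toy model. Only ScorerContext#needsPositions comes from the issue; every other name here is hypothetical, and none of this is actual Lucene code:

```java
// Toy model of the LUCENE-2878 proposal: positions are pulled from a scorer on
// demand, and the enum is only created when the consumer declared needsPositions.
// All classes here are illustrative stand-ins, not Lucene's API.
import java.util.Iterator;
import java.util.List;

interface PositionsEnum {
    /** Returns the next position of the current doc, or -1 when exhausted. */
    int nextPosition();
}

final class ScorerContext {
    final boolean needsPositions;
    ScorerContext(boolean needsPositions) { this.needsPositions = needsPositions; }
}

final class ToyTermScorer {
    private final List<Integer> positions; // term positions in the current doc
    private final ScorerContext context;

    ToyTermScorer(ScorerContext context, List<Integer> positions) {
        this.context = context;
        this.positions = positions;
    }

    /** Pull a positions enum for the current doc; a first cut would return
     *  null from every scorer except TermScorer, as the issue describes. */
    PositionsEnum positions() {
        if (!context.needsPositions) {
            return null; // enum is only materialized when requested up front
        }
        // A fresh iterator per call, even if positions were already consumed.
        final Iterator<Integer> it = positions.iterator();
        return () -> it.hasNext() ? it.next() : -1;
    }
}

public class PositionsSketch {
    public static int countPositions(ToyTermScorer scorer) {
        PositionsEnum pe = scorer.positions();
        if (pe == null) return 0;
        int n = 0;
        while (pe.nextPosition() != -1) n++;
        return n;
    }

    public static void main(String[] args) {
        ToyTermScorer scorer =
            new ToyTermScorer(new ScorerContext(true), List.of(3, 17, 42));
        System.out.println(countPositions(scorer)); // prints 3
    }
}
```

The point of the `needsPositions` gate is the same as in the real API discussion: positions enums are expensive to create, so a scorer should not pay for them unless the consumer asked.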
[jira] [Resolved] (LUCENE-2878) Allow Scorer to expose positions and payloads aka. nuke spans
[ https://issues.apache.org/jira/browse/LUCENE-2878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer resolved LUCENE-2878. - Resolution: Implemented Fix Version/s: (was: Positions Branch) master (8.0), 7.4. A monumental effort, thanks to everybody involved! > Allow Scorer to expose positions and payloads aka. nuke spans > -- > > Key: LUCENE-2878 > URL: https://issues.apache.org/jira/browse/LUCENE-2878 > Project: Lucene - Core > Issue Type: Improvement > Components: core/search >Reporter: Simon Willnauer >Assignee: Robert Muir >Priority: Major > Labels: gsoc2014 > Fix For: 7.4, master (8.0)
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433823#comment-16433823 ] Karl Wright commented on LUCENE-8245: - I believe I've come up with a better way to do this. It involves two basic changes: (1) using only actual intersections with the envelope planes, so no "zone" is needed, and the envelope planes go back to being exactly MINIMUM_RESOLUTION from the main travel planes; and (2) evaluating the envelope plane intersections, one at a time, for whether they constitute a "crossing" from the main travel plane "zone" to outside of that "zone". The second requires the generation of a pair of adjoining points, also on the edge plane, that are about MINIMUM_RESOLUTION away from the intersection. We look at both points, and if one is inside the travel plane "zone" and the other is not, then we call it a crossing. I'm coding this up now, but it's already a very busy work day and this probably won't be completed and debugged until tonight at the earliest. > GeoComplexPolygon fails when intersection of travel plane with edge is near > polygon point > - > > Key: LUCENE-8245 > URL: https://issues.apache.org/jira/browse/LUCENE-8245 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Ignacio Vera >Assignee: Karl Wright >Priority: Major > Fix For: 6.7, 7.4, master (8.0) > > Attachments: LUCENE-8245-case2.patch, LUCENE-8245.jpg, > LUCENE-8245.patch, LUCENE-8245_Polygon.patch, LUCENE-8245_Random.patch, > LUCENE-8245_case3.patch > > > When a travel plane crosses an edge close to an edge point, it is possible > that the above and below planes cross different edges. In the current > implementation one of the crossings is missed because we only check edges that > are crossed by the main plane, and the {{within}} result is wrong. > One possible fix is to always check the intersection of planes and edges > regardless of whether they are crossed by the main plane. 
That fixed the above issue but > shows other issues, like travel planes crossing two edges when it should be > only one, due to the fuzziness at edge intersections. > Not sure of a fix, so I add the test showing the issue.
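The crossing test Karl describes can be sketched in plain vector math: take the intersection point of an envelope plane with an edge, generate two adjoining points on the edge about MINIMUM_RESOLUTION away on either side, and call it a crossing iff exactly one of them lies inside the travel plane "zone". This is only an illustration of the idea under those assumptions; the real logic lives in GeoComplexPolygon (modules/spatial3d), and only the constant name MINIMUM_RESOLUTION is taken from the discussion:

```java
// Sketch of the proposed crossing test: a crossing means the edge enters the
// travel plane's envelope "zone" on one side of the intersection and leaves it
// on the other. Planes are represented as a unit normal n plus offset d, so the
// signed distance of point p is n.p + d. Illustrative code, not Lucene's.
public class CrossingSketch {
    static final double MINIMUM_RESOLUTION = 1e-12;

    /** Signed distance of point p from the plane with unit normal n and offset d. */
    static double planeDistance(double[] n, double d, double[] p) {
        return n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d;
    }

    /** True if p is within MINIMUM_RESOLUTION of the travel plane, i.e. in its "zone". */
    static boolean inZone(double[] n, double d, double[] p) {
        return Math.abs(planeDistance(n, d, p)) <= MINIMUM_RESOLUTION;
    }

    /**
     * Decide whether the edge through 'intersection' with unit direction 'dir'
     * actually crosses the zone boundary at that point: generate the pair of
     * adjoining points and require that exactly one of them is inside the zone.
     */
    static boolean isCrossing(double[] n, double d, double[] intersection, double[] dir) {
        double eps = MINIMUM_RESOLUTION;
        double[] before = {intersection[0] - eps * dir[0],
                           intersection[1] - eps * dir[1],
                           intersection[2] - eps * dir[2]};
        double[] after  = {intersection[0] + eps * dir[0],
                           intersection[1] + eps * dir[1],
                           intersection[2] + eps * dir[2]};
        return inZone(n, d, before) != inZone(n, d, after);
    }
}
```

An edge running parallel to the travel plane yields two adjoining points on the same side, so it is correctly rejected as a non-crossing even though it touches the envelope plane.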
[JENKINS] Lucene-Solr-Tests-7.x - Build # 559 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/559/ 4 tests failed. FAILED: org.apache.solr.handler.dataimport.TestZKPropertiesWriter.testZKPropertiesWriter Error Message: No registered leader was found after waiting for 3ms , collection: collection1 slice: shard1 saw state=DocCollection(collection1//clusterstate.json/2)={ "shards":{"shard1":{ "parent":null, "range":null, "state":"active", "replicas":{"10.3.97.65:8983__collection1":{ "core":"collection1", "base_url":"http://10.3.97.65:8983";, "node_name":"10.3.97.65:8983_", "state":"down", "type":"NRT", "router":{"name":"implicit"}} with live_nodes=[10.3.97.65:8983_] Stack Trace: org.apache.solr.common.SolrException: No registered leader was found after waiting for 3ms , collection: collection1 slice: shard1 saw state=DocCollection(collection1//clusterstate.json/2)={ "shards":{"shard1":{ "parent":null, "range":null, "state":"active", "replicas":{"10.3.97.65:8983__collection1":{ "core":"collection1", "base_url":"http://10.3.97.65:8983";, "node_name":"10.3.97.65:8983_", "state":"down", "type":"NRT", "router":{"name":"implicit"}} with live_nodes=[10.3.97.65:8983_] at __randomizedtesting.SeedInfo.seed([724540BFE81C167F:3AC93CC8C44B368B]:0) at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:808) at org.apache.solr.common.cloud.ZkStateReader.getLeaderUrl(ZkStateReader.java:773) at org.apache.solr.handler.dataimport.TestZKPropertiesWriter.testZKPropertiesWriter(TestZKPropertiesWriter.java:101) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:3
[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+5) - Build # 7264 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7264/ Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseSerialGC 3 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue Error Message: action wasn't interrupted Stack Trace: java.lang.AssertionError: action wasn't interrupted at __randomizedtesting.SeedInfo.seed([8BF79FDBC8F9AC87:4242DD75C19E6A72]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue(TestTriggerIntegration.java:654) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:841) FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeAddedTriggerRestoreState Error Message: The trigger did not fire at all Stack Trace: java.lang.AssertionError: The trigger did not fire at all at __randomizedtesting.SeedInfo.seed([8BF79FDBC8F9AC87:3CA1
[jira] [Commented] (LUCENE-8246) Allow to customize the number of deletes a merge claims
[ https://issues.apache.org/jira/browse/LUCENE-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433911#comment-16433911 ] ASF subversion and git services commented on LUCENE-8246: - Commit e99a19755c75bc71aa53c737b7d4b7b8b15cdbd5 in lucene-solr's branch refs/heads/master from [~simonw] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e99a197 ] LUCENE-8246: Allow to customize the number of deletes a merge claims With the introduction of soft deletes, not every merge claims all documents that are marked as deleted in the segment readers. MergePolicies still need to do accurate accounting in order to select segments for merging and need to decide if segments are merged. This change allows the merge policy to customize the number of deletes a merge of a segment claims. > Allow to customize the number of deletes a merge claims > --- > > Key: LUCENE-8246 > URL: https://issues.apache.org/jira/browse/LUCENE-8246 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8246.patch > > > With the introduction of soft deletes, not every merge claims all documents > that are marked as deleted in the segment readers. MergePolicies still need > to do accurate accounting in order to select segments for merging and need to > decide if segments are merged. This change allows the merge policy to > customize the number of deletes a merge of a segment claims.
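The accounting problem this change addresses can be illustrated with a toy model: under soft deletes, a merge may claim only a subset of the documents a reader reports as deleted, so the merge policy must be able to say how many deletes a segment contributes. The classes and method names below are stand-ins, not Lucene's actual MergePolicy API:

```java
// Toy illustration of LUCENE-8246: the number of deletes a merge claims from a
// segment becomes a policy decision rather than a fixed count. Stand-in classes.
public class DeletesSketch {
    /** Minimal stand-in for per-segment delete accounting. */
    static final class SegmentInfo {
        final int hardDeletes; // documents gone for good
        final int softDeletes; // documents marked deleted via a soft-deletes field
        SegmentInfo(int hardDeletes, int softDeletes) {
            this.hardDeletes = hardDeletes;
            this.softDeletes = softDeletes;
        }
    }

    /** Pre-change behavior: every document marked deleted counts toward the merge. */
    static int defaultNumDeletesToMerge(SegmentInfo info) {
        return info.hardDeletes + info.softDeletes;
    }

    /**
     * A soft-deletes-retaining policy: soft-deleted documents are kept around
     * (e.g. for history), so a merge of this segment only claims hard deletes.
     */
    static int retainingNumDeletesToMerge(SegmentInfo info) {
        return info.hardDeletes;
    }

    public static void main(String[] args) {
        SegmentInfo seg = new SegmentInfo(10, 90);
        System.out.println(defaultNumDeletesToMerge(seg));   // 100
        System.out.println(retainingNumDeletesToMerge(seg)); // 10
    }
}
```

With accurate per-policy counts like these, segment selection (which favors segments with many reclaimable deletes) stops over-counting soft-deleted documents the policy intends to keep.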
[jira] [Commented] (LUCENE-8246) Allow to customize the number of deletes a merge claims
[ https://issues.apache.org/jira/browse/LUCENE-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433924#comment-16433924 ] ASF subversion and git services commented on LUCENE-8246: - Commit e0dcd66ac727f93d8cf3290cc016625f5d296095 in lucene-solr's branch refs/heads/branch_7x from [~simonw] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0dcd66 ] LUCENE-8246: Allow to customize the number of deletes a merge claims With the introduction of soft deletes, not every merge claims all documents that are marked as deleted in the segment readers. MergePolicies still need to do accurate accounting in order to select segments for merging and need to decide if segments are merged. This change allows the merge policy to customize the number of deletes a merge of a segment claims.
[jira] [Resolved] (LUCENE-8246) Allow to customize the number of deletes a merge claims
[ https://issues.apache.org/jira/browse/LUCENE-8246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer resolved LUCENE-8246. - Resolution: Fixed > Allow to customize the number of deletes a merge claims > --- > > Key: LUCENE-8246 > URL: https://issues.apache.org/jira/browse/LUCENE-8246 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8246.patch
[jira] [Commented] (SOLR-12181) Add trigger based on document count
[ https://issues.apache.org/jira/browse/SOLR-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433929#comment-16433929 ] ASF subversion and git services commented on SOLR-12181: Commit 376f6c494671ed22034bf56e6005e50b06942f28 in lucene-solr's branch refs/heads/master from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=376f6c4 ] SOLR-12181: Add trigger based on document count / index size. > Add trigger based on document count > --- > > Key: SOLR-12181 > URL: https://issues.apache.org/jira/browse/SOLR-12181 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Affects Versions: master (8.0) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-12181.patch > > > This may turn out to be as simple as using a {{MetricTrigger}} but it's > likely this will require some specialization, and we may want to add this > type of trigger anyway for convenience. > The two control actions associated with this trigger will be SPLITSHARD and > (yet nonexistent) MERGESHARD.
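The core check such a trigger performs can be sketched as a simple threshold comparison per shard: above an upper bound of documents, request SPLITSHARD; below a lower bound, request MERGESHARD (which, per the issue, does not exist yet). Names and thresholds here are illustrative, and the real trigger is configured through Solr's autoscaling API rather than hard-coded:

```java
// Sketch of a document-count trigger's decision logic for SOLR-12181.
// Hypothetical names; the actual trigger is driven by autoscaling config.
public class DocCountTriggerSketch {
    enum Action { NONE, SPLITSHARD, MERGESHARD }

    /** Compare a shard's document count against its configured bounds. */
    static Action evaluate(long docCount, long aboveDocs, long belowDocs) {
        if (docCount > aboveDocs) return Action.SPLITSHARD; // shard grew too large
        if (docCount < belowDocs) return Action.MERGESHARD; // shard shrank too small
        return Action.NONE;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(2_000_000, 1_000_000, 10_000)); // SPLITSHARD
        System.out.println(evaluate(5_000, 1_000_000, 10_000));     // MERGESHARD
        System.out.println(evaluate(500_000, 1_000_000, 10_000));   // NONE
    }
}
```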
[jira] [Updated] (SOLR-12202) cannot run solr-exporter.cmd on Windows
[ https://issues.apache.org/jira/browse/SOLR-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Minoru Osuka updated SOLR-12202: Summary: cannot run solr-exporter.cmd on Windows (was: solr-exporter.cmd cannot be run on Windows) > cannot run solr-exporter.cmd on Windows > --- > > Key: SOLR-12202 > URL: https://issues.apache.org/jira/browse/SOLR-12202 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Affects Versions: 7.3 >Reporter: Minoru Osuka >Priority: Major > Attachments: SOLR-12202.patch > > > solr-exporter.cmd could not be run on Windows due to the following: > - incorrect main class name. > - incorrect classpath specification.
[jira] [Updated] (SOLR-12202) cannot run solr-exporter.cmd on Windows
[ https://issues.apache.org/jira/browse/SOLR-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Minoru Osuka updated SOLR-12202: Description: cannot run solr-exporter.cmd on Windows due to the following: - incorrect main class name. - incorrect classpath specification. was: solr-exporter.cmd could not be run on Windows due to the following: - incorrect main class name. - incorrect classpath specification. > cannot run solr-exporter.cmd on Windows > --- > > Key: SOLR-12202 > URL: https://issues.apache.org/jira/browse/SOLR-12202 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Affects Versions: 7.3 >Reporter: Minoru Osuka >Priority: Major > Attachments: SOLR-12202.patch > > > cannot run solr-exporter.cmd on Windows due to the following: > - incorrect main class name. > - incorrect classpath specification.
[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1694 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1694/ Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.lucene.index.TestStressNRT.test Error Message: MockDirectoryWrapper: cannot close: there are still 14 open files: {_r.fdt=1, _15.fdt=1, _10.cfs=1, _19.fdt=1, _14.fdt=1, _16.fdt=1, _q.fdt=1, _11.fdt=1, _p.fdt=1, _y.cfs=1, _17.fdt=1, _12.fdt=1, _13.cfs=1, _w.fdt=1} Stack Trace: java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 14 open files: {_r.fdt=1, _15.fdt=1, _10.cfs=1, _19.fdt=1, _14.fdt=1, _16.fdt=1, _q.fdt=1, _11.fdt=1, _p.fdt=1, _y.cfs=1, _17.fdt=1, _12.fdt=1, _13.cfs=1, _w.fdt=1} at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) at org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:403) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: unclosed IndexInput: _15.fdt at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732) at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776) at org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.(CompressingStoredFieldsReader.java:150) at org.apache.lucene.codecs.compr
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 541 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/541/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

7 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
	at __randomizedtesting.SeedInfo.seed([3D624B6D66561DE6:5EA97DEFFF996ECB]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.junit.Assert.assertEquals(Assert.java:456)
	at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:111)
	at org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:64)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:844)

FAILED: org.apache.solr.han
[jira] [Commented] (LUCENE-8248) Make MergePolicy.setMaxCFSSegmentSizeMB final
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433981#comment-16433981 ]

Michael McCandless commented on LUCENE-8248:

Ahh, thank you for the new patch; it looks great!

> I also had to add these overrides to {{NoMergePolicy}} in order to get its test to pass since it verifies that all methods are overridden?

Ahh, that's great that it tests for that; actually, could you please add exactly that same test for {{FilterMergePolicy}} (and, I think, remove it from the test for {{NoMergePolicy}})? This way, if we add new methods to {{MergePolicy}}, the test failure will remind us to update {{FilterMergePolicy}} too.

> Make MergePolicy.setMaxCFSSegmentSizeMB final
>
> Key: LUCENE-8248
> URL: https://issues.apache.org/jira/browse/LUCENE-8248
> Project: Lucene - Core
> Issue Type: Wish
> Components: core/index
> Reporter: Mike Sokolov
> Priority: Minor
> Attachments: LUCENE-8248.patch, MergePolicy.patch
>
> MergePolicy.getMaxCFSSegmentSizeMB is final, but the corresponding setter is not, which means that overriding it with anything other than a trivial delegation can only lead to confusion.
> The patch makes the method final and removes the trivial implementations from MergePolicyWrapper and NoMergePolicy.
> [~mikemccand] also pointed out that the class name is nonstandard for similar adapter classes in Lucene, which are usually Filter*.java. Personally I was looking for MergePolicyAdapter, but if there is a prevailing convention here around Filter, does it make sense to change this class's name to FilterMergePolicy?

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
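The "verifies that all methods are overridden" guard discussed above can be written with plain reflection. Below is a hedged, self-contained sketch — not Lucene's actual test code; `Base`, `Filter`, and `OverrideCheck` are illustrative stand-ins for `MergePolicy`, `FilterMergePolicy`, and the test — showing how such a check catches a forwarding class that silently misses a newly added method:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Hypothetical base/filter pair standing in for MergePolicy/FilterMergePolicy.
class Base {
    public int size() { return 0; }
    public String name() { return "base"; }
}

class Filter extends Base {
    private final Base in;
    Filter(Base in) { this.in = in; }
    @Override public int size() { return in.size(); }   // forwarding override
    @Override public String name() { return in.name(); } // forwarding override
}

public class OverrideCheck {
    // Returns true iff 'sub' re-declares every overridable method declared by 'sup'.
    static boolean overridesAll(Class<?> sub, Class<?> sup) {
        for (Method m : sup.getDeclaredMethods()) {
            int mod = m.getModifiers();
            // final/static/private methods cannot (or need not) be overridden
            if (Modifier.isFinal(mod) || Modifier.isStatic(mod) || Modifier.isPrivate(mod)) {
                continue;
            }
            try {
                sub.getDeclaredMethod(m.getName(), m.getParameterTypes());
            } catch (NoSuchMethodException e) {
                return false; // forwarding method missing in the subclass
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(overridesAll(Filter.class, Base.class)); // prints "true"
    }
}
```

Putting this assertion on the filter class (rather than on each concrete policy) is exactly the point made in the comment: adding a method to the base class then fails the filter's test until the delegation is updated.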
Re: [JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_162) - Build # 1694 - Failure!
Looks like a soft deletes issue, though the root cause error message is creepy:

java.lang.AssertionError: No documents or tombstones found for id 71, expected at least -8, reader=...

Why would it expect negative 8 :)

Mike McCandless
http://blog.mikemccandless.com

On Wed, Apr 11, 2018 at 10:12 AM, Policeman Jenkins Server <jenk...@thetaphi.de> wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1694/
> Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED: org.apache.lucene.index.TestStressNRT.test
>
> Error Message:
> MockDirectoryWrapper: cannot close: there are still 14 open files: {_r.fdt=1, _15.fdt=1, _10.cfs=1, _19.fdt=1, _14.fdt=1, _16.fdt=1, _q.fdt=1, _11.fdt=1, _p.fdt=1, _y.cfs=1, _17.fdt=1, _12.fdt=1, _13.cfs=1, _w.fdt=1}
>
> Stack Trace:
> java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 14 open files: {_r.fdt=1, _15.fdt=1, _10.cfs=1, _19.fdt=1, _14.fdt=1, _16.fdt=1, _q.fdt=1, _11.fdt=1, _p.fdt=1, _y.cfs=1, _17.fdt=1, _12.fdt=1, _13.cfs=1, _w.fdt=1}
>         at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
>         at org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:403)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
>         at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>         at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>         at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>         at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
>         at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
>         at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>         at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>         at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>         at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>         at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>         at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>         at com.carrotsearch.randomizedtesting.ThreadLeak
[jira] [Updated] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Sokolov updated LUCENE-8248:

Summary: Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy (was: Make MergePolicy.setMaxCFSSegmentSizeMB final)
[jira] [Commented] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16433994#comment-16433994 ]

Mike Sokolov commented on LUCENE-8248:

Oh yes that makes sense. Then we are enforcing this Filter pattern in the right place. I also changed the name of the issue to reflect what this is about now.
[jira] [Commented] (SOLR-12181) Add trigger based on document count
[ https://issues.apache.org/jira/browse/SOLR-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434040#comment-16434040 ]

ASF subversion and git services commented on SOLR-12181:

Commit bcf9f5c36b3f5faeb15f2ddcce648da4aeb44b21 in lucene-solr's branch refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bcf9f5c ]

SOLR-12181: Add trigger based on document count / index size.

> Add trigger based on document count
>
> Key: SOLR-12181
> URL: https://issues.apache.org/jira/browse/SOLR-12181
> Project: Solr
> Issue Type: Sub-task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: AutoScaling
> Affects Versions: master (8.0)
> Reporter: Andrzej Bialecki
> Assignee: Andrzej Bialecki
> Priority: Major
> Attachments: SOLR-12181.patch
>
> This may turn out to be as simple as using a {{MetricTrigger}}, but it's likely this will require some specialization, and we may want to add this type of trigger anyway for convenience.
> The two control actions associated with this trigger will be SPLITSHARD and (yet nonexistent) MERGESHARD.
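Conceptually, the trigger described in SOLR-12181 boils down to comparing a per-shard quantity (document count or index size) against an upper and a lower bound and suggesting a control action. The sketch below is purely illustrative — `DocCountTrigger`, `aboveDocs`, and `belowDocs` are hypothetical names, not Solr's actual autoscaling API:

```java
// Control actions mentioned in the issue; MERGESHARD did not yet exist in Solr.
enum Action { SPLITSHARD, MERGESHARD, NONE }

// Hypothetical threshold trigger: split when a shard grows past 'aboveDocs',
// merge when it shrinks under 'belowDocs', otherwise do nothing.
class DocCountTrigger {
    private final long aboveDocs;
    private final long belowDocs;

    DocCountTrigger(long aboveDocs, long belowDocs) {
        this.aboveDocs = aboveDocs;
        this.belowDocs = belowDocs;
    }

    Action check(long docCount) {
        if (docCount > aboveDocs) return Action.SPLITSHARD;
        if (docCount < belowDocs) return Action.MERGESHARD;
        return Action.NONE;
    }
}

public class DocCountTriggerDemo {
    public static void main(String[] args) {
        DocCountTrigger t = new DocCountTrigger(1_000_000, 10_000);
        System.out.println(t.check(2_000_000)); // prints "SPLITSHARD"
        System.out.println(t.check(500_000));   // prints "NONE"
        System.out.println(t.check(1_000));     // prints "MERGESHARD"
    }
}
```

The real implementation also has to debounce (a trigger should not re-fire while a previous SPLITSHARD is still running), which is part of why the issue anticipates more than a plain {{MetricTrigger}}.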
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434044#comment-16434044 ]

Karl Wright commented on LUCENE-8245:

The adjoining-points code I developed works OK for spheres but not for anything else, which is a problem. I have to think of something better that's not too expensive. No more time today.

> GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
>
> Key: LUCENE-8245
> URL: https://issues.apache.org/jira/browse/LUCENE-8245
> Project: Lucene - Core
> Issue Type: Bug
> Components: modules/spatial3d
> Reporter: Ignacio Vera
> Assignee: Karl Wright
> Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
> Attachments: LUCENE-8245-case2.patch, LUCENE-8245.jpg, LUCENE-8245.patch, LUCENE-8245_Polygon.patch, LUCENE-8245_Random.patch, LUCENE-8245_case3.patch
>
> When a travel plane crosses an edge close to an edge point, it is possible that the above and below planes cross different edges. In the current implementation one of the crossings is missed because we only check edges that are crossed by the main plane, so the {{within}} result is wrong.
> One possible fix is to always check the intersection of planes and edges, regardless of whether they are crossed by the main plane. That fixed the above issue but exposed other issues, like travel planes crossing two edges when it should be only one, due to the fuzziness at edge intersections.
> I'm not sure of a fix, so I add a test showing the issue.
[jira] [Commented] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434082#comment-16434082 ]

Mike Sokolov commented on LUCENE-8248:

OK, there was already a {{TestMergePolicyWrapper.testMethodsOverridden}}. I've renamed the class to {{TestFilterMergePolicy}} and removed that method from {{TestNoMergePolicy}}.
[jira] [Updated] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Sokolov updated LUCENE-8248:

Attachment: LUCENE-8248.1.patch
[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21803 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21803/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseSerialGC

1 tests failed.

FAILED: org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
Delete action failed!

Stack Trace:
java.lang.AssertionError: Delete action failed!
	at __randomizedtesting.SeedInfo.seed([2120C2686A22090B:3243F0075B4DB0AD]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.SolrCloudExampleTest.doTestDeleteAction(SolrCloudExampleTest.java:193)
	at org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection(SolrCloudExampleTest.java:169)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtest
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4562 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4562/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
	at __randomizedtesting.SeedInfo.seed([CAE785E4E44D546:3520C11E61BB1CB8]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:295)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)

FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:

Stack Trace:
java.util.concurrent.TimeoutException
	at __randomizedtesting.SeedInfo.seed([CAE785E4E44D546:3520C11E61BB1CB8]:0)
	at org.a
[jira] [Commented] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434208#comment-16434208 ]

Michael McCandless commented on LUCENE-8248:

Thanks [~msoko...@gmail.com], the new patch looks great. Except, I was confused about {{NoMergePolicy}} (I somehow thought it extended {{MergePolicyWrapper}}, but it just extends {{MergePolicy}} directly) – so I think we should restore its test that all methods are overridden (and add back its methods), in addition to testing that {{FilterMergePolicy}} also overrides all methods.
[jira] [Commented] (SOLR-11861) ConfigSets CREATE baseConfigSet param should default to _default
[ https://issues.apache.org/jira/browse/SOLR-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434223#comment-16434223 ]

Amrit Sarkar commented on SOLR-11861:

Will finish up the tests by this weekend, positively.

> ConfigSets CREATE baseConfigSet param should default to _default
>
> Key: SOLR-11861
> URL: https://issues.apache.org/jira/browse/SOLR-11861
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrCloud
> Reporter: David Smiley
> Priority: Minor
> Labels: newdev
> Attachments: SOLR-11861.patch, SOLR-11861.patch
>
> It would be nice if I didn't have to specify the baseConfigSet param now that we have a default configSet "_default".
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1695 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1695/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.

FAILED: org.apache.lucene.index.TestStressNRT.test

Error Message:
MockDirectoryWrapper: cannot close: there are still 46 open files: {_1i_5_Lucene70_0.dvd=1, _1j.cfs=1, _1o_4_Lucene70_0.dvd=1, _1t.fdt=1, _25_2_Lucene70_0.dvd=1, _1x_1_Lucene70_0.dvd=1, _1r_2_Lucene70_0.dvd=1, _1i.cfs=1, _20.fdt=1, _1u.fdt=1, _1r_3_Lucene70_0.dvd=1, _1n_2_Lucene70_0.dvd=1, _25.fdt=1, _1t_2_Lucene70_0.dvd=1, _1v.fdt=1, _1x.fdt=1, _27.fdt=1, _26.fdt=1, _1t_3_Lucene70_0.dvd=1, _1v_1_Lucene70_0.dvd=1, _1w.fdt=1, _1o.cfs=1, _1y.fdt=1, _1p.fdt=1, _1z.fdt=1, _1n.cfs=1, _23.fdt=1, _1w_2_Lucene70_0.dvd=1, _1o_5_Lucene70_0.dvd=1, _1q.fdt=1, _24.fdt=1, _1m.cfs=1, _1r.fdt=1, _1m_8_Lucene70_0.dvd=1, _1p_1_Lucene70_0.dvd=1, _1l.cfs=1, _1j_1_Lucene70_0.dvd=1, _1u_1_Lucene70_0.dvd=1, _1m_9_Lucene70_0.dvd=1, _21.fdt=1, _1s.fdt=1, _1k_4_Lucene70_0.dvd=1, _1i_6_Lucene70_0.dvd=1, _1l_6_Lucene70_0.dvd=1, _1k.cfs=1, _22.fdt=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 46 open files: {_1i_5_Lucene70_0.dvd=1, _1j.cfs=1, _1o_4_Lucene70_0.dvd=1, _1t.fdt=1, _25_2_Lucene70_0.dvd=1, _1x_1_Lucene70_0.dvd=1, _1r_2_Lucene70_0.dvd=1, _1i.cfs=1, _20.fdt=1, _1u.fdt=1, _1r_3_Lucene70_0.dvd=1, _1n_2_Lucene70_0.dvd=1, _25.fdt=1, _1t_2_Lucene70_0.dvd=1, _1v.fdt=1, _1x.fdt=1, _27.fdt=1, _26.fdt=1, _1t_3_Lucene70_0.dvd=1, _1v_1_Lucene70_0.dvd=1, _1w.fdt=1, _1o.cfs=1, _1y.fdt=1, _1p.fdt=1, _1z.fdt=1, _1n.cfs=1, _23.fdt=1, _1w_2_Lucene70_0.dvd=1, _1o_5_Lucene70_0.dvd=1, _1q.fdt=1, _24.fdt=1, _1m.cfs=1, _1r.fdt=1, _1m_8_Lucene70_0.dvd=1, _1p_1_Lucene70_0.dvd=1, _1l.cfs=1, _1j_1_Lucene70_0.dvd=1, _1u_1_Lucene70_0.dvd=1, _1m_9_Lucene70_0.dvd=1, _21.fdt=1, _1s.fdt=1, _1k_4_Lucene70_0.dvd=1, _1i_6_Lucene70_0.dvd=1, _1l_6_Lucene70_0.dvd=1, _1k.cfs=1, _22.fdt=1}
	at org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
	at org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:403)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java
[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied
[ https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434257#comment-16434257 ] Erick Erickson commented on SOLR-12161: --- Thanks! I thought I'd see this with a stand-alone program, but just checked and that turned out to be wrong so it looks like a test-only issue. Whew! > CloudSolrClient with basic auth enabled will update even if no credentials > supplied > --- > > Key: SOLR-12161 > URL: https://issues.apache.org/jira/browse/SOLR-12161 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 7.3 >Reporter: Erick Erickson >Assignee: Noble Paul >Priority: Major > Attachments: AuthUpdateTest.java, SOLR-12161.patch, tests.patch > > > This is an offshoot of SOLR-9399. When I was writing a test, if I create a > cluster with basic authentication set up, I can _still_ add documents to a > collection even without credentials being set in the request. > However, simple queries, commits etc. all fail without credentials set in the > request. > I'll attach a test class that illustrates the problem. > If I use a new HttpSolrClient instead of a CloudSolrClient, then the update > request fails as expected. > [~noblepaul] do you have any insight here? Possibly something with splitting > up the update request to go to each individual shard? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 572 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/572/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC 7 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: waitFor not elapsed but produced an event Stack Trace: java.lang.AssertionError: waitFor not elapsed but produced an event at __randomizedtesting.SeedInfo.seed([588A995DBAC1E454:3B41AFDF230E9779]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:177) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) 
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration Error Message: Stack Trace: java.util.concurrent.TimeoutException at __randomizedtesting.SeedInfo.seed([588A995DBAC1E454:6104201D953E2DAA]
[jira] [Created] (SOLR-12212) SolrQueryParser not handling q.op=AND correctly for parenthesized NOT fq
Michael Braun created SOLR-12212: Summary: SolrQueryParser not handling q.op=AND correctly for parenthesized NOT fq Key: SOLR-12212 URL: https://issues.apache.org/jira/browse/SOLR-12212 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Affects Versions: 6.6.2, 7.3, master (8.0) Reporter: Michael Braun Add this to TestSolrQueryParser to demonstrate:
{code}
@Test
public void testDefaultOpAndForFq() {
  String query = "id:12";
  String fq = "(NOT(eee_s:(Y)))";
  assertQ(req("q", query, "fq", fq, "q.op", "AND"), "//*[@numFound='1']");
}
{code}
With q.op=OR, this passes. With q.op=AND, this fails. When the outer parentheses are removed from the fq, it passes in both cases.
[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed
[ https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434439#comment-16434439 ] Tomás Fernández Löbbe commented on SOLR-12187: -- Good finding. With this patch, a core would be unloaded in two ways: 1) by an UNLOAD command issued from the {{DeleteReplicaCmd}} 2) by a change to the clusterstate that triggers the state watcher, which unloads the core. Am I right? Do you think we need to keep #1? Also, a thought on the test, should it also cover the legacyCloud=true case? > Replica should watch clusterstate and unload itself if its entry is removed > --- > > Key: SOLR-12187 > URL: https://issues.apache.org/jira/browse/SOLR-12187 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, > SOLR-12187.patch > > > With the introduction of autoscaling framework, we have seen an increase in > the number of issues related to the race condition between delete a replica > and other stuff. 
> Case 1: DeleteReplicaCmd failed to send an UNLOAD request to a replica and > therefore forcefully removed its entry from clusterstate, but the replica > still functions normally and is able to become a leader -> SOLR-12176 > Case 2: > * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the > replica because the node is not live) > * The node starts and the replica gets loaded > * DELETECOREOP has not been processed, hence the replica is still present in > clusterstate --> passes checkStateInZk > * DELETECOREOP is executed, DeleteReplicaCmd finishes > ** result 1: the replica starts recovering, finishes, and publishes itself as > ACTIVE --> state of the replica is ACTIVE > ** result 2: the replica throws an exception (probably an NPE) > --> state of the replica is DOWN and it does not join leader election
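To illustrate approach #2 from the comment above, here is a hedged, self-contained sketch of a replica registering a watcher on the cluster state and unloading its own core when its entry disappears. All names here are illustrative stand-ins, not Solr's actual API.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

public class Main {
    public static void main(String[] args) {
        ClusterState state = new ClusterState();
        state.addReplica("core_node1");
        Replica replica = new Replica("core_node1", state);
        // DeleteReplicaCmd removes the entry from the cluster state...
        state.removeReplica("core_node1");
        // ...and the replica's own watcher unloads the core, even if the
        // UNLOAD request was never delivered.
        System.out.println("loaded=" + replica.isLoaded());
    }
}

class ClusterState {
    private final Set<String> replicas = new HashSet<>();
    private final List<Consumer<Set<String>>> watchers = new ArrayList<>();

    void addReplica(String name) { replicas.add(name); fire(); }
    void removeReplica(String name) { replicas.remove(name); fire(); }
    void watch(Consumer<Set<String>> w) { watchers.add(w); }
    private void fire() {
        for (Consumer<Set<String>> w : watchers) w.accept(replicas);
    }
}

class Replica {
    private volatile boolean loaded = true;

    Replica(String name, ClusterState state) {
        // Unload ourselves if our entry vanishes from the cluster state.
        state.watch(current -> { if (!current.contains(name)) loaded = false; });
    }
    boolean isLoaded() { return loaded; }
}
```

This is why the watcher-based path can subsume the explicit UNLOAD command in Case 1 above: the replica reacts to the state change itself rather than relying on a request reaching it.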
[jira] [Assigned] (SOLR-11487) Collection Alias metadata for time partitioned collections
[ https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-11487: Assignee: Varun Thacker (was: David Smiley) > Collection Alias metadata for time partitioned collections > -- > > Key: SOLR-11487 > URL: https://issues.apache.org/jira/browse/SOLR-11487 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: Varun Thacker >Priority: Major > Fix For: 7.2 > > Attachments: SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch, > SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch, SOLR_11487.patch, > SOLR_11487.patch, SOLR_11487.patch > > > SOLR-11299 outlines an approach to using a collection Alias to refer to a > series of collections of a time series. We'll need to store some metadata > about these time series collections, such as which field of the document > contains the timestamp to route on. > The current {{/aliases.json}} is a Map with a key {{collection}} which is in > turn a Map of alias name strings to a comma delimited list of the collections. > _If we change the comma delimited list to be another Map to hold the existing > list and more stuff, older CloudSolrClient (configured to talk to ZooKeeper) > will break_. Although if it's configured with an HTTP Solr URL then it would > not break. There's also some read/write hassle to worry about -- we may need > to continue to read an aliases.json in the older format. > Alternatively, we could add a new map entry to aliases.json, say, > {{collection_metadata}} keyed by alias name? > Perhaps another very different approach is to attach metadata to the > configset in use? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
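To make the format discussion concrete, here is an illustrative sketch (not the actual file) of an aliases.json carrying the proposed {{collection_metadata}} entry keyed by alias name, alongside the existing comma-delimited {{collection}} map; the {{router.field}} property inside it is hypothetical:

```json
{
  "collection": {
    "timeseries": "coll_2017,coll_2018"
  },
  "collection_metadata": {
    "timeseries": {
      "router.field": "timestamp_dt"
    }
  }
}
```

Adding a sibling key like this, rather than changing the value type of {{collection}}, is what keeps older ZooKeeper-configured CloudSolrClient instances parsing the file as before.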
[jira] [Updated] (SOLR-11487) Collection Alias metadata for time partitioned collections
[ https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-11487: - Fix Version/s: 7.2
[jira] [Assigned] (SOLR-11487) Collection Alias metadata for time partitioned collections
[ https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker reassigned SOLR-11487: Assignee: David Smiley (was: Varun Thacker)
[jira] [Comment Edited] (SOLR-11487) Collection Alias metadata for time partitioned collections
[ https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434467#comment-16434467 ] Varun Thacker edited comment on SOLR-11487 at 4/11/18 7:43 PM: --- Hi David, I've added 7.2 as the fix version on this Jira for reference. We recently had a user who ran into a race condition when updating aliases on Solr 5.x. Looking at master today, it looks like we now handle these race conditions, and this comment confirms that we fixed it as part of this Jira: {quote}Decomposing aliases.json has pros/cons, but it won't remove the possibility of races between modifying some portion of the aliases state; it just makes it more rare. So we still need to deal with races in code using a zkVersion with a retry and eventual timeout, etc.{quote}
[jira] [Commented] (SOLR-11487) Collection Alias metadata for time partitioned collections
[ https://issues.apache.org/jira/browse/SOLR-11487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434467#comment-16434467 ] Varun Thacker commented on SOLR-11487: -- Hi David, I've added 7.2 as the fix version on this Jira for reference. We recently had a user who ran into a race condition when updating aliases on Solr 5.x. Looking at master today, it looks like we now handle these race conditions, and this comment confirms that we fixed it as part of this Jira: {quote}Decomposing aliases.json has pros/cons, but it won't remove the possibility of races between modifying some portion of the aliases state; it just makes it more rare. So we still need to deal with races in code using a zkVersion with a retry and eventual timeout, etc.{quote}
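The zkVersion retry mentioned in the quote above follows ZooKeeper's optimistic-concurrency contract: setData(path, data, version) rejects the write when the znode changed since it was read, and the client re-reads and retries until it succeeds or times out. The sketch below simulates that contract in memory; it is illustrative and not Solr's actual aliases code.

```java
public class Main {
    public static void main(String[] args) {
        VersionedStore store = new VersionedStore("a1:coll1");
        modifyWithRetry(store, "a2:coll2", 5);
        System.out.println(store.read().data);
    }

    static void modifyWithRetry(VersionedStore store, String newAlias, int maxRetries) {
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            VersionedStore.Snapshot snap = store.read();
            try {
                // Compare-and-set: only succeeds if nobody wrote in between.
                store.setData(snap.data + "," + newAlias, snap.version);
                return;
            } catch (IllegalStateException badVersion) {
                // A concurrent writer won the race; re-read and try again.
            }
        }
        throw new RuntimeException("timed out updating aliases");
    }
}

// In-memory stand-in for a versioned znode such as /aliases.json.
class VersionedStore {
    static final class Snapshot {
        final String data;
        final int version;
        Snapshot(String data, int version) { this.data = data; this.version = version; }
    }

    private String data;
    private int version = 0;

    VersionedStore(String initial) { this.data = initial; }

    synchronized Snapshot read() { return new Snapshot(data, version); }

    synchronized void setData(String newData, int expectedVersion) {
        if (expectedVersion != version) throw new IllegalStateException("bad version");
        data = newData;
        version++;
    }
}
```

The race the user hit on 5.x is exactly the window between read() and setData(); the version check turns a silent lost update into a retryable failure.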
[jira] [Comment Edited] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434493#comment-16434493 ] Mike Sokolov edited comment on LUCENE-8248 at 4/11/18 8:07 PM: --- Oh! Maybe it *should* extend {{FilterMergePolicy}}? Or do you think that's too much change all at once? was (Author: sokolov): Oh! Maybe it *should* extend `FilterMergePolicy`? Or do you think that's too much change all at once? > Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy > -- > > Key: LUCENE-8248 > URL: https://issues.apache.org/jira/browse/LUCENE-8248 > Project: Lucene - Core > Issue Type: Wish > Components: core/index >Reporter: Mike Sokolov >Priority: Minor > Attachments: LUCENE-8248.1.patch, LUCENE-8248.patch, MergePolicy.patch > > > MergePolicy.getMaxCFSSegmentSizeMB is final, but the corresponding setter is > not, which means that overriding it with anything other than a trivial > delegation can only lead to confusion. > The patch makes the method final and removes the trivial implementations from > MergePolicyWrapper and NoMergePolicy. > [~mikemccand] also pointed out that the class name is nonstandard for similar > adapter classes in Lucene, which are usually Filter*.java. Personally I was > looking for MergePolicyAdapter, but if there is a prevailing convention here > around Filter, does it make sense to change this class's name to > FilterMergePolicy? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434493#comment-16434493 ] Mike Sokolov commented on LUCENE-8248: -- Oh! Maybe it *should* extend `FilterMergePolicy`? Or do you think that's too much change all at once?
[jira] [Issue Comment Deleted] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Sokolov updated LUCENE-8248: - Comment: was deleted (was: Oh! Maybe it *should* extend {{FilterMergePolicy}}? Or do you think that's too much change all at once?)
[jira] [Commented] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434509#comment-16434509 ] Mike Sokolov commented on LUCENE-8248: -- I guess I'm confused why NoMergePolicy must override all inherited methods. That makes sense for a filter / adapter class -- that is its sole purpose after all.
[jira] [Comment Edited] (LUCENE-8248) Rename MergePolicyWrapper to FilterMergePolicy and override all of MergePolicy
[ https://issues.apache.org/jira/browse/LUCENE-8248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434509#comment-16434509 ] Mike Sokolov edited comment on LUCENE-8248 at 4/11/18 8:17 PM: --- I guess I'm confused why NoMergePolicy must override all inherited methods. That makes sense for a filter / adapter class -- that is its sole purpose after all. But NoMergePolicy simply calls the base class method in many cases; why force it to do so? It doesn't have any risk of shadowing since it is not supplying its own instance variables. was (Author: sokolov): I guess I'm confused why NoMergePolicy must override all inherited methods. That makes sense for a filter / adapter class -- that is its sole purpose after all.
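The Filter* convention debated above can be sketched in a few lines: a filter class exists solely to wrap a delegate, so it overrides every method and forwards to the wrapped instance, letting subclasses change only what they mean to change. The {{Policy}} interface below is an illustrative stand-in, not Lucene's actual MergePolicy API.

```java
public class Main {
    public static void main(String[] args) {
        Policy base = new Policy() {
            public String findMerges() { return "tiered"; }
            public boolean useCompoundFile() { return true; }
        };
        // Wrapper that only disables compound files; everything else delegates.
        Policy noCfs = new FilterPolicy(base) {
            @Override public boolean useCompoundFile() { return false; }
        };
        System.out.println(noCfs.findMerges() + " " + noCfs.useCompoundFile());
    }
}

interface Policy {
    String findMerges();
    boolean useCompoundFile();
}

// Delegation is the filter's sole purpose, which is why it overrides everything.
// A behavior-defining class like NoMergePolicy has no such obligation: it can
// simply inherit whatever base-class behavior it does not need to change.
class FilterPolicy implements Policy {
    protected final Policy in;
    FilterPolicy(Policy in) { this.in = in; }
    public String findMerges() { return in.findMerges(); }
    public boolean useCompoundFile() { return in.useCompoundFile(); }
}
```

This also shows why a missed override on a filter is a bug (the wrapper would silently bypass its delegate), while a missed override on NoMergePolicy is merely inherited default behavior.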
[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 198 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/198/ 3 tests failed. FAILED: org.apache.solr.cloud.api.collections.TestHdfsCloudBackupRestore.test Error Message: Error from server at https://127.0.0.1:40759/solr: Could not restore core Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:40759/solr: Could not restore core at __randomizedtesting.SeedInfo.seed([93925E63CC51A4B4:1BC661B962ADC94C]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:287) at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:142) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.eval
[jira] [Created] (SOLR-12213) Consider removing the collectionPropsNotifications thread from ZkStateReader
Tomás Fernández Löbbe created SOLR-12213:

Summary: Consider removing the collectionPropsNotifications thread from ZkStateReader
Key: SOLR-12213
URL: https://issues.apache.org/jira/browse/SOLR-12213
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.4
Reporter: Tomás Fernández Löbbe

From SOLR-12172:

[~shalinmangar]
{quote}
Tomás Fernández Löbbe – I don't think we should introduce another thread(pool) just for this feature. We can use a method similar to updateWatchedCollection which checks if the new znode version is greater than the old one. This ensures that we replace the old collection props only if the new one is actually newer.
{quote}
[~tomasflobbe]
{quote}
Thanks for the review, Shalin Shekhar Mangar. I thought about doing something like that, but decided not to since it requires keeping something like a map with collection -> version, and handling it made the code more complex. I'll put up a patch; maybe it's still better to go that route anyway.
{quote}
[~tomasflobbe]
{quote}
Shalin Shekhar Mangar, just by keeping the synchronization I added to refreshAndWatch in the previous commit we can guarantee that we won't be setting the collection property map to an older value. However, I don't think we can guarantee that the notifications to watchers won't be out of order without using the single-thread executor. Are you suggesting that we go that way anyway?
{quote}
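Shalin's suggested alternative -- keep a per-collection znode version and only install properties whose version is newer -- can be sketched as below. This is an illustrative sketch, not ZkStateReader's actual API; the class and method names are made up for the example:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a version-guarded properties cache: an update is
// applied only if its znode version is strictly greater than the cached
// one, so a late-arriving stale read can never clobber newer data.
public class VersionedPropsCache {
    static final class Entry {
        final int znodeVersion;
        final Map<String, String> props;
        Entry(int znodeVersion, Map<String, String> props) {
            this.znodeVersion = znodeVersion;
            this.props = props;
        }
    }

    private final ConcurrentHashMap<String, Entry> cache = new ConcurrentHashMap<>();

    /** Returns true if the new data was installed, false if it was stale. */
    public boolean update(String collection, int znodeVersion, Map<String, String> props) {
        Entry fresh = new Entry(znodeVersion, props);
        // merge() runs atomically per key, so a concurrent older update
        // can never overwrite a newer one.
        Entry winner = cache.merge(collection, fresh,
                (old, neu) -> neu.znodeVersion > old.znodeVersion ? neu : old);
        return winner == fresh;
    }

    public Map<String, String> get(String collection) {
        Entry e = cache.get(collection);
        return e == null ? null : e.props;
    }

    public static void main(String[] args) {
        VersionedPropsCache cache = new VersionedPropsCache();
        System.out.println(cache.update("c1", 1, Map.of("p", "v1")));    // true: installed
        System.out.println(cache.update("c1", 0, Map.of("p", "stale"))); // false: older version, ignored
        System.out.println(cache.get("c1"));                             // {p=v1}
    }
}
```

Note that this guards only the cached value; as Tomás points out in the quoted exchange, it says nothing about the ordering of notifications delivered to watchers, which is what the single-thread executor provides.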
[jira] [Resolved] (SOLR-12172) Race condition in collection properties can cause invalid cache of properties
[ https://issues.apache.org/jira/browse/SOLR-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe resolved SOLR-12172. -- Resolution: Fixed > Race condition in collection properties can cause invalid cache of properties > - > > Key: SOLR-12172 > URL: https://issues.apache.org/jira/browse/SOLR-12172 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Tomás Fernández Löbbe >Assignee: Tomás Fernández Löbbe >Priority: Minor > Fix For: 7.4, master (8.0) > > Attachments: SOLR-12172.patch > > > From: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/24 > {noformat} > java.lang.AssertionError: Could not see value change after setting collection > property. Name: property2, current value: value2, expected value: newValue > at > __randomizedtesting.SeedInfo.seed([1BCE6473A2A5E68A:FD89A9BD30939A79]:0) > at org.junit.Assert.fail(Assert.java:93) > at > org.apache.solr.cloud.CollectionPropsTest.waitForValue(CollectionPropsTest.java:146) > at > org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:115){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12172) Race condition in collection properties can cause invalid cache of properties
[ https://issues.apache.org/jira/browse/SOLR-12172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434567#comment-16434567 ] Tomás Fernández Löbbe commented on SOLR-12172: -- Shalin, I created SOLR-12213 to continue the discussion there. I'll resolve this one since the test bug is fixed > Race condition in collection properties can cause invalid cache of properties > - > > Key: SOLR-12172 > URL: https://issues.apache.org/jira/browse/SOLR-12172 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Tomás Fernández Löbbe >Assignee: Tomás Fernández Löbbe >Priority: Minor > Fix For: 7.4, master (8.0) > > Attachments: SOLR-12172.patch > > > From: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/24 > {noformat} > java.lang.AssertionError: Could not see value change after setting collection > property. Name: property2, current value: value2, expected value: newValue > at > __randomizedtesting.SeedInfo.seed([1BCE6473A2A5E68A:FD89A9BD30939A79]:0) > at org.junit.Assert.fail(Assert.java:93) > at > org.apache.solr.cloud.CollectionPropsTest.waitForValue(CollectionPropsTest.java:146) > at > org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:115){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12213) Consider removing the collectionPropsNotifications thread from ZkStateReader
[ https://issues.apache.org/jira/browse/SOLR-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated SOLR-12213: - Issue Type: Improvement (was: Bug) > Consider removing the collectionPropsNotifications thread from ZkStateReader > > > Key: SOLR-12213 > URL: https://issues.apache.org/jira/browse/SOLR-12213 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.4, master (8.0) >Reporter: Tomás Fernández Löbbe >Priority: Minor > > From SOLR-12172: > [~shalinmangar] > {quote} > Tomás Fernández Löbbe – I don't think we should introduce another > thread(pool) just for this feature. We can use a method similar to > updateWatchedCollection which checks if the new znode version is greater than > the old one. This ensures that we replace the old collection props only if > the new one is actually newer. > {quote} > [~tomasflobbe] > {quote} > Thanks for the review Shalin Shekhar Mangar. I thought about doing something > like that, but decided not to since it requires to keep something like a map > with collection -> version and handling it made the code more complex. I'll > put up a patch, maybe it's still better to go that route anyway > {quote} > [~tomasflobbe] > {quote} > Shalin Shekhar Mangar, just by keeping the synchronization I added to > refreshAndWatch in the previous commit we can guarantee that we won't be > setting the collection property map to an older value, however, I don't think > we can guarantee that the notifications to watchers won't be out of order > without using the single thread executor. Are you suggesting that we go that > way anyway? > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12213) Consider removing the collectionPropsNotifications thread from ZkStateReader
[ https://issues.apache.org/jira/browse/SOLR-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated SOLR-12213: - Affects Version/s: master (8.0) > Consider removing the collectionPropsNotifications thread from ZkStateReader > > > Key: SOLR-12213 > URL: https://issues.apache.org/jira/browse/SOLR-12213 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.4, master (8.0) >Reporter: Tomás Fernández Löbbe >Priority: Minor > > From SOLR-12172: > [~shalinmangar] > {quote} > Tomás Fernández Löbbe – I don't think we should introduce another > thread(pool) just for this feature. We can use a method similar to > updateWatchedCollection which checks if the new znode version is greater than > the old one. This ensures that we replace the old collection props only if > the new one is actually newer. > {quote} > [~tomasflobbe] > {quote} > Thanks for the review Shalin Shekhar Mangar. I thought about doing something > like that, but decided not to since it requires to keep something like a map > with collection -> version and handling it made the code more complex. I'll > put up a patch, maybe it's still better to go that route anyway > {quote} > [~tomasflobbe] > {quote} > Shalin Shekhar Mangar, just by keeping the synchronization I added to > refreshAndWatch in the previous commit we can guarantee that we won't be > setting the collection property map to an older value, however, I don't think > we can guarantee that the notifications to watchers won't be out of order > without using the single thread executor. Are you suggesting that we go that > way anyway? > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 198 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/198/ No tests ran. Build Log: [...truncated 9247 lines...] [javadoc] Generating Javadoc [javadoc] Javadoc execution [javadoc] Loading source files for package org.apache.lucene.analysis... [javadoc] Loading source files for package org.apache.lucene.analysis.standard... [javadoc] Loading source files for package org.apache.lucene.codecs... [javadoc] Loading source files for package org.apache.lucene.codecs.asserting... [javadoc] Loading source files for package org.apache.lucene.codecs.blockterms... [javadoc] Loading source files for package org.apache.lucene.codecs.bloom... [javadoc] Loading source files for package org.apache.lucene.codecs.cheapbastard... [javadoc] Loading source files for package org.apache.lucene.codecs.compressing... [javadoc] Loading source files for package org.apache.lucene.codecs.compressing.dummy... [javadoc] Loading source files for package org.apache.lucene.codecs.cranky... [javadoc] Loading source files for package org.apache.lucene.codecs.mockrandom... [javadoc] Loading source files for package org.apache.lucene.codecs.ramonly... [javadoc] Loading source files for package org.apache.lucene.geo... [javadoc] Loading source files for package org.apache.lucene.index... [javadoc] Loading source files for package org.apache.lucene.mockfile... [javadoc] Loading source files for package org.apache.lucene.search... [javadoc] Loading source files for package org.apache.lucene.search.similarities... [javadoc] Loading source files for package org.apache.lucene.search.spans... [javadoc] Loading source files for package org.apache.lucene.store... [javadoc] Loading source files for package org.apache.lucene.util... [javadoc] Loading source files for package org.apache.lucene.util.automaton... [javadoc] Loading source files for package org.apache.lucene.util.fst... [javadoc] Constructing Javadoc information... 
[javadoc] Standard Doclet version 1.8.0_152 [javadoc] Building tree for all the packages and classes... [javadoc] /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/src/java/org/apache/lucene/search/CheckHits.java:88: warning: empty tag [javadoc]* [javadoc] ^ [javadoc] /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/src/java/org/apache/lucene/search/CheckHits.java:162: warning: empty tag [javadoc]* [javadoc] ^ [javadoc] Building index for all the packages and classes... [javadoc] Building index for all classes... [javadoc] Generating /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/docs/test-framework/help-doc.html... [javadoc] 2 warnings BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:446: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:885: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/build.xml:67: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:2240: Javadocs warnings were found! 
Total time: 5 minutes 5 seconds Build step 'Invoke Ant' marked build as failure Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 554 - Still Failing!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/554/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([26B2002634EB1416:2C31BF8B79501F4C]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:524) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 14188 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [junit4] 2> Creating dataDir: /export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J1/temp/solr.cloud.autoscali
Re: [JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 198 - Failure
Think this is a merge error on my part, will fix > On 11 Apr 2018, at 22:00, Apache Jenkins Server > wrote: > > Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/198/ > > No tests ran. > > Build Log: > [...truncated 9247 lines...] > [javadoc] Generating Javadoc > [javadoc] Javadoc execution > [javadoc] Loading source files for package org.apache.lucene.analysis... > [javadoc] Loading source files for package > org.apache.lucene.analysis.standard... > [javadoc] Loading source files for package org.apache.lucene.codecs... > [javadoc] Loading source files for package > org.apache.lucene.codecs.asserting... > [javadoc] Loading source files for package > org.apache.lucene.codecs.blockterms... > [javadoc] Loading source files for package org.apache.lucene.codecs.bloom... > [javadoc] Loading source files for package > org.apache.lucene.codecs.cheapbastard... > [javadoc] Loading source files for package > org.apache.lucene.codecs.compressing... > [javadoc] Loading source files for package > org.apache.lucene.codecs.compressing.dummy... > [javadoc] Loading source files for package org.apache.lucene.codecs.cranky... > [javadoc] Loading source files for package > org.apache.lucene.codecs.mockrandom... > [javadoc] Loading source files for package org.apache.lucene.codecs.ramonly... > [javadoc] Loading source files for package org.apache.lucene.geo... > [javadoc] Loading source files for package org.apache.lucene.index... > [javadoc] Loading source files for package org.apache.lucene.mockfile... > [javadoc] Loading source files for package org.apache.lucene.search... > [javadoc] Loading source files for package > org.apache.lucene.search.similarities... > [javadoc] Loading source files for package org.apache.lucene.search.spans... > [javadoc] Loading source files for package org.apache.lucene.store... > [javadoc] Loading source files for package org.apache.lucene.util... > [javadoc] Loading source files for package org.apache.lucene.util.automaton... 
> [javadoc] Loading source files for package org.apache.lucene.util.fst... > [javadoc] Constructing Javadoc information... > [javadoc] Standard Doclet version 1.8.0_152 > [javadoc] Building tree for all the packages and classes... > [javadoc] > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/src/java/org/apache/lucene/search/CheckHits.java:88: > warning: empty tag > [javadoc]* > [javadoc] ^ > [javadoc] > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/src/java/org/apache/lucene/search/CheckHits.java:162: > warning: empty tag > [javadoc]* > [javadoc] ^ > [javadoc] Building index for all the packages and classes... > [javadoc] Building index for all classes... > [javadoc] Generating > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/docs/test-framework/help-doc.html... > [javadoc] 2 warnings > > BUILD FAILED > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/build.xml:446: > The following error occurred while executing this line: > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:885: > The following error occurred while executing this line: > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/test-framework/build.xml:67: > The following error occurred while executing this line: > /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/common-build.xml:2240: > Javadocs warnings were found! 
> > Total time: 5 minutes 5 seconds > Build step 'Invoke Ant' marked build as failure > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Email was triggered for: Failure - Any > Sending email for trigger: Failure - Any > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > Setting JDK_1_9_LATEST__HOME=/home/jenkins/tools/java/latest1.9 > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1695 - Unstable!
I am looking into this... On Wed, Apr 11, 2018 at 7:18 PM, Policeman Jenkins Server wrote: > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1695/ > Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC > > 1 tests failed. > FAILED: org.apache.lucene.index.TestStressNRT.test > > Error Message: > MockDirectoryWrapper: cannot close: there are still 46 open files: > {_1i_5_Lucene70_0.dvd=1, _1j.cfs=1, _1o_4_Lucene70_0.dvd=1, _1t.fdt=1, > _25_2_Lucene70_0.dvd=1, _1x_1_Lucene70_0.dvd=1, _1r_2_Lucene70_0.dvd=1, > _1i.cfs=1, _20.fdt=1, _1u.fdt=1, _1r_3_Lucene70_0.dvd=1, > _1n_2_Lucene70_0.dvd=1, _25.fdt=1, _1t_2_Lucene70_0.dvd=1, _1v.fdt=1, > _1x.fdt=1, _27.fdt=1, _26.fdt=1, _1t_3_Lucene70_0.dvd=1, > _1v_1_Lucene70_0.dvd=1, _1w.fdt=1, _1o.cfs=1, _1y.fdt=1, _1p.fdt=1, > _1z.fdt=1, _1n.cfs=1, _23.fdt=1, _1w_2_Lucene70_0.dvd=1, > _1o_5_Lucene70_0.dvd=1, _1q.fdt=1, _24.fdt=1, _1m.cfs=1, _1r.fdt=1, > _1m_8_Lucene70_0.dvd=1, _1p_1_Lucene70_0.dvd=1, _1l.cfs=1, > _1j_1_Lucene70_0.dvd=1, _1u_1_Lucene70_0.dvd=1, _1m_9_Lucene70_0.dvd=1, > _21.fdt=1, _1s.fdt=1, _1k_4_Lucene70_0.dvd=1, _1i_6_Lucene70_0.dvd=1, > _1l_6_Lucene70_0.dvd=1, _1k.cfs=1, _22.fdt=1} > > Stack Trace: > java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are > still 46 open files: {_1i_5_Lucene70_0.dvd=1, _1j.cfs=1, > _1o_4_Lucene70_0.dvd=1, _1t.fdt=1, _25_2_Lucene70_0.dvd=1, > _1x_1_Lucene70_0.dvd=1, _1r_2_Lucene70_0.dvd=1, _1i.cfs=1, _20.fdt=1, > _1u.fdt=1, _1r_3_Lucene70_0.dvd=1, _1n_2_Lucene70_0.dvd=1, _25.fdt=1, > _1t_2_Lucene70_0.dvd=1, _1v.fdt=1, _1x.fdt=1, _27.fdt=1, _26.fdt=1, > _1t_3_Lucene70_0.dvd=1, _1v_1_Lucene70_0.dvd=1, _1w.fdt=1, _1o.cfs=1, > _1y.fdt=1, _1p.fdt=1, _1z.fdt=1, _1n.cfs=1, _23.fdt=1, > _1w_2_Lucene70_0.dvd=1, _1o_5_Lucene70_0.dvd=1, _1q.fdt=1, _24.fdt=1, > _1m.cfs=1, _1r.fdt=1, _1m_8_Lucene70_0.dvd=1, _1p_1_Lucene70_0.dvd=1, > _1l.cfs=1, _1j_1_Lucene70_0.dvd=1, _1u_1_Lucene70_0.dvd=1, > _1m_9_Lucene70_0.dvd=1, _21.fdt=1, _1s.fdt=1, 
_1k_4_Lucene70_0.dvd=1, > _1i_6_Lucene70_0.dvd=1, _1l_6_Lucene70_0.dvd=1, _1k.cfs=1, _22.fdt=1} > at > org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841) > at org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:403) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) > at > org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > at > org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) > at > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) > at > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) > at > com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) > at > 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) > at > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) > at > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) > at > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) > at > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) > at > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMe
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434621#comment-16434621 ] ASF subversion and git services commented on LUCENE-8245:
-
Commit 0c71503448a66e8766008ae0447e36115ffbdd08 in lucene-solr's branch refs/heads/master from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c71503 ]

LUCENE-8245: Change how crossings are computed.

> GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
> -
>
> Key: LUCENE-8245
> URL: https://issues.apache.org/jira/browse/LUCENE-8245
> Project: Lucene - Core
> Issue Type: Bug
> Components: modules/spatial3d
> Reporter: Ignacio Vera
> Assignee: Karl Wright
> Priority: Major
> Fix For: 6.7, 7.4, master (8.0)
> Attachments: LUCENE-8245-case2.patch, LUCENE-8245.jpg, LUCENE-8245.patch, LUCENE-8245_Polygon.patch, LUCENE-8245_Random.patch, LUCENE-8245_case3.patch
>
> When a travel plane crosses an edge close to an edge point, it is possible that the above and below planes cross different edges. In the current implementation, one of the crossings is missed because we only check edges that are crossed by the main plane, and the {{within}} result is wrong.
> One possible fix is to always check the intersections of the planes and edges, regardless of whether they are crossed by the main plane. That fixed the above issue but exposed other issues, like travel planes crossing two edges when it should be only one, due to the fuzziness at edge intersections.
> I'm not sure of a fix, so I'm adding a test showing the issue.
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434622#comment-16434622 ] ASF subversion and git services commented on LUCENE-8245: - Commit 6ce215d6f2e079e505c664e00b6898056dbcc324 in lucene-solr's branch refs/heads/branch_7x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6ce215d ] LUCENE-8245: Change how crossings are computed.
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434623#comment-16434623 ] ASF subversion and git services commented on LUCENE-8245: - Commit 6edfd9f2bd07412ed96c6402cd7c7c1495ae13db in lucene-solr's branch refs/heads/branch_6x from [~kwri...@metacarta.com] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6edfd9f ] LUCENE-8245: Change how crossings are computed.
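The LUCENE-8245 failure mode (a travel plane intersecting an edge very near a polygon point, so that crossings get counted inconsistently) has a familiar 2D analogue: a ray-casting "within" test whose ray passes exactly through a vertex. The sketch below shows the standard half-open rule that counts such a crossing exactly once per edge pair. This is a toy 2D illustration only, not the actual spatial3d GeoComplexPolygon code, which works with planes on the unit sphere.

```java
// 2D analogue of the LUCENE-8245 failure mode: when a test ray passes exactly
// through a polygon vertex, the two edges meeting at that vertex can each
// report a crossing, giving a wrong even/odd (within) answer. The half-open
// rule below ("strictly above" vs "at-or-below") counts each such crossing
// consistently. Illustrative only -- not the real GeoComplexPolygon code.
final class CrossingDemo {
    // Cast a horizontal ray from (px, py) toward +x; polygon given as x[], y[].
    static boolean within(double px, double py, double[] x, double[] y) {
        boolean inside = false;
        int n = x.length;
        for (int i = 0, j = n - 1; i < n; j = i++) {
            // Half-open test: one endpoint strictly above the ray, the other not.
            boolean crosses = (y[i] > py) != (y[j] > py);
            if (crosses) {
                double xAtRay = x[j] + (py - y[j]) * (x[i] - x[j]) / (y[i] - y[j]);
                if (xAtRay > px) inside = !inside;
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        double[] x = {0, 1, 1, 0};
        double[] y = {0, 0, 1, 1};                    // unit square
        System.out.println(within(0.5, 0.5, x, y));   // true: inside
        // Ray from (-0.5, 0) grazes vertices (0,0) and (1,0); the half-open
        // rule still yields a consistent (outside) answer.
        System.out.println(within(-0.5, 0.0, x, y));  // false
    }
}
```

The spatial3d fix discussed above is analogous in spirit: make the crossing count well-defined even when an intersection lands (numerically) on an edge endpoint.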
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7265 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7265/ Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC 8 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:59222/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:59222/collection1 at __randomizedtesting.SeedInfo.seed([34C2FF719BD7DD12:BC96C0AB352BB0EA]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1591) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:212) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShad
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+5) - Build # 1697 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1697/ Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.TestDistributedSearch.test Error Message: IOException occured when talking to server at: http://127.0.0.1:41843//collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:41843//collection1 at __randomizedtesting.SeedInfo.seed([E0C7CA938A6B12DA:6893F54924977F22]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858) at org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873) at org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542) at org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries
[ https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434703#comment-16434703 ] ASF subversion and git services commented on SOLR-11982: Commit 8927d469cb05c27db6864076010de039302c7e25 in lucene-solr's branch refs/heads/master from [~tomasflobbe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8927d46 ] SOLR-11982: Add support for indicating preferred replica types for queries > Add support for indicating preferred replica types for queries > -- > > Key: SOLR-11982 > URL: https://issues.apache.org/jira/browse/SOLR-11982 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud > Affects Versions: 7.4, master (8.0) > Reporter: Ere Maijala > Assignee: Tomás Fernández Löbbe > Priority: Minor > Labels: patch-available, patch-with-test > Attachments: SOLR-11982-preferReplicaTypes.patch, > SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, > SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, > SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch > > > It would be nice to have the possibility to easily sort the shards in the > preferred order, e.g. by replica type. The attached patch adds support for a > {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas > first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT > replicas wouldn't be hit with queries unless they're the only ones available) > and/or to sort by replica location (like preferLocalShards=true but more > versatile).
[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries
[ https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434704#comment-16434704 ] ASF subversion and git services commented on SOLR-11982: Commit ba26bf7c6f8dd28ef0438e719d9b0bb45a60dd58 in lucene-solr's branch refs/heads/branch_7x from [~tomasflobbe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba26bf7 ] SOLR-11982: Add support for indicating preferred replica types for queries
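The {{shards.sort=replicaType:PULL|TLOG}} behavior committed above boils down to a stable sort of candidate replicas by a ranked type-preference list. A minimal sketch of that ordering idea follows; it is illustrative only, not Solr's actual routing code, and the class and method names are invented.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of the ordering idea behind shards.sort=replicaType:PULL|TLOG:
// stable-sort candidate replica types so preferred types come first, so
// NRT replicas are only consulted when no PULL or TLOG replica is available.
// Illustrative only -- names invented, not Solr's implementation.
final class ReplicaSorter {
    static List<String> sortByTypePreference(List<String> replicaTypes, List<String> preferred) {
        List<String> sorted = new ArrayList<>(replicaTypes);
        // Rank by position in the preference list; unlisted types sort last.
        // List.sort is stable, so ties keep their original relative order.
        sorted.sort(Comparator.comparingInt((String t) -> {
            int rank = preferred.indexOf(t);
            return rank == -1 ? preferred.size() : rank;
        }));
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(sortByTypePreference(
                Arrays.asList("NRT", "PULL", "NRT", "TLOG"),
                Arrays.asList("PULL", "TLOG")));  // [PULL, TLOG, NRT, NRT]
    }
}
```

Stability matters here: within the same preference rank, whatever secondary ordering the cluster state produced (for example, locality) is preserved.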
[jira] [Commented] (SOLR-12187) Replica should watch clusterstate and unload itself if its entry is removed
[ https://issues.apache.org/jira/browse/SOLR-12187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434724#comment-16434724 ] Cao Manh Dat commented on SOLR-12187: - Yeah, you're right; the reason I keep the UNLOAD command issued from DeleteReplicaCmd is backward compatibility. {quote} Also, a thought on the test, should it also cover the legacyCloud=true case? {quote} Right, I will update the patch to include the legacyCloud=true case. > Replica should watch clusterstate and unload itself if its entry is removed > --- > > Key: SOLR-12187 > URL: https://issues.apache.org/jira/browse/SOLR-12187 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Reporter: Cao Manh Dat > Assignee: Cao Manh Dat > Priority: Major > Attachments: SOLR-12187.patch, SOLR-12187.patch, SOLR-12187.patch, > SOLR-12187.patch > > > With the introduction of the autoscaling framework, we have seen an increase in > the number of issues related to race conditions between deleting a replica and > other operations.
> Case 1: DeleteReplicaCmd failed to send the UNLOAD request to a replica and > therefore forcefully removed its entry from clusterstate, but the replica > still functions normally and is able to become a leader -> SOLR-12176 > Case 2: > * DeleteReplicaCmd enqueues a DELETECOREOP (without sending a request to the > replica, because the node is not live) > * The node starts and the replica gets loaded > * DELETECOREOP has not been processed yet, hence the replica is still present in > clusterstate --> passes checkStateInZk > * DELETECOREOP is executed, DeleteReplicaCmd finishes > ** result 1: the replica starts recovering, finishes, and publishes itself as > ACTIVE --> state of the replica is ACTIVE > ** result 2: the replica throws an exception (probably an NPE) > --> state of the replica is DOWN and it does not join leader election
[jira] [Commented] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434763#comment-16434763 ] Varun Thacker commented on SOLR-12065: -- {code:java} for (Slice shard : restoreCollection.getSlices()) { ocmh.waitForNewShard(restoreCollectionName, shard.getName()); }{code} I don't think we need to do this explicitly in RestoreCmd. We call the create-collection command, which internally waits for the shards to become active, so we are good here. I removed this logging line: {code:java} int numberOfActiveShards = restoreCollection.getActiveSlices().size(); log.info("Number of activeShards: " + numberOfActiveShards);{code} Also, in the test case I cleaned up how we count docs before and after indexing. With the latest patch, the HDFS backup test fails quite regularly on my machine. Here's what's happening: - we add docs after the restore is complete - call commit - query to assert the doc count. Now, if the query hits a non-leader replica on which a new searcher hasn't been opened yet, the replica returns the old count. > Restore replica always in buffering state > - > > Key: SOLR-12065 > URL: https://issues.apache.org/jira/browse/SOLR-12065 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level.
Issues are Public) > Reporter: Varun Thacker > Assignee: Varun Thacker > Priority: Major > Attachments: 12065.patch, 12605UTLogs.txt.zip, SOLR-12065.patch, > logs_and_metrics.zip, restore_snippet.log > > > Steps to reproduce: > > - > [http://localhost:8983/solr/admin/collections?action=CREATE&name=test_backup&numShards=1&nrtReplicas=1] > - curl [http://127.0.0.1:8983/solr/test_backup/update?commit=true] -H > 'Content-type:application/json' -d ' > [ {"id" : "1"} > ]' > - > [http://localhost:8983/solr/admin/collections?action=BACKUP&name=test_backup&collection=test_backup&location=/Users/varunthacker/backups] > - > [http://localhost:8983/solr/admin/collections?action=RESTORE&name=test_backup&location=/Users/varunthacker/backups&collection=test_restore] > * curl [http://127.0.0.1:8983/solr/test_restore/update?commit=true] -H > 'Content-type:application/json' -d ' > [ > {"id" : "2"} > ]' > * Snippet when you try adding a document > {code:java} > INFO - 2018-03-07 22:48:11.555; [c:test_restore s:shard1 r:core_node22 > x:test_restore_shard1_replica_n21] > org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit > while not ACTIVE - state: BUFFERING replay: false > INFO - 2018-03-07 22:48:11.556; [c:test_restore s:shard1 r:core_node22 > x:test_restore_shard1_replica_n21] > org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; > [test_restore_shard1_replica_n21] webapp=/solr path=/update > params={commit=true}{add=[2 (1594320896973078528)],commit=} 0 4{code} > * If you see "TLOG.state" from [http://localhost:8983/solr/admin/metrics] > it's always 1 (BUFFERING)
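The race described in this thread (a commit is acknowledged, but a follower that has not reopened its searcher yet still reports the pre-commit count) can be pictured with a tiny simulation. This is a toy model to show why counting documents via the shard leaders is deterministic in the test; it is not Solr code, and all names here are invented.

```java
import java.util.Arrays;
import java.util.List;

// Toy simulation of the stale-count race: after a commit, a follower that
// has not reopened its searcher still exposes the old document count, so a
// query routed to an arbitrary replica may see stale data, while querying
// each shard's leader gives the true total. Illustrative only -- not Solr.
final class ShardCountDemo {
    static final class Replica {
        final boolean leader;
        final int visibleDocs; // what a query against this replica would see
        Replica(boolean leader, int visibleDocs) {
            this.leader = leader;
            this.visibleDocs = visibleDocs;
        }
    }

    // Sum counts taking one leader per shard, analogous to issuing a
    // non-distributed query against each shard leader in the test.
    static int countViaLeaders(List<List<Replica>> shards) {
        int total = 0;
        for (List<Replica> shard : shards) {
            for (Replica r : shard) {
                if (r.leader) { total += r.visibleDocs; break; }
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // One shard: the leader sees all 10 docs, a follower is stale at 7.
        List<List<Replica>> shards = Arrays.asList(
                Arrays.asList(new Replica(true, 10), new Replica(false, 7)));
        System.out.println(countViaLeaders(shards)); // 10
    }
}
```

A query load-balanced across both replicas would nondeterministically return 10 or 7, which is exactly the flaky-test behavior reported above.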
[jira] [Updated] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-12065: - Attachment: SOLR-12065.patch
[jira] [Comment Edited] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434763#comment-16434763 ] Varun Thacker edited comment on SOLR-12065 at 4/12/18 12:39 AM: {code:java} for (Slice shard : restoreCollection.getSlices()) { ocmh.waitForNewShard(restoreCollectionName, shard.getName()); }{code} I don't think we need to do this explicitly in RestoreCmd. We call the create-collection command, which internally waits for the shards to become active, so we are good here. I removed this logging line: {code:java} int numberOfActiveShards = restoreCollection.getActiveSlices().size(); log.info("Number of activeShards: " + numberOfActiveShards);{code} Also, in the test case I cleaned up how we count docs before and after indexing. With the latest patch, the HDFS backup test fails quite regularly on my machine. Here's what's happening: * we add docs after the restore is complete * call commit * query to assert the doc count. Now, if the query hits a non-leader replica on which a new searcher hasn't been opened yet, it returns the old count and the test fails
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 542 - Still Unstable!
Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 79, pos 4): )"} ^ java.lang.OutOfMemoryError: Java heap space - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1698 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1698/ Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC 3 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds Error Message: failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:35429_solr, 127.0.0.1:33645_solr] Last available state: DocCollection(testMixedBounds_collection//collections/testMixedBounds_collection/state.json/18)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":"8000-", "state":"active", "replicas":{ "core_node3":{ "core":"testMixedBounds_collection_shard1_replica_n1", "base_url":"https://127.0.0.1:33645/solr", "node_name":"127.0.0.1:33645_solr", "state":"active", "type":"NRT"}, "core_node5":{ "core":"testMixedBounds_collection_shard1_replica_n2", "base_url":"https://127.0.0.1:35429/solr", "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"inactive", "replicas":{ "core_node7":{ "core":"testMixedBounds_collection_shard2_replica_n4", "base_url":"https://127.0.0.1:33645/solr", "node_name":"127.0.0.1:33645_solr", "state":"active", "type":"NRT"}, "core_node8":{ "core":"testMixedBounds_collection_shard2_replica_n6", "base_url":"https://127.0.0.1:35429/solr", "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}}, "stateTimestamp":"1523498850182057484"}, "shard2_0":{ "range":"0-3fff", "state":"active", "replicas":{ "core_node11":{ "core":"testMixedBounds_collection_shard2_0_replica_n9", "base_url":"https://127.0.0.1:35429/solr", "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}, "core_node13":{ "core":"testMixedBounds_collection_shard2_0_replica0", "base_url":"https://127.0.0.1:35429/solr", "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT"}}, "stateTimestamp":"1523498850182090588"}, "shard2_1":{ "range":"4000-7fff", "state":"active", "replicas":{
"core_node12":{ "core":"testMixedBounds_collection_shard2_1_replica_n10", "base_url":"https://127.0.0.1:35429/solr";, "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}, "core_node14":{ "core":"testMixedBounds_collection_shard2_1_replica0", "base_url":"https://127.0.0.1:33645/solr";, "node_name":"127.0.0.1:33645_solr", "state":"active", "type":"NRT"}}, "stateTimestamp":"1523498850182075289"}}, "router":{"name":"compositeId"}, "maxShardsPerNode":"2", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:35429_solr, 127.0.0.1:33645_solr] Last available state: DocCollection(testMixedBounds_collection//collections/testMixedBounds_collection/state.json/18)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":"8000-", "state":"active", "replicas":{ "core_node3":{ "core":"testMixedBounds_collection_shard1_replica_n1", "base_url":"https://127.0.0.1:33645/solr";, "node_name":"127.0.0.1:33645_solr", "state":"active", "type":"NRT"}, "core_node5":{ "core":"testMixedBounds_collection_shard1_replica_n2", "base_url":"https://127.0.0.1:35429/solr";, "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}}}, "shard2":{ "range":"0-7fff", "state":"inactive", "replicas":{ "core_node7":{ "core":"testMixedBounds_collection_shard2_replica_n4", "base_url":"https://127.0.0.1:33645/solr";, "node_name":"127.0.0.1:33645_solr", "state":"active", "type":"NRT"}, "core_node8":{ "core":"testMixedBounds_collection_shard2_replica_n6", "base_url":"https://127.0.0.1:35429/solr";, "node_name":"127.0.0.1:35429_solr", "state":"active", "type":"NRT", "leader":"true"}}, "stateTimestamp":"1523498850182057484"}, "shard2_0":{ "range":"0-3fff", "state":"active", "replicas":{ "core_node11":{ "core":"testMixedBounds_collection_shard2_0_repli
[jira] [Commented] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434833#comment-16434833 ] Varun Thacker commented on SOLR-12065: -- {quote}With the latest patch, the HDFS backup test fails quite regularly on my machine.{quote} Looking more closely at the logs, there were 3 replicas for the shard, and the replica that got queried was a pull replica. I think I'll revert to querying the leaders explicitly and counting the total. > Restore replica always in buffering state > - > > Key: SOLR-12065 > URL: https://issues.apache.org/jira/browse/SOLR-12065 > Project: Solr > Issue Type: Bug > Security Level: Public (Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Major > Attachments: 12065.patch, 12605UTLogs.txt.zip, SOLR-12065.patch, > SOLR-12065.patch, logs_and_metrics.zip, restore_snippet.log > > > Steps to reproduce: > > - > [http://localhost:8983/solr/admin/collections?action=CREATE&name=test_backup&numShards=1&nrtReplicas=1] > - curl [http://127.0.0.1:8983/solr/test_backup/update?commit=true] -H > 'Content-type:application/json' -d ' > [ {"id" : "1"} > ]' > - > [http://localhost:8983/solr/admin/collections?action=BACKUP&name=test_backup&collection=test_backup&location=/Users/varunthacker/backups] > - > [http://localhost:8983/solr/admin/collections?action=RESTORE&name=test_backup&location=/Users/varunthacker/backups&collection=test_restore] > * curl [http://127.0.0.1:8983/solr/test_restore/update?commit=true] -H > 'Content-type:application/json' -d ' > [ > {"id" : "2"} > ]' > * Snippet when you try adding a document > {code:java} > INFO - 2018-03-07 22:48:11.555; [c:test_restore s:shard1 r:core_node22 > x:test_restore_shard1_replica_n21] > org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit > while not ACTIVE - state: BUFFERING replay: false > INFO - 2018-03-07 22:48:11.556; [c:test_restore s:shard1 r:core_node22
> x:test_restore_shard1_replica_n21] > org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor; > [test_restore_shard1_replica_n21] webapp=/solr path=/update > params={commit=true}{add=[2 (1594320896973078528)],commit=} 0 4{code} > * If you see "TLOG.state" from [http://localhost:8983/solr/admin/metrics] > it's always 1 (BUFFERING) > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
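Varun's fix above — querying each shard's leader explicitly (rather than whichever replica happens to answer, possibly a lagging pull replica) and summing the counts — can be modeled with a small stand-alone sketch. All class and method names below are hypothetical stand-ins for illustration only; this is not the SolrJ API and not the actual patch.

```java
import java.util.Arrays;
import java.util.List;

// Toy model: a PULL replica may lag behind its leader, so counting documents
// by asking an arbitrary replica can undercount. Summing the per-shard
// leader counts gives the authoritative total.
public class LeaderCountCheck {

    static class Replica {
        final String core;
        final String type;     // "NRT", "TLOG", or "PULL"
        final boolean leader;
        final long numFound;   // what this replica would report for q=*:*

        Replica(String core, String type, boolean leader, long numFound) {
            this.core = core;
            this.type = type;
            this.leader = leader;
            this.numFound = numFound;
        }
    }

    /** Sum numFound over the leader of each shard (one List per shard). */
    static long countViaLeaders(List<Replica>... shards) {
        long total = 0;
        for (List<Replica> shard : shards) {
            Replica leader = shard.stream()
                    .filter(r -> r.leader)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("shard has no leader"));
            total += leader.numFound;
        }
        return total;
    }

    public static void main(String[] args) {
        // One shard, three replicas; the PULL replica has not caught up yet,
        // so querying it directly would report 7 instead of 10.
        List<Replica> shard1 = Arrays.asList(
                new Replica("shard1_replica_n1", "NRT", true, 10),
                new Replica("shard1_replica_n2", "NRT", false, 10),
                new Replica("shard1_replica_p3", "PULL", false, 7));
        System.out.println("docs counted via leaders: " + countViaLeaders(shard1));
        // prints "docs counted via leaders: 10"
    }
}
```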
[jira] [Updated] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-12065: - Attachment: SOLR-12065.patch
[jira] [Commented] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434857#comment-16434857 ] Varun Thacker commented on SOLR-12065: -- Uploaded a new patch with a CHANGES entry. Running tests and precommit on this patch before committing it.
[jira] [Updated] (SOLR-12065) Restore replica always in buffering state
[ https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-12065: - Attachment: SOLR-12065.patch
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4563 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4563/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.lucene.index.TestIndexWriter.testSoftUpdatesConcurrently Error Message: expected:<1> but was:<15> Stack Trace: java.lang.AssertionError: expected:<1> but was:<15> at __randomizedtesting.SeedInfo.seed([5BFDF077A40231E5:B7B5077189E7543]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.lucene.index.TestIndexWriter.softUpdatesConcurrently(TestIndexWriter.java:3156) at org.apache.lucene.index.TestIndexWriter.testSoftUpdatesConcurrently(TestIndexWriter.java:3036) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 803 lines...] [junit4] Suite: org.apache.lucene.index.TestIndexWriter [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriter -Dtests.method=testSoftUpdatesConcurrently -Dtests.seed=5BFDF077A40231E5 -Dtests.slow=true -Dtests.locale=es-CU -Dtests.timezone=America/Scoresbysund -Dtests.asserts=true -Dtests.file.encoding=U
[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic
[ https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434869#comment-16434869 ] Robert Muir commented on LUCENE-8231: - We may want to tweak the attributes reflection, or look at this POS enum some. It makes things a bit hard for debugging any analysis issues, e.g. {noformat} term=한국,bytes=[ed 95 9c ea b5 ad],startOffset=0,endOffset=2,positionIncrement=1,positionLength=1,type=word,termFrequency=1,posType=MORPHEME,leftPOS=NNP,rightPOS=NNP,morphemes=null,reading=null term=대단,bytes=[eb 8c 80 eb 8b a8],startOffset=4,endOffset=6,positionIncrement=1,positionLength=1,type=word,termFrequency=1,posType=MORPHEME,leftPOS=XR,rightPOS=XR,morphemes=null,reading=null term=나라,bytes=[eb 82 98 eb 9d bc],startOffset=8,endOffset=10,positionIncrement=1,positionLength=1,type=word,termFrequency=1,posType=MORPHEME,leftPOS=NNG,rightPOS=NNG,morphemes=null,reading=null term=이,bytes=[ec 9d b4],startOffset=10,endOffset=13,positionIncrement=1,positionLength=1,type=word,termFrequency=1,posType=MORPHEME,leftPOS=VCP,rightPOS=VCP,morphemes=null,reading=null {noformat} Can we make it so the user doesn't have to decode the tag abbreviations, at least when using the toString (reflector stuff)? If they have to look this up from comments in the source code it will be more difficult. 
For example kuromoji: {noformat} term=多く,bytes=[e5 a4 9a e3 81 8f],startOffset=0,endOffset=2,positionIncrement=1,positionLength=1,type=word,termFrequency=1,baseForm=null,partOfSpeech=名詞-副詞可能,partOfSpeech (en)=noun-adverbial,reading=オオク,reading (en)=ooku,pronunciation=オーク,pronunciation (en)=oku,inflectionType=null,inflectionType (en)=null,inflectionForm=null,inflectionForm (en)=null,keyword=false term=学生,bytes=[e5 ad a6 e7 94 9f],startOffset=3,endOffset=5,positionIncrement=2,positionLength=1,type=word,termFrequency=1,baseForm=null,partOfSpeech=名詞-一般,partOfSpeech (en)=noun-common,reading=ガクセイ,reading (en)=gakusei,pronunciation=ガクセイ,pronunciation (en)=gakusei,inflectionType=null,inflectionType (en)=null,inflectionForm=null,inflectionForm (en)=null,keyword=false term=試験,bytes=[e8 a9 a6 e9 a8 93],startOffset=6,endOffset=8,positionIncrement=2,positionLength=1,type=word,termFrequency=1,baseForm=null,partOfSpeech=名詞-サ変接続,partOfSpeech (en)=noun-verbal,reading=シケン,reading (en)=shiken,pronunciation=シケン,pronunciation (en)=shiken,inflectionType=null,inflectionType (en)=null,inflectionForm=null,inflectionForm (en)=null,keyword=false {noformat} > Nori, a Korean analyzer based on mecab-ko-dic > - > > Key: LUCENE-8231 > URL: https://issues.apache.org/jira/browse/LUCENE-8231 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Jim Ferenczi >Priority: Major > Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch > > > There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic: > It is available under an Apache license here: > https://bitbucket.org/eunjeon/mecab-ko-dic > This dictionary was built with MeCab, it defines a format for the features > adapted for the Korean language. 
> Since the Kuromoji tokenizer uses the same format for the morphological > analysis (left cost + right cost + word cost) I tried to adapt the module to > handle Korean with the mecab-ko-dic. I've started with a POC that copies the > Kuromoji module and adapts it for the mecab-ko-dic. > I used the same classes to build and read the dictionary but I had to make > some modifications to handle the differences with the IPADIC and Japanese. > The resulting binary dictionary takes 28MB on disk, it's bigger than the > IPADIC but mainly because the source is bigger and there are a lot of > compound and inflect terms that define a group of terms and the segmentation > that can be applied. > I attached the patch that contains this new Korean module called -godori- > nori. It is an adaptation of the Kuromoji module so currently > the two modules don't share any code. I wanted to validate the approach first > and check the relevancy of the results. I don't speak Korean so I used the > relevancy > tests that was added for another Korean tokenizer > (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output > against mecab-ko which is the official fork of mecab to use the mecab-ko-dic. > I had to simplify the JapaneseTokenizer, my version removes the nBest output > and the decomposition of too long tokens. I also > modified the handling of whitespaces since they are important in Korean. > Whitespaces that appear before a term are attached to that term and this > information is used to compute a penalty based on the Part of Speech of the > token. The penalty cost is a feature a
[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic
[ https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434875#comment-16434875 ] Robert Muir commented on LUCENE-8231: - An easy win related to this is to make enum values have real javadocs comments instead of source code ones. e.g. today it looks like: {code} public enum Tag { // Infix E(100), // Interjection IC(110), {code} But if we make "Infix" and "Interjection" real javadocs comments then the user has a way to see the whole POS tagset in the generated documentation. > Nori, a Korean analyzer based on mecab-ko-dic > - > > Key: LUCENE-8231 > URL: https://issues.apache.org/jira/browse/LUCENE-8231 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Jim Ferenczi >Priority: Major > Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch > > > There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic: > It is available under an Apache license here: > https://bitbucket.org/eunjeon/mecab-ko-dic > This dictionary was built with MeCab, it defines a format for the features > adapted for the Korean language. > Since the Kuromoji tokenizer uses the same format for the morphological > analysis (left cost + right cost + word cost) I tried to adapt the module to > handle Korean with the mecab-ko-dic. I've started with a POC that copies the > Kuromoji module and adapts it for the mecab-ko-dic. > I used the same classes to build and read the dictionary but I had to make > some modifications to handle the differences with the IPADIC and Japanese. > The resulting binary dictionary takes 28MB on disk, it's bigger than the > IPADIC but mainly because the source is bigger and there are a lot of > compound and inflect terms that define a group of terms and the segmentation > that can be applied. 
> I attached the patch that contains this new Korean module called -godori- > nori. It is an adaptation of the Kuromoji module so currently > the two modules don't share any code. I wanted to validate the approach first > and check the relevancy of the results. I don't speak Korean so I used the > relevancy > tests that was added for another Korean tokenizer > (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output > against mecab-ko which is the official fork of mecab to use the mecab-ko-dic. > I had to simplify the JapaneseTokenizer, my version removes the nBest output > and the decomposition of too long tokens. I also > modified the handling of whitespaces since they are important in Korean. > Whitespaces that appear before a term are attached to that term and this > information is used to compute a penalty based on the Part of Speech of the > token. The penalty cost is a feature added to mecab-ko to handle > morphemes that should not appear after a morpheme and is described in the > mecab-ko page: > https://bitbucket.org/eunjeon/mecab-ko > Ignoring whitespaces is also more inlined with the official MeCab library > which attach the whitespaces to the term that follows. > I also added a decompounder filter that expand the compounds and inflects > defined in the dictionary and a part of speech filter similar to the Japanese > that removes the morpheme that are not useful for relevance (suffix, prefix, > interjection, ...). These filters don't play well with the tokenizer if it > can > output multiple paths (nBest output for instance) so for simplicity I removed > this ability and the Korean tokenizer only outputs the best path. > I compared the result with mecab-ko to confirm that the analyzer is working > and ran the relevancy test that is defined in HantecRel.java included > in the patch (written by Robert for another Korean analyzer). 
Here are the > results: > ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)|| > |Standard|35s|131MB|.007|.1044|.1053| > |CJK|36s|164MB|.1418|.1924|.1916| > |Korean|212s|90MB|.1628|.2094|.2078| > I find the results very promising so I plan to continue to work on this > project. I started to extract the part of the code that could be shared with > the > Kuromoji module but I wanted to share the status and this POC first to > confirm that this approach is viable. The advantages of using the same model > than > the Japanese analyzer are multiple: we don't have a Korean analyzer at the > moment ;), the resulting dictionary is small compared to other libraries that > use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the > lattice on the fly to select the best path efficiently. > The dictionary can be built directly from the godori module with the > following command: > ant regenerate (you need to creat
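Robert's suggestion above — real javadoc comments on the POS enum constants plus a human-readable gloss surfaced next to the abbreviation during attribute reflection — can be sketched as follows. The names (PosTagSketch, description(), reflect()) are hypothetical; the E/IC glosses come from the snippet in the comment, the noun glosses are illustrative, and this is not the actual Nori patch.

```java
// Sketch: each tag constant carries a javadoc comment (visible in generated
// documentation) and an English description (visible at runtime), so users
// don't have to decode abbreviations from source-code comments.
public class PosTagSketch {

    /** Part-of-speech tags from mecab-ko-dic (tiny subset, for illustration). */
    public enum Tag {
        /** Infix */
        E(100, "infix"),
        /** Interjection */
        IC(110, "interjection"),
        /** General noun */
        NNG(120, "common noun"),
        /** Proper noun */
        NNP(121, "proper noun");

        private final int code;
        private final String description;

        Tag(int code, String description) {
            this.code = code;
            this.description = description;
        }

        public int code() { return code; }

        /** English gloss, so the abbreviation is self-explanatory. */
        public String description() { return description; }
    }

    /** What an attribute reflector might emit: "NNP (proper noun)". */
    public static String reflect(Tag tag) {
        return tag.name() + " (" + tag.description() + ")";
    }

    public static void main(String[] args) {
        for (Tag t : Tag.values()) {
            System.out.println("leftPOS=" + reflect(t));
        }
    }
}
```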
[jira] [Created] (SOLR-12214) Leader may skip publish itself as ACTIVE when its last published state is DOWN
Cao Manh Dat created SOLR-12214: --- Summary: Leader may skip publish itself as ACTIVE when its last published state is DOWN Key: SOLR-12214 URL: https://issues.apache.org/jira/browse/SOLR-12214 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Cao Manh Dat Assignee: Cao Manh Dat Found this problem while beasting LeaderVoteWaitTimeout: * a replica publishes itself as DOWN in RecoveryStrategy.pingLeader() * it wins the election (by vote wait timeout) and then skips publishing itself as ACTIVE (looking at the clusterstate, its state is still ACTIVE) * it ends up as a leader in the DOWN state! Therefore, the replica should look at both the clusterstate and its local last published state.
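The bug and its fix reduce to a truth table over the two state sources. The sketch below uses hypothetical names (it is not Solr's actual leader-election code): the buggy check consults only the clusterstate, while the fixed check skips the ACTIVE publish only when both the clusterstate and the locally tracked last published state already say ACTIVE.

```java
public class LeaderStatePublishSketch {

    enum State { ACTIVE, DOWN, RECOVERING }

    /**
     * Buggy check: trusts the clusterstate alone. A replica that last
     * published DOWN (from pingLeader()) but is still ACTIVE in the stale
     * clusterstate skips the publish and stays a DOWN leader.
     */
    static boolean skipPublishActiveBuggy(State clusterState, State lastPublished) {
        return clusterState == State.ACTIVE;
    }

    /** Fixed check: only skip when both sources already say ACTIVE. */
    static boolean skipPublishActiveFixed(State clusterState, State lastPublished) {
        return clusterState == State.ACTIVE && lastPublished == State.ACTIVE;
    }

    public static void main(String[] args) {
        // Scenario from the report: clusterstate says ACTIVE, but the replica
        // last published DOWN before winning the election by vote-wait timeout.
        State cs = State.ACTIVE;
        State last = State.DOWN;
        System.out.println("buggy check skips publish: " + skipPublishActiveBuggy(cs, last));
        System.out.println("fixed check skips publish: " + skipPublishActiveFixed(cs, last));
    }
}
```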
[jira] [Updated] (SOLR-12214) Leader may skip publish itself as ACTIVE when its last published state is DOWN
[ https://issues.apache.org/jira/browse/SOLR-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-12214: Attachment: SOLR-12214.patch
[jira] [Updated] (SOLR-12214) Leader may skip publish itself as ACTIVE when its last published state is DOWN
[ https://issues.apache.org/jira/browse/SOLR-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-12214: Description: Found this problem while beasting LeaderVoteWaitTimeout: * a replica publishes itself as DOWN in RecoveryStrategy.pingLeader() * it wins the election (by vote wait timeout) and then skips publishing itself as ACTIVE (looking at the clusterstate, its state is still ACTIVE) * it ends up as a leader in the DOWN state! Therefore, the replica should look at both the clusterstate and its local last published state when deciding whether to skip publishing itself as ACTIVE.
[JENKINS] Lucene-Solr-BadApples-master-Linux (32bit/jdk1.8.0_162) - Build # 21 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/21/ Java: 32bit/jdk1.8.0_162 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.TestAuthenticationFramework.testBasics Error Message: Error from server at https://127.0.0.1:45651/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/testcollection_shard1_replica_n2/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update Powered by Jetty:// 9.4.8.v20171121 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:45651/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/testcollection_shard1_replica_n2/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n2/update.
Reason: Can not find: /solr/testcollection_shard1_replica_n2/update Powered by Jetty:// 9.4.8.v20171121 at __randomizedtesting.SeedInfo.seed([60110C3757D72F3E:5DC9A21B6F39714E]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:127) at org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:75) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementA
[jira] [Commented] (SOLR-12190) need to properly escape output in GraphMLResponseWriter
[ https://issues.apache.org/jira/browse/SOLR-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434913#comment-16434913 ] ASF subversion and git services commented on SOLR-12190: Commit 8d20fc575bab2c4d0353bb34c6dc66566290f094 in lucene-solr's branch refs/heads/master from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d20fc5 ] SOLR-12190: properly escape output in GraphMLResponseWriter > need to properly escape output in GraphMLResponseWriter > --- > > Key: SOLR-12190 > URL: https://issues.apache.org/jira/browse/SOLR-12190 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Yonik Seeley >Assignee: Yonik Seeley >Priority: Major > Attachments: SOLR-12190.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12190) need to properly escape output in GraphMLResponseWriter
[ https://issues.apache.org/jira/browse/SOLR-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434917#comment-16434917 ] ASF subversion and git services commented on SOLR-12190: Commit c7548f9d1bec658e8caade2348bceb365568647e in lucene-solr's branch refs/heads/branch_7x from [~yo...@apache.org] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c7548f9 ] SOLR-12190: properly escape output in GraphMLResponseWriter
[jira] [Resolved] (SOLR-12190) need to properly escape output in GraphMLResponseWriter
[ https://issues.apache.org/jira/browse/SOLR-12190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yonik Seeley resolved SOLR-12190. - Resolution: Fixed Fix Version/s: master (8.0) 7.4
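The SOLR-12190 patch itself is not reproduced in this thread. As a rough illustration of the kind of escaping a GraphML (XML) response writer needs, here is a minimal self-contained sketch; the class and method names (`GraphMlEscape`, `escapeXml`) are hypothetical and this is not the actual Solr code:

```java
// Hypothetical sketch, not the Solr patch: node ids and field values must be
// escaped before being written into XML attributes or element text, otherwise
// crafted values can break the document structure or inject markup.
public class GraphMlEscape {
  public static String escapeXml(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      switch (c) {
        case '<':  sb.append("&lt;");   break;
        case '>':  sb.append("&gt;");   break;
        case '&':  sb.append("&amp;");  break;
        case '"':  sb.append("&quot;"); break;
        case '\'': sb.append("&apos;"); break;
        default:   sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // A node id containing markup must not terminate the attribute early.
    System.out.println("<node id=\"" + escapeXml("a\"/><evil") + "\"/>");
  }
}
```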
[jira] [Updated] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document
[ https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated LUCENE-8229: - Attachment: LUCENE-8229_small_improvements.patch > Add a method to Weight to retrieve matches for a single document > > > Key: LUCENE-8229 > URL: https://issues.apache.org/jira/browse/LUCENE-8229 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Fix For: 7.4 > > Attachments: LUCENE-8229.patch, LUCENE-8229_small_improvements.patch > > Time Spent: 5h 20m > Remaining Estimate: 0h > > The ability to find out exactly what a query has matched on is a fairly > frequent feature request, and would also make highlighters much easier to > implement. There have been a few attempts at doing this, including adding > positions to Scorers, or re-writing queries as Spans, but these all either > compromise general performance or involve up-front knowledge of all queries. > Instead, I propose adding a method to Weight that exposes an iterator over > matches in a particular document and field. It should be used in a similar > manner to explain() - ie, just for TopDocs, not as part of the scoring loop, > which relieves some of the pressure on performance.
[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document
[ https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434924#comment-16434924 ] David Smiley commented on LUCENE-8229: -- I'm excited about this too :-) I made some small tweaks, such as adding lucene.experimental annotations (at least until 8.0), and using Java 8 streams in one place. What do you think?
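The proposal in the issue description above (an explain()-like method exposing an iterator over matches in a particular document and field) can be sketched with toy types. This shows only the shape of the idea; the real Lucene API introduced by LUCENE-8229 differs in detail, and every name below (`MatchesSketch`, `fixedMatches`) is hypothetical:

```java
import java.util.Iterator;
import java.util.List;

// Toy sketch of the iterator-over-matches idea from LUCENE-8229, not the real
// Lucene interfaces. Each getMatches() call hands back a fresh iterator, so a
// caller can inspect the same field repeatedly; like explain(), this is meant
// per top hit, not inside the scoring loop.
public class MatchesSketch {
  public record Match(int startPosition, int endPosition) {}

  public interface Matches {
    // Returns a new iterator on every call.
    Iterator<Match> getMatches(String field);
  }

  public static Matches fixedMatches(List<Match> ms) {
    return field -> ms.iterator(); // fresh iterator per invocation
  }

  public static void main(String[] args) {
    Matches m = fixedMatches(List.of(new Match(3, 4), new Match(10, 11)));
    int count = 0;
    Iterator<Match> it = m.getMatches("body");
    while (it.hasNext()) { it.next(); count++; }
    // A second call still works because getMatches returns a fresh iterator.
    boolean again = m.getMatches("body").hasNext();
    System.out.println(count + " matches; fresh iterator usable again: " + again);
  }
}
```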
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434925#comment-16434925 ] Karl Wright commented on LUCENE-8245: - [~ivera], I committed revised code that I'm happy with, but I needed to disable tests for two cases in the GeoPolygonTest suite. One of the tests fails again because it picks a bad path -- I thought maybe you'd like to analyze that one while I look at the other. This is the "bad path picked" bug: {code} ant test -Dtestcase=GeoPolygonTest -Dtests.method=testComplexPolygonPlaneOutsideWorld {code} > GeoComplexPolygon fails when intersection of travel plane with edge is near > polygon point > - > > Key: LUCENE-8245 > URL: https://issues.apache.org/jira/browse/LUCENE-8245 > Project: Lucene - Core > Issue Type: Bug > Components: modules/spatial3d >Reporter: Ignacio Vera >Assignee: Karl Wright >Priority: Major > Fix For: 6.7, 7.4, master (8.0) > > Attachments: LUCENE-8245-case2.patch, LUCENE-8245.jpg, > LUCENE-8245.patch, LUCENE-8245_Polygon.patch, LUCENE-8245_Random.patch, > LUCENE-8245_case3.patch > > > When a travel plane crosses an edge close to an edge point, it is possible > that the above and below planes cross different edges. In the current > implementation one of the crossings is missed because we only check edges that > are crossed by the main plane, and the {{within}} result is wrong. > One possible fix is to always check the intersection of planes and edges > regardless of whether they are crossed by the main plane. That fixed the above issue but > exposed other issues, like travel planes crossing two edges when they should cross > only one, due to the fuzziness at edge intersections. > Not sure of a fix, so I added the test showing the issue. > >
[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+5) - Build # 21 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/21/ Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 5 tests failed. FAILED: org.apache.solr.cloud.CreateRoutedAliasTest.testV1 Error Message: Unexpected status: HTTP/1.1 400 Bad Request Stack Trace: java.lang.AssertionError: Unexpected status: HTTP/1.1 400 Bad Request at __randomizedtesting.SeedInfo.seed([E3D0187761E00DB8:4C1EDF07200DA128]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.CreateRoutedAliasTest.assertSuccess(CreateRoutedAliasTest.java:366) at org.apache.solr.cloud.CreateRoutedAliasTest.testV1(CreateRoutedAliasTest.java:198) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:841) FAILED: org.apache.solr.cloud.CreateRoutedAliasTest.testV2 Error Message: Unexpected status: HTTP/1.1 400 Bad Request Stack Trace: java.lang.AssertionError: Unexpected status: HTTP/1.1 400 Bad Request at __randomizedtesting.SeedInfo.seed([
[jira] [Updated] (SOLR-12214) Leader may skip publish itself as ACTIVE even its last published state is DOWN
[ https://issues.apache.org/jira/browse/SOLR-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cao Manh Dat updated SOLR-12214: Summary: Leader may skip publish itself as ACTIVE even its last published state is DOWN (was: Leader may skip publish itself as ACTIVE when its last published state is DOWN) > Leader may skip publish itself as ACTIVE even its last published state is DOWN > -- > > Key: SOLR-12214 > URL: https://issues.apache.org/jira/browse/SOLR-12214 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Cao Manh Dat >Assignee: Cao Manh Dat >Priority: Major > Attachments: SOLR-12214.patch > > > Found this problem while beasting LeaderVoteWaitTimeout: > * a replica publishes itself as DOWN in RecoveryStrategy.pingLeader() > * it wins the election (by vote wait timeout), then skips publishing itself as > ACTIVE (because in the clusterstate its state is still ACTIVE) > * we end up with a leader in DOWN state! > Therefore, the replica should look at both the clusterstate and its local last > published state when deciding whether to skip publishing itself as ACTIVE
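The fix described in the issue (consult both the cluster state and the locally last-published state before skipping the ACTIVE publish) can be sketched as a toy decision function. The types and names here (`LeaderStateSketch`, `shouldPublishActive`) are hypothetical and this is not the actual Solr code:

```java
// Toy sketch of the SOLR-12214 fix idea, not the Solr implementation: a new
// leader may only skip re-publishing itself as ACTIVE when BOTH the cluster
// state AND its own last locally published state already say ACTIVE.
public class LeaderStateSketch {
  public enum State { ACTIVE, DOWN, RECOVERING }

  public static boolean shouldPublishActive(State clusterState, State lastLocallyPublished) {
    // The pre-fix behaviour looked at clusterState alone; if it said ACTIVE,
    // the publish was skipped, leaving a leader whose last published state
    // could still be DOWN.
    return clusterState != State.ACTIVE || lastLocallyPublished != State.ACTIVE;
  }

  public static void main(String[] args) {
    // The beasting scenario: the replica published DOWN locally, but the
    // cluster state still says ACTIVE, so it must publish ACTIVE again.
    System.out.println(shouldPublishActive(State.ACTIVE, State.DOWN)); // prints true
  }
}
```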
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434960#comment-16434960 ] Ignacio Vera commented on LUCENE-8245: -- Sure! I'll have a look, time permitting.
[jira] [Commented] (SOLR-12214) Leader may skip publish itself as ACTIVE even its last published state is DOWN
[ https://issues.apache.org/jira/browse/SOLR-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434958#comment-16434958 ] ASF subversion and git services commented on SOLR-12214: Commit 11d54b0cc4fd1df567afe9f4690e0f5a8a55f1ab in lucene-solr's branch refs/heads/master from [~caomanhdat] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=11d54b0 ] SOLR-12214: Leader may skip publish itself as ACTIVE even its last published state is DOWN
[jira] [Commented] (SOLR-12214) Leader may skip publish itself as ACTIVE even its last published state is DOWN
[ https://issues.apache.org/jira/browse/SOLR-12214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434961#comment-16434961 ] ASF subversion and git services commented on SOLR-12214: Commit df736af82d313284ad8f7462a6425d6021da8015 in lucene-solr's branch refs/heads/branch_7x from [~caomanhdat] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df736af ] SOLR-12214: Leader may skip publish itself as ACTIVE even its last published state is DOWN
[jira] [Commented] (LUCENE-8245) GeoComplexPolygon fails when intersection of travel plane with edge is near polygon point
[ https://issues.apache.org/jira/browse/LUCENE-8245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434976#comment-16434976 ] Karl Wright commented on LUCENE-8245: - The other remaining failure is due to inconsistent detection of whether or not one of the envelope planes is intersected. Two edges share the same endpoint, and that endpoint is very close to the envelope travel plane. One of the edges detects the intersection, but the other does not, and that leads to a miscount: {code} [junit4] 1> The following edges should intersect the travel/testpoint planes: [junit4] 1> Travel plane: [lat=-1.4506713533447755, lon=-1.2251247551355924([X=0.04060395941016342, Y=-0.11274773940940182, Z=-0.9927936672533157])] -> [lat=4.9E-324, lon=0.0([X=1.0, Y=0.0, Z=4.9E-324])] [junit4] 1> Test point plane: [lat=-1.4506713533447755, lon=-1.2251247551355924([X=0.04060395941016342, Y=-0.11274773940940182, Z=-0.9927936672533157])] -> [lat=4.9E-324, lon=0.0([X=1.0, Y=0.0, Z=4.9E-324])] [junit4] 1> Travel plane: [lat=4.9E-324, lon=0.0([X=1.0, Y=0.0, Z=4.9E-324])] -> [lat=0.13953211802880663, lon=-2.443438340098597([X=-0.7585849990851791, Y=-0.6365576248361361, Z=0.139079795174987])] [junit4] 1> [junit4] 1> Considering edge [lat=-1.4506713533447755, lon=-1.2251247551355924([X=0.04060395941016342, Y=-0.11274773940940182, Z=-0.9927936672533157])] -> [lat=4.9E-324, lon=0.0([X=1.0, Y=0.0, Z=4.9E-324])] [junit4] 1> Edge intersects travel or testPoint plane [junit4] 1> Assessing inner crossings... [junit4] 1> Assessing travel intersection point [X=1.0, Y=-1.3268072236836583E-12, Z=-1.1683123903318337E-11]... 
[junit4] 1>Adjoining point [X=1.0, Y=-1.2139664266344454E-12, Z=-1.068951082242553E-11] (dist = 9.994E-13) is within [junit4] 1>Adjoining point [X=1.0, Y=-1.4396480207328712E-12, Z=-1.2676736984211145E-11] (dist = 9.994E-13) is not within [junit4] 1> Assessing testpoint intersection point [X=0.7540698997149328, Y=-0.07411317803505024, Z=-0.6525992822440554]... [junit4] 1>Adjoining point [X=0.7540698997155896, Y=-0.07411317803496516, Z=-0.6525992822433061] (dist = 1.354317132432E-12) is not within [junit4] 1>Adjoining point [X=0.7540698997142758, Y=-0.07411317803513531, Z=-0.6525992822448046] (dist = 1.23996169918E-12) is within [junit4] 1> Assessing outer crossings... // We don't get a travel outer intersection here!!! But we did below, and that's the problem. One crossing is direct, the other is oblique, and that's enough to allow // numerical imprecision to give us different results even though the endpoint for both edges is the same. [junit4] 1> Assessing testpoint intersection point [X=0.7540698997131794, Y=-0.0741131780352774, Z=-0.6525992822460556]... [junit4] 1>Adjoining point [X=0.7540698997138362, Y=-0.07411317803519231, Z=-0.6525992822453063] (dist = 1.354317132432E-12) is within [junit4] 1>Adjoining point [X=0.7540698997125226, Y=-0.07411317803536248, Z=-0.6525992822468049] (dist = 1.354317132432E-12) is not within [junit4] 1> [junit4] 1> Considering edge [lat=4.9E-324, lon=0.0([X=1.0, Y=0.0, Z=4.9E-324])] -> [lat=0.13953211802880663, lon=-2.443438340098597([X=-0.7585849990851791, Y=-0.6365576248361361, Z=0.139079795174987])] [junit4] 1> Edge intersects travel or testPoint plane [junit4] 1> Assessing inner crossings... [junit4] 1> Assessing travel intersection point [X=1.0, Y=-1.326807223683658E-12, Z=2.8989060802487275E-13]... 
[junit4] 1>Adjoining point [X=1.0, Y=-2.3037607755580197E-12, Z=5.033426107797519E-13] (dist = 1.0E-12) is not within [junit4] 1>Adjoining point [X=1.0, Y=-3.4985367180929626E-13, Z=7.643860526999353E-14] (dist = 1.0002E-12) is within [junit4] 1> Assessing outer crossings... [junit4] 1> Assessing travel intersection point [X=1.0, Y=6.731927763163422E-13, Z=-1.470841127187181E-13]... [junit4] 1>Adjoining point [X=1.0, Y=-3.037607755580196E-13, Z=6.636789003616112E-14] (dist = 1.0002E-12) is within [junit4] 1>Adjoining point [X=1.0, Y=1.650146328190704E-12, Z=-3.6053611547359735E-13] (dist = 1.0E-12) is not within {code} This may be due to the angle of intersection; probably neither should be considered "intersecting", but 1e-12 of extra leeway along one edge is enough, while along the other it is not. The only simple way to correct this is either to tighten the cutoff leeway so that neither edge detects an intersection, or to loosen it enough that both edges would detect one. I fear that either solution would leave the same situation in place, albeit at a tighter range of values, so I need to think through alternatives. One potential solution is to notice that the endpoint itself is within the envelope plane. Th
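The consistency idea at the end of the comment above (when two edges share an endpoint that lies within the cutoff distance of an envelope plane, decide "intersects" from the endpoint itself, so both edges get the same answer regardless of crossing angle) can be sketched numerically. This is a toy plane-distance calculation with a hypothetical `CUTOFF` constant, not the GeoComplexPolygon code; the plane is n.x*x + n.y*y + n.z*z + d = 0 with a unit normal assumed:

```java
// Toy numerical sketch of the "endpoint within the envelope plane" check, not
// Lucene spatial3d code. If the shared endpoint is itself within CUTOFF of the
// plane, both edges meeting there are classified identically, sidestepping the
// angle-dependent imprecision described in the log above.
public class EnvelopeSketch {
  public static final double CUTOFF = 1e-12; // hypothetical leeway constant

  // Absolute distance from point (px,py,pz) to the plane, unit normal assumed.
  public static double planeDistance(double nx, double ny, double nz, double d,
                                     double px, double py, double pz) {
    return Math.abs(nx * px + ny * py + nz * pz + d);
  }

  public static boolean endpointOnEnvelope(double nx, double ny, double nz, double d,
                                           double px, double py, double pz) {
    return planeDistance(nx, ny, nz, d, px, py, pz) < CUTOFF;
  }

  public static void main(String[] args) {
    // A shared endpoint sitting ~5e-13 off the plane z = 0: every edge that
    // meets here is consistently treated as touching the envelope.
    System.out.println(endpointOnEnvelope(0, 0, 1, 0, 1.0, 0.0, 5e-13)); // prints true
  }
}
```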
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1528 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1528/ 9 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test Error Message: Timeout occured while waiting response from server at: http://127.0.0.1:36679/t/hj/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: http://127.0.0.1:36679/t/hj/collection1 at __randomizedtesting.SeedInfo.seed([28A40EFF3C7A8E3E:A0F031259286E3C6]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1591) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:212) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at c