[jira] [Assigned] (SOLR-11914) Remove/move questionable SolrParams methods
[ https://issues.apache.org/jira/browse/SOLR-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley reassigned SOLR-11914:
-----------------------------------

    Assignee: David Smiley

The latest patch adds a few more comments, and I adjusted all callers of
SolrParams.toSolrParams(nl) to instead call nl.toSolrParams() (which is a ton
of places). I plan to commit early next week.

> Remove/move questionable SolrParams methods
> -------------------------------------------
>
>                 Key: SOLR-11914
>                 URL: https://issues.apache.org/jira/browse/SOLR-11914
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: SolrJ
>            Reporter: David Smiley
>            Assignee: David Smiley
>            Priority: Minor
>              Labels: newdev
>         Attachments: SOLR-11914.patch, SOLR-11914.patch
>
> {{Map getAll(Map sink, Collection params)}}
> is used only by the CollectionsHandler, and has particular rules about how it
> handles multi-valued data that make it not very generic, so I don't think it
> belongs here. Furthermore, the existence of this method is confusing in that
> it gives the user another choice versus toMap (there are two overloaded
> variants).
>
> {{SolrParams toFilteredSolrParams(List names)}}
> is called in only one place, and something about it bothers me; perhaps just
> the name, or that it ought to be a view.
>
> {{static Map toMap(NamedList params)}}
> isn't used, and I don't like it; it doesn't even involve a SolrParams! A
> legacy of 2006.
>
> {{static Map toMultiMap(NamedList params)}}
> doesn't involve a SolrParams either; a legacy of 2006 with some updates
> since. It is used in some places. Perhaps it should be moved to NamedList as
> an instance method.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
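For readers unfamiliar with the methods under discussion, the multi-map flattening that toMultiMap performs can be sketched in plain Java. This is an illustrative reimplementation, not Solr's actual code; the list of name/value pairs here is a stand-in for a NamedList, where a name may repeat with multiple values.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MultiMapSketch {
    /** Collapse repeated name/value pairs into a Map whose values are String
     *  arrays -- the general shape of toMultiMap's conversion. */
    public static Map<String, String[]> toMultiMap(List<Map.Entry<String, String>> pairs) {
        Map<String, List<String>> tmp = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : pairs) {
            // accumulate every value seen for a given name, in order
            tmp.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        }
        Map<String, String[]> out = new LinkedHashMap<>();
        tmp.forEach((k, v) -> out.put(k, v.toArray(new String[0])));
        return out;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> pairs = List.of(
            Map.entry("fq", "color:red"),
            Map.entry("fq", "size:L"),
            Map.entry("q", "*:*"));
        Map<String, String[]> mm = toMultiMap(pairs);
        System.out.println(mm.get("fq").length); // 2
    }
}
```

The sketch also hints at why the issue argues this belongs on NamedList: the conversion never consults a SolrParams at all.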
[jira] [Updated] (SOLR-11914) Remove/move questionable SolrParams methods
[ https://issues.apache.org/jira/browse/SOLR-11914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Smiley updated SOLR-11914:
--------------------------------
    Attachment: SOLR-11914.patch
[jira] [Commented] (SOLR-12253) Remove optimize button from the core admin page too
[ https://issues.apache.org/jira/browse/SOLR-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446617#comment-16446617 ]

Erick Erickson commented on SOLR-12253:
---------------------------------------

I don't think the optimize button in the DIH screen should be there either; remove it.

> Remove optimize button from the core admin page too
> ----------------------------------------------------
>
>                 Key: SOLR-12253
>                 URL: https://issues.apache.org/jira/browse/SOLR-12253
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Erick Erickson
>            Assignee: Erick Erickson
>            Priority: Minor
>         Attachments: SOLR-12253.patch, SOLR-12253.patch
>
> SOLR-7733 removed the optimize button in the individual core display, but not
> the "core admin" link. Furthermore, the optimize button does nothing.
[jira] [Updated] (SOLR-12253) Remove optimize button from the core admin page too
[ https://issues.apache.org/jira/browse/SOLR-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-12253:
----------------------------------
    Attachment: SOLR-12253.patch
[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 45 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/45/

2 tests failed.

FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:49247/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:42924/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:49247/solr/MoveReplicaHDFSTest_failed_coll_true, http://127.0.0.1:42924/solr/MoveReplicaHDFSTest_failed_coll_true]
	at __randomizedtesting.SeedInfo.seed([CB120904B3C6130E:61DFDAF60415C6DE]:0)
	at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
	at org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at
[jira] [Commented] (SOLR-12253) Remove optimize button from the core admin page too
[ https://issues.apache.org/jira/browse/SOLR-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446609#comment-16446609 ]

Erick Erickson commented on SOLR-12253:
---------------------------------------

Anyone with more UI chops than me want to take a look? There are still some references to "optimize" in DIH that I want to look at; all help appreciated.
[jira] [Updated] (SOLR-12253) Remove optimize button from the core admin page too
[ https://issues.apache.org/jira/browse/SOLR-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated SOLR-12253:
----------------------------------
    Attachment: SOLR-12253.patch
[jira] [Resolved] (SOLR-11418) Allow comments in Streaming Expressions
[ https://issues.apache.org/jira/browse/SOLR-11418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-11418.
-----------------------------------
    Resolution: Resolved

> Allow comments in Streaming Expressions
> ----------------------------------------
>
>                 Key: SOLR-11418
>                 URL: https://issues.apache.org/jira/browse/SOLR-11418
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Joel Bernstein
>            Assignee: Joel Bernstein
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>         Attachments: SOLR-11418.patch
>
> Now that Streaming Expressions supports variables and data structures, it
> would be great to support comments. Here is the proposed syntax:
> {code}
> # comment above...
> let(
>    # Start of line comment
>    a=random(),
>
>    # Start of line comment
>    b=random(...)
> )
> # comment below
> {code}
[jira] [Resolved] (SOLR-12054) ebeAdd and ebeSubtract should support matrix operations
[ https://issues.apache.org/jira/browse/SOLR-12054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-12054.
-----------------------------------
    Resolution: Resolved

> ebeAdd and ebeSubtract should support matrix operations
> --------------------------------------------------------
>
>                 Key: SOLR-12054
>                 URL: https://issues.apache.org/jira/browse/SOLR-12054
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Joel Bernstein
>            Assignee: Joel Bernstein
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>         Attachments: SOLR-12054.patch
>
> Currently ebeAdd and ebeSubtract perform element-by-element addition and
> subtraction of vectors. This ticket will allow them to perform
> element-by-element addition and subtraction of matrices as well.
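What "element-by-element" means for matrices can be sketched in a few lines of plain Java. This is an illustration of the semantics, not Solr's Math Expressions implementation; it assumes both inputs have the same dimensions.

```java
public class EbeMatrixSketch {
    /** Element-wise sum: out[i][j] = a[i][j] + b[i][j]. */
    public static double[][] ebeAdd(double[][] a, double[][] b) {
        double[][] out = new double[a.length][];
        for (int i = 0; i < a.length; i++) {
            out[i] = new double[a[i].length];
            for (int j = 0; j < a[i].length; j++) {
                out[i][j] = a[i][j] + b[i][j];
            }
        }
        return out;
    }

    /** Element-wise difference: out[i][j] = a[i][j] - b[i][j]. */
    public static double[][] ebeSubtract(double[][] a, double[][] b) {
        double[][] out = new double[a.length][];
        for (int i = 0; i < a.length; i++) {
            out[i] = new double[a[i].length];
            for (int j = 0; j < a[i].length; j++) {
                out[i][j] = a[i][j] - b[i][j];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{10, 20}, {30, 40}};
        System.out.println(java.util.Arrays.deepToString(ebeAdd(a, b)));
        // [[11.0, 22.0], [33.0, 44.0]]
    }
}
```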
[jira] [Resolved] (SOLR-11212) Allow the predict StreamEvaluator to work on arrays as well as a single numeric parameter
[ https://issues.apache.org/jira/browse/SOLR-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joel Bernstein resolved SOLR-11212.
-----------------------------------
    Resolution: Fixed

> Allow the predict StreamEvaluator to work on arrays as well as a single
> numeric parameter
> ------------------------------------------------------------------------
>
>                 Key: SOLR-11212
>                 URL: https://issues.apache.org/jira/browse/SOLR-11212
>             Project: Solr
>          Issue Type: New Feature
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Joel Bernstein
>            Assignee: Joel Bernstein
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>         Attachments: SOLR-11212.patch
>
> Currently the simple regression's predict function only provides a prediction
> for a single numeric parameter. This ticket will allow the predict function
> to work on an array of numbers. In this scenario predict will return an array
> of predictions.
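For a fitted simple regression y = slope * x + intercept, extending predict from a single number to an array is just element-wise application of the model. The sketch below illustrates that behavior in plain Java; the slope/intercept parameters are illustrative stand-ins, not Solr's actual predict API.

```java
public class PredictSketch {
    /** Apply a fitted simple-regression model y = slope * x + intercept
     *  to each element of xs, returning an array of predictions. */
    public static double[] predict(double slope, double intercept, double[] xs) {
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) {
            out[i] = slope * xs[i] + intercept;
        }
        return out;
    }

    public static void main(String[] args) {
        // model: y = 2x + 1, predicted over three inputs
        double[] preds = predict(2.0, 1.0, new double[]{1, 2, 3});
        System.out.println(java.util.Arrays.toString(preds)); // [3.0, 5.0, 7.0]
    }
}
```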
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7278 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7278/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

28 tests failed.

FAILED:  org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message:
Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all live and active
	at __randomizedtesting.SeedInfo.seed([DB179BCF638277C:8EC7264E204129DD]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:185)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:844)

FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.AssertionError: should have fired an event
	at __randomizedtesting.SeedInfo.seed([DB179BCF638277C:6E7A4F3E6FF75451]:0)
	at
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446404#comment-16446404 ]

Jan Høydahl commented on SOLR-4793:
-----------------------------------

Yes, better to start simple and choose one location for config sets: Blob, HDFS, FileSystem... It looks like we have a cleanup job with the SolrResourceLoader hierarchy; there is a ton of ZkSolrResourceLoader hardcoding, and we should probably generalise some of the ZkSolrResourceLoader features up into SolrResourceLoader, or a new SolrResourceLoaderBase.

> Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
> -----------------------------------------------------------------
>
>                 Key: SOLR-4793
>                 URL: https://issues.apache.org/jira/browse/SOLR-4793
>             Project: Solr
>          Issue Type: Improvement
>            Reporter: Son Nguyen
>            Assignee: Steve Rowe
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>         Attachments: SOLR-4793.patch
>
> Zookeeper sets the znode size limit to 1MB by default, so we can't start Solr
> Cloud with some large config files, like synonyms.txt.
> Jan Høydahl has a good idea:
> "SolrCloud is designed with an assumption that you should be able to upload
> your whole disk-based conf folder into ZK, and that you should be able to add
> an empty Solr node to a cluster and it would download all config from ZK. So
> immediately a splitting strategy automatically handled by ZkSolrResourceLoader
> for large files could be one way forward, i.e. store synonyms.txt as e.g.
> __001_synonyms.txt __002_synonyms.txt"
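The splitting strategy quoted in the issue description (store a large file as numbered sub-1MB pieces) can be sketched in plain Java. This is a hypothetical illustration of the naming scheme, not an actual Solr or ZooKeeper mechanism; the chunk size and names follow the __001_synonyms.txt pattern from the quote.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ZnodeChunkSketch {
    /** Split file contents into numbered chunks small enough for one znode,
     *  named __001_<name>, __002_<name>, ... per the quoted proposal. */
    public static Map<String, byte[]> split(String name, byte[] data, int chunkSize) {
        Map<String, byte[]> chunks = new LinkedHashMap<>();
        int n = 0;
        for (int off = 0; off < data.length; off += chunkSize) {
            int len = Math.min(chunkSize, data.length - off);
            byte[] part = new byte[len];
            System.arraycopy(data, off, part, 0, len);
            chunks.put(String.format("__%03d_%s", ++n, name), part);
        }
        return chunks;
    }

    public static void main(String[] args) {
        byte[] big = new byte[2_500_000];                     // ~2.4MB of config data
        Map<String, byte[]> parts = split("synonyms.txt", big, 1_000_000);
        System.out.println(parts.keySet());
        // [__001_synonyms.txt, __002_synonyms.txt, __003_synonyms.txt]
    }
}
```

A reader loading the file back would concatenate the chunks in key order, which is why the counter is zero-padded: lexicographic order then matches numeric order.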
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 566 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/566/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

17 tests failed.

FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={
  "id":"b819a88f35c7aT8echsi71v5pk7rizrjub6xfx4",
  "source":"node_added_trigger",
  "eventTime":3238725466414202,
  "eventType":"NODEADDED",
  "properties":{
    "eventTimes":[3238725466414202, 3238725466427362],
    "nodeNames":["127.0.0.1:49291_solr", "127.0.0.1:35087_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! event={
  "id":"b819a88f35c7aT8echsi71v5pk7rizrjub6xfx4",
  "source":"node_added_trigger",
  "eventTime":3238725466414202,
  "eventType":"NODEADDED",
  "properties":{
    "eventTimes":[3238725466414202, 3238725466427362],
    "nodeNames":["127.0.0.1:49291_solr", "127.0.0.1:35087_solr"]}}
	at __randomizedtesting.SeedInfo.seed([8F48ED425233DE67:41E649D1AA0AA671]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
	at org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
	at org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at
[jira] [Created] (SOLR-12258) V2 API should "retry" for unresolved collections/aliases (like V1 does)
David Smiley created SOLR-12258:
-----------------------------------

             Summary: V2 API should "retry" for unresolved collections/aliases (like V1 does)
                 Key: SOLR-12258
                 URL: https://issues.apache.org/jira/browse/SOLR-12258
             Project: Solr
          Issue Type: Improvement
      Security Level: Public (Default Security Level. Issues are Public)
          Components: SolrCloud, v2 API
            Reporter: David Smiley

When using V1, if the request refers to a possible collection/alias that fails to resolve, HttpSolrCall will invoke AliasesManager.update() and then retry the request as if anew (in collaboration with SolrDispatchFilter). If it fails to resolve again, we stop there and return an error; it doesn't go on forever. V2 (V2HttpCall specifically) doesn't have this retry mechanism; it'll return "no such collection or alias".

The retry will not only work for an alias, but the retrying is also a delay that will at least improve the odds of a newly made collection being known to this Solr node. It'd be nice if this were more explicit, i.e. if there were a mechanism similar to AliasesManager.update() but for a collection. I'm not sure how to do that.

BTW I discovered this while debugging a Jenkins failure of TimeRoutedAliasUpdateProcessorTest.test, where early on it simply goes to issue a V2-based request to change the configuration of a collection that was created immediately before it. It's pretty mysterious. I am aware of SolrCloudTestCase.waitForState, which is maybe something that needs to be called? But if that were true, then *every* SolrCloud test would need to use it; it just seems wrong to me that we ought to use this method commonly.
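The V1 behavior described (resolve, refresh cached alias state once, retry, then give up) can be sketched generically. This is a hypothetical illustration of the control flow, not V2HttpCall or HttpSolrCall code; the resolve/refresh parameters are stand-ins, not real Solr APIs.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

public class ResolveRetrySketch {
    /** Resolve a collection/alias name; on a miss, refresh cached state once
     *  (V1 does this via AliasesManager.update()) and retry exactly once.
     *  A second miss is final: the caller returns an error, so there is no
     *  risk of looping forever. */
    public static <T> Optional<T> resolveWithOneRetry(String name,
            Function<String, Optional<T>> resolve, Runnable refresh) {
        Optional<T> hit = resolve.apply(name);
        if (hit.isPresent()) return hit;
        refresh.run();               // pull fresh alias/collection state
        return resolve.apply(name);  // second miss -> caller errors out
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        // simulate an alias that only becomes visible after a refresh
        Optional<String> r = resolveWithOneRetry("logs",
                n -> Optional.ofNullable(cache.get(n)),
                () -> cache.put("logs", "logs_2018-04"));
        System.out.println(r.get()); // logs_2018-04
    }
}
```

The single refresh doubles as the "delay" the issue mentions: even for a plain collection, re-reading cluster state gives a just-created collection a chance to become visible to this node.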
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446376#comment-16446376 ]

Erick Erickson commented on SOLR-8207:
--------------------------------------

Oh, and anything you see having to do with optimizing indexes, please remove it. I missed the one that comes in stand-alone mode under "core admin" (SOLR-12253).

> Modernise cloud tab on Admin UI
> --------------------------------
>
>                 Key: SOLR-8207
>                 URL: https://issues.apache.org/jira/browse/SOLR-8207
>             Project: Solr
>          Issue Type: Improvement
>          Components: Admin UI
>    Affects Versions: 5.3
>            Reporter: Upayavira
>            Assignee: Jan Høydahl
>            Priority: Major
>         Attachments: nodes-tab.png
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was
> making real use of SolrCloud, and when we didn't really know the use-cases we
> would need to support. I would argue that, whilst they are pretty (and
> clever), they aren't really fit for purpose (with the exception of the tree
> view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections, so a collection with
> many replicas won't be subject to pagination
> * The Dump feature is kind of redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
> * promote the tree tab to top level
> * remove the graph views and the dump tab
> * add a new Nodes tab
> This Nodes tab would complement the Collections tab, showing nodes and
> their associated replicas/collections. From this view, it would be possible
> to add/remove replicas and to see the status of nodes. It would also be
> possible to filter nodes by status: "show me only up nodes", "show me nodes
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission
> node" option, which would ensure that no replicas on this node are leaders,
> and then remove all replicas from the node, ready for it to be removed from
> the cluster.
[jira] [Comment Edited] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446334#comment-16446334 ] Tomás Fernández Löbbe edited comment on SOLR-8207 at 4/20/18 9:04 PM:
--
Thanks for working on this [~janhoy]. Since the title in this Jira is pretty open, I'd like to take the opportunity to express my wishes :) (maybe some of these deserve their own Jiras):
* I’d like the controls in the Graph section to be on top (only at the top or maybe duplicated). Every time I want to use some of the nice filters the UI offers like “Show me degraded collections” I need to scroll to the bottom, do the change, and then scroll up to start looking at the results of my filter. This can be a lot of scrolling if you have collections with many shards and many replicas.
* I’d like to have a “refresh” button, or some way to update the view with the latest real information, otherwise you can only refresh the page, which drops the filters (this is already SOLR-11559, but putting it here for completeness)
* I’d like to be able to link to a specific filter (i.e. I want to share a link to the cloud tab but that shows a specific collection that I filtered by name). Right now the filters can’t take parameters from the URL I believe.
* I’d like to be able to display more information of the replicas/shards than just ip:port and replica state (this is SOLR-11558)
* Similar to the filters in the Graph tab, I’d like to be able to link to a specific znode in the tree (or be able to reload the page and remain on the same znode in the UI)
* It would be nice to have an Overseer stats page in the Cloud section, showing some of the data returned by OVERSEERSTATUS
Also:
+1 to s/Graph/Collections/
+1 to remove the Dump section
+1 to remove the Graph (Radial) section

was (Author: tomasflobbe):
Thanks for working on this [~janhoy]. Since the title in this Jira is pretty open, I'd like to take the opportunity to express my wishes :) (maybe some of these deserve their own Jiras):
* I’d like the controls in the Graph section to be on top (only at the top or maybe duplicated). Every time I want to use some of the nice filters the UI offers like “Show me degraded collections” I need to page to the bottom, do the change, and then scroll up to start looking at the results of my filter. This can be a lot of scrolling if you have collections with many shards and many replicas.
* I’d like to have a “refresh” button, or some way to update the view with the latest real information, otherwise you can only refresh the page, which drops the filters (this is already SOLR-11559, but putting it here for completeness)
* I’d like to be able to link to a specific filter (i.e. I want to share a link to the cloud tab but that shows a specific collection that I filtered by name). Right now the filters can’t take parameters from the URL I believe.
* I’d like to be able to display more information of the replicas/shards than just ip:port and replica state (this is SOLR-11558)
* Similar to the filters in the Graph tab, I’d like to be able to link to a specific znode in the tree (or be able to reload the page and remain on the same znode in the UI)
* It would be nice to have an Overseer stats page in the Cloud section, showing some of the data returned by OVERSEERSTATUS
Also:
+1 to s/Graph/Collections/
+1 to remove the Dump section
+1 to remove the Graph (Radial) section

> Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: nodes-tab.png > > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). > Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. It would also be >
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446341#comment-16446341 ] Dawid Weiss commented on SOLR-11200: Hmm... sure, I guess? Please feel free to commit a follow-up, Varun (under the same issue number)? > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446334#comment-16446334 ] Tomás Fernández Löbbe commented on SOLR-8207:
-
Thanks for working on this [~janhoy]. Since the title in this Jira is pretty open, I'd like to take the opportunity to express my wishes :) (maybe some of these deserve their own Jiras):
* I’d like the controls in the Graph section to be on top (only at the top or maybe duplicated). Every time I want to use some of the nice filters the UI offers like “Show me degraded collections” I need to page to the bottom, do the change, and then scroll up to start looking at the results of my filter. This can be a lot of scrolling if you have collections with many shards and many replicas.
* I’d like to have a “refresh” button, or some way to update the view with the latest real information, otherwise you can only refresh the page, which drops the filters (this is already SOLR-11559, but putting it here for completeness)
* I’d like to be able to link to a specific filter (i.e. I want to share a link to the cloud tab but that shows a specific collection that I filtered by name). Right now the filters can’t take parameters from the URL I believe.
* I’d like to be able to display more information of the replicas/shards than just ip:port and replica state (this is SOLR-11558)
* Similar to the filters in the Graph tab, I’d like to be able to link to a specific znode in the tree (or be able to reload the page and remain on the same znode in the UI)
* It would be nice to have an Overseer stats page in the Cloud section, showing some of the data returned by OVERSEERSTATUS
Also:
+1 to s/Graph/Collections/
+1 to remove the Dump section
+1 to remove the Graph (Radial) section

> Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: nodes-tab.png > > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). > Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. It would also be > possible to filter nodes by status: "show me only up nodes", "show me nodes > that are in trouble", "show me nodes that have leaders on them", etc. 
> Presumably, if we have APIs to support it, we might have a "decommission > node" option, that would ensure that no replicas on this node are leaders, > and then remove all replicas from the node, ready for it to be removed from > the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446314#comment-16446314 ] ASF subversion and git services commented on SOLR-12256: Commit 566c07f7de3ada3a537d34ec10a38565f5094398 in lucene-solr's branch refs/heads/branch_7_3 from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=566c07f ] SOLR-12256: AliasesManager.update() should call ZooKeeper.sync() * SetAliasPropCmd now calls AliasesManager.update() first. * SetAliasPropCmd now more efficiently updates multiple values. * Tests: Commented out BadApple annotations on alias related stuff. (cherry picked from commit b16b380) > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: 7.3.1 > > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. 
I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others.
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446315#comment-16446315 ] Varun Thacker commented on SOLR-11200: -- Can we reuse the solrconfig-tieredmergepolicy.xml file instead of creating a new one? If we can reuse that it avoids the need to add yet another solrconfig file to the test setup > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446307#comment-16446307 ] ASF subversion and git services commented on SOLR-12256: Commit b16b380b2cec1614597df6a045599307628988c2 in lucene-solr's branch refs/heads/branch_7x from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b16b380 ] SOLR-12256: AliasesManager.update() should call ZooKeeper.sync() * SetAliasPropCmd now calls AliasesManager.update() first. * SetAliasPropCmd now more efficiently updates multiple values. * Tests: Commented out BadApple annotations on alias related stuff. (cherry picked from commit 8f296d0) > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: 7.3.1 > > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. 
I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others.
[jira] [Commented] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446305#comment-16446305 ] ASF subversion and git services commented on SOLR-12256: Commit 8f296d0ccf82174f9c612920ce25b928196a1fa8 in lucene-solr's branch refs/heads/master from [~dsmiley] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8f296d0 ] SOLR-12256: AliasesManager.update() should call ZooKeeper.sync() * SetAliasPropCmd now calls AliasesManager.update() first. * SetAliasPropCmd now more efficiently updates multiple values. * Tests: Commented out BadApple annotations on alias related stuff. > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: 7.3.1 > > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. 
I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others.
[jira] [Resolved] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved SOLR-4793. -- Resolution: Workaround Assignee: Steve Rowe Fix Version/s: master (8.0) 7.4 Resolving this issue for now with status "Workaround" as a result of the added documentation. I'll open an issue to remove this documentation once the blob store is a viable alternative to ZK storage of large configuration files everywhere. > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Assignee: Steve Rowe >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. So > immediately a splitting strategy automatically handled by ZkSolresourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12256: Fix Version/s: 7.3.1 > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: 7.3.1 > > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12257) Remove documentation of ZooKeeper sysprop jute.maxbuffer once the blob store is a viable alternative for storing/loading large configuration files everywhere in Solr
Steve Rowe created SOLR-12257: - Summary: Remove documentation of ZooKeeper sysprop jute.maxbuffer once the blob store is a viable alternative for storing/loading large configuration files everywhere in Solr Key: SOLR-12257 URL: https://issues.apache.org/jira/browse/SOLR-12257 Project: Solr Issue Type: Sub-task Reporter: Steve Rowe SOLR-4793 added documentation to the Solr reference guide of ZooKeeper's {{jute.maxbuffer}} sysprop, which can be used to increase ZooKeeper's file size limit, which defaults to 1MB. Once the blob store is a viable alternative to ZooKeeper for storing/loading large configuration files (SOLR-8751 and SOLR-9175, perhaps others?), the {{jute.maxbuffer}} documentation should be removed from the ref guide. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
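[Editor's sketch] The workaround documented in SOLR-4793 (and slated for removal by SOLR-12257) can be illustrated roughly as follows. This is a hedged example, not the ref guide's exact text: the 10 MB value is arbitrary, and the env-file names are conventional ZooKeeper/Solr locations. The key point is that {{jute.maxbuffer}} is a JVM system property that must be set to the same value on every ZooKeeper server and on every ZooKeeper client (i.e., every Solr node):

```shell
# Illustrative only: raise ZooKeeper's default ~1MB znode size limit to 10MB.
# The value must match on all ZK servers and all clients, or large znodes
# will fail to transfer.

# On each ZooKeeper server (e.g. in conf/zookeeper-env.sh):
SERVER_JVMFLAGS="-Djute.maxbuffer=10485760"

# On each Solr node, which is a ZK client (e.g. in bin/solr.in.sh):
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10485760"
```

ZooKeeper's own documentation flags jute.maxbuffer as an "unsafe" option precisely because a mismatch between servers and clients can break replication, which is part of why the blob store is the preferred long-term answer.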
[jira] [Commented] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446299#comment-16446299 ] David Smiley commented on SOLR-12256:
-
Patch notes:
* ZkStateReader: AliasesManager.update() add call to ZooKeeper.sync()
* SetAliasPropCmd:
** eagerly call AliasesManager.update(). Setting alias props won't be called in high frequency so I think this is ok.
** Improve efficiency by using the overloaded method of .applyModificationAndExportToZk that takes a Map instead of making modifications one at a time.
* AliasIntegrationTest:
** comment away BadApple annotations. I'm looking at these things.
** Minor inlining of needless UnaryOperator local vars
* CreateRoutedAliasTest: comment away BadApple annotations. I'm looking at these things.
* TimeRoutedAliasUpdateProcessorTest: added more diagnostic logging and cleaned up some indentation and other minor stuff.

*I'm going to commit this right away and keep the issue open a bit to see the effects (hopefully no Jenkins failures).*

Furthermore, I looked at two separate TimeRoutedAliasUpdateProcessorTest failures by Jenkins. These failures I'm certain have (almost) nothing to do with the above.
(1) Timed out creating the collection {{alias + "_2017-10-23"}} which is at a point before any actual TRA stuff is happening. I looked at the logs carefully and I have no idea why it timed out. It seems the collection was created (shards were being made) then a long pause of ~165 seconds and then the timeout failure. I'll keep an eye on this... I'm keeping the logs to compare.
(2) After the comment "manipulate the config" we configure the collection created before this step. We use the V2 API. But when it got to Solr, the node receiving it didn't know about this collection and so it failed. Note that the V1 API will not immediately fail, it will internally call AliasesManager.update() and then do a retry. 
Whether or not an alias is actually being referenced, this has the effect of giving the V1 API a little bit more time to see the collection or alias. I'll file a separate issue about this. > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others.
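[Editor's sketch] The sync-before-retry pattern discussed in SOLR-12256 can be illustrated with a small self-contained simulation. All names here are hypothetical; this is not Solr's ZkStateReader code. A real client would call ZooKeeper.sync() on the aliases znode to force its session's view up to date before re-reading, whereas this sketch just copies an "authoritative" map to show why a sync-then-retry resolves reads that a stale local cache cannot:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates ZK eventual consistency: a node's local alias cache can lag
// the authoritative state, and syncing before a retry brings it up to date.
public class AliasSyncSketch {
    // Stands in for the authoritative state on the ZK leader.
    static final Map<String, String> zkTruth = new HashMap<>();
    // Stands in for this node's possibly stale local copy.
    static final Map<String, String> localView = new HashMap<>();

    // Stand-in for ZooKeeper.sync() + re-read: pull the latest state.
    static void sync() {
        localView.clear();
        localView.putAll(zkTruth);
    }

    // Resolve from the local view; optionally sync-and-retry, but only
    // AFTER a failed resolution (per the "don't call update()
    // aggressively" advice in the issue).
    static String resolveAlias(String name, boolean syncOnMiss) {
        String target = localView.get(name);
        if (target == null && syncOnMiss) {
            sync();
            target = localView.get(name);
        }
        return target;
    }

    public static void main(String[] args) {
        zkTruth.put("logs", "logs_2018-04-20"); // alias created on another node
        System.out.println(resolveAlias("logs", false)); // null: stale view
        System.out.println(resolveAlias("logs", true));  // resolves after sync
    }
}
```

The same shape explains the V1-vs-V2 behavior noted above: the V1 path effectively does resolveAlias(name, true), so it tolerates a briefly-behind node, while a path that never syncs on a miss fails immediately.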
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446294#comment-16446294 ] ASF subversion and git services commented on SOLR-4793: --- Commit 22c4b9c36f5dfdf0578bacea2e83740714512765 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=22c4b9c ] SOLR-4793: Document usage of ZooKeeper's jute.maxbuffer sysprop for increasing the file size limit above 1MB > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Priority: Major > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. So > immediately a splitting strategy automatically handled by ZkSolresourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446293#comment-16446293 ] ASF subversion and git services commented on SOLR-4793: --- Commit 9592221193971732b8d2c4b2c2994417bd7a3072 in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9592221 ] SOLR-4793: Document usage of ZooKeeper's jute.maxbuffer sysprop for increasing the file size limit above 1MB > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Priority: Major > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. So > immediately a splitting strategy automatically handled by ZkSolresourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1809 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1809/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC 10 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: number of ops expected:<2> but was:<1> Stack Trace: java.lang.AssertionError: number of ops expected:<2> but was:<1> at __randomizedtesting.SeedInfo.seed([59C020624117F8F2:3A0B16E0D8D88BDF]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration Error Message: Stack Trace:
[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page
[ https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446287#comment-16446287 ] Steve Rowe commented on SOLR-12163: --- After running them by Cassandra offline, I pushed cleanups for a few minor issues I noticed on the page. > Ref Guide: Improve Setting Up an External ZK Ensemble page > -- > > Key: SOLR-12163 > URL: https://issues.apache.org/jira/browse/SOLR-12163 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: setting-up-an-external-zookeeper-ensemble.adoc > > > I had to set up a ZK ensemble the other day for the first time in a while, > and thought I'd test our docs on the subject while I was at it. I headed over > to > https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html, > and...Well, I still haven't gotten back to what I was trying to do, but I > rewrote the entire page. > The problem to me is that the page today is mostly a stripped down copy of > the ZK Getting Started docs: walking through setting up a single ZK instance > before introducing the idea of an ensemble and going back through the same > configs again to update them for the ensemble. > IOW, despite the page being titled "setting up an ensemble", it's mostly > about not setting up an ensemble. That's at the end of the page, which itself > focuses a bit heavily on the use case of running an ensemble on a single > server (so, if you're counting...that's 3 use cases we don't want people to > use discussed in detail on a page that's supposedly about _not_ doing any of > those things). > So, I took all of it and restructured the whole thing to focus primarily on > the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the > appropriate caveats. I've also added information about choosing to use a > chroot, which AFAICT was only covered in the section on Taking Solr to > Production. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
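[For reference, the recommended setup the rewritten page centers on — three ZK nodes on separate machines — boils down to a per-node configuration along these lines. Hostnames and paths below are placeholders, not taken from the page itself:

```
# zoo.cfg — identical on all three machines
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each node additionally needs a dataDir/myid file containing just its server id (1, 2, or 3), and the chroot mentioned above is chosen on the Solr side, e.g. -z zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr.]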
[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page
[ https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446283#comment-16446283 ] ASF subversion and git services commented on SOLR-12163: Commit e1ccb49956d19f2449c482ece69faf9abe901095 in lucene-solr's branch refs/heads/branch_7x from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1ccb49 ] SOLR-12163: Minor cleanups
[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page
[ https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446284#comment-16446284 ] ASF subversion and git services commented on SOLR-12163: Commit 76578cf17b07c7d3d3440de171c031386a10aa28 in lucene-solr's branch refs/heads/master from [~steve_rowe] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=76578cf ] SOLR-12163: Minor cleanups
[jira] [Updated] (SOLR-12256) Aliases and eventual consistency (should use sync())
[ https://issues.apache.org/jira/browse/SOLR-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12256: Attachment: SOLR-12256.patch > Aliases and eventual consistency (should use sync()) > > > Key: SOLR-12256 > URL: https://issues.apache.org/jira/browse/SOLR-12256 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Attachments: SOLR-12256.patch > > > ZkStateReader.AliasesManager.update() reads alias info from ZK into the > ZkStateReader. This method is called in ~5 places (+2 for tests). In at > least some of these places, the caller assumes that the alias info is > subsequently up to date when in fact this might not be so since ZK is allowed > to return a stale value. ZooKeeper.sync() can be called to force an up to > date value. As with sync(), AliasManager.update() ought not to be called > aggressively/commonly, only in certain circumstances (e.g. _after_ failing to > resolve stuff that would otherwise return an error). > And related to this eventual consistency issue, SetAliasPropCmd will throw an > exception if the alias doesn't exist. Fair enough, but sometimes (as seen in > some tests), the node receiving the command to update Alias properties is > simply "behind"; it does not yet know about an alias that other nodes know > about. I believe this is the cause of some failures in AliasIntegrationTest; > perhaps others. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12256) Aliases and eventual consistency (should use sync())
David Smiley created SOLR-12256: --- Summary: Aliases and eventual consistency (should use sync()) Key: SOLR-12256 URL: https://issues.apache.org/jira/browse/SOLR-12256 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley Assignee: David Smiley ZkStateReader.AliasesManager.update() reads alias info from ZK into the ZkStateReader. This method is called in ~5 places (+2 for tests). In at least some of these places, the caller assumes that the alias info is subsequently up to date when in fact this might not be so since ZK is allowed to return a stale value. ZooKeeper.sync() can be called to force an up to date value. As with sync(), AliasManager.update() ought not to be called aggressively/commonly, only in certain circumstances (e.g. _after_ failing to resolve stuff that would otherwise return an error). And related to this eventual consistency issue, SetAliasPropCmd will throw an exception if the alias doesn't exist. Fair enough, but sometimes (as seen in some tests), the node receiving the command to update Alias properties is simply "behind"; it does not yet know about an alias that other nodes know about. I believe this is the cause of some failures in AliasIntegrationTest; perhaps others. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
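[The stale-read problem described in this issue can be sketched with a toy leader/follower pair — these classes are illustrative only, not ZooKeeper's or Solr's API. A reader keeps serving an old snapshot until an explicit sync(), which is the role ZooKeeper.sync() plays for real:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of ZK-style eventual consistency (hypothetical classes). */
public class StaleReadDemo {
    static class Leader {
        final Map<String, String> data = new HashMap<>();
        void set(String key, String value) { data.put(key, value); }
    }

    static class Follower {
        private final Leader leader;
        private Map<String, String> snapshot = new HashMap<>();
        Follower(Leader leader) { this.leader = leader; }
        String read(String key) { return snapshot.get(key); } // may be stale
        void sync() { snapshot = new HashMap<>(leader.data); } // force an up-to-date view
    }

    public static void main(String[] args) {
        Leader leader = new Leader();
        Follower follower = new Follower(leader);
        leader.set("myAlias", "collection2");
        System.out.println(follower.read("myAlias")); // null: the write isn't visible yet
        follower.sync();
        System.out.println(follower.read("myAlias")); // collection2
    }
}
```

As the issue suggests for AliasesManager.update(), the sync() step should be the fallback after a failed resolution, not something done on every read.]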
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 8 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/8/ 12 tests failed. FAILED: org.apache.solr.cloud.RestartWhileUpdatingTest.test Error Message: There are still nodes recoverying - waited for 320 seconds Stack Trace: java.lang.AssertionError: There are still nodes recoverying - waited for 320 seconds at __randomizedtesting.SeedInfo.seed([F0F18ECAFD45CB1C:78A5B11053B9A6E4]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:921) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1478) at org.apache.solr.cloud.RestartWhileUpdatingTest.test(RestartWhileUpdatingTest.java:145) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[jira] [Moved] (LUCENE-8262) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after
[ https://issues.apache.org/jira/browse/LUCENE-8262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke moved SOLR-12232 to LUCENE-8262: Affects Version/s: (was: 7.1.1) 7.1.1 Security: (was: Public) Lucene Fields: New Key: LUCENE-8262 (was: SOLR-12232) Project: Lucene - Core (was: Solr) > NativeFSLockFactory loses the channel when a thread is interrupted and the > SolrCore becomes unusable after > -- > > Key: LUCENE-8262 > URL: https://issues.apache.org/jira/browse/LUCENE-8262 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.1.1 >Reporter: Jeff Miller >Assignee: Erick Erickson >Priority: Minor > Labels: NativeFSLockFactory, locking > Original Estimate: 24h > Time Spent: 10m > Remaining Estimate: 23h 50m > > The condition is rare for us and seems basically a race. If a thread that is > running just happens to have the FileChannel open for NativeFSLockFactory and > is interrupted, the channel is closed since it extends > [AbstractInterruptibleChannel|https://docs.oracle.com/javase/7/docs/api/java/nio/channels/spi/AbstractInterruptibleChannel.html] > Unfortunately this means the Solr Core has to be unloaded and reopened to > make the core usable again as the ensureValid check forever throws an > exception after. 
> org.apache.lucene.store.AlreadyClosedException: FileLock invalidated by an > external force: > NativeFSLock(path=data/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 > exclusive invalid],creationTime=2018-04-06T21:45:11Z) at > org.apache.lucene.store.NativeFSLockFactory$NativeFSLock.ensureValid(NativeFSLockFactory.java:178) > at > org.apache.lucene.store.LockValidatingDirectoryWrapper.createOutput(LockValidatingDirectoryWrapper.java:43) > at > org.apache.lucene.store.TrackingDirectoryWrapper.createOutput(TrackingDirectoryWrapper.java:43) > at > org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.<init>(CompressingStoredFieldsWriter.java:113) > at > org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsWriter(CompressingStoredFieldsFormat.java:128) > at > org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsWriter(Lucene50StoredFieldsFormat.java:183) > > Proposed solution is using AsynchronousFileChannel instead, since this is > only operating on a lock and .size method
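[The mechanism behind this report can be shown with a minimal, self-contained sketch: because FileChannel extends AbstractInterruptibleChannel, an interrupt on a thread performing channel I/O closes the channel itself, which is why the lock can never be validated again afterwards:

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class InterruptClosesChannel {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("lock-demo", ".bin");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            Thread.currentThread().interrupt();  // simulate an interrupt arriving
            try {
                ch.read(ByteBuffer.allocate(8)); // any blocking I/O op notices the interrupt flag
            } catch (ClosedByInterruptException expected) {
                Thread.interrupted();            // clear the flag
            }
            // The channel is now permanently closed for all users of the lock:
            System.out.println("channel open after interrupt? " + ch.isOpen()); // false
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```

AsynchronousFileChannel avoids this failure mode because it does not extend AbstractInterruptibleChannel, which is presumably why it is proposed above for a factory that only needs lock() and size().]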
[jira] [Comment Edited] (LUCENE-8262) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after
[ https://issues.apache.org/jira/browse/LUCENE-8262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446269#comment-16446269 ] Christine Poerschke edited comment on LUCENE-8262 at 4/20/18 7:41 PM: -- {quote}... Is this perhaps more properly a Lucene issue? {quote} Good question. JIRA can support issue moves between projects – I think, let me try that here, SOLR-12232 would become a forwarding link to the LUCENE issue. edit: SOLR-12232 moved to be LUCENE-8262 was (Author: cpoerschke): bq. ... Is this perhaps more properly a Lucene issue? Good question. JIRA can support issue moves between projects -- I think, let me try that here, SOLR-12232 would become a forwarding link to the LUCENE issue.
[jira] [Commented] (SOLR-12232) NativeFSLockFactory loses the channel when a thread is interrupted and the SolrCore becomes unusable after
[ https://issues.apache.org/jira/browse/SOLR-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446269#comment-16446269 ] Christine Poerschke commented on SOLR-12232: bq. ... Is this perhaps more properly a Lucene issue? Good question. JIRA can support issue moves between projects -- I think, let me try that here, SOLR-12232 would become a forwarding link to the LUCENE issue.
[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples
[ https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446261#comment-16446261 ] Cassandra Targett commented on SOLR-11646: -- I've done another round of adding v2 examples: || Page || Old API endpoint || |blob-store-api.adoc | /.system/blob |config-api.adoc | //config | |configuring-solrconfig-xml.adoc | /admin/collections | |configsets-api.adoc | /admin/configs | |implicit-requesthandlers.adoc | //config | Along the way, I made some extensive changes to a few of these pages as I noticed problems with them that needed to be fixed. The pages for configsets got updated with a bit more information about what configsets are; the config-api.adoc got better descriptions for the commands & better examples in a couple of cases; the list of implicit handlers got re-organized and pulled out of the table that looked really bad in the PDF. Remaining pages to be done: || Page || Old API endpoint || |collections-api.adoc | /admin/collections | |coreadmin-api.adoc | /admin/cores | |enabling-ssl.adoc | /admin/collections | |making-and-restoring-backups.adoc | /admin/cores | |other-parsers.adoc | //update | |request-parameters-api.adoc | //config | |rule-based-authorization-plugin.adoc | /admin/authorization | |rule-based-authorization-plugin.adoc | /admin/collections | |schema-api.adoc | //schema | |schemaless-mode.adoc | //config | |schemaless-mode.adoc | //schema | |schemaless-mode.adoc | //update | |solr-tutorial.adoc | //config | |solr-tutorial.adoc | //schema | |solr-tutorial.adoc | /admin/collections | |solrcloud-autoscaling-api.adoc | /autoscaling/* | |solrcloud-autoscaling-auto-add-replicas.adoc | /admin/collections | |solrcloud-autoscaling-fault-tolerance.adoc | /autoscaling/* | |solrcloud-autoscaling-overview.adoc | /admin/collections | |solrcloud-autoscaling-overview.adoc | /autoscaling/* | |updating-parts-of-documents.adoc | //update | |uploading-data-with-index-handlers.adoc | //update | I kept the 
order as "v1" then "v2" for now, only to be consistent about it while I continue to consider whether putting v2 first or making the 2nd tab the default (which would require some more extensive changes to the code that's converting the asciidoc to html) is the better option. > Ref Guide: Update API examples to include v2 style examples > --- > > Key: SOLR-11646 > URL: https://issues.apache.org/jira/browse/SOLR-11646 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: documentation, v2 API >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > > The Ref Guide currently only has a single page with what might be generously > called an overview of the v2 API added in 6.5 > (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual > APIs that support the v2 approach do not show an example of using it with the > v2 style. A few v2-style APIs are already used as examples, but there's > nothing consistent. > With this issue I'll add API input/output examples throughout the Guide. Just > in terms of process, my intention is to have a series of commits to the pages > as I work through them so we make incremental progress. I'll start by adding > a list of pages/APIs to this issue so the scope of the work is clear. > Once this is done we can figure out what to do with the V2 API page itself - > perhaps it gets archived and replaced with another page that describes Solr's > APIs overall; perhaps by then we figure out something else to do with it.
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446257#comment-16446257 ] Christine Poerschke commented on SOLR-4793: --- bq. I think the long term solution could be to implement something like a BlobStoreResourceLoader, and a configset (as a whole or in parts) could be loaded from ZK or blob store. I like the idea of (where appropriate) getting parts of a configset from 'elsewhere' and that could be from a blob store or from somewhere else. ticket cross-reference: * SOLR-9887 is about stopwords and synonyms from a JDBC source - cc [~kaessmann] [~tboeghk] + [~dsmiley] re: the pulling from a streaming expression mention > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Priority: Major > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. So > immediately a splitting strategy automatically handled by ZkSolresourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
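[Jan Høydahl's splitting idea quoted above (__001_synonyms.txt, __002_synonyms.txt, ...) can be sketched as a pure chunking step; ZnodeChunker and its method names are illustrative, not Solr code, and 1 MB stands in for ZooKeeper's default znode limit:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical sketch of splitting a large config file into sub-limit znodes. */
public class ZnodeChunker {
    static final int ZK_LIMIT = 1024 * 1024; // ZooKeeper's default 1MB znode limit

    /** Split data into numbered chunks named __001_name, __002_name, ... */
    static Map<String, byte[]> split(String name, byte[] data, int limit) {
        Map<String, byte[]> chunks = new LinkedHashMap<>();
        int n = (data.length + limit - 1) / limit; // ceil(length / limit)
        for (int i = 0; i < n; i++) {
            int from = i * limit;
            int to = Math.min(data.length, from + limit);
            chunks.put(String.format("__%03d_%s", i + 1, name), Arrays.copyOfRange(data, from, to));
        }
        return chunks;
    }

    /** Reassemble the chunks in insertion (i.e. name) order. */
    static byte[] join(Map<String, byte[]> chunks) {
        int total = chunks.values().stream().mapToInt(b -> b.length).sum();
        byte[] out = new byte[total];
        int pos = 0;
        for (byte[] b : chunks.values()) {
            System.arraycopy(b, 0, out, pos, b.length);
            pos += b.length;
        }
        return out;
    }
}
```

A resource loader along the lines of the proposed ZkSolrResourceLoader change would then list the znodes matching __NNN_name, sort them, and join() before handing the bytes to the analyzer factory.]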
[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples
[ https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446252#comment-16446252 ] ASF subversion and git services commented on SOLR-11646: Commit 5915e61e42aa0311b8b574949fed8a2ec566a502 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5915e61 ] SOLR-11646: more v2 examples; redesign Implicit Handler page to add v2 api paths where they exist
[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples
[ https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446251#comment-16446251 ] ASF subversion and git services commented on SOLR-11646: Commit df57afce9be949aef65330b2fe4243667c13d4c3 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=df57afc ] SOLR-11646: Add v2 APIs for Config API; change "ConfigSet" to "configset" in docs & specs to match community spelling
[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples
[ https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446250#comment-16446250 ] ASF subversion and git services commented on SOLR-11646: Commit b99e07c7531f1fe61e9d33dfa17b33600f12a00c in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b99e07c ] SOLR-11646: more v2 examples; redesign Implicit Handler page to add v2 api paths where they exist > Ref Guide: Update API examples to include v2 style examples > --- > > Key: SOLR-11646 > URL: https://issues.apache.org/jira/browse/SOLR-11646 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, v2 API >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > > The Ref Guide currently only has a single page with what might be generously > called an overview of the v2 API added in 6.5 > (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual > APIs that support the v2 approach do not show an example of using it with the > v2 style. A few v2-style APIs are already used as examples, but there's > nothing consistent. > With this issue I'll add API input/output examples throughout the Guide. Just > in terms of process, my intention is to have a series of commits to the pages > as I work through them so we make incremental progress. I'll start by adding > a list of pages/APIs to this issue so the scope of the work is clear. > Once this is done we can figure out what to do with the V2 API page itself - > perhaps it gets archived and replaced with another page that describes Solr's > APIs overall; perhaps by then we figure out something else to do with it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples
[ https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446249#comment-16446249 ] ASF subversion and git services commented on SOLR-11646: Commit d08e62d59878147b8447698e87374dfbfeb597c1 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d08e62d ] SOLR-11646: Add v2 APIs for Config API; change "ConfigSet" to "configset" in docs & specs to match community spelling > Ref Guide: Update API examples to include v2 style examples > --- > > Key: SOLR-11646 > URL: https://issues.apache.org/jira/browse/SOLR-11646 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, v2 API >Reporter: Cassandra Targett >Assignee: Cassandra Targett >Priority: Major > > The Ref Guide currently only has a single page with what might be generously > called an overview of the v2 API added in 6.5 > (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual > APIs that support the v2 approach do not show an example of using it with the > v2 style. A few v2-style APIs are already used as examples, but there's > nothing consistent. > With this issue I'll add API input/output examples throughout the Guide. Just > in terms of process, my intention is to have a series of commits to the pages > as I work through them so we make incremental progress. I'll start by adding > a list of pages/APIs to this issue so the scope of the work is clear. > Once this is done we can figure out what to do with the V2 API page itself - > perhaps it gets archived and replaced with another page that describes Solr's > APIs overall; perhaps by then we figure out something else to do with it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12255) ref guide updates for new "nori" analysis module for Korean
Hoss Man created SOLR-12255: --- Summary: ref guide updates for new "nori" analysis module for Korean Key: SOLR-12255 URL: https://issues.apache.org/jira/browse/SOLR-12255 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: documentation Reporter: Hoss Man LUCENE-8231 added a new "nori" analysis module -- similar to the "Kuromoji" module but for Korean text. We should update the ref guide to mention how to use the new module & configure the new Factories.
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 1761 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1761/ Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration Error Message: Path /autoscaling/nodeLost/127.0.0.1:38385_solr exists Stack Trace: java.lang.AssertionError: Path /autoscaling/nodeLost/127.0.0.1:38385_solr exists at __randomizedtesting.SeedInfo.seed([BE0B3C272B7D7794:A6B1B42B2548BA7B]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest.testNodeMarkersRegistration(NodeMarkersRegistrationTest.java:120) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 14506 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.NodeMarkersRegistrationTest [junit4] 2> Creating
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446036#comment-16446036 ] Shawn Heisey commented on SOLR-4793: bq. Hmm, I don't think we should be starting with this kind of magic If there are still things to work out with the blob store (this is the .system collection, right?) then I agree, we should let those things bake for a while before we implement automagic redirection from ZK. I do like Jan's idea, though. > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Priority: Major > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. So > immediately a splitting strategy automatically handled by ZkSolresourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
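Jan's splitting strategy can be sketched as a small chunking helper. This is purely illustrative (the class and method names below are hypothetical, not actual Solr code), assuming a chunk size kept safely under ZooKeeper's default 1 MB znode limit:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of Jan's idea: split one oversized conf file into
// numbered chunks (__001_synonyms.txt, __002_synonyms.txt, ...) so each
// znode stays under ZooKeeper's default 1 MB limit (jute.maxbuffer).
public class ZkFileChunker {
  // Stay comfortably below 1 MB to leave room for znode overhead.
  static final int CHUNK_SIZE = 900 * 1024;

  static Map<String, byte[]> split(String fileName, byte[] data) {
    Map<String, byte[]> chunks = new LinkedHashMap<>();
    int n = 0;
    for (int off = 0; off < data.length; off += CHUNK_SIZE) {
      int end = Math.min(off + CHUNK_SIZE, data.length);
      // Zero-padded prefixes keep the chunks in order under a plain name sort.
      chunks.put(String.format("__%03d_%s", ++n, fileName),
                 Arrays.copyOfRange(data, off, end));
    }
    return chunks;
  }
}
```

A ZkSolrResourceLoader-style reader would then concatenate the chunks back in name order whenever the original file name is requested, making the splitting transparent to the rest of Solr.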
Re: Can't find resource 'currency.xml' in classpath | currencyFieldType
: I am facing an issue on solr search version 7.x. I want to create a : currencyFieldType in my managed-schema file. In order to do that I created : the following entries : : : * * that configuration instructs this instance of CurrencyFieldType to use a local file named "currency.xml" in order to know what the exchange rates are between the various currencies it encounters. : When I restart solr every time I get this exception : ... : *Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't find : resource 'currency.xml' in classpath or ...that error indicates that CurrencyFieldType cannot find the 'currency.xml' file you told it to use. Please review the docs for information on the various options for how to specify the exchange rates... https://lucene.apache.org/solr/guide/7_3/working-with-currencies-and-exchange-rates.html -Hoss http://www.lucidworks.com/
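For reference, a working setup of the kind being discussed needs both a fieldType definition and the rates file present in the conf directory. The snippet below is a hedged example modeled on the 7.x ref guide page linked above; the field name is illustrative:

```xml
<!-- In managed-schema: a CurrencyFieldType that reads exchange rates from
     conf/currency.xml (the file must actually exist in the core's conf
     directory, or in the configset uploaded to ZooKeeper in SolrCloud mode). -->
<fieldType name="currency" class="solr.CurrencyFieldType"
           amountLongSuffix="_l_ns" codeStrSuffix="_s_ns"
           defaultCurrency="USD" currencyConfig="currency.xml"/>
<field name="price" type="currency" indexed="true" stored="true"/>
```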
[jira] [Created] (SOLR-12254) TestInPlaceUpdatesDistrib reproducing failure
Steve Rowe created SOLR-12254: - Summary: TestInPlaceUpdatesDistrib reproducing failure Key: SOLR-12254 URL: https://issues.apache.org/jira/browse/SOLR-12254 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: Tests, update Reporter: Steve Rowe >From [https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/205/], 100% >reproducing (see [https://builds.apache.org/job/Lucene-Solr-repro/535/]): {noformat} [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 23.6s J2 | TestInPlaceUpdatesDistrib.test <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:56916/collection1: ERROR adding document SolrInputDocument(fields: [id=-216, title_s=title-216, id_i=-216, _version_=1598231319283761152]) [junit4]>at __randomizedtesting.SeedInfo.seed([9BC71F2BDDB8F28A:139320F173449F72]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.addDocAndGetVersion(TestInPlaceUpdatesDistrib.java:1105) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.buildRandomIndex(TestInPlaceUpdatesDistrib.java:1150) [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:318) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:144) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) [junit4]>at java.lang.Thread.run(Thread.java:748) [...] [junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {title_s=PostingsFormat(name=LuceneFixedGap), id=Lucene50(blocksize=128), id_field_copy_that_does_not_support_in_place_update_s=PostingsFormat(name=Memory)}, docValues:{inplace_updatable_float=DocValuesFormat(name=Lucene70), id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), id=DocValuesFormat(name=Memory), inplace_updatable_int_with_default=DocValuesFormat(name=Lucene70), inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, maxPointsInLeafNode=922, maxMBSortInHeap=5.690194493492291, sim=RandomSimilarity(queryNorm=true): {}, locale=ru-RU, timezone=Hongkong [junit4] 2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=127774192,total=523763712 {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12254) TestInPlaceUpdatesDistrib reproducing failure
[ https://issues.apache.org/jira/browse/SOLR-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe updated SOLR-12254: -- Description: >From [https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/205/], 100% >reproducing (see [https://builds.apache.org/job/Lucene-Solr-repro/535/]): {noformat} Checking out Revision 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b (refs/remotes/origin/branch_7x) [...] [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 23.6s J2 | TestInPlaceUpdatesDistrib.test <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:56916/collection1: ERROR adding document SolrInputDocument(fields: [id=-216, title_s=title-216, id_i=-216, _version_=1598231319283761152]) [junit4]>at __randomizedtesting.SeedInfo.seed([9BC71F2BDDB8F28A:139320F173449F72]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) [junit4]>at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.addDocAndGetVersion(TestInPlaceUpdatesDistrib.java:1105) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.buildRandomIndex(TestInPlaceUpdatesDistrib.java:1150) [junit4]>at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:318) [junit4]>at org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:144) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) [junit4]>at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) [junit4]>at java.lang.Thread.run(Thread.java:748) [...] [junit4] 2> NOTE: test params are: codec=Asserting(Lucene70): {title_s=PostingsFormat(name=LuceneFixedGap), id=Lucene50(blocksize=128), id_field_copy_that_does_not_support_in_place_update_s=PostingsFormat(name=Memory)}, docValues:{inplace_updatable_float=DocValuesFormat(name=Lucene70), id_i=DocValuesFormat(name=Direct), _version_=DocValuesFormat(name=Asserting), id=DocValuesFormat(name=Memory), inplace_updatable_int_with_default=DocValuesFormat(name=Lucene70), inplace_updatable_float_with_default=DocValuesFormat(name=Direct)}, maxPointsInLeafNode=922, maxMBSortInHeap=5.690194493492291, sim=RandomSimilarity(queryNorm=true): {}, locale=ru-RU, timezone=Hongkong [junit4] 2> NOTE: Linux 3.13.0-88-generic amd64/Oracle Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=127774192,total=523763712 {noformat} was: >From [https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/205/], 100% >reproducing (see [https://builds.apache.org/job/Lucene-Solr-repro/535/]): {noformat} [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 23.6s J2 | 
TestInPlaceUpdatesDistrib.test <<< [junit4]> Throwable #1: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:56916/collection1: ERROR adding document SolrInputDocument(fields: [id=-216, title_s=title-216, id_i=-216, _version_=1598231319283761152]) [junit4]>at __randomizedtesting.SeedInfo.seed([9BC71F2BDDB8F28A:139320F173449F72]:0) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) [junit4]>at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) [junit4]>at
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16446018#comment-16446018 ] Amrit Sarkar commented on SOLR-9272: Thanks [~janhoy], bq. Default port functionality was buggy. Now defaults to 8983 Oh my bad, I thought I checked / tested properly and made sure all bases are covered, I will look through the current patch and understand where it lacked. > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4575 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4575/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly Error Message: Unexpected number of elements in the group for intGSF: 3 Stack Trace: java.lang.AssertionError: Unexpected number of elements in the group for intGSF: 3 at __randomizedtesting.SeedInfo.seed([239C2F155B39FE69:B827414D1661CC37]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.DocValuesNotIndexedTest.testGroupingDVOnly(DocValuesNotIndexedTest.java:379) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 12969 lines...] [junit4] Suite: org.apache.solr.cloud.DocValuesNotIndexedTest [junit4] 2> Creating dataDir:
[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445935#comment-16445935 ] Steve Rowe commented on LUCENE-8261: bq. Very likely true. But should it be allowed? If we quoted the replacement it would fail with a more reasonable error later on (unresolved property)? I'm not sure if it would be more reasonable; as I said, I think the appropriate place to inform people about this problem is in validation (forms of which already occur with {{ant precommit}}) - that would maximize reasonableness. bq. Alternatively, we could resolve it recursively too (catching cycles), but it'd be more difficult to implement. Yeah, I've considered implementing it, but this is such a niche functionality, and as I said, so rarely even asked for, that I haven't done anything about it. > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
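The failure mode is easy to reproduce with plain java.util.regex, outside the build: when the replacement string itself contains a `${...}` token (as an unresolved property reference with a dot in its name would), `Matcher.appendReplacement` parses it as a named-group reference and throws. A minimal standalone sketch of the bug and of the quoting fix proposed in the issue (the actual API is the static `Matcher.quoteReplacement`); the `interpolate` method here is a simplified stand-in, not the real InterpolatedProperties code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone repro of the LUCENE-8261 failure mode and the quoting fix.
public class QuoteReplacementDemo {
  public static String interpolate(String template, String value) {
    Pattern prop = Pattern.compile("\\$\\{(\\w+)\\}");
    Matcher m = prop.matcher(template);
    StringBuffer sb = new StringBuffer();
    while (m.find()) {
      // Without quoting, a value such as "${version.base}" is scanned for
      // group references: '$' starts a reference, and the '.' inside the name
      // makes the parser fail with "named capturing group is missing
      // trailing '}'".  quoteReplacement makes the value a literal string.
      m.appendReplacement(sb, Matcher.quoteReplacement(value));
    }
    m.appendTail(sb);
    return sb.toString();
  }

  public static void main(String[] args) {
    // The replacement survives verbatim, leaving an obviously unresolved
    // property that later validation can flag instead of a regex exception.
    System.out.println(interpolate("path=${jar}", "${version.base}"));
    // prints "path=${version.base}"
  }
}
```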
[jira] [Reopened] (SOLR-12181) Add trigger based on document count
[ https://issues.apache.org/jira/browse/SOLR-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man reopened SOLR-12181: - {quote}Thanks Steve Rowe - I fixed the NPE, and some of these other failures are reproducible. I'll fix this on Monday (I disabled the test for now). {quote} IndexSizeTriggerTest has been failing a lot ... just in the past 24 hours... {noformat} "Suite?","Class","Method","Rate","Runs","Fails" "null","org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest","testSplitIntegration","38.8429752066116","121","47" "false","org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest","testTrigger","30.7692307692308","117","36" "false","org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest","testMergeIntegration","27.5","120","33" {noformat} > Add trigger based on document count > --- > > Key: SOLR-12181 > URL: https://issues.apache.org/jira/browse/SOLR-12181 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Affects Versions: master (8.0) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: SOLR-12181.patch > > > This may turn out to be as simple as using a {{MetricTrigger}} but it's > likely this will require some specialization, and we may want to add this > type of trigger anyway for convenience. > The two control actions associated with this trigger will be SPLITSHARD and > (yet nonexistent) MERGESHARD. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12253) Remove optimize button from the core admin page too
Erick Erickson created SOLR-12253: - Summary: Remove optimize button from the core admin page too Key: SOLR-12253 URL: https://issues.apache.org/jira/browse/SOLR-12253 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson Assignee: Erick Erickson SOLR-7733 removed the optimize button in the individual core display but not the "core admin" link. Further, the optimize button does nothing.
[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445944#comment-16445944 ] Steve Rowe commented on LUCENE-8261: I should say that I agree that quoting the replacement is a reasonable step, so +1 to the patch. I'm just dubious that it will make things appreciably better for users. > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445955#comment-16445955 ] Dawid Weiss commented on LUCENE-8261: - Ok, let me leave it for a bit of spare time and I'll either add validation or recursive property resolution (shouldn't be too hard). > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
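[Editor's illustrative sketch, not part of the thread or the actual InterpolatedProperties patch: the recursive property resolution with cycle detection discussed above could look roughly like this in plain Java. The `resolve` helper and its signature are hypothetical.]

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RecursiveInterpolation {
    private static final Pattern REF = Pattern.compile("\\$\\{([^}]+)\\}");

    // Recursively expand ${key} references, failing fast on cycles
    // and on references to undefined properties.
    static String resolve(String key, Map<String, String> props, Set<String> inProgress) {
        if (!inProgress.add(key)) {
            throw new IllegalStateException("Cycle detected at property: " + key);
        }
        String value = props.get(key);
        if (value == null) {
            throw new IllegalStateException("Undefined property: " + key);
        }
        Matcher m = REF.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Quote the expansion so '$' or '\' in values are taken literally.
            m.appendReplacement(sb, Matcher.quoteReplacement(resolve(m.group(1), props, inProgress)));
        }
        m.appendTail(sb);
        inProgress.remove(key);
        return sb.toString();
    }

    public static void main(String[] args) {
        // The multiple-indirection case from the thread: propB points at propA.
        Map<String, String> props = Map.of(
            "propA", "value",
            "propB", "${propA}",
            "/org/name", "${propB}");
        System.out.println(resolve("/org/name", props, new LinkedHashSet<>())); // prints value
    }
}
```

With this shape, a cyclic definition such as a=${b}, b=${a} fails with an explicit "Cycle detected" error instead of recursing forever.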
[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445928#comment-16445928 ] Dawid Weiss commented on LUCENE-8261: - Very likely true. But should it be allowed? If we quoted the replacement it would fail with a more reasonable error later on (unresolved property)? Alternatively, we could resolve it recursively too (catching cycles), but it'd be more difficult to implement. > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445924#comment-16445924 ] Steve Rowe commented on LUCENE-8261: I believe that error happens because {{InterpolatedProperties}}'s interpolation is not recursive, but people have tried to use multiple indirection, e.g. {noformat} propA=value propB=${propA} /org/name=${propB} {noformat} So suppressing the error (via {{Pattern.quoteReplacement}}) would cause a different kind of trouble. I haven't done anything about this because none of the attempts at multiple indirection has persisted. Probably {{ivy-versions.properties}} validation should look for this and fail? > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
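[Editor's illustrative sketch, not part of the thread: a minimal standalone demonstration of why Matcher replacement text needs quoting. It assumes nothing about the real InterpolatedProperties internals; `interpolate` here is a hypothetical helper.]

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuoteReplacementDemo {
    // Substitute every ${key} token in the template with a fixed string value.
    static String interpolate(String template, String key, String value, boolean quote) {
        Matcher m = Pattern.compile("\\$\\{" + Pattern.quote(key) + "\\}").matcher(template);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Without quoting, '$' and '\' in the replacement are parsed as
            // group references and escapes, which throws IllegalArgumentException
            // when the value itself looks like ${...} (the error from the issue).
            m.appendReplacement(sb, quote ? Matcher.quoteReplacement(value) : value);
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        // A replacement value that is itself an (unresolved) property reference:
        String unresolved = "${propA}";
        // Quoted: the literal text passes through verbatim.
        System.out.println(interpolate("version=${propB}", "propB", unresolved, true)); // prints version=${propA}
        // Unquoted: appendReplacement treats "${propA}" as a named group
        // reference and throws IllegalArgumentException.
        try {
            interpolate("version=${propB}", "propB", unresolved, false);
        } catch (IllegalArgumentException e) {
            System.out.println("unquoted failed: " + e.getMessage());
        }
    }
}
```

This matches Steve's point: with quoting, the multiple-indirection value survives verbatim and shows up later as an unresolved `${propA}` rather than crashing the regex machinery.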
[jira] [Commented] (SOLR-12159) Add memset Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445885#comment-16445885 ] ASF subversion and git services commented on SOLR-12159: Commit 8c9b00a7a0ae631dfef741efde6bd696a30b80f4 in lucene-solr's branch refs/heads/branch_7x from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c9b00a ] SOLR-12159: Add memset Stream Evaluator > Add memset Stream Evaluator > --- > > Key: SOLR-12159 > URL: https://issues.apache.org/jira/browse/SOLR-12159 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.4 > > Attachments: SOLR-12159.patch, SOLR-12159.patch, SOLR-12159.patch > > > The *memset* function copies multiple numeric arrays into memory from fields > in an underlying TupleStream. This will be much more memory efficient than > calling the *col* function multiple times on an in-memory list of Tuples. > Sample syntax: > {code:java} > let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1), > cols="field1, field2", > vars="c, d", > size=1), > e=corr(c, d)) > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12251) pk ids not sort when in deltaQuery
[ https://issues.apache.org/jira/browse/SOLR-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445884#comment-16445884 ] Shawn Heisey commented on SOLR-12251: - The patch also removes wildcard imports and makes a few formatting adjustments. I looked at all usages of HashMap as well as HashSet in DocBuilder and adjusted one of the HashMap usages to LinkedHashMap. In the JdbcDataSource class, there were a couple of warnings from my IDE about unnecessary else clauses, so I adjusted those too, and removed the wildcard imports found there. (I was looking at JdbcDataSource to confirm which class would log SQL statements, if that became necessary) I've looked into the arguments on both sides of the debate on wildcard versus specific imports. I think the potential problems with wildcard imports far outweigh any level of convenience for somebody who wants to avoid hand-typing a lot of import statements. > pk ids not sort when in deltaQuery > --- > > Key: SOLR-12251 > URL: https://issues.apache.org/jira/browse/SOLR-12251 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.1 > Environment: windows10 > Solr7.1 > JDK8 > >Reporter: wzhonggo >Priority: Major > Attachments: SOLR-12251.patch > > > I use solr and mysql for search.
> > {code:xml} > // data-config.xml > query="select * from score order by create_date asc" > deltaImportQuery="select * from score where id='${dih.delta.id}'" > deltaQuery="select id from score where update_date > > '${dataimporter.last_index_time}' order by create_date asc " > {code} > > MySQL has three rows of data in the *score* table > > ||id||name||score||create_date||update_date|| > |UUID1|user1|60|2018-04-10|2018-04-10| > |UUID2|user1|70|2018-04-11|2018-04-11| > |UUID3|user1|80|2018-04-12|2018-04-12| > The expected result in the Solr doc > ||Name||Score||CreateDate||UpdateDate|| > |user1|80|2018-04-12|2018-04-12| > > A full import gives the correct result, but a delta import does not. > In the *org.apache.solr.handler.dataimport.DocBuilder* class, the *collectDelta* method does not return a LinkedHashSet. > > Thanks. > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
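[Editor's illustrative sketch, independent of the DataImportHandler code: the ordering point in this report comes down to HashSet iterating in hash order while LinkedHashSet preserves the order keys were collected in. The class and variable names below are hypothetical.]

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class DeltaOrderDemo {
    public static void main(String[] args) {
        // Primary keys as returned by the deltaQuery, ordered by create_date.
        List<String> pks = Arrays.asList("UUID1", "UUID2", "UUID3");

        // HashSet iteration order depends on hash codes, not insertion order,
        // so rows may be re-imported out of the SQL ORDER BY sequence.
        Set<String> unordered = new HashSet<>(pks);

        // LinkedHashSet keeps insertion order, so the last-written document
        // for a given Solr id is the one the deltaQuery sorted last.
        Set<String> ordered = new LinkedHashSet<>(pks);
        System.out.println(ordered); // prints [UUID1, UUID2, UUID3]
    }
}
```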
[jira] [Commented] (SOLR-12159) Add memset Stream Evaluator
[ https://issues.apache.org/jira/browse/SOLR-12159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445877#comment-16445877 ] ASF subversion and git services commented on SOLR-12159: Commit f0d1e11796419d45051f4384f47cf83b0fb8044b in lucene-solr's branch refs/heads/master from [~joel.bernstein] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f0d1e11 ] SOLR-12159: Add memset Stream Evaluator > Add memset Stream Evaluator > --- > > Key: SOLR-12159 > URL: https://issues.apache.org/jira/browse/SOLR-12159 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > Fix For: 7.4 > > Attachments: SOLR-12159.patch, SOLR-12159.patch, SOLR-12159.patch > > > The *memset* function copies multiple numeric arrays into memory from fields > in an underlying TupleStream. This will be much more memory efficient than > calling the *col* function multiple times on an in-memory list of Tuples. > Sample syntax: > {code:java} > let(a=memset(random(collection1, q="*:*", fl="field1, field2", rows=1), > cols="field1, field2", > vars="c, d", > size=1), > e=corr(c, d)) > {code} > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12142) EmbeddedSolrServer should use req.getContentWriter
[ https://issues.apache.org/jira/browse/SOLR-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445866#comment-16445866 ] David Smiley commented on SOLR-12142: - So this method, EmbeddedSolrServer.request(...), confusingly looks up the requestHandler twice – once on the coreContainer reference (I wish there were a comment explaining why), and failing that, again further below at line 190: [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java#L190] You changed the first occurrence but not the second. Again, a test would have revealed this oversight, I think. I'll cook up a patch. Maybe that would even happen indirectly if the SolrTextTagger were to be incorporated directly into Solr; a few people have asked me about this. > EmbeddedSolrServer should use req.getContentWriter > --- > > Key: SOLR-12142 > URL: https://issues.apache.org/jira/browse/SOLR-12142 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Assignee: Noble Paul >Priority: Major > Fix For: 7.4 > > Attachments: SOLR-12142.patch > > > In SOLR-11380, SolrRequest.getContentWriter was introduced as a replacement > for getContentStreams. However, EmbeddedSolrServer still calls > getContentStreams, and so clients who need to send POST data to it cannot yet > switch from the Deprecated API to the new API. The SolrTextTagger is an > example of a project where one would want to do this. > It seems EmbeddedSolrServer ought to check for getContentWriter and if > present then convert it into a ContentStream somehow. For the time being, > ESS needs to call both since both APIs exist. > CC [~noble.paul] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
[ https://issues.apache.org/jira/browse/SOLR-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-12252. -- Resolution: Fixed > Fix minor compiler and intellij warnings in policy framework > > > Key: SOLR-12252 > URL: https://issues.apache.org/jira/browse/SOLR-12252 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Trivial > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11252.patch > > > I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use > this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+5) - Build # 554 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/554/ Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseG1GC 12 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([4C1FF90C3541BFDA:1FA6BBBCD7502A20]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:841) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at
[jira] [Created] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
Shalin Shekhar Mangar created SOLR-12252: Summary: Fix minor compiler and intellij warnings in policy framework Key: SOLR-12252 URL: https://issues.apache.org/jira/browse/SOLR-12252 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Fix For: 7.4, master (8.0) I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
[ https://issues.apache.org/jira/browse/SOLR-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445835#comment-16445835 ] ASF subversion and git services commented on SOLR-12252: Commit bbc14472e73cbcdbe58a04d4e6f0168f676c2b38 in lucene-solr's branch refs/heads/branch_7x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bbc1447 ] SOLR-12252: Fix jira issue in CHANGES.txt (cherry picked from commit a4b335c) > Fix minor compiler and intellij warnings in policy framework > > > Key: SOLR-12252 > URL: https://issues.apache.org/jira/browse/SOLR-12252 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Trivial > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11252.patch > > > I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use > this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
[ https://issues.apache.org/jira/browse/SOLR-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445833#comment-16445833 ] Shalin Shekhar Mangar commented on SOLR-12252: -- I made a mistake and put the issue number as SOLR-11252 in the commit message. Here is the text posted by jira bot on SOLR-11252: {quote} Commit 86b34fe0fd0b1facb203406a4dab63ce76827b75 in lucene-solr's branch refs/heads/master from Shalin Shekhar Mangar [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=86b34fe ] SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework Commit 4e766a0b5fc9b5e446ccf365a14cc6e6afddfbb1 in lucene-solr's branch refs/heads/branch_7x from Shalin Shekhar Mangar [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e766a0 ] SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework (cherry picked from commit 86b34fe) {quote} > Fix minor compiler and intellij warnings in policy framework > > > Key: SOLR-12252 > URL: https://issues.apache.org/jira/browse/SOLR-12252 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Trivial > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11252.patch > > > I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use > this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-9272: -- Attachment: SOLR-9272.patch > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445809#comment-16445809 ] Jan Høydahl commented on SOLR-9272: --- I tested the feature and found a few flaws that I have included in a new patch (attached): * Default port functionality was buggy. Now defaults to 8983 * Improved error output if getZkhost() throws exception. * If HttpHostConnectException during talking to Solr, we now also print tool usage and a hint to specify -p or -z (applies for all tools using this method) * Modified RefGuide page solr-control-script-reference.adoc * Passes precommit * Added CHANGES entry Think we are getting there now... > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11252) Ref Guide: Add docs on JSON request api
[ https://issues.apache.org/jira/browse/SOLR-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445828#comment-16445828 ] Shalin Shekhar Mangar commented on SOLR-11252: -- Sorry, those commits belong to SOLR-12252 instead > Ref Guide: Add docs on JSON request api > --- > > Key: SOLR-11252 > URL: https://issues.apache.org/jira/browse/SOLR-11252 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, JSON Request API >Reporter: Cassandra Targett >Priority: Major > Attachments: json-request-api.adoc > > > The old Confluence Ref Guide had a draft version of basic docs on the JSON > Request API ,but it never made its way into the published guides. During the > conversion of the Ref Guide from Confluence, I made sure the page was > exported and converted to {{.adoc}} format. > Attaching that converted file here so someone could finish the conversion and > check that it's accurate before adding to the Ref Guide - I'm not sure if > there have been changes that should be documented, but perhaps there have > been since the original page was quite old. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
[ https://issues.apache.org/jira/browse/SOLR-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445825#comment-16445825 ] ASF subversion and git services commented on SOLR-12252: Commit a4b335c942cb46a61cb4022c567a0977b5cdc229 in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4b335c ] SOLR-12252: Fix jira issue in CHANGES.txt > Fix minor compiler and intellij warnings in policy framework > > > Key: SOLR-12252 > URL: https://issues.apache.org/jira/browse/SOLR-12252 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Trivial > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11252.patch > > > I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use > this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12251) pk ids not sort when in deltaQuery
[ https://issues.apache.org/jira/browse/SOLR-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-12251: Attachment: SOLR-12251.patch > pk ids not sort when in deltaQuery > --- > > Key: SOLR-12251 > URL: https://issues.apache.org/jira/browse/SOLR-12251 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.1 > Environment: windows10 > Solr7.1 > JDK8 > >Reporter: wzhonggo >Priority: Major > Attachments: SOLR-12251.patch > > > I use solr and mysql for search. > > {code:xml} > // data-config.xml > query="select * from score order by create_date asc" > deltaImportQuery="select * from score where id='${dih.delta.id}'" > deltaQuery="select id from score where update_date > > '${dataimporter.last_index_time}' order by create_date asc " > {code} > > MySQL has three rows of data in the *score* table > > ||id||name||score||create_date||update_date|| > |UUID1|user1|60|2018-04-10|2018-04-10| > |UUID2|user1|70|2018-04-11|2018-04-11| > |UUID3|user1|80|2018-04-12|2018-04-12| > The expected result in the Solr doc > ||Name||Score||CreateDate||UpdateDate|| > |user1|80|2018-04-12|2018-04-12| > > A full import gives the correct result, but a delta import does not. > In the *org.apache.solr.handler.dataimport.DocBuilder* class, the *collectDelta* method does not return a LinkedHashSet. > > Thanks. > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11252) Ref Guide: Add docs on JSON request api
[ https://issues.apache.org/jira/browse/SOLR-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445824#comment-16445824 ] ASF subversion and git services commented on SOLR-11252: Commit 4e766a0b5fc9b5e446ccf365a14cc6e6afddfbb1 in lucene-solr's branch refs/heads/branch_7x from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4e766a0 ] SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework (cherry picked from commit 86b34fe) > Ref Guide: Add docs on JSON request api > --- > > Key: SOLR-11252 > URL: https://issues.apache.org/jira/browse/SOLR-11252 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, JSON Request API >Reporter: Cassandra Targett >Priority: Major > Attachments: json-request-api.adoc > > > The old Confluence Ref Guide had a draft version of basic docs on the JSON > Request API ,but it never made its way into the published guides. During the > conversion of the Ref Guide from Confluence, I made sure the page was > exported and converted to {{.adoc}} format. > Attaching that converted file here so someone could finish the conversion and > check that it's accurate before adding to the Ref Guide - I'm not sure if > there have been changes that should be documented, but perhaps there have > been since the original page was quite old. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11252) Ref Guide: Add docs on JSON request api
[ https://issues.apache.org/jira/browse/SOLR-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445821#comment-16445821 ] ASF subversion and git services commented on SOLR-11252: Commit 86b34fe0fd0b1facb203406a4dab63ce76827b75 in lucene-solr's branch refs/heads/master from [~shalinmangar] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=86b34fe ] SOLR-11252: Fix minor compiler and intellij warnings in autoscaling policy framework > Ref Guide: Add docs on JSON request api > --- > > Key: SOLR-11252 > URL: https://issues.apache.org/jira/browse/SOLR-11252 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation, JSON Request API >Reporter: Cassandra Targett >Priority: Major > Attachments: json-request-api.adoc > > > The old Confluence Ref Guide had a draft version of basic docs on the JSON > Request API ,but it never made its way into the published guides. During the > conversion of the Ref Guide from Confluence, I made sure the page was > exported and converted to {{.adoc}} format. > Attaching that converted file here so someone could finish the conversion and > check that it's accurate before adding to the Ref Guide - I'm not sure if > there have been changes that should be documented, but perhaps there have > been since the original page was quite old. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12252) Fix minor compiler and intellij warnings in policy framework
[ https://issues.apache.org/jira/browse/SOLR-12252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-12252: - Attachment: SOLR-11252.patch > Fix minor compiler and intellij warnings in policy framework > > > Key: SOLR-12252 > URL: https://issues.apache.org/jira/browse/SOLR-12252 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: AutoScaling >Reporter: Shalin Shekhar Mangar >Assignee: Shalin Shekhar Mangar >Priority: Trivial > Fix For: 7.4, master (8.0) > > Attachments: SOLR-11252.patch > > > I noticed a few compiler and IntelliJ warnings during SOLR-11990. I'll use > this issue to fix them. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 537 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/537/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/204/consoleText [repro] Revision: 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b [repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 [repro] Repro line: ant test -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState -Dtests.seed=8AD75CB48F6098AF -Dtests.multiplier=2 -Dtests.locale=lv -Dtests.timezone=Australia/Canberra -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 4eead83a83235b235145f07f0a625055b860ad65 [repro] git fetch [...truncated 2 lines...] [repro] git checkout 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] NodeAddedTriggerTest [repro] ant compile-test [...truncated 3316 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.NodeAddedTriggerTest" -Dtests.showOutput=onerror -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 -Dtests.seed=8AD75CB48F6098AF -Dtests.multiplier=2 -Dtests.locale=lv -Dtests.timezone=Australia/Canberra -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 1321 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 3/5 failed: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest [repro] git checkout 4eead83a83235b235145f07f0a625055b860ad65 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 5 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-9272: -- Attachment: SOLR-9272.patch > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Reopened] (SOLR-12251) pk ids not sort when in deltaQuery
[ https://issues.apache.org/jira/browse/SOLR-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey reopened SOLR-12251: - On a closer look, I see exactly what you're talking about in DocBuilder. Using HashSet, the order of the results is lost. I've cooked up a patch that switches all the usages of HashSet to LinkedHashSet, and also eliminates all warnings noticed by my IDE (eclipse). > pk ids not sort when in deltaQuery > --- > > Key: SOLR-12251 > URL: https://issues.apache.org/jira/browse/SOLR-12251 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.1 > Environment: windows10 > Solr7.1 > JDK8 > >Reporter: wzhonggo >Priority: Major > > I use Solr and MySQL for search. > > {code:xml} > // data-config.xml > query="select * from score order by create_date asc" > deltaImportQuery="select * from score where id='${dih.delta.id}'" > deltaQuery="select id from score where update_date > > '${dataimporter.last_index_time}' order by create_date asc " > {code} > > MySQL has three rows of data in the *score* table > > ||id||name||score||create_date||update_date|| > |UUID1|user1|60|2018-04-10|2018-04-10| > |UUID2|user1|70|2018-04-11 |2018-04-11| > |UUID3|user1|80|2018-04-12|2018-04-12| > The expected result in the Solr doc: > ||Name||Score||CreateDate||UpdateDate|| > |user1|80|2018-04-12|2018-04-12| > > A full import works correctly, but a delta import gives wrong results. > The *collectDelta* method in the *org.apache.solr.handler.dataimport.DocBuilder* class does not return a > LinkedHashSet. > > Thanks. > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
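The order-preservation property the fix relies on is easy to demonstrate: HashSet iterates in an unspecified order, while LinkedHashSet iterates in insertion order, so the ORDER BY of the delta query survives deduplication. A minimal sketch (the SetOrderDemo class and collectInOrder helper are hypothetical illustrations, not DocBuilder's actual code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SetOrderDemo {
    // Deduplicate primary keys while keeping the order in which the
    // (sorted) deltaQuery returned them. A plain HashSet would discard
    // that order; LinkedHashSet guarantees insertion-order iteration.
    public static List<String> collectInOrder(List<String> ids) {
        Set<String> seen = new LinkedHashSet<>(ids);
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("UUID1", "UUID2", "UUID3", "UUID2");
        System.out.println(collectInOrder(ids)); // [UUID1, UUID2, UUID3]
    }
}
```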
[jira] [Commented] (SOLR-4793) Solr Cloud can't upload large config files ( > 1MB) to Zookeeper
[ https://issues.apache.org/jira/browse/SOLR-4793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445754#comment-16445754 ] Steve Rowe commented on SOLR-4793: -- bq. I think the long term solution could be to implement something like a BlobStoreResourceLoader Agreed - see SOLR-8751 and SOLR-9175. bq. and a configset (as a whole or in parts) could be loaded from ZK or blob store. I'm not sure how useful it would be to store whole configsets in the blob store. In any case, that won't be the first step here. bq. Can we keep all but large files in zk. When zkClient is asked to upload a large file it will upload it to blob instead and create a shadow file with same name in Zk, but with a body telling it is a blob file with a pointer to blob store ID. Then when zk resource loader gets a file it will detect such files and serve them from blob transparently. Hmm, I don't think we should be *starting* with this kind of magic - I'm much more comfortable with separate blob store upload (already implemented) and schema reference steps (SOLR-8751/SOLR-9175) bq. This probably means that backup/restore feature also needs to backup the blob store? Yes, but this is already true right now: {{solrconfig.xml}} can load handler and component classes from blobs in the blob store. > Solr Cloud can't upload large config files ( > 1MB) to Zookeeper > - > > Key: SOLR-4793 > URL: https://issues.apache.org/jira/browse/SOLR-4793 > Project: Solr > Issue Type: Improvement >Reporter: Son Nguyen >Priority: Major > Attachments: SOLR-4793.patch > > > Zookeeper set znode size limit to 1MB by default. So we can't start Solr > Cloud with some large config files, like synonyms.txt. > Jan Høydahl has a good idea: > "SolrCloud is designed with an assumption that you should be able to upload > your whole disk-based conf folder into ZK, and that you should be able to add > an empty Solr node to a cluster and it would download all config from ZK. 
So > immediately a splitting strategy automatically handled by ZkSolrResourceLoader > for large files could be one way forward, i.e. store synonyms.txt as e.g. > __001_synonyms.txt __002_synonyms.txt" -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
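The splitting strategy quoted above can be sketched in a few lines; the 1 MB limit comes from ZooKeeper's default jute.maxbuffer, and the __NNN_ naming from the quote itself (the ZnodeChunker class is a hypothetical illustration, not ZkSolrResourceLoader code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ZnodeChunker {
    // ZooKeeper's default jute.maxbuffer is 1 MB; stay at or below it.
    static final int MAX_CHUNK = 1024 * 1024;

    // Split a large config file into sequentially named chunks,
    // e.g. __001_synonyms.txt, __002_synonyms.txt, ...
    public static Map<String, byte[]> split(String name, byte[] data) {
        Map<String, byte[]> chunks = new LinkedHashMap<>();
        int n = 0;
        for (int off = 0; off < data.length; off += MAX_CHUNK) {
            int len = Math.min(MAX_CHUNK, data.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(data, off, chunk, 0, len);
            chunks.put(String.format("__%03d_%s", ++n, name), chunk);
        }
        return chunks;
    }
}
```

A loader on the read side would list the znodes matching `__*_synonyms.txt`, sort them by prefix, and concatenate the bodies back together.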
[jira] [Resolved] (SOLR-12251) pk ids not sort when in deltaQuery
[ https://issues.apache.org/jira/browse/SOLR-12251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey resolved SOLR-12251. - Resolution: Invalid Although it is entirely possible that this is a bug, it doesn't seem very likely. This issue tracker is not a support portal. This should have been brought up on the mailing list or the IRC channel, to confirm whether or not there's a bug before opening an issue. http://lucene.apache.org/solr/community.html#mailing-lists-irc There, we can give you steps for debugging the problem. If it turns out that there is a bug, then we can re-open this issue. Note that even if there is a bug, it will have to be confirmed in the latest version (currently 7.3.0). A problem like this is not severe enough to warrant a new 7.1.x version. > pk ids not sort when in deltaQuery > --- > > Key: SOLR-12251 > URL: https://issues.apache.org/jira/browse/SOLR-12251 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: contrib - DataImportHandler >Affects Versions: 7.1 > Environment: windows10 > Solr7.1 > JDK8 > >Reporter: wzhonggo >Priority: Major > > I use Solr and MySQL for search. > > {code:xml} > // data-config.xml > query="select * from score order by create_date asc" > deltaImportQuery="select * from score where id='${dih.delta.id}'" > deltaQuery="select id from score where update_date > > '${dataimporter.last_index_time}' order by create_date asc " > {code} > > MySQL has three rows of data in the *score* table > > ||id||name||score||create_date||update_date|| > |UUID1|user1|60|2018-04-10|2018-04-10| > |UUID2|user1|70|2018-04-11 |2018-04-11| > |UUID3|user1|80|2018-04-12|2018-04-12| > The expected result in the Solr doc: > ||Name||Score||CreateDate||UpdateDate|| > |user1|80|2018-04-12|2018-04-12| > > A full import works correctly, but a delta import gives wrong results.
> The *collectDelta* method in the *org.apache.solr.handler.dataimport.DocBuilder* class does not return a > LinkedHashSet. > > Thanks. > > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8249) Add matches to exact PhraseQuery and MultiPhraseQuery
[ https://issues.apache.org/jira/browse/LUCENE-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445723#comment-16445723 ] Adrien Grand commented on LUCENE-8249: -- Thanks for the update, I'll have another look. bq. I like the idea of changing it to return a BytesRef[] though, let's do that in a followup. Can we change the API first? I wouldn't want one of our main queries to get a hacky implementation of this API, even temporarily. > Add matches to exact PhraseQuery and MultiPhraseQuery > - > > Key: LUCENE-8249 > URL: https://issues.apache.org/jira/browse/LUCENE-8249 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8249.patch, LUCENE-8249.patch, LUCENE-8249.patch > > > ExactPhraseScorer can be rejigged fairly easily to expose a MatchesIterator -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7277 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7277/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC 17 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data\tlog\tlog.001: java.nio.file.FileSystemException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data\tlog\tlog.001: The process cannot access the file because it is being used by another process. C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data\tlog C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2\data C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2: java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard1_replica_n2 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data\tlog\tlog.001: java.nio.file.FileSystemException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data\tlog\tlog.001: The process cannot access the file because it is being used by another process. C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data\tlog: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data\tlog C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6\data C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1\collection1_shard2_replica_n6 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node1 C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node2\collection1_shard1_replica_n1\data\tlog\tlog.001: java.nio.file.FileSystemException: C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.TestCloudRecovery_50E50F5F87E6C66F-001\tempDir-001\node2\collection1_shard1_replica_n1\data\tlog\tlog.001: The process cannot access the file because it is being used by another process.
[jira] [Commented] (SOLR-12238) Synonym Query Style Boost By Payload
[ https://issues.apache.org/jira/browse/SOLR-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445623#comment-16445623 ] Alessandro Benedetti commented on SOLR-12238: - Pull request attached: [GitHub Pull Request #357|https://github.com/apache/lucene-solr/pull/357] > Synonym Query Style Boost By Payload > > > Key: SOLR-12238 > URL: https://issues.apache.org/jira/browse/SOLR-12238 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Reporter: Alessandro Benedetti >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > This improvement is built on top of the Synonym Query Style feature and > brings the possibility of boosting synonym queries using the associated > payload. > It introduces two new modalities for the Synonym Query Style: > PICK_BEST_BOOST_BY_PAYLOAD -> build a Disjunction query with the clauses > boosted by payload > AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD -> build a Boolean query with the clauses > boosted by payload > These new synonym query styles assume payloads are available, so they must > be used in conjunction with a token filter able to produce payloads. > A synonym.txt example could be: > # Synonyms used by Payload Boost > tiger => tiger|1.0, Big_Cat|0.8, Shere_Khan|0.9 > leopard => leopard, Big_Cat|0.8, Bagheera|0.9 > lion => lion|1.0, panthera leo|0.99, Simba|0.8 > snow_leopard => panthera uncia|0.99, snow leopard|1.0 > A simple token filter to populate the payloads from such a synonym.txt is: > <filter class="solr.DelimitedPayloadTokenFilterFactory" delimiter="|"/> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
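For reference, the `term|payload` entries in that synonym.txt decompose as below. This is an illustrative sketch of the delimited format only; the PayloadSynonym helper is hypothetical and is not the Solr token filter itself, and a term without a delimiter is given boost 1.0 here purely for illustration:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map.Entry;

public class PayloadSynonym {
    // Split "Big_Cat|0.8" into (term, boost). The delimiter is the last
    // '|' so multi-word terms like "panthera leo|0.99" parse correctly.
    public static Entry<String, Float> parse(String token) {
        int i = token.lastIndexOf('|');
        if (i < 0) {
            // No explicit payload; 1.0 is an illustrative default.
            return new SimpleEntry<>(token, 1.0f);
        }
        return new SimpleEntry<>(token.substring(0, i),
                                 Float.parseFloat(token.substring(i + 1)));
    }

    public static void main(String[] args) {
        System.out.println(parse("Big_Cat|0.8")); // Big_Cat=0.8
    }
}
```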
[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface
[ https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445606#comment-16445606 ] Jan Høydahl commented on SOLR-7896: --- {quote}I do think it would be good to have Solr password protected by default, with command line switch to start it in legacy "open" mode {quote} Please open another Jira if you want to work on capabilities of making some auth being enabled by "default" (whatever that means), I think there is a similar Jira about making SSL enabled by default. For the sake of this login page feature, it is already quite simple to enable auth as the first thing you do after installation: {code} bin/solr auth enable -credentials solr:solrRocks -blockUnknown true {code} After this Jira is completed, this is all you need to do - the next time you open the Admin UI it will redirect to the new login page :) > Add a login page for Solr Administrative Interface > -- > > Key: SOLR-7896 > URL: https://issues.apache.org/jira/browse/SOLR-7896 > Project: Solr > Issue Type: New Feature > Components: Admin UI, security >Affects Versions: 5.2.1 >Reporter: Aaron Greenspan >Assignee: Jan Høydahl >Priority: Major > Labels: authentication, login, password > Fix For: master (8.0) > > Attachments: dispatchfilter-code.png > > > Now that Solr supports Authentication plugins, the missing piece is to be > allowed access from Admin UI when authentication is enabled. For this we need > * Some plumbing in Admin UI that allows the UI to detect 401 responses and > redirect to login page > * Possibility to have multiple login pages depending on auth method and > redirect to the correct one > * [AngularJS HTTP > interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to > add correct HTTP headers on all requests when user is logged in > This issue should aim to implement some of the plumbing mentioned above, and > make it work with Basic Auth. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
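The interceptor plumbing described above ultimately just attaches an Authorization header whose value is the Base64 encoding of user:password. A client-side sketch of building that header value (plain Java rather than the Admin UI's AngularJS code; the BasicAuthHeader class name is hypothetical):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Basic Auth header value: "Basic " + base64(user + ":" + password),
    // sent on every request once the user has logged in.
    public static String value(String user, String password) {
        String credentials = user + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Credentials from the bin/solr auth enable example above.
        System.out.println(value("solr", "solrRocks"));
    }
}
```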
[jira] [Updated] (SOLR-7896) Add a login page for Solr Administrative Interface
[ https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-7896: -- Fix Version/s: master (8.0) > Add a login page for Solr Administrative Interface > -- > > Key: SOLR-7896 > URL: https://issues.apache.org/jira/browse/SOLR-7896 > Project: Solr > Issue Type: New Feature > Components: Admin UI, security >Affects Versions: 5.2.1 >Reporter: Aaron Greenspan >Assignee: Jan Høydahl >Priority: Major > Labels: authentication, login, password > Fix For: master (8.0) > > Attachments: dispatchfilter-code.png > > > Now that Solr supports Authentication plugins, the missing piece is to be > allowed access from Admin UI when authentication is enabled. For this we need > * Some plumbing in Admin UI that allows the UI to detect 401 responses and > redirect to login page > * Possibility to have multiple login pages depending on auth method and > redirect to the correct one > * [AngularJS HTTP > interceptors|https://docs.angularjs.org/api/ng/service/$http#interceptors] to > add correct HTTP headers on all requests when user is logged in > This issue should aim to implement some of the plumbing mentioned above, and > make it work with Basic Auth. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9104) NPE in CollapsingQParser when two fq={!collapse} and zero results
[ https://issues.apache.org/jira/browse/SOLR-9104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445601#comment-16445601 ] Markus Jelsma commented on SOLR-9104: - Thanks Matthias, it sounds great but we need some committers to chime in regarding side effects. Is there a patch available? > NPE in CollapsingQParser when two fq={!collapse} and zero results > - > > Key: SOLR-9104 > URL: https://issues.apache.org/jira/browse/SOLR-9104 > Project: Solr > Issue Type: Bug >Affects Versions: 6.0 >Reporter: Markus Jelsma >Priority: Major > Fix For: 6.2, 7.0 > > > This is a very weird problem that is reproducible on a small production > server, but not on the local machine, although they run the same 6.0 version > and have an almost identical index. The only minor difference is that > production is a SolrCloud with 1 shard and two replicas, just for a bit of > redundancy. > The following query yields zero results and throws the NPE: > {code} > select?q=query:seis&fq={!collapse field=query_digest}&fq={!collapse > field=result_digest} > {code} > The next query does yield results and does not throw anything, it just works > as it should: > {code} > select?q=query:seiz&fq={!collapse field=query_digest}&fq={!collapse > field=result_digest} > {code} > The /select handler used does not add any fancy param other than rows. 
> Here's the NPE: > {code} > 2016-05-11 14:10:27.666 ERROR (qtp1209271652-3338) [c:suggestions s:shard1 > r:core_node1 x:suggestions_shard1_replica1] o.a.s.s.HttpSolrCall > null:java.lang.NullPointerException > at > org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:814) > at > org.apache.solr.search.CollapsingQParserPlugin$IntScoreCollector.finish(CollapsingQParserPlugin.java:851) > at > org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:272) > at > org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1794) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1611) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:634) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:529) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:287) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155) > {code} > Edit: for the sake of clarity. It really needs double fq={!collapse bla bla > for the NPE to appear. If i remove either of the filters from the query, i > get a nice zero resultset back. Both fields are defined as int. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 565 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/565/ Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC 9 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([C866F69C3AA4A940:9BDFB42CD8B53CBA]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: number of ops expected:<2> but was:<1> Stack Trace: java.lang.AssertionError: number of ops expected:<2> but was:<1> at
Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1534 - Failure
Are these memory codecs worth the trouble? I propose dropping them. On Fri, Apr 20, 2018 at 5:29 AM, Dawid Weiss wrote: >> +1. It’s a shame that @SuppressCodecs doesn’t work on test methods, only on >> classes, which makes things a little trickier. > > The default codec is picked per-class, not per-test (part of the > reason for that is codecs are used in pre-test hooks, for example). We > could make the annotation apply to test methods too and just ignore > the test if a suppressed codec was picked at the class level. This > would be one workaround. > > D. > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
[ https://issues.apache.org/jira/browse/LUCENE-8261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-8261: Attachment: LUCENE-8261.patch > InterpolatedProperties.interpolate should quote the replacement > --- > > Key: LUCENE-8261 > URL: https://issues.apache.org/jira/browse/LUCENE-8261 > Project: Lucene - Core > Issue Type: Bug >Reporter: Dawid Weiss >Assignee: Dawid Weiss >Priority: Trivial > Attachments: LUCENE-8261.patch > > > InterpolatedProperties is used in lib check tasks in the build file. I > occasionally see this: > {code} > /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: > java.lang.IllegalArgumentException: named capturing group is missing > trailing '}' > at > java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) > at > java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) > at > org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) > {code} > I don't think we ever need to use any group references in those replacements; > they should be fixed strings (quoted verbatim)? So > {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-8261) InterpolatedProperties.interpolate should quote the replacement
Dawid Weiss created LUCENE-8261: --- Summary: InterpolatedProperties.interpolate should quote the replacement Key: LUCENE-8261 URL: https://issues.apache.org/jira/browse/LUCENE-8261 Project: Lucene - Core Issue Type: Bug Reporter: Dawid Weiss Assignee: Dawid Weiss InterpolatedProperties is used in lib check tasks in the build file. I occasionally see this: {code} /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/custom-tasks.xml:108: java.lang.IllegalArgumentException: named capturing group is missing trailing '}' at java.base/java.util.regex.Matcher.appendExpandedReplacement(Matcher.java:1052) at java.base/java.util.regex.Matcher.appendReplacement(Matcher.java:908) at org.apache.lucene.dependencies.InterpolatedProperties.interpolate(InterpolatedProperties.java:64) {code} I don't think we ever need to use any group references in those replacements; they should be fixed strings (quoted verbatim)? So {{Pattern.quoteReplacement}} would be adequate here. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
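The failure mode can be sketched in a few lines of plain JDK code. Note the quoting helper is the static {{Matcher.quoteReplacement}} in the JDK; the property value below is a made-up example, and the exact exception message differs slightly from the Jenkins one (which arises when the closing brace is missing entirely), but the cause is the same: the replacement string is parsed for {{$n}} / {{${name}}} group references unless quoted verbatim.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuoteReplacementDemo {
    public static void main(String[] args) {
        // Hypothetical interpolated property value that happens to
        // contain "${...}", as a build path might.
        String value = "lib/${version}/core.jar";
        Pattern token = Pattern.compile("@path@");

        // Unquoted: appendReplacement() interprets "${version}" as a
        // named-group reference and throws IllegalArgumentException.
        Matcher m = token.matcher("classpath=@path@");
        m.find();
        try {
            m.appendReplacement(new StringBuffer(), value);
        } catch (IllegalArgumentException e) {
            System.out.println("unquoted fails: " + e.getMessage());
        }

        // Quoted verbatim: the value is inserted as-is, no group parsing.
        String fixed = token.matcher("classpath=@path@")
                .replaceAll(Matcher.quoteReplacement(value));
        System.out.println(fixed); // classpath=lib/${version}/core.jar
    }
}
```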
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21867 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21867/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.OverseerRolesTest.testOverseerRole Error Message: Timed out waiting for overseer state change Stack Trace: java.lang.AssertionError: Timed out waiting for overseer state change at __randomizedtesting.SeedInfo.seed([62B57CDAA5A2397:E7E0AA5991E91546]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.OverseerRolesTest.waitForNewOverseer(OverseerRolesTest.java:63) at org.apache.solr.cloud.OverseerRolesTest.testOverseerRole(OverseerRolesTest.java:141) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 13839 lines...] [junit4] Suite: org.apache.solr.cloud.OverseerRolesTest [junit4] 2> 1572891 INFO (SUITE-OverseerRolesTest-seed#[62B57CDAA5A2397]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks:
[JENKINS] Lucene-Solr-repro - Build # 535 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/535/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/205/consoleText [repro] Revision: 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=TestInPlaceUpdatesDistrib -Dtests.method=test -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 48e071f350c76cd8783839199ef2b1c372919ec8 [repro] git fetch [...truncated 2 lines...] [repro] git checkout 3d21fda4ce1c899f31b8f00e200eb1ac0d23d17b [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestInPlaceUpdatesDistrib [repro] ant compile-test [...truncated 3316 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestInPlaceUpdatesDistrib" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 6892 lines...] 
[repro] Setting last failure code to 256 [repro] Failures: [repro] 5/5 failed: org.apache.solr.update.TestInPlaceUpdatesDistrib [repro] Re-testing 100% failures at the tip of branch_7x [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestInPlaceUpdatesDistrib [repro] ant compile-test [...truncated 3316 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestInPlaceUpdatesDistrib" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.seed=9BC71F2BDDB8F28A -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 12425 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x: [repro] 5/5 failed: org.apache.solr.update.TestInPlaceUpdatesDistrib [repro] Re-testing 100% failures at the tip of branch_7x without a seed [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] TestInPlaceUpdatesDistrib [repro] ant compile-test [...truncated 3316 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestInPlaceUpdatesDistrib" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt -Dtests.locale=ru-RU -Dtests.timezone=Hongkong -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 61310 lines...] 
[repro] Setting last failure code to 256 [repro] Failures at the tip of branch_7x without a seed: [repro] 1/5 failed: org.apache.solr.update.TestInPlaceUpdatesDistrib [repro] git checkout 48e071f350c76cd8783839199ef2b1c372919ec8 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss reassigned SOLR-11200: -- Assignee: Dawid Weiss > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445543#comment-16445543 ] Dawid Weiss commented on SOLR-11200: I've run precommit and tests and committed it to 7x and master. Thanks for the feedback, guys. > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Assignee: Dawid Weiss >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445541#comment-16445541 ] ASF subversion and git services commented on SOLR-11200: Commit b5cee67ba3f824e71e0d0128f29784594e8cdd55 in lucene-solr's branch refs/heads/branch_7x from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b5cee67 ] SOLR-11200: A new CMS config option 'ioThrottle' to manually enable/disable ConcurrentMergeSchedule.doAutoIOThrottle. (Amrit Sarkar, Nawab Zada Asad iqbal) > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated SOLR-11200: --- Fix Version/s: 7.4 > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16445542#comment-16445542 ] ASF subversion and git services commented on SOLR-11200: Commit 4eead83a83235b235145f07f0a625055b860ad65 in lucene-solr's branch refs/heads/master from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4eead83 ] SOLR-11200: A new CMS config option 'ioThrottle' to manually enable/disable ConcurrentMergeSchedule.doAutoIOThrottle. (Amrit Sarkar, Nawab Zada Asad iqbal) > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Fix For: 7.4 > > Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
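Per the commit message above, the new option is named 'ioThrottle' and lives in the merge-scheduler configuration. A hedged sketch of how this might look in solrconfig.xml (exact element placement should be checked against the committed patch and the Solr Reference Guide for the target version):

```xml
<indexConfig>
  <!-- 'ioThrottle' toggles ConcurrentMergeScheduler.doAutoIOThrottle.
       Setting it to false disables adaptive merge I/O throttling,
       which can help during bulk indexing. -->
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <bool name="ioThrottle">false</bool>
  </mergeScheduler>
</indexConfig>
```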
Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1534 - Failure
> +1. It’s a shame that @SuppressCodecs doesn’t work on test methods, only on > classes, which makes things a little trickier. The default codec is picked per-class, not per-test (part of the reason for that is codecs are used in pre-test hooks, for example). We could make the annotation apply to test methods too and just ignore the test if a suppressed codec was picked at the class level. This would be one workaround. D. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1534 - Failure
+1. It’s a shame that @SuppressCodecs doesn’t work on test methods, only on classes, which makes things a little trickier. > On 20 Apr 2018, at 10:13, Dawid Weiss wrote: > > This is due to an out of memory exception in > > [junit4] 1> at > org.apache.lucene.search.TestInetAddressRangeQueries.testRandomBig(TestInetAddressRangeQueries.java:81) > > Seems like mem codec has been picked -- should we add suppression to this > test? > > @SuppressCodecs({"Direct", "Memory"}) > > Dawid > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org