[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10) - Build # 31 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/31/ Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete Error Message: Error from server at http://127.0.0.1:34303/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/testcollection_shard1_replica_n2/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update (Powered by Jetty:// 9.4.8.v20171121) Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:34303/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/testcollection_shard1_replica_n2/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. 
Reason: Can not find: /solr/testcollection_shard1_replica_n2/update (Powered by Jetty:// 9.4.8.v20171121) at __randomizedtesting.SeedInfo.seed([848D4F40F5C06FE:ABB27A5188B4EC5B]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete(TestCollectionsAPIViaSolrCloudCluster.java:170) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at
[jira] [Updated] (SOLR-12258) V2 API should "retry" for unresolved collections/aliases (like V1 does)
[ https://issues.apache.org/jira/browse/SOLR-12258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12258: Attachment: SOLR-12258.patch > V2 API should "retry" for unresolved collections/aliases (like V1 does) > --- > > Key: SOLR-12258 > URL: https://issues.apache.org/jira/browse/SOLR-12258 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: SolrCloud, v2 API > Reporter: David Smiley > Priority: Major > Attachments: SOLR-12258.patch > > > When using V1, if the request refers to a possible collection/alias that > fails to resolve, HttpSolrCall will invoke AliasesManager.update() and then retry > the request as if anew (in collaboration with SolrDispatchFilter). If it > fails to resolve again, we stop there and return an error; it doesn't go on > forever. > V2 (V2HttpCall specifically) doesn't have this retry mechanism. It'll return > "no such collection or alias". > The retry will not only work for an alias; the retrying is also a delay that > will at least improve the odds of a newly made collection being known to > this Solr node. It'd be nice if this were more explicit, i.e. if there were a > mechanism similar to AliasesManager.update() but for a collection. I'm not > sure how to do that. > BTW I discovered this while debugging a Jenkins failure of > TimeRoutedAliasUpdateProcessorTest.test where it early on simply goes to > issue a V2 based request to change the configuration of a collection that was > created immediately before it. It's pretty mysterious. I am aware of > SolrCloudTestCase.waitForState which is maybe something that needs to be > called? But if that were true then *every* SolrCloud test would need to use > it; it just seems wrong to me that we ought to use this method commonly. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
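A minimal sketch of the V1 "update aliases, then retry once" flow described in SOLR-12258 above. All names here (RetryResolve, refresh, resolve) are illustrative stand-ins, not Solr's actual HttpSolrCall/AliasesManager API:

```java
import java.util.Map;
import java.util.function.Supplier;

/**
 * Hypothetical sketch of V1's retry-once resolution, assuming a cached
 * alias view plus a way to force a refresh. The real logic lives in
 * HttpSolrCall in collaboration with SolrDispatchFilter.
 */
public class RetryResolve {
    private Map<String, String> aliases;                 // cached alias -> collection view
    private final Supplier<Map<String, String>> refresh; // stands in for AliasesManager.update()

    public RetryResolve(Map<String, String> initial, Supplier<Map<String, String>> refresh) {
        this.aliases = initial;
        this.refresh = refresh;
    }

    /** Resolve a collection/alias name, forcing one refresh before failing. */
    public String resolve(String name) {
        String target = aliases.get(name);
        if (target == null) {
            aliases = refresh.get();    // pull the latest alias state (e.g. from ZooKeeper)
            target = aliases.get(name); // retry exactly once; it doesn't go on forever
        }
        if (target == null) {
            throw new IllegalArgumentException("no such collection or alias: " + name);
        }
        return target;
    }
}
```

The key property, per the issue, is that the refresh-and-retry happens at most once, so an unresolvable name still fails promptly.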
[jira] [Created] (SOLR-12310) A commit or rollback caused by a request param using javabin and no updates will happen twice.
Mark Miller created SOLR-12310: -- Summary: A commit or rollback caused by a request param using javabin and no updates will happen twice. Key: SOLR-12310 URL: https://issues.apache.org/jira/browse/SOLR-12310 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Mark Miller It doesn't seem like JavabinLoader should have this logic; as for the other loaders, it is handled in ContentStreamHandlerBase.
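The double-commit pattern described in SOLR-12310 can be illustrated with a toy model. These are simplified stand-ins, not Solr's actual JavabinLoader or ContentStreamHandlerBase:

```java
/**
 * Illustrative model of the bug pattern: two layers each honoring the
 * same commit request parameter, so a javabin request carrying no
 * documents triggers the commit twice.
 */
public class DoubleCommitDemo {
    /** Returns how many commits a single request would trigger. */
    static int commitsIssued(boolean commitParam, boolean loaderAlsoHandlesCommit) {
        int commits = 0;
        if (commitParam && loaderAlsoHandlesCommit) {
            commits++; // duplicate logic inside the format-specific loader (the bug)
        }
        if (commitParam) {
            commits++; // shared logic in the handler base (where it belongs)
        }
        return commits;
    }
}
```

With the duplicate loader logic removed (second argument false), a commit=true request commits exactly once.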
[jira] [Commented] (SOLR-12299) More Like This Params Refactor
[ https://issues.apache.org/jira/browse/SOLR-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463325#comment-16463325 ] Lucene/Solr QA commented on SOLR-12299: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} SOLR-12299 does not apply to master. Rebase required? Wrong Branch? See https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12299 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921792/SOLR-12299.patch | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/77/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > More Like This Params Refactor > -- > > Key: SOLR-12299 > URL: https://issues.apache.org/jira/browse/SOLR-12299 > Project: Solr > Issue Type: Improvement > Security Level: Public (Default Security Level. Issues are Public) > Components: MoreLikeThis > Reporter: Alessandro Benedetti > Priority: Major > Attachments: SOLR-12299.patch > > Time Spent: 10m > Remaining Estimate: 0h > > More Like This can be refactored to improve the code readability, test > coverage and maintenance. > Scope of this Jira issue is to start the More Like This refactor from the > More Like This Params. > This Jira will not improve the current More Like This but just keep the same > functionality with refactored code. > Other Jira issues will follow improving the overall code readability, test > coverage and maintenance.
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 50 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/50/ 2 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLowerTerms Error Message: Address already in use Stack Trace: java.net.BindException: Address already in use at __randomizedtesting.SeedInfo.seed([A1E965CAA61A8F45:2FDD458966BD2F31]:0) at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:433) at sun.nio.ch.Net.bind(Net.java:425) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:334) at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:302) at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:238) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.eclipse.jetty.server.Server.doStart(Server.java:397) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:396) at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:369) at org.apache.solr.cloud.ForceLeaderTest.bringBackOldLeaderAndSendDoc(ForceLeaderTest.java:396) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLowerTerms(ForceLeaderTest.java:144) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) 
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at
[jira] [Commented] (SOLR-12308) LISTALIASES should return up to date response
[ https://issues.apache.org/jira/browse/SOLR-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463314#comment-16463314 ] David Smiley commented on SOLR-12308: - Patch summary: * CollectionsHandler: LISTALIASES: add aliasesManager.update() * ZkStateReader AliasesManager: just clarified the loop (while -> for) and clarified that if you get to after the loop then we always (not conditionally) throw an exception. * MiniSolrCloudCluster.deleteAllCollections now deletes all aliases too (just a one-liner; very efficient) * AliasIntegrationTest: ** tearDown: simplify to no longer explicitly delete aliases; no need ** testProperties: simplified some code at the end; no real change ** testModifyPropertiesV2: removed one call to sleepToAllowZkPropagation that shouldn't be necessary anymore. I reviewed the other uses which should stay. * CreateRoutedAliasTest: ** refactored away the need to have a httpClient field ** moved the cleanup logic to a doAfter where it ought to be. It needn't explicitly delete aliases here since it'll now happen via cluster.deleteAllCollections(); Tests pass. [~gus_heck] could you please take a look? > LISTALIASES should return up to date response > - > > Key: SOLR-12308 > URL: https://issues.apache.org/jira/browse/SOLR-12308 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Attachments: SOLR-12308.patch > > > The LISTALIASES command might return a stale response due to the default > eventual consistency of reads of ZooKeeper. I think if someone calls this > command (which generally won't be rapid-fire), they deserve an up to date > response. This is easily done with a one-liner; patch forthcoming. > Returning stale alias info is the only plausible explanation I have for why a > recent CI failure for AliasesIntegrationTest.tearDown() failed to detect > aliases to be deleted. 
It calls listAliases to know which aliases exist so it > can then delete them first. > [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1833/] > tearDown then calls MiniSolrCloudCluster.deleteAllCollections() which > interestingly grabs a ZkStateReader.createClusterStateWatchersAndUpdate(); > perhaps this ought to delete all aliases _as well_ since, after all, if there > were any aliases then deleting all collections is bound to fail. Should > I file a separate issue or just handle this together?
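The shape of the one-liner fix discussed in SOLR-12308 above can be sketched against a hypothetical interface; AliasStore and its methods are simplified stand-ins, not Solr's real ZkStateReader API:

```java
import java.util.Map;

/** Hypothetical stand-in for the node's cached alias state. */
interface AliasStore {
    void update();                  // stands in for ZkStateReader.AliasesManager.update()
    Map<String, String> aliases();  // current alias -> collection view
}

public class ListAliases {
    /**
     * Refresh the alias state before reading it, so a LISTALIASES-style
     * response reflects the latest ZooKeeper state rather than a stale cache.
     */
    public static Map<String, String> listAliases(AliasStore store) {
        store.update();
        return store.aliases();
    }
}
```

Since LISTALIASES generally isn't called rapid-fire, the cost of the extra ZooKeeper read is negligible next to the benefit of an up-to-date answer.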
[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments
[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463273#comment-16463273 ] Erick Erickson commented on LUCENE-7976: [~mikemccand] Do note that I'm seeing these errors on my latest full test run; I'll chase down what's up with them. That said, I think I'm at a point where I don't want to do another significant change to the approach if I can help it or unless there are issues. I'll see if I can reproduce these errors reliably and whether I can get them to occur with the unmodified TMP code. Precommit passes and it compiles though, can I ship it ;) [junit4] - org.apache.solr.core.TestCodecSupport.testMixedCompressionMode [junit4] - org.apache.solr.search.join.TestScoreJoinQPNoScore.testRandomJoin [junit4] - org.apache.solr.TestRandomFaceting.testRandomFaceting [junit4] - org.apache.solr.TestRandomDVFaceting.testRandomFaceting [junit4] - org.apache.solr.core.TestMergePolicyConfig.testTieredMergePolicyConfig [junit4] - org.apache.solr.TestJoin.testRandomJoin [junit4] - org.apache.solr.TestJoin.testJoin [junit4] - org.apache.solr.core.TestSolrDeletionPolicy1.testKeepOptimizedOnlyCommits > Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of > very large segments > - > > Key: LUCENE-7976 > URL: https://issues.apache.org/jira/browse/LUCENE-7976 > Project: Lucene - Core > Issue Type: Improvement > Reporter: Erick Erickson > Assignee: Erick Erickson > Priority: Major > Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, > LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch > > > We're seeing situations "in the wild" where there are very large indexes (on > disk) handled quite easily in a single Lucene index. This is particularly > true as features like docValues move data into MMapDirectory space. 
The > current TMP algorithm allows on the order of 50% deleted documents as per a > dev list conversation with Mike McCandless (and his blog here: > https://www.elastic.co/blog/lucenes-handling-of-deleted-documents). > Especially in the current era of very large indexes in aggregate, (think many > TB) solutions like "you need to distribute your collection over more shards" > become very costly. Additionally, the tempting "optimize" button exacerbates > the issue since once you form, say, a 100G segment (by > optimizing/forceMerging) it is not eligible for merging until 97.5G of the > docs in it are deleted (current default 5G max segment size). > The proposal here would be to add a new parameter to TMP, something like > (no, that's not a serious name; suggestions > welcome) which would default to 100 (or the same behavior we have now). > So if I set this parameter to, say, 20%, and the max segment size stays at > 5G, the following would happen when segments were selected for merging: > > any segment with > 20% deleted documents would be merged or rewritten NO > > MATTER HOW LARGE. There are two cases, > >> the segment has < 5G "live" docs. In that case it would be merged with > >> smaller segments to bring the resulting segment up to 5G. If no smaller > >> segments exist, it would just be rewritten. > >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). > >> It would be rewritten into a single segment removing all deleted docs no > >> matter how big it is to start. The 100G example above would be rewritten > >> to an 80G segment for instance. > Of course this would lead to potentially much more I/O, which is why the > default would be the same behavior we see now. As it stands now, though, > there's no way to recover from an optimize/forceMerge except to re-index from > scratch. We routinely see 200G-300G Lucene indexes at this point "in the > wild" with 10s of shards replicated 3 or more times. 
And that doesn't even > include having these over HDFS. > Alternatives welcome! Something like the above seems minimally invasive. A > new merge policy is certainly an alternative.
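The arithmetic behind the figures in the LUCENE-7976 discussion above can be made concrete. The helper names below are mine; the "live data under half the max segment size" eligibility rule is the current TMP behavior implied by the quoted 97.5G figure (5G cap, 2.5G live-data threshold):

```java
/**
 * Worked arithmetic for the merge-eligibility figures discussed above.
 * A 100G force-merged segment under a 5G cap is ineligible for merging
 * until 100 - 5/2 = 97.5G of its docs are deleted; the proposal instead
 * makes any segment over a deleted-percentage threshold eligible,
 * no matter how large.
 */
public class MergeEligibility {
    /** GB that must be deleted before a segment becomes eligible under the current rule. */
    static double deletedGbNeeded(double totalGb, double maxSegmentGb) {
        return totalGb - maxSegmentGb / 2.0; // only segments under half the cap of live data merge
    }

    /** Eligibility under the proposed deleted-percentage threshold. */
    static boolean eligibleUnderProposal(double totalGb, double deletedGb, double pctThreshold) {
        return 100.0 * deletedGb / totalGb > pctThreshold; // merged or rewritten regardless of size
    }
}
```

For example, deletedGbNeeded(100, 5) gives the 97.5G quoted in the thread, while a 20% threshold would make that same 100G segment eligible once just 20G of it is deletions.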
[JENKINS] Lucene-Solr-Tests-master - Build # 2517 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2517/ 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testTriggerThrottling Error Message: Both triggers should have fired by now Stack Trace: java.lang.AssertionError: Both triggers should have fired by now at __randomizedtesting.SeedInfo.seed([2EFA8899AF0488A9:D5D820BC7DAE6B3B]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testTriggerThrottling(TestTriggerIntegration.java:226) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 13691 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration [junit4] 2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.sim.TestTriggerIntegration_2EFA8899AF0488A9-001/init-core-data-001
[JENKINS] Lucene-Solr-repro - Build # 576 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/576/ [...truncated 31 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2516/consoleText [repro] Revision: 8b9c2a3185d824a9aaae5c993b872205358729dd [repro] Repro line: ant test -Dtestcase=SearchRateTriggerTest -Dtests.method=testTrigger -Dtests.seed=608DFD5AAF163AD9 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=cs-CZ -Dtests.timezone=Asia/Baghdad -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=IndexSizeTriggerTest -Dtests.method=testTrigger -Dtests.seed=608DFD5AAF163AD9 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it-CH -Dtests.timezone=America/Cayenne -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: b617489638db4ddca63e5fbc45a58c5695a021d3 [repro] git fetch [repro] git checkout 8b9c2a3185d824a9aaae5c993b872205358729dd [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] IndexSizeTriggerTest [repro] SearchRateTriggerTest [repro] ant compile-test [...truncated 3298 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.IndexSizeTriggerTest|*.SearchRateTriggerTest" -Dtests.showOutput=onerror -Dtests.seed=608DFD5AAF163AD9 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it-CH -Dtests.timezone=America/Cayenne -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 14523 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [repro] 5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest [repro] Re-testing 100% failures at the tip of master [repro] ant clean [...truncated 8 lines...] 
[repro] Test suites by module: [repro]solr/core [repro] IndexSizeTriggerTest [repro] SearchRateTriggerTest [repro] ant compile-test [...truncated 3298 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.IndexSizeTriggerTest|*.SearchRateTriggerTest" -Dtests.showOutput=onerror -Dtests.seed=608DFD5AAF163AD9 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=it-CH -Dtests.timezone=America/Cayenne -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 10838 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master: [repro] 3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [repro] 5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest [repro] Re-testing 100% failures at the tip of master without a seed [repro] ant clean [...truncated 8 lines...] [repro] Test suites by module: [repro]solr/core [repro] SearchRateTriggerTest [repro] ant compile-test [...truncated 3298 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.SearchRateTriggerTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=cs-CZ -Dtests.timezone=Asia/Baghdad -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 7133 lines...] [repro] Setting last failure code to 256 [repro] Failures at the tip of master without a seed: [repro] 5/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest [repro] git checkout b617489638db4ddca63e5fbc45a58c5695a021d3 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...]
[JENKINS] Lucene-Solr-7.3-Windows (64bit/jdk1.8.0_144) - Build # 40 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Windows/40/ Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([A8C2DC73899E0C5A:E6697D34D62176A9]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateWithDefaultConfigSet(CollectionsAPISolrJTest.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.core.TestJmxIntegration.testJmxOnCoreReload Error Message: Number of registered MBeans is not the same as the number of core metrics: 433 != 434 Stack Trace: java.lang.AssertionError: Number of registered MBeans is not the same as
[jira] [Commented] (SOLR-12238) Synonym Query Style Boost By Payload
[ https://issues.apache.org/jira/browse/SOLR-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463164#comment-16463164 ] Lucene/Solr QA commented on SOLR-12238: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 1m 33s{color} | {color:red} lucene_core generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate ref guide {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 43m 52s{color} | {color:green} core in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 59s{color} | {color:red} core in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}183m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | solr.cloud.autoscaling.IndexSizeTriggerTest | | | solr.cloud.TestTlogReplica | | | solr.cloud.autoscaling.sim.TestTriggerIntegration | | | solr.cloud.autoscaling.sim.TestLargeCluster | | | solr.cloud.ReplaceNodeTest | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12238 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921773/SOLR-12238.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns validaterefguide | | uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / ab11867 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 | | Default Java | 1.8.0_172 | | javac | https://builds.apache.org/job/PreCommit-SOLR-Build/75/artifact/out/diff-compile-javac-lucene_core.txt | | unit | https://builds.apache.org/job/PreCommit-SOLR-Build/75/artifact/out/patch-unit-solr_core.txt | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/75/testReport/ | | modules | C: lucene/core solr/core solr/solr-ref-guide U: . | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/75/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Synonym Query Style Boost By Payload > > > Key: SOLR-12238 > URL: https://issues.apache.org/jira/browse/SOLR-12238 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: query parsers >Affects Versions: 7.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12238.patch, SOLR-12238.patch, SOLR-12238.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This improvement is built on top of the Synonym Query Style feature and > brings the possibility of boosting synonym queries using the associated > payload. > It introduces two new modalities for the Synonym Query Style: > PICK_BEST_BOOST_BY_PAYLOAD -> build a Disjunction query with the clauses > boosted by payload > AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD -> build a Boolean query with the clauses > boosted by payload > These new synonym query styles assume payloads are available, so they must > be used in conjunction with a token filter able to produce payloads. > A synonyms.txt example could be: > # Synonyms used by Payload Boost > tiger => tiger|1.0, Big_Cat|0.8, Shere_Khan|0.9 > leopard => leopard,
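The two payload-boost modes described in the issue can be illustrated with a small sketch. This is not the actual Lucene/Solr API — the function names and the query representation here are illustrative only; it just shows how a `term|boost` synonym line maps to a "pick best" disjunction versus a boolean query of distinct boosted terms:

```python
# Illustrative sketch of the two proposed synonym query styles.
# PICK_BEST_BOOST_BY_PAYLOAD keeps the single highest-boosted clause
# (a disjunction scored by its best clause); AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD
# keeps every clause, each weighted by its payload boost.

def parse_payload_synonyms(line):
    """Parse a 'term|boost' list, e.g. 'tiger|1.0, Big_Cat|0.8'."""
    clauses = []
    for entry in line.split(","):
        term, _, boost = entry.strip().partition("|")
        clauses.append((term, float(boost) if boost else 1.0))
    return clauses

def pick_best_boost_by_payload(clauses):
    # Disjunction: score by the single best (highest-boosted) clause.
    return max(clauses, key=lambda clause: clause[1])

def as_distinct_terms_boost_by_payload(clauses):
    # Boolean query: every clause contributes, boosted by its payload.
    return list(clauses)

clauses = parse_payload_synonyms("tiger|1.0, Big_Cat|0.8, Shere_Khan|0.9")
print(pick_best_boost_by_payload(clauses))       # ('tiger', 1.0)
print(as_distinct_terms_boost_by_payload(clauses))  # all three boosted clauses
```

In the real feature, the boosts would come from token payloads produced by a payload-aware token filter, not from parsing the synonyms file directly.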
[GitHub] lucene-solr pull request #358: SOLR-11277: Add auto hard commit setting base...
Github user asfgit closed the pull request at: https://github.com/apache/lucene-solr/pull/358 --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size
[ https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463128#comment-16463128 ] Anshum Gupta commented on SOLR-11277: - Thanks [~rupsshankar]! I've committed this patch. Changes look good, and the tests and precommit passes. > Add auto hard commit setting based on tlog size > --- > > Key: SOLR-11277 > URL: https://issues.apache.org/jira/browse/SOLR-11277 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Rupa Shankar >Assignee: Anshum Gupta >Priority: Major > Attachments: SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > max_size_auto_commit.patch > > Time Spent: 10m > Remaining Estimate: 0h > > When indexing documents of variable sizes and at variable schedules, it can > be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. > We’ve had some occurrences of really huge tlogs, resulting in serious issues, > so in an attempt to avoid this, it would be great to have a “maxSize” setting > based on the tlog size on disk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
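The decision logic behind the feature described above — commit when the transaction log on disk crosses a size threshold, in addition to the existing maxDocs/maxTime triggers — can be sketched roughly as follows. The function and parameter names are illustrative; the real implementation lives in Solr's update handler:

```python
# Sketch of a size-based auto hard commit trigger: alongside maxDocs and
# maxTime, force a commit once the tlog on disk exceeds a configured maximum.

def parse_size(value):
    """Parse sizes like '512k', '100m', '1g' into bytes."""
    units = {"k": 1024, "m": 1024**2, "g": 1024**3}
    value = value.strip().lower()
    if value[-1] in units:
        return int(float(value[:-1]) * units[value[-1]])
    return int(value)

def should_autocommit(tlog_bytes, uncommitted_docs, elapsed_ms,
                      max_size="100m", max_docs=None, max_time_ms=None):
    # Commit as soon as ANY configured threshold has been crossed.
    if tlog_bytes >= parse_size(max_size):
        return True
    if max_docs is not None and uncommitted_docs >= max_docs:
        return True
    if max_time_ms is not None and elapsed_ms >= max_time_ms:
        return True
    return False

# A 200 MB tlog trips a 100 MB limit regardless of doc count or elapsed time.
print(should_autocommit(tlog_bytes=200 * 1024**2, uncommitted_docs=10,
                        elapsed_ms=5000))  # True
```

This is exactly the scenario from the issue: with variable document sizes, doc-count and time thresholds can let a tlog grow unboundedly, while a size threshold caps it directly.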
[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size
[ https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463127#comment-16463127 ] ASF subversion and git services commented on SOLR-11277: Commit b617489638db4ddca63e5fbc45a58c5695a021d3 in lucene-solr's branch refs/heads/master from [~anshumg] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b617489 ] SOLR-11277: Add auto hard commit setting based on tlog size (this closes #358) > Add auto hard commit setting based on tlog size > --- > > Key: SOLR-11277 > URL: https://issues.apache.org/jira/browse/SOLR-11277 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Rupa Shankar >Assignee: Anshum Gupta >Priority: Major > Attachments: SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, > max_size_auto_commit.patch > > Time Spent: 10m > Remaining Estimate: 0h > > When indexing documents of variable sizes and at variable schedules, it can > be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. > We’ve had some occurrences of really huge tlogs, resulting in serious issues, > so in an attempt to avoid this, it would be great to have a “maxSize” setting > based on the tlog size on disk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463102#comment-16463102 ] Mark Miller commented on SOLR-7344: --- Even before, having multiple pools was probably the wrong idea. Instead we should have focused on internal thread reuse and optimization, as well as tuning the thread pool as best we could to be less thread greedy, and then added a QoS-type filter like the one I pointed to above. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch
[jira] [Commented] (SOLR-12304) Interesting Terms parameter is ignored by MLT Component
[ https://issues.apache.org/jira/browse/SOLR-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463099#comment-16463099 ] Lucene/Solr QA commented on SOLR-12304: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 2m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 35s{color} | {color:red} core in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 88m 39s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12304 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921774/SOLR-12304.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / ab11867 | | ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 | | Default Java | 1.8.0_172 | | unit | https://builds.apache.org/job/PreCommit-SOLR-Build/76/artifact/out/patch-unit-solr_core.txt | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/76/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/76/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Interesting Terms parameter is ignored by MLT Component > --- > > Key: SOLR-12304 > URL: https://issues.apache.org/jira/browse/SOLR-12304 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Affects Versions: 7.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12304.patch, SOLR-12304.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently the More Like This component just ignores the mlt.InterestingTerms > parameter ( which is usable by the MoreLikeThisHandler). 
> The scope of this issue is to fix the bug and add related tests (which will > succeed after the fix). > *N.B.* MoreLikeThisComponent and MoreLikeThisHandler are tightly coupled, and the > tests for the MoreLikeThisHandler intersect the MoreLikeThisComponent > ones. > Any consideration or refactoring of that is out of scope for this issue. > Other issues will follow. > *N.B.* Out of scope for this issue is the distributed case, which is much > more complicated and requires much deeper investigation.
[jira] [Updated] (SOLR-12308) LISTALIASES should return up to date response
[ https://issues.apache.org/jira/browse/SOLR-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12308: Attachment: SOLR-12308.patch > LISTALIASES should return up to date response > - > > Key: SOLR-12308 > URL: https://issues.apache.org/jira/browse/SOLR-12308 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Assignee: David Smiley >Priority: Minor > Attachments: SOLR-12308.patch > > > The LISTALIASES command might return a stale response due to the default > eventual consistency of reads of ZooKeeper. I think if someone calls this > command (which generally won't be rapid-fire), they deserve an up to date > response. This is easily done with a one-liner; patch forthcoming. > Returning stale alias info is the only plausible explanation I have for why a > recent CI failure for AliasesIntegrationTest.tearDown() failed to detect > aliases to be deleted. It calls listAliases to know which aliases exist so it > can then delete them 1st. > [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1833/] > tearDown then calls MiniSolrCloudCluster.deleteAllCollections() which > interestingly grabs a ZkStateReader.createClusterStateWatchersAndUpdate() > perhaps this ought to delete all aliases _as well_ since, after all, if there > were any aliases then well deleting all collections is bound to fail. Should > I file a separate issue or just handle this together? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12309) CloudSolrClient.Builder constructors are not well documented
Shawn Heisey created SOLR-12309: --- Summary: CloudSolrClient.Builder constructors are not well documented Key: SOLR-12309 URL: https://issues.apache.org/jira/browse/SOLR-12309 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: clients - java Affects Versions: 7.3 Reporter: Shawn Heisey I was having a lot of trouble figuring out how to create a CloudSolrClient object without using deprecated code. The no-arg constructor on the Builder object is deprecated, and the two remaining methods have similar signatures to each other. It is not at all obvious how to successfully call the one that uses ZooKeeper to connect. The javadoc is silent on the issue. I did finally figure it out with a lot of googling, and I would like to save others the hassle. I believe that this is what the javadoc for the third ctor should say: Provide a series of ZooKeeper hosts which will be used when configuring CloudSolrClient instances. Optionally, include a chroot to be used when accessing the ZooKeeper database. Here are a couple of examples. The first one has no chroot, the second one does: new CloudSolrClient.Builder(zkHosts, Optional.empty()) new CloudSolrClient.Builder(zkHosts, Optional.of("/solr")) The javadoc for the URL-based method should probably say something to indicate that it is easy to confuse with the ZK-based method. I have not yet looked at the current reference guide to see if that has any clarification. Is it a good idea to completely eliminate the ability to create a cloud client using a single string that matches the zkHost value used when starting Solr in cloud mode? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
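The distinction the javadoc should draw — a list of ZooKeeper hosts plus an *optional* chroot — boils down to how the ZooKeeper connect string is assembled. A hypothetical sketch of that assembly (not the SolrJ API; SolrJ does this internally when you pass `zkHosts` and the `Optional` chroot to the Builder):

```python
# Illustrative only: how a list of ZK hosts and an optional chroot
# combine into a single ZooKeeper connect string.

def zk_connect_string(hosts, chroot=None):
    """Join hosts with commas; the chroot, if any, is appended once at the end."""
    connect = ",".join(hosts)
    return connect + chroot if chroot else connect

print(zk_connect_string(["zk1:2181", "zk2:2181", "zk3:2181"], "/solr"))
# zk1:2181,zk2:2181,zk3:2181/solr
```

This mirrors the two Builder examples in the message: `Optional.empty()` corresponds to no chroot suffix, `Optional.of("/solr")` to appending `/solr` after the host list.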
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463061#comment-16463061 ] Mark Miller commented on SOLR-7344: --- For this issue, I don't really think we need to deal with this anymore. I have not checked with the current Apache HttpClient and Jetty, but with Jetty HttpClient and Jetty and the NIO2 stuff it can do, even with very high pool limits, normal load uses way fewer threads. The thread-per-request model we still used a couple of years ago was really what made things so ugly. Having smaller thread pools is way less interesting now, and having more than one even less so. Now we should be able to set pretty high limits like we do now and instead implement a filter for throttling or load control. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch
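The "filter for throttling or load control" mentioned above (e.g. Jetty's QoSFilter) is essentially a bounded-concurrency gate in front of expensive request handling: rather than capping the whole server thread pool (which is what risks distributed deadlock), it admits at most N requests and sheds the rest. A minimal sketch of the idea — the class, return codes, and timeout behavior here are illustrative, not Jetty's actual implementation:

```python
# Sketch of a QoS-style throttling filter: admit at most N concurrent
# requests into the handler; reject excess load with a 503 instead of
# letting it exhaust the server's thread pool.
import threading

class QoSGate:
    def __init__(self, max_concurrent, wait_seconds=0.0):
        self._slots = threading.Semaphore(max_concurrent)
        self._wait = wait_seconds

    def handle(self, request, handler):
        # Try to get a slot; on timeout, shed load rather than queue
        # unboundedly on a starved thread pool.
        if not self._slots.acquire(timeout=self._wait):
            return 503  # Service Unavailable
        try:
            return handler(request)
        finally:
            self._slots.release()

gate = QoSGate(max_concurrent=2)
print(gate.handle("/select", lambda req: 200))  # 200: a slot was free
```

The key property is that internal (server-to-server) requests never block waiting on each other for a scarce thread — over-limit requests fail fast and can be retried, which is the deadlock-avoidance this issue is about.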
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7300 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7300/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC 39 tests failed. FAILED: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState Error Message: Did not expect the processor to fire on first run! event={ "id":"24c2143403be3T7gu7w20nhea51v2cqeuqd0wvv", "source":"node_added_trigger", "eventTime":646655699336163, "eventType":"NODEADDED", "properties":{ "eventTimes":[646655699336163], "nodeNames":["127.0.0.1:60440_solr"]}} Stack Trace: java.lang.AssertionError: Did not expect the processor to fire on first run! event={ "id":"24c2143403be3T7gu7w20nhea51v2cqeuqd0wvv", "source":"node_added_trigger", "eventTime":646655699336163, "eventType":"NODEADDED", "properties":{ "eventTimes":[646655699336163], "nodeNames":["127.0.0.1:60440_solr"]}} at __randomizedtesting.SeedInfo.seed([26EDF600F688A6E2:E84352930EB1DEF4]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49) at org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161) at org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+5) - Build # 21952 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21952/ Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: number of ops expected:<2> but was:<1> Stack Trace: java.lang.AssertionError: number of ops expected:<2> but was:<1> at __randomizedtesting.SeedInfo.seed([9A66BFAD4B7CA757:F9AD892FD2B3D47A]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:841) Build Log: [...truncated 1864 lines...] [junit4] JVM J1: stderr was not empty, see:
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463010#comment-16463010 ] Mark Miller commented on SOLR-7344: --- Heh, it's a serious statement and question. We do want to determine what the best underlying tech to use is for the long term, and that was part of the motivation to move away from a WAR (but just part). If people like or want Netty, it would be good to know why and get those discussions out of the way. Personally, I've never used Netty. From what I understand, it's a lower-level project and it would be a lot of work to switch - so to even consider it, we would want powerful reasons. AFAIK, the Jetty team has done a ton of work in response to the popularity of frameworks like Netty to match most of their features. Perhaps not as lightweight in some cases, but that is just because if you are building it for your app, you might be able to leave a lot out. I'm not very optimistic we could leave much out of what Jetty does for us. Instead, we would need to take a long time to stabilize. So my thoughts have been to work towards supporting more advanced features available in Jetty over time. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463002#comment-16463002 ] Jan Høydahl commented on SOLR-11490: I think Robert says that "3.1" is fine, since that is a valid version. We can view lucene/solr as a separate "new product" since the merger, and that product started as v3.1. I'm ok with that as well, since the goal of Javadoc is not to track svn/git history (we still have repos for that) but to give a hint about which releases a particular class has been present in. PS: When we had the Solr docs in the old wiki, we used to tag features, parameters, etc. with a "since" tag, which was often very useful. I hope that adding these since annotations can bring back some of that. I guess we can also tag new methods, not only classes? > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identify whether a particular class only > comes later than the user's particular version. > For other classes, it may be useful for historical reasons.
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462998#comment-16462998 ] Erick Erickson commented on SOLR-7344: -- Hey, man! I admitted total ignorance! I'm remembering discussions about how Jetty would eventually be replaced back when we initially moved away from the WAR file. I'll totally defer the question to people who know the details. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12308) LISTALIASES should return up to date response
David Smiley created SOLR-12308: --- Summary: LISTALIASES should return up to date response Key: SOLR-12308 URL: https://issues.apache.org/jira/browse/SOLR-12308 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley Assignee: David Smiley The LISTALIASES command might return a stale response due to the default eventual consistency of reads of ZooKeeper. I think if someone calls this command (which generally won't be rapid-fire), they deserve an up-to-date response. This is easily done with a one-liner; patch forthcoming. Returning stale alias info is the only plausible explanation I have for why a recent CI failure for AliasesIntegrationTest.tearDown() failed to detect aliases to be deleted. It calls listAliases to know which aliases exist so it can delete them first. [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1833/] tearDown then calls MiniSolrCloudCluster.deleteAllCollections(), which interestingly grabs a ZkStateReader.createClusterStateWatchersAndUpdate(); perhaps this ought to delete all aliases _as well_, since if any aliases exist, deleting all collections is bound to fail. Should I file a separate issue or just handle this together? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-7.3 - Build # 60 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/60/ 2 tests failed. FAILED: org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection Error Message: Timeout waiting for new leader null Live Nodes: [127.0.0.1:52694_solr, 127.0.0.1:58961_solr, 127.0.0.1:59993_solr] Last available state: DocCollection(collection1//collections/collection1/state.json/14)={ "pullReplicas":"0", "replicationFactor":"3", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node62":{ "core":"collection1_shard1_replica_n61", "base_url":"https://127.0.0.1:34744/solr", "node_name":"127.0.0.1:34744_solr", "state":"down", "type":"NRT"}, "core_node64":{ "core":"collection1_shard1_replica_n63", "base_url":"https://127.0.0.1:58961/solr", "node_name":"127.0.0.1:58961_solr", "state":"active", "type":"NRT"}, "core_node66":{ "core":"collection1_shard1_replica_n65", "base_url":"https://127.0.0.1:52694/solr", "node_name":"127.0.0.1:52694_solr", "state":"active", "type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"3", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Timeout waiting for new leader null Live Nodes: [127.0.0.1:52694_solr, 127.0.0.1:58961_solr, 127.0.0.1:59993_solr] Last available state: DocCollection(collection1//collections/collection1/state.json/14)={ "pullReplicas":"0", "replicationFactor":"3", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node62":{ "core":"collection1_shard1_replica_n61", "base_url":"https://127.0.0.1:34744/solr", "node_name":"127.0.0.1:34744_solr", "state":"down", "type":"NRT"}, "core_node64":{ "core":"collection1_shard1_replica_n63", "base_url":"https://127.0.0.1:58961/solr", "node_name":"127.0.0.1:58961_solr", "state":"active", "type":"NRT"}, "core_node66":{ "core":"collection1_shard1_replica_n65", "base_url":"https://127.0.0.1:52694/solr", "node_name":"127.0.0.1:52694_solr", "state":"active", 
"type":"NRT", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"3", "tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([47275CC5ADF98B96:EF3B407F6FB9BFBC]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269) at org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:189) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at
[jira] [Commented] (LUCENE-8294) KeywordTokenizer hangs with user misconfigured inputs
[ https://issues.apache.org/jira/browse/LUCENE-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462981#comment-16462981 ] Adrien Grand commented on LUCENE-8294: -- Would you like to submit a patch to reject 0 as a buffer size? > KeywordTokenizer hangs with user misconfigured inputs > - > > Key: LUCENE-8294 > URL: https://issues.apache.org/jira/browse/LUCENE-8294 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 2.1 >Reporter: John Doe >Priority: Minor > > When a user configures the bufferSize to be 0, the while loop in > KeywordTokenizer.next() function hangs endlessly. Here is the code snippet. > {code:java} > public KeywordTokenizer(Reader input, int bufferSize) { > super(input); > this.buffer = new char[bufferSize];//bufferSize is misconfigured with 0 > this.done = false; > } > public Token next() throws IOException { > if (!done) { > done = true; > StringBuffer buffer = new StringBuffer(); > int length; > while (true) { > length = input.read(this.buffer); //length is always 0 when the > buffer.size == 0 > if (length == -1) break; > buffer.append(this.buffer, 0, length); > } > String text = buffer.toString(); > return new Token(text, 0, text.length()); > } > return null; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
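A fix along the lines Adrien suggests is a constructor guard. Below is a minimal self-contained sketch, not the actual Lucene class (the class name and the simplified next() signature are illustrative): rejecting a non-positive buffer size up front turns the silent infinite loop into an immediate, debuggable error.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

// Simplified stand-in for KeywordTokenizer demonstrating the proposed guard.
public class GuardedKeywordTokenizer {
    private final Reader input;
    private final char[] buffer;
    private boolean done = false;

    public GuardedKeywordTokenizer(Reader input, int bufferSize) {
        // The guard: a zero-length buffer makes Reader.read(char[]) return 0
        // forever, so the read loop below would never terminate.
        if (bufferSize <= 0) {
            throw new IllegalArgumentException(
                "bufferSize must be > 0, got: " + bufferSize);
        }
        this.input = input;
        this.buffer = new char[bufferSize];
    }

    /** Reads the entire input as one token's text; null on subsequent calls. */
    public String next() throws IOException {
        if (done) {
            return null;
        }
        done = true;
        StringBuilder sb = new StringBuilder();
        int length;
        // With bufferSize > 0 guaranteed, read() makes progress and the loop
        // terminates at end of input (read() returns -1).
        while ((length = input.read(buffer)) != -1) {
            sb.append(buffer, 0, length);
        }
        return sb.toString();
    }
}
```

With this guard, `new GuardedKeywordTokenizer(new StringReader("abc"), 0)` fails fast instead of hanging.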
[jira] [Created] (LUCENE-8294) KeywordTokenizer hangs with user misconfigured inputs
John Doe created LUCENE-8294: Summary: KeywordTokenizer hangs with user misconfigured inputs Key: LUCENE-8294 URL: https://issues.apache.org/jira/browse/LUCENE-8294 Project: Lucene - Core Issue Type: Bug Affects Versions: 2.1 Reporter: John Doe When a user configures the bufferSize to be 0, the while loop in KeywordTokenizer.next() function hangs endlessly. Here is the code snippet. {code:java} public KeywordTokenizer(Reader input, int bufferSize) { super(input); this.buffer = new char[bufferSize];//bufferSize is misconfigured with 0 this.done = false; } public Token next() throws IOException { if (!done) { done = true; StringBuffer buffer = new StringBuffer(); int length; while (true) { length = input.read(this.buffer); //length is always 0 when the buffer.size == 0 if (length == -1) break; buffer.append(this.buffer, 0, length); } String text = buffer.toString(); return new Token(text, 0, text.length()); } return null; } {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462967#comment-16462967 ] Mark Miller commented on SOLR-7344: --- I don't know that we care about switching to Netty. Basically it's just a lower level project, in many ways that probably means more work to replace what we have. What do you think it offers that we can't get with Jetty? > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12209) add Paging Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462966#comment-16462966 ] Lucene/Solr QA commented on SOLR-12209: --- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s{color} | {color:red} SOLR-12209 does not apply to master. Rebase required? Wrong Branch? See https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12209 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921768/0001-added-skip-and-limit-stream-decorators.patch | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/74/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > add Paging Streaming Expression > --- > > Key: SOLR-12209 > URL: https://issues.apache.org/jira/browse/SOLR-12209 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 7.3 >Reporter: mosh >Priority: Major > Attachments: 0001-added-skip-and-limit-stream-decorators.patch > > > Currently the closest streaming expression that allows some sort of > pagination is top. > I propose we add a new streaming expression, based on the > RankedStream class, to add offset to the stream. Currently it can only be done > in code by reading the stream until the desired offset is reached. > The new expression will be used as such: > {{paging(rows=3, search(collection1, q="*:*", qt="/export", > fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", > start=100)}} > {{this will offset the returned stream by 100 documents}} > > [~joel.bernstein] what do you think? 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
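The skip-and-limit decoration proposed in SOLR-12209 can be illustrated independently of the streaming-expression machinery. A toy sketch (the class and constructor are hypothetical, not the actual TupleStream API): skip `start` items from an ordered source, then emit at most `rows` items.

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Toy illustration of paging as a decorator over an ordered stream:
// skip `start` elements, then emit at most `rows` elements.
public class PagingIterator<T> implements Iterator<T> {
    private final Iterator<T> inner;
    private int remaining;

    public PagingIterator(Iterator<T> inner, int start, int rows) {
        // Drain the offset up front; a real stream decorator would do this
        // lazily as the underlying stream is read.
        for (int i = 0; i < start && inner.hasNext(); i++) {
            inner.next();
        }
        this.inner = inner;
        this.remaining = rows;
    }

    @Override
    public boolean hasNext() {
        return remaining > 0 && inner.hasNext();
    }

    @Override
    public T next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        remaining--;
        return inner.next();
    }
}
```

This mirrors `paging(rows=3, search(...), start=100)`: the decorator still has to read and discard the skipped tuples, which is why offset-based paging over an export stream stays linear in `start`.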
[JENKINS] Lucene-Solr-Tests-7.x - Build # 595 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/595/ 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest Error Message: ObjectTracker found 2 object(s) that were not released!!! [Overseer, Overseer] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at java.lang.Thread.run(Thread.java:748) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at 
java.lang.Thread.run(Thread.java:748) Stack Trace: java.lang.AssertionError: ObjectTracker found 2 object(s) that were not released!!! [Overseer, Overseer] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at java.lang.Thread.run(Thread.java:748) org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:851) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2081) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([1A65AC88F805A465]:0) at 
org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:303) at sun.reflect.GeneratedMethodAccessor49.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 594 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/594/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC 16 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration Error Message: did not finish processing in time Stack Trace: java.lang.AssertionError: did not finish processing in time at __randomizedtesting.SeedInfo.seed([E07E995A37C6CBCB:B3C7DBEAD5D75E31]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: number of ops expected:<2> but was:<1> Stack Trace: java.lang.AssertionError: number of ops expected:<2> but was:<1> at
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462922#comment-16462922 ] Hrishikesh Gadre commented on SOLR-7344: [~erickerickson] {quote}If this much work is going on, is it time to consider ditching Jetty and replacing with Netty or whatever? {quote} No. Synchronous RPCs are the root cause of the distributed deadlock issue, and no matter what technology we use (Jetty, Netty, Tomcat) it will not go away until we either fix it or work around it. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12304) Interesting Terms parameter is ignored by MLT Component
[ https://issues.apache.org/jira/browse/SOLR-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462903#comment-16462903 ] David Smiley commented on SOLR-12304: - Just curious, but in your opinion is there any purpose to MLT component & handler anymore now that we have an MLT query parser? > Interesting Terms parameter is ignored by MLT Component > --- > > Key: SOLR-12304 > URL: https://issues.apache.org/jira/browse/SOLR-12304 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Affects Versions: 7.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12304.patch, SOLR-12304.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently the More Like This component just ignores the mlt.InterestingTerms > parameter ( which is usable by the MoreLikeThisHandler). > Scope of this issue is to fix the bug and add related tests ( which will > succeed after the fix ) > *N.B.* MoreLikeThisComponent and MoreLikeThisHandler are very coupled and the > tests for the MoreLikeThisHandler are intersecting the MoreLikeThisComponent > ones . > It is out of scope for this issue any consideration or refactor of that. > Other issues will follow. > *N.B.* out of scope for this issue is the distributed case, which is much > more complicated and requires much deeper investigations -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462897#comment-16462897 ] Erick Erickson commented on SOLR-7344: -- Gotta ask the question, while admitting total ignorance of the scope: If this much work is going on, is it time to consider ditching Jetty and replacing with Netty or whatever? > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462840#comment-16462840 ] Hrishikesh Gadre edited comment on SOLR-7344 at 5/3/18 5:45 PM: [~markrmil...@gmail.com] {quote}Is this deadlock even an issue anymore? We are Jetty 9 now and it only offers NIO connectors (so long thread per request). AFAIK that means requests waiting on IO don't hold a thread. {quote} In order to fully utilize NIO connector capability, the application needs to use asynchronous servlet APIs (provided as part of Servlet 3 spec). Here is a good tutorial that you can take a look: https://docs.oracle.com/javaee/7/tutorial/servlets012.htm Is it possible for us to use this feature for SOLR? Sure, but it will take a major rewrite of core parts of SOLR cloud (e.g. distributed querying, replication, remote queries etc.) as these components synchronously wait for the results of RPC calls. The servlet-request scheduler proposed in this Jira ([https://github.com/hgadre/servletrequest-scheduler)] internally uses servlet 3 async API to queue up the requests overflowing the thread-pool capacity, ensuring that distributed deadlocks are avoided without requiring *any* change in the SOLR cloud functionality. was (Author: hgadre): [~markrmil...@gmail.com] {quote}Is this deadlock even an issue anymore? We are Jetty 9 now and it only offers NIO connectors (so long thread per request). AFAIK that means requests waiting on IO don't hold a thread. {quote} In order to fully utilize NIO connector capability, the application needs to use asynchronous servlet APIs (provided as part of Servlet 3 spec). Here is a good tutorial that you can take a look: [https://www.javacodegeeks.com/2013/08/async-servlet-feature-of-servlet-3.html] Is it possible for us to use this feature for SOLR? Sure, but it will take a major rewrite of core parts of SOLR cloud (e.g. distributed querying, replication, remote queries etc.) 
as these components synchronously wait for the results of RPC calls. The servlet-request scheduler proposed in this Jira ([https://github.com/hgadre/servletrequest-scheduler)] internally uses servlet 3 async API to queue up the requests overflowing the thread-pool capacity, ensuring that distributed deadlocks are avoided without requiring *any* change in the SOLR cloud functionality. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462840#comment-16462840 ] Hrishikesh Gadre edited comment on SOLR-7344 at 5/3/18 5:43 PM: [~markrmil...@gmail.com] {quote}Is this deadlock even an issue anymore? We are Jetty 9 now and it only offers NIO connectors (so long thread per request). AFAIK that means requests waiting on IO don't hold a thread. {quote} In order to fully utilize NIO connector capability, the application needs to use asynchronous servlet APIs (provided as part of Servlet 3 spec). Here is a good tutorial that you can take a look: [https://www.javacodegeeks.com/2013/08/async-servlet-feature-of-servlet-3.html] Is it possible for us to use this feature for SOLR? Sure, but it will take a major rewrite of core parts of SOLR cloud (e.g. distributed querying, replication, remote queries etc.) as these components synchronously wait for the results of RPC calls. The servlet-request scheduler proposed in this Jira ([https://github.com/hgadre/servletrequest-scheduler)] internally uses servlet 3 async API to queue up the requests overflowing the thread-pool capacity, ensuring that distributed deadlocks are avoided without requiring *any* change in the SOLR cloud functionality. was (Author: hgadre): [~markrmil...@gmail.com] {quote}Is this deadlock even an issue anymore? We are Jetty 9 now and it only offers NIO connectors (so long thread per request). AFAIK that means requests waiting on IO don't hold a thread. {quote} In order to fully utilize NIO connector capability, the application needs to use asynchronous servlet APIs (provided as part of Servlet 3 spec). Here is a good tutorial that you can take a look: [https://plumbr.io/blog/java/how-to-use-asynchronous-servlets-to-improve-performance] Is it possible for us to use this feature for SOLR? Sure, but it will take a major rewrite of core parts of SOLR cloud (e.g. distributed querying, replication, remote queries etc.) 
as these components synchronously wait for the results of RPC calls. The servlet-request scheduler proposed in this Jira ([https://github.com/hgadre/servletrequest-scheduler)] internally uses servlet 3 async API to queue up the requests overflowing the thread-pool capacity, ensuring that distributed deadlocks are avoided without requiring *any* change in the SOLR cloud functionality. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7344) Allow Jetty thread pool limits while still avoiding distributed deadlock.
[ https://issues.apache.org/jira/browse/SOLR-7344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462840#comment-16462840 ] Hrishikesh Gadre commented on SOLR-7344: [~markrmil...@gmail.com] {quote}Is this deadlock even an issue anymore? We are Jetty 9 now and it only offers NIO connectors (so long thread per request). AFAIK that means requests waiting on IO don't hold a thread. {quote} In order to fully utilize NIO connector capability, the application needs to use asynchronous servlet APIs (provided as part of the Servlet 3 spec). Here is a good tutorial you can take a look at: [https://plumbr.io/blog/java/how-to-use-asynchronous-servlets-to-improve-performance] Is it possible for us to use this feature for SOLR? Sure, but it will take a major rewrite of core parts of SOLR cloud (e.g. distributed querying, replication, remote queries etc.) as these components synchronously wait for the results of RPC calls. The servlet-request scheduler proposed in this Jira ([https://github.com/hgadre/servletrequest-scheduler]) internally uses the servlet 3 async API to queue up the requests overflowing the thread-pool capacity, ensuring that distributed deadlocks are avoided without requiring *any* change in the SOLR cloud functionality. > Allow Jetty thread pool limits while still avoiding distributed deadlock. > - > > Key: SOLR-7344 > URL: https://issues.apache.org/jira/browse/SOLR-7344 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Mark Miller >Priority: Major > Attachments: SOLR-7344.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
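The queueing idea described above is easier to see outside Solr: instead of a caller blocking while waiting for a worker thread (which is what lets distributed deadlock form), overflow work is parked in a queue and picked up as threads free. A self-contained toy model using a hard-capped `ThreadPoolExecutor` (this is an illustration of the principle only, not the proposed servlet-request scheduler, which uses the Servlet 3 async API):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the overflow-queueing idea: worker threads are hard-capped,
// and excess submissions wait in a queue rather than tying up the submitter.
public class OverflowQueueDemo {
    // Runs n no-op tasks through a pool capped at `threads` workers,
    // parking the overflow in a queue; returns how many tasks completed.
    public static int runTasks(int n, int threads) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            threads, threads, 0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>()); // overflow waits here, not on a thread

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            // execute() returns immediately even when all workers are busy,
            // so the submitter never blocks waiting for pool capacity.
            pool.execute(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completed.get();
    }
}
```

The design point: bounding the thread pool caps resource usage, while the queue absorbs bursts, so capping threads no longer risks the deadlock that a blocking hand-off between mutually dependent nodes would create.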
[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1837 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1837/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 13 tests failed. FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: should have fired an event Stack Trace: java.lang.AssertionError: should have fired an event at __randomizedtesting.SeedInfo.seed([87F432B2C89BB98E:E43F04305154CAA3]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNotNull(Assert.java:526) at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:184) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger Error Message: should have fired an event Stack Trace: java.lang.AssertionError: should have fired an event at
[jira] [Commented] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request
[ https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462794#comment-16462794 ] Shawn Heisey commented on SOLR-12303: - I do not know what this subquery feature you're referring to even is. Apologies if I got the wrong idea. > Subquery Doc transform doesn't inherit path from original request > - > > Key: SOLR-12303 > URL: https://issues.apache.org/jira/browse/SOLR-12303 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Munendra S N >Priority: Major > > {code:java} > localhost:8983/solr/k_test/search?sort=score desc,uniqueId > desc=AND=json={!parent which=parent_field:true score=max}({!edismax > v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax > qf=parentId v=$row.uniqueId}=1 > {code} > For this request, even though the path is */search*, the subquery request > would be fired on handler */select*. > Subquery request should inherit the parent request handler and there should > be an option to override this behavior. (option to override is already > available by specifying *qt*) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8293) Ensure only hard deletes are carried over in a merge
[ https://issues.apache.org/jira/browse/LUCENE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462795#comment-16462795 ] Michael McCandless commented on LUCENE-8293: +1 I like how you tap into segment warmer in the test cases to sneak in a "delete during merge"! > Ensure only hard deletes are carried over in a merge > > > Key: LUCENE-8293 > URL: https://issues.apache.org/jira/browse/LUCENE-8293 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8293.patch, LUCENE-8293.patch > > > Today we carry over hard deletes based on the SegmentReaders liveDocs. > This is not correct if soft-deletes are used especially with retention > policies. If a soft delete is added while a segment is merged the document > might end up hard deleted in the target segment. This isn't necessarily a > correctness issue but causes unnecessary writes of hard-deletes. The > biggest > issue here is that we assert that previously deleted documents are still > deleted > in the live-docs we apply and that might be violated by the retention > policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12299) More Like This Params Refactor
[ https://issues.apache.org/jira/browse/SOLR-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462779#comment-16462779 ] Alessandro Benedetti commented on SOLR-12299: - It is required to merge 12304 first. This patch is built on top of 12304. > More Like This Params Refactor > -- > > Key: SOLR-12299 > URL: https://issues.apache.org/jira/browse/SOLR-12299 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12299.patch > > Time Spent: 10m > Remaining Estimate: 0h > > More Like This can be refactored to improve the code readability, test > coverage and maintenance. > Scope of this Jira issue is to start the More Like This refactor from the > More Like This Params. > This Jira will not improve the current More Like This but just keep the same > functionality with a refactored code. > Other Jira issues will follow improving the overall code readability, test > coverage and maintenance. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12306) JDBC stream source throws NPE when field in db is NULL
[ https://issues.apache.org/jira/browse/SOLR-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462777#comment-16462777 ] Lucene/Solr QA commented on SOLR-12306: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 18s{color} | {color:green} solrj in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-12306 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12921738/0001-JDBCStream-check-for-null.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 9b26108 | | ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 | | Default Java | 1.8.0_172 | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/73/testReport/ | | modules | C: solr/solrj U: solr/solrj | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/73/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > JDBC stream source throws NPE when field in db is NULL > -- > > Key: SOLR-12306 > URL: https://issues.apache.org/jira/browse/SOLR-12306 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 7.2, 7.3 >Reporter: mosh >Priority: Major > Attachments: 0001-JDBCStream-check-for-null.patch > > > The JDBC stream source throws a NullPointerException when reading database > values which equal null. > This occurs because there is no null check when creating solr document from > the query, resulting fields with a null value. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
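The fix described for SOLR-12306 amounts to a null guard when copying JDBC column values into the document. A minimal sketch of the idea (hypothetical helper, not the actual JDBCStream code from the patch):

```java
import java.util.HashMap;
import java.util.Map;

public class NullSafeDoc {
    /** Copy a column value into the document only when it is non-null,
     *  mirroring the null check the patch adds so NULL database values
     *  no longer produce document fields with null values (and NPEs). */
    static void putIfNotNull(Map<String, Object> doc, String field, Object value) {
        if (value != null) {
            doc.put(field, value);
        }
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new HashMap<>();
        putIfNotNull(doc, "name", "solr");
        putIfNotNull(doc, "age", null); // NULL column: skipped instead of stored
        System.out.println(doc.size()); // prints 1
    }
}
```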
[jira] [Commented] (SOLR-12299) More Like This Params Refactor
[ https://issues.apache.org/jira/browse/SOLR-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462771#comment-16462771 ] Alessandro Benedetti commented on SOLR-12299: - https://patch-diff.githubusercontent.com/raw/apache/lucene-solr/pull/369.patch > More Like This Params Refactor > -- > > Key: SOLR-12299 > URL: https://issues.apache.org/jira/browse/SOLR-12299 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Reporter: Alessandro Benedetti >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > More Like This can be refactored to improve the code readability, test > coverage and maintenance. > Scope of this Jira issue is to start the More Like This refactor from the > More Like This Params. > This Jira will not improve the current More Like This but just keep the same > functionality with a refactored code. > Other Jira issues will follow improving the overall code readability, test > coverage and maintenance. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21951 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21951/ Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger Error Message: expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([5A7E3A8C903C22B8:39B50C0E09F35195]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 14466 lines...] [junit4] Suite:
[GitHub] lucene-solr pull request #369: SOLR-12299
GitHub user alessandrobenedetti opened a pull request: https://github.com/apache/lucene-solr/pull/369 SOLR-12299 This Pull Request is about the first step in the More Like This refactor. The overall scope of the refactor is to improve the test coverage, readability and maintenance of the More Like This module. Scope of this patch is to extract the More Like This parameters in a cohesive and tested class. Other patches will follow. You can merge this pull request into a Git repository by running: $ git pull https://github.com/SeaseLtd/lucene-solr SOLR-12299 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/369.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #369 commit d0cd13e763e77bfb39c8e1a657e968436122a6a6 Author: Alessandro Benedetti Date: 2018-05-01T12:49:31Z [SOLR-12299] More Like This Query Params refactor + tests commit e944e83b137527d5128a56be0253d25f4db9395f Author: Alessandro Benedetti Date: 2018-05-02T16:39:53Z [SOLR-12304] More Like This component interesting term fix + tests commit c8a591bed313630c7a85f88356854ef84962af11 Author: Alessandro Benedetti Date: 2018-05-03T14:06:08Z Merge branch 'SOLR-12304' into SOLR-12299 # Conflicts: # solr/core/src/java/org/apache/solr/handler/component/MoreLikeThisComponent.java commit 40705fb6e5a25cc18767eb3441f33e4cd016f6af Author: Alessandro Benedetti Date: 2018-05-03T16:44:20Z [SOLR-12299] More Like This Parameters + tests --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-repro - Build # 574 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/574/ [...truncated 31 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/594/consoleText [repro] Revision: 0c89db842506e1dc9804723ebcf6d99b3947ee3e [repro] Repro line: ant test -Dtestcase=SearchRateTriggerIntegrationTest -Dtests.method=testDeleteNode -Dtests.seed=4968F19A2BCD5AA4 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-ME -Dtests.timezone=Europe/Vilnius -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 9b261087abcd7ef350e49d4fa4e72e075a135799 [repro] git fetch [...truncated 2 lines...] [repro] git checkout 0c89db842506e1dc9804723ebcf6d99b3947ee3e [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/core [repro] SearchRateTriggerIntegrationTest [repro] ant compile-test [...truncated 3316 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.SearchRateTriggerIntegrationTest" -Dtests.showOutput=onerror -Dtests.seed=4968F19A2BCD5AA4 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-ME -Dtests.timezone=Europe/Vilnius -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 16798 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 4/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest [repro] git checkout 9b261087abcd7ef350e49d4fa4e72e075a135799 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments
[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462618#comment-16462618 ] Michael McCandless commented on LUCENE-7976: Thanks [~erickerickson]; I will try to review this soon. Maybe [~simonw] can also have a look. > Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of > very large segments > - > > Key: LUCENE-7976 > URL: https://issues.apache.org/jira/browse/LUCENE-7976 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, > LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch > > > We're seeing situations "in the wild" where there are very large indexes (on > disk) handled quite easily in a single Lucene index. This is particularly > true as features like docValues move data into MMapDirectory space. The > current TMP algorithm allows on the order of 50% deleted documents as per a > dev list conversation with Mike McCandless (and his blog here: > https://www.elastic.co/blog/lucenes-handling-of-deleted-documents). > Especially in the current era of very large indexes in aggregate (think many > TB), solutions like "you need to distribute your collection over more shards" > become very costly. Additionally, the tempting "optimize" button exacerbates > the issue since once you form, say, a 100G segment (by > optimizing/forceMerging) it is not eligible for merging until 97.5G of the > docs in it are deleted (current default 5G max segment size). > The proposal here would be to add a new parameter to TMP, something like > (no, that's not a serious name, suggestions > welcome) which would default to 100 (or the same behavior we have now). 
> So if I set this parameter to, say, 20%, and the max segment size stays at > 5G, the following would happen when segments were selected for merging: > > any segment with > 20% deleted documents would be merged or rewritten NO > > MATTER HOW LARGE. There are two cases, > >> the segment has < 5G "live" docs. In that case it would be merged with > >> smaller segments to bring the resulting segment up to 5G. If no smaller > >> segments exist, it would just be rewritten > >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). > >> It would be rewritten into a single segment removing all deleted docs no > >> matter how big it is to start. The 100G example above would be rewritten > >> to an 80G segment for instance. > Of course this would lead to potentially much more I/O which is why the > default would be the same behavior we see now. As it stands now, though, > there's no way to recover from an optimize/forceMerge except to re-index from > scratch. We routinely see 200G-300G Lucene indexes at this point "in the > wild" with 10s of shards replicated 3 or more times. And that doesn't even > include having these over HDFS. > Alternatives welcome! Something like the above seems minimally invasive. A > new merge policy is certainly an alternative. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
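The arithmetic behind the 100G example above is worth making explicit. Under the simplifying assumption that TMP only considers a segment for merging once its live data would fit within half of the max segment size (a simplified model, not the actual TieredMergePolicy code), the deletes required before a force-merged segment becomes eligible again are:

```java
public class MergeEligibility {
    /** Bytes that must be deleted before a segment of totalBytes becomes
     *  merge-eligible, given maxSegmentBytes. Simplified model: eligible
     *  once live bytes <= maxSegmentBytes / 2. */
    static long deletesNeeded(long totalBytes, long maxSegmentBytes) {
        long liveAllowed = maxSegmentBytes / 2;
        return Math.max(0, totalBytes - liveAllowed);
    }

    public static void main(String[] args) {
        long g = 1L << 30; // one gigabyte
        // 100G force-merged segment, 5G max segment size:
        // 97.5G of docs must be deleted before TMP will touch it again
        System.out.println(deletesNeeded(100 * g, 5 * g) / (double) g); // prints 97.5
    }
}
```

This is exactly why the proposal argues for allowing singleton rewrites of oversized segments: without one, the only exit from a 100G segment is waiting for 97.5% of it to be deleted or reindexing from scratch.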
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462616#comment-16462616 ] Alexandre Rafalovitch commented on SOLR-11490: -- Ok. But then I really don't get your proposal. What would you be ok with tagging - for example - PatternTokenizerFactory? "3.1" is not valid, "pre-3.1" you are blocking, and nothing at all is confusing to newbies, in my opinion, so that's what I am trying to avoid. Do we have something between us that is not -1? > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identify whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8293) Ensure only hard deletes are carried over in a merge
[ https://issues.apache.org/jira/browse/LUCENE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Willnauer updated LUCENE-8293: Attachment: LUCENE-8293.patch > Ensure only hard deletes are carried over in a merge > > > Key: LUCENE-8293 > URL: https://issues.apache.org/jira/browse/LUCENE-8293 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8293.patch, LUCENE-8293.patch > > > Today we carry over hard deletes based on the SegmentReaders liveDocs. > This is not correct if soft-deletes are used especially with retention > policies. If a soft delete is added while a segment is merged the document > might end up hard deleted in the target segment. This isn't necessarily a > correctness issue but causes unnecessary writes of hard-deletes. The > biggest > issue here is that we assert that previously deleted documents are still > deleted > in the live-docs we apply and that might be violated by the retention > policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8293) Ensure only hard deletes are carried over in a merge
[ https://issues.apache.org/jira/browse/LUCENE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462609#comment-16462609 ] Simon Willnauer commented on LUCENE-8293: - [~mikemccand] I added another test and fixed some corner cases with soft-deletes. Can you take another look? > Ensure only hard deletes are carried over in a merge > > > Key: LUCENE-8293 > URL: https://issues.apache.org/jira/browse/LUCENE-8293 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8293.patch, LUCENE-8293.patch > > > Today we carry over hard deletes based on the SegmentReaders liveDocs. > This is not correct if soft-deletes are used especially with retention > policies. If a soft delete is added while a segment is merged the document > might end up hard deleted in the target segment. This isn't necessarily a > correctness issue but causes unnecessary writes of hard-deletes. The > biggest > issue here is that we assert that previously deleted documents are still > deleted > in the live-docs we apply and that might be violated by the retention > policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462592#comment-16462592 ] Robert Muir commented on SOLR-11490: I am against pre-3.1 or any other invalid versions in since tags. I'm gonna quote myself just to re-iterate what I already said. {quote} Just like how you marked HMMChineseTokenizerFactory as 4.8.0, that's fine. But lineage-wise (look at svn for that) you'd see it's been around since 2.9, it was just named something different (SmartChinese). {quote} > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identify whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
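Concretely, the tag being debated is a class-level Javadoc annotation. An illustrative snippet (not the actual Lucene source) of the approach Robert refers to, where a renamed class gets the current name's first release while the lineage is noted in prose:

```java
/**
 * Factory for an HMM-based Chinese tokenizer.
 *
 * <p>Present under this name since 4.8.0; per the discussion, the
 * underlying analyzer dates back to 2.9 under the name "SmartChinese".
 *
 * @since 4.8.0
 */
public class HMMChineseTokenizerFactory {
    // factory implementation elided in this sketch
}
```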
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462585#comment-16462585 ] Alexandre Rafalovitch commented on SOLR-11490: -- 19 Oct, me: "Sure. Let's do 3.1+ as a joint tag and ignore anything before that. I'll roll-back the earlier ones." Perhaps I misunderstood. I thought it makes no sense to tag something that's been in lucene 1.x as 3.1, and read it that you meant to tag it not at all. So, my proposal now is to tag it "pre-3.1". Maybe we are on the same page. > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identify whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 613 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/613/ Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
  at __randomizedtesting.SeedInfo.seed([E1083916E980665:6DDBB513F7577548]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.failNotEquals(Assert.java:647)
  at org.junit.Assert.assertEquals(Assert.java:128)
  at org.junit.Assert.assertEquals(Assert.java:472)
  at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:564)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at java.base/java.lang.Thread.run(Thread.java:844)

Build Log:
[...truncated 12612 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 318483 INFO
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+5) - Build # 1842 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1842/ Java: 64bit/jdk-11-ea+5 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
  at __randomizedtesting.SeedInfo.seed([AEFF4AB868AF03E7:CD347C3AF16070CA]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.failNotEquals(Assert.java:647)
  at org.junit.Assert.assertEquals(Assert.java:128)
  at org.junit.Assert.assertEquals(Assert.java:472)
  at org.junit.Assert.assertEquals(Assert.java:456)
  at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:564)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at java.base/java.lang.Thread.run(Thread.java:841)

Build Log:
[...truncated 1799 lines...]
   [junit4] JVM J1: stderr was not empty, see:
[jira] [Commented] (SOLR-12298) Index Full nested document Hierarchy For Queries (umbrella issue)
[ https://issues.apache.org/jira/browse/SOLR-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462570#comment-16462570 ] David Smiley commented on SOLR-12298: - It'll be exciting to see Solr's nested document support get improved! * You said the JSON loader could have changes but then wouldn't that limit the benefit to only that update method? Why not an URP instead? * Will {{\_path\_}} have a chain of uniqueKey IDs from parent to child? You didn't specify what it is. Or, after re-examining Anshum's LSR presentation you referenced, is this a list of the name of the entity type at each level (e.g. "post.comment.reply.keywords" etc.)? If it is some sort of entity name, then this name needs to be put into each child document so that this type path can be constructed? Perhaps these special fields should all start with "nest" so as to clearly distinguish these for support of nested documents? e.g. nestParent, nestLevel, nestPath (with leading & trailing underscores; escaping in JIRA is a pain :-) ) > Index Full nested document Hierarchy For Queries (umbrella issue) > - > > Key: SOLR-12298 > URL: https://issues.apache.org/jira/browse/SOLR-12298 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: mosh >Priority: Major > > Solr ought to have the ability to index deeply nested objects, while storing > the original document hierarchy. > Currently the client has to index the child document's full path and level > to manually reconstruct the original document structure, since the children > are flattened and returned in the reserved "__childDocuments__" key. > Ideally you could index a nested document, having Solr transparently add the > required fields while providing a document transformer to rebuild the > original document's hierarchy. > > This issue is an umbrella issue for the particular tasks that will make it > all happen – either subtasks or issue linking. 
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462549#comment-16462549 ] Robert Muir commented on SOLR-11490: {quote} We had agreed that pre-3.1 classes will get no since tag. {quote} Where was this? I see consensus above to simply label these as "3.1". > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identifying whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462532#comment-16462532 ] Erick Erickson commented on SOLR-8207: -- Jan: I don't know whether you'll do anything with the "segments.js" file, the link that shows each segment and has a shaded section of each bar shows the ratio for deleted docs. It's wildly out of proportion and I'm changing it as part of LUCENE-7976. Here's the change in case you are also changing that bit of code, line 44 in the current segments.js file: segment.deletedDocSize = Math.floor((segment.delCount / (segment.delCount + segment.totalSize)) * segment.totalSize); should be segment.deletedDocSize = Math.floor((segment.delCount / segment.size) * segment.totalSize); It's not a big deal, I can reconcile if there are merge conflicts and you get there first, just FYI. FWIW, Erick > Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: node-compact.png, node-details.png, node-hostcolumn.png, > node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png > > Time Spent: 10m > Remaining Estimate: 0h > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). 
> Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. It would also be > possible to filter nodes by status: "show me only up nodes", "show me nodes > that are in trouble", "show me nodes that have leaders on them", etc. > Presumably, if we have APIs to support it, we might have a "decommission > node" option, that would ensure that no replicas on this node are leaders, > and then remove all replicas from the node, ready for it to be removed from > the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
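Erick's segments.js fix above is easiest to see with concrete numbers. The sketch below (plain Java with made-up segment values, purely for illustration) contrasts the two formulas, assuming `delCount` and `size` are document counts while `totalSize` is the segment's size in bytes:

```java
public class DeletedDocSizeDemo {
    public static void main(String[] args) {
        double delCount = 250;   // deleted docs in the segment
        double size = 1000;      // total docs in the segment
        double totalSize = 4000; // segment size in bytes

        // Old formula: divides a doc count by (docs + bytes), mixing units,
        // so the shaded "deleted" portion of the bar comes out far too small.
        double oldValue = Math.floor((delCount / (delCount + totalSize)) * totalSize);

        // Corrected formula: fraction of deleted docs, scaled to byte size.
        double newValue = Math.floor((delCount / size) * totalSize);

        System.out.println(oldValue); // 235.0  -> under 6% of the bar
        System.out.println(newValue); // 1000.0 -> 25%, matching 250 of 1000 docs
    }
}
```

With 25% of the documents deleted, the old formula shades less than 6% of the bar, which is the "wildly out of proportion" effect described above.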
[jira] [Commented] (LUCENE-8293) Ensure only hard deletes are carried over in a merge
[ https://issues.apache.org/jira/browse/LUCENE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462527#comment-16462527 ] Simon Willnauer commented on LUCENE-8293: - [~erickerickson] no it doesn't > Ensure only hard deletes are carried over in a merge > > > Key: LUCENE-8293 > URL: https://issues.apache.org/jira/browse/LUCENE-8293 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8293.patch > > > Today we carry over hard deletes based on the SegmentReaders liveDocs. > This is not correct if soft-deletes are used especially with rentention > policies. If a soft delete is added while a segment is merged the document > might end up hard deleted in the target segment. This isn't necessarily a > correctness issue but causes unnecessary writes of hard-deletes. The > biggest > issue here is that we assert that previously deleted documents are still > deleted > in the live-docs we apply and that might be violated by the retention > policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462522#comment-16462522 ] Jan Høydahl commented on SOLR-8207: --- A small refactor: * Added a separate "host" column for host-specific details * If a host has multiple nodes, the host-info will span those rows. This gives a very good overview for those that run multiple nodes per host. * Now the node column only shows port/context, jvm & solr version, JVM uptime * The load column is removed and instead put as detail on host column * You can expand/collapse each node individually, expanding details on host level also expands first node (since it's same html table row) !node-hostcolumn.png|width=900! I'm uncertain about whether the CPU column could also be moved to host-level, but it says it is per-JVM so I keep it per-node for now. > Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: node-compact.png, node-details.png, node-hostcolumn.png, > node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png > > Time Spent: 10m > Remaining Estimate: 0h > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). 
> Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. It would also be > possible to filter nodes by status: "show me only up nodes", "show me nodes > that are in trouble", "show me nodes that have leaders on them", etc. > Presumably, if we have APIs to support it, we might have a "decommission > node" option, that would ensure that no replicas on this node are leaders, > and then remove all replicas from the node, ready for it to be removed from > the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11490) Add @since javadoc tags to the interesting Solr/Lucene classes
[ https://issues.apache.org/jira/browse/SOLR-11490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462523#comment-16462523 ] Alexandre Rafalovitch commented on SOLR-11490: -- Yes, I will mark it resolved and come back to it later in more targeted groups. The only issue this tagging created is that there will always be new classes that are not (yet) tagged. And they will look identical as ones not tagged because they are pre-3.1 We had agreed that pre-3.1 classes will get no since tag. But perhaps I can retag them all "pre-3.1". This way only the newest classes in the particular category will be untagged. Obviously, this mostly matters to the Lucene/Solr newbies, but that's exactly my target audience with this JIRA. [~rcmuir] - is that ok with you to have a common-joint historical tag instead of no tag at all? I just want to resolve this in this case, as it was more common, to have the discussion finalized in one place. > Add @since javadoc tags to the interesting Solr/Lucene classes > -- > > Key: SOLR-11490 > URL: https://issues.apache.org/jira/browse/SOLR-11490 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Alexandre Rafalovitch >Assignee: Alexandre Rafalovitch >Priority: Minor > > As per the discussion on the dev list, it may be useful to add Javadoc since > tags to significant (or even all) Java files. > For user-facing files (such as analyzers, URPs, stream evaluators, etc) it > would be useful when trying to identifying whether a particular class only > comes later than user's particular version. > For other classes, it may be useful for historical reasons. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-8207: -- Attachment: node-hostcolumn.png > Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: node-compact.png, node-details.png, node-hostcolumn.png, > node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png > > Time Spent: 10m > Remaining Estimate: 0h > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). > Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. It would also be > possible to filter nodes by status: "show me only up nodes", "show me nodes > that are in trouble", "show me nodes that have leaders on them", etc. 
> Presumably, if we have APIs to support it, we might have a "decommission > node" option, that would ensure that no replicas on this node are leaders, > and then remove all replicas from the node, ready for it to be removed from > the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-master - Build # 2516 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2516/

2 tests failed.

FAILED: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
  at __randomizedtesting.SeedInfo.seed([608DFD5AAF163AD9:346CBD836D949F4]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.junit.Assert.assertNull(Assert.java:551)
  at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at java.lang.Thread.run(Thread.java:748)

FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
  at __randomizedtesting.SeedInfo.seed([608DFD5AAF163AD9:346CBD836D949F4]:0)
[jira] [Commented] (LUCENE-8293) Ensure only hard deletes are carried over in a merge
[ https://issues.apache.org/jira/browse/LUCENE-8293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462508#comment-16462508 ] Erick Erickson commented on LUCENE-8293: Related question: Does this have any implications for TieredMergePolicy? In particular TMP relies on: IndexWriter.numDeletesToMerge(info); SegmentCommitInfo.info.maxDoc() in order to score documents to pass off to the merging code. I'm not worried about the nuts and bolts of merging you're addressing here, mostly whether IndexWriter.numDeletesToMerge(info); will continue to reflect the number of docs that will be merged away. > Ensure only hard deletes are carried over in a merge > > > Key: LUCENE-8293 > URL: https://issues.apache.org/jira/browse/LUCENE-8293 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 7.4, master (8.0) >Reporter: Simon Willnauer >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8293.patch > > > Today we carry over hard deletes based on the SegmentReaders liveDocs. > This is not correct if soft-deletes are used especially with rentention > policies. If a soft delete is added while a segment is merged the document > might end up hard deleted in the target segment. This isn't necessarily a > correctness issue but causes unnecessary writes of hard-deletes. The > biggest > issue here is that we assert that previously deleted documents are still > deleted > in the live-docs we apply and that might be violated by the retention > policy. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12209) add Paging Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mosh updated SOLR-12209: Component/s: streaming expressions > add Paging Streaming Expression > --- > > Key: SOLR-12209 > URL: https://issues.apache.org/jira/browse/SOLR-12209 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 7.3 >Reporter: mosh >Priority: Major > Attachments: 0001-added-skip-and-limit-stream-decorators.patch > > > Currently the closest streaming expression that allows some sort of > pagination is top. > I propose we add a new streaming expression, which is based on the > RankedStream class to add offset to the stream. currently it can only be done > in code by reading the stream until the desired offset is reached. > The new expression will be used as such: > {{paging(rows=3, search(collection1, q="*:*", qt="/export", > fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", > start=100)}} > {{this will offset the returned stream by 100 documents}} > > [~joel.bernstein] what to you think? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462501#comment-16462501 ] David Smiley commented on LUCENE-8292: -- _Any_ use of a delegating wrapper is arguably "trappy" in the sense that you need to be mindful of what you should and should not override to do whatever it is you are doing. So I think we might as well delegate everything – at least at the time of creating the subclass you can look at the FilterTermsEnum and observe the methods to potentially override yourself. Today you need to know there are some "hidden" ones further below in the hierarchy. Sidenote: if we were all using Kotlin, we probably would not bother to have such Filter/delegate classes in Lucene because Kotlin [makes it trivial to auto-delegate all members|https://kotlinlang.org/docs/reference/delegation.html]. You still need to be mindful of what you need to override to do whatever it is you need to do. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
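[Editorial note] The "trappy" delegation problem described above can be shown with a self-contained sketch (these are not Lucene's real classes; the names are illustrative): a base class whose seekExact() has a default implementation built on seekCeil(), a subclass with a faster specialized seekExact(), and a filter that delegates seekCeil() but forgets seekExact(). Wrapping the specialized enum in the forgetful filter silently falls back to the slow default path.

```java
// Self-contained illustration of the delegation trap in LUCENE-8292.
public class DelegationTrap {
    static abstract class BaseEnum {
        abstract boolean seekCeil(String term);
        // Default implemented via seekCeil, like TermsEnum.seekExact(BytesRef).
        boolean seekExact(String term) { return seekCeil(term); }
    }

    static class FastEnum extends BaseEnum {
        int fastSeeks = 0;
        @Override boolean seekCeil(String term) { return true; }
        // Specialized fast path, as the TermsEnum API permits.
        @Override boolean seekExact(String term) { fastSeeks++; return true; }
    }

    static class ForgetfulFilter extends BaseEnum {
        final BaseEnum in;
        ForgetfulFilter(BaseEnum in) { this.in = in; }
        @Override boolean seekCeil(String term) { return in.seekCeil(term); }
        // seekExact() NOT overridden: the wrapped enum's fast path is bypassed.
    }

    static class DelegatingFilter extends ForgetfulFilter {
        DelegatingFilter(BaseEnum in) { super(in); }
        // The fix: delegate seekExact() too.
        @Override boolean seekExact(String term) { return in.seekExact(term); }
    }

    public static void main(String[] args) {
        FastEnum fast = new FastEnum();
        new ForgetfulFilter(fast).seekExact("foo");
        System.out.println(fast.fastSeeks); // prints 0: fast path was skipped
        new DelegatingFilter(fast).seekExact("foo");
        System.out.println(fast.fastSeeks); // prints 1: fast path was used
    }
}
```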
[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic
[ https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462497#comment-16462497 ] ASF subversion and git services commented on LUCENE-8231: - Commit 1ed95c097b82ee5f175e93f3fe62572abe064da6 in lucene-solr's branch refs/heads/branch_7x from [~jim.ferenczi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ed95c0 ] LUCENE-8231: Add missing part of speech filter in the SPI META-INF file > Nori, a Korean analyzer based on mecab-ko-dic > - > > Key: LUCENE-8231 > URL: https://issues.apache.org/jira/browse/LUCENE-8231 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Jim Ferenczi >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch > > > There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic: > It is available under an Apache license here: > https://bitbucket.org/eunjeon/mecab-ko-dic > This dictionary was built with MeCab, it defines a format for the features > adapted for the Korean language. > Since the Kuromoji tokenizer uses the same format for the morphological > analysis (left cost + right cost + word cost) I tried to adapt the module to > handle Korean with the mecab-ko-dic. I've started with a POC that copies the > Kuromoji module and adapts it for the mecab-ko-dic. > I used the same classes to build and read the dictionary but I had to make > some modifications to handle the differences with the IPADIC and Japanese. > The resulting binary dictionary takes 28MB on disk, it's bigger than the > IPADIC but mainly because the source is bigger and there are a lot of > compound and inflect terms that define a group of terms and the segmentation > that can be applied. 
> I attached the patch that contains this new Korean module called -godori- > nori. It is an adaptation of the Kuromoji module so currently > the two modules don't share any code. I wanted to validate the approach first > and check the relevancy of the results. I don't speak Korean so I used the > relevancy > tests that were added for another Korean tokenizer > (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output > against mecab-ko which is the official fork of mecab to use the mecab-ko-dic. > I had to simplify the JapaneseTokenizer: my version removes the nBest output > and the decomposition of too long tokens. I also > modified the handling of whitespaces since they are important in Korean. > Whitespaces that appear before a term are attached to that term and this > information is used to compute a penalty based on the Part of Speech of the > token. The penalty cost is a feature added to mecab-ko to handle > morphemes that should not appear after a morpheme and is described in the > mecab-ko page: > https://bitbucket.org/eunjeon/mecab-ko > Ignoring whitespaces is also more in line with the official MeCab library, > which attaches the whitespaces to the term that follows. > I also added a decompounder filter that expands the compounds and inflects > defined in the dictionary and a part of speech filter similar to the Japanese > one that removes the morphemes that are not useful for relevance (suffix, prefix, > interjection, ...). These filters don't play well with the tokenizer if it > can > output multiple paths (nBest output for instance) so for simplicity I removed > this ability and the Korean tokenizer only outputs the best path. > I compared the results with mecab-ko to confirm that the analyzer is working > and ran the relevancy test that is defined in HantecRel.java included > in the patch (written by Robert for another Korean analyzer). 
Here are the > results: > ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)|| > |Standard|35s|131MB|.007|.1044|.1053| > |CJK|36s|164MB|.1418|.1924|.1916| > |Korean|212s|90MB|.1628|.2094|.2078| > I find the results very promising so I plan to continue to work on this > project. I started to extract the part of the code that could be shared with > the > Kuromoji module but I wanted to share the status and this POC first to > confirm that this approach is viable. The advantages of using the same model > as > the Japanese analyzer are multiple: we don't have a Korean analyzer at the > moment ;), the resulting dictionary is small compared to other libraries that > use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the > lattice on the fly to select the best path efficiently. > The dictionary can be built directly from the godori module with the >
[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic
[ https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462495#comment-16462495 ] ASF subversion and git services commented on LUCENE-8231: - Commit 9b261087abcd7ef350e49d4fa4e72e075a135799 in lucene-solr's branch refs/heads/master from [~jim.ferenczi] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b26108 ] LUCENE-8231: Add missing part of speech filter in the SPI META-INF file > Nori, a Korean analyzer based on mecab-ko-dic > - > > Key: LUCENE-8231 > URL: https://issues.apache.org/jira/browse/LUCENE-8231 > Project: Lucene - Core > Issue Type: New Feature >Reporter: Jim Ferenczi >Priority: Major > Fix For: 7.4, master (8.0) > > Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, LUCENE-8231.patch, > LUCENE-8231.patch, LUCENE-8231.patch > > > There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic: > It is available under an Apache license here: > https://bitbucket.org/eunjeon/mecab-ko-dic > This dictionary was built with MeCab, it defines a format for the features > adapted for the Korean language. > Since the Kuromoji tokenizer uses the same format for the morphological > analysis (left cost + right cost + word cost) I tried to adapt the module to > handle Korean with the mecab-ko-dic. I've started with a POC that copies the > Kuromoji module and adapts it for the mecab-ko-dic. > I used the same classes to build and read the dictionary but I had to make > some modifications to handle the differences with the IPADIC and Japanese. > The resulting binary dictionary takes 28MB on disk, it's bigger than the > IPADIC but mainly because the source is bigger and there are a lot of > compound and inflect terms that define a group of terms and the segmentation > that can be applied. 
> I attached the patch that contains this new Korean module called -godori- > nori. It is an adaptation of the Kuromoji module so currently > the two modules don't share any code. I wanted to validate the approach first > and check the relevancy of the results. I don't speak Korean so I used the > relevancy > tests that were added for another Korean tokenizer > (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output > against mecab-ko which is the official fork of mecab to use the mecab-ko-dic. > I had to simplify the JapaneseTokenizer: my version removes the nBest output > and the decomposition of too long tokens. I also > modified the handling of whitespaces since they are important in Korean. > Whitespaces that appear before a term are attached to that term and this > information is used to compute a penalty based on the Part of Speech of the > token. The penalty cost is a feature added to mecab-ko to handle > morphemes that should not appear after a morpheme and is described in the > mecab-ko page: > https://bitbucket.org/eunjeon/mecab-ko > Ignoring whitespaces is also more in line with the official MeCab library, > which attaches the whitespaces to the term that follows. > I also added a decompounder filter that expands the compounds and inflects > defined in the dictionary and a part of speech filter similar to the Japanese > one that removes the morphemes that are not useful for relevance (suffix, prefix, > interjection, ...). These filters don't play well with the tokenizer if it > can > output multiple paths (nBest output for instance) so for simplicity I removed > this ability and the Korean tokenizer only outputs the best path. > I compared the results with mecab-ko to confirm that the analyzer is working > and ran the relevancy test that is defined in HantecRel.java included > in the patch (written by Robert for another Korean analyzer). 
Here are the > results: > ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)|| > |Standard|35s|131MB|.007|.1044|.1053| > |CJK|36s|164MB|.1418|.1924|.1916| > |Korean|212s|90MB|.1628|.2094|.2078| > I find the results very promising so I plan to continue to work on this > project. I started to extract the part of the code that could be shared with > the > Kuromoji module but I wanted to share the status and this POC first to > confirm that this approach is viable. The advantages of using the same model > as > the Japanese analyzer are multiple: we don't have a Korean analyzer at the > moment ;), the resulting dictionary is small compared to other libraries that > use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the > lattice on the fly to select the best path efficiently. > The dictionary can be built directly from the godori module with the >
[jira] [Commented] (LUCENE-8270) Remove MatchesIterator.term()
[ https://issues.apache.org/jira/browse/LUCENE-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462485#comment-16462485 ] David Smiley commented on LUCENE-8270: -- {quote}That ought to be one term unless the indexed data had more than one term at this position and furthermore the query matched more than one of the terms at this position. {quote} Actually that's easy too – just iterate to these terms at this same position as well. This will happen automatically if the MatchesIterator is 1-1 with a PostingsEnum. > Remove MatchesIterator.term() > - > > Key: LUCENE-8270 > URL: https://issues.apache.org/jira/browse/LUCENE-8270 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Alan Woodward >Assignee: Alan Woodward >Priority: Major > Attachments: LUCENE-8270.patch > > > As discussed on LUCENE-8268, we don't have a clear use-case for this yet, and > it's complicating adding Matches to phrase queries, so let's just remove it > for now. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12238) Synonym Query Style Boost By Payload
[ https://issues.apache.org/jira/browse/SOLR-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated SOLR-12238: Attachment: SOLR-12238.patch > Synonym Query Style Boost By Payload > > > Key: SOLR-12238 > URL: https://issues.apache.org/jira/browse/SOLR-12238 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Affects Versions: 7.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12238.patch, SOLR-12238.patch, SOLR-12238.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This improvement is built on top of the Synonym Query Style feature and > brings the possibility of boosting synonym queries using the associated > payload. > It introduces two new modalities for the Synonym Query Style: > PICK_BEST_BOOST_BY_PAYLOAD -> build a Disjunction query with the clauses > boosted by payload > AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD -> build a Boolean query with the clauses > boosted by payload > These new synonym query styles assume payloads are available, so they must > be used in conjunction with a token filter able to produce payloads. > A synonym.txt example could be: > # Synonyms used by Payload Boost > tiger => tiger|1.0, Big_Cat|0.8, Shere_Khan|0.9 > leopard => leopard, Big_Cat|0.8, Bagheera|0.9 > lion => lion|1.0, panthera leo|0.99, Simba|0.8 > snow_leopard => panthera uncia|0.99, snow leopard|1.0 > A simple token filter to populate the payloads from such synonym.txt is : > delimiter="|"/> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
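[Editorial note] The filter element at the end of the message was stripped by the mail archiver; only `delimiter="|"/>` survives. A plausible reconstruction, assuming the standard DelimitedPayloadTokenFilterFactory (the fieldType name and the rest of the analysis chain here are hypothetical, not taken from the patch):

```xml
<!-- Illustrative sketch: an analysis chain that keeps the |0.8-style
     suffixes from synonyms.txt as token payloads, so the new payload-boost
     query styles can use them. -->
<fieldType name="text_payload_syn" class="solr.TextField">
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt"/>
    <!-- Turns the trailing |0.8 suffix into a token payload -->
    <filter class="solr.DelimitedPayloadTokenFilterFactory" delimiter="|"/>
  </analyzer>
</fieldType>
```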
[jira] [Updated] (SOLR-12304) Interesting Terms parameter is ignored by MLT Component
[ https://issues.apache.org/jira/browse/SOLR-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alessandro Benedetti updated SOLR-12304: Attachment: SOLR-12304.patch > Interesting Terms parameter is ignored by MLT Component > --- > > Key: SOLR-12304 > URL: https://issues.apache.org/jira/browse/SOLR-12304 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: MoreLikeThis >Affects Versions: 7.2 >Reporter: Alessandro Benedetti >Priority: Major > Attachments: SOLR-12304.patch, SOLR-12304.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Currently the More Like This component just ignores the mlt.InterestingTerms > parameter (which is usable by the MoreLikeThisHandler). > The scope of this issue is to fix the bug and add related tests (which will > succeed after the fix). > *N.B.* MoreLikeThisComponent and MoreLikeThisHandler are very coupled, and the > tests for the MoreLikeThisHandler are intersecting the MoreLikeThisComponent > ones. > Any consideration or refactoring of that is out of scope for this issue. > Other issues will follow. > *N.B.* The distributed case is also out of scope for this issue; it is much > more complicated and requires much deeper investigation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy
[ https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462466#comment-16462466 ] David Smiley commented on LUCENE-8286: -- The "span" width _could_ be used for passage relevancy, and perhaps ought to be – sure. I just meant to convey that today the UH doesn't have or use this info. BTW I did a quick hack integration last night of Weight.getMatches into the UH and ran some tests. I had no issue with term vectors. The fieldMatcher (aka requireFieldMatch option) will require some work. And if the query references non-highlighted fields in a way that will constrain the results (i.e. MUST otherfield:foo), for the Analysis offset strategy, we'll need to combine an aggregate index view of analysis with the underlying real index for other fields, because the MemoryIndex alone only has one field – the field being highlighted. > UnifiedHighlighter should support the new Weight.matches API for better match > accuracy > -- > > Key: LUCENE-8286 > URL: https://issues.apache.org/jira/browse/LUCENE-8286 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Reporter: David Smiley >Priority: Major > > The new Weight.matches() API should allow the UnifiedHighlighter to more > accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903. > In addition, this API should make the job of highlighting easier, reducing > the LOC and related complexities, especially the UH's PhraseHelper. Note: > reducing/removing PhraseHelper is not a near-term goal since Weight.matches > is experimental and incomplete, and perhaps we'll discover some gaps in > flexibility/functionality. > This issue should introduce a new UnifiedHighlighter.HighlightFlag enum > option for this method of highlighting. Perhaps call it {{WEIGHT_MATCHES}}? > Longer term it could go away and it'll be implied if you specify enum values > for PHRASES & MULTI_TERM_QUERY? 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12307) Stop endless spin java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
[ https://issues.apache.org/jira/browse/SOLR-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-12307: Attachment: SOLR-12307.patch > Stop endless spin java.io.IOException: > org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode > = Session expired for /autoscaling.json > - > > Key: SOLR-12307 > URL: https://issues.apache.org/jira/browse/SOLR-12307 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Mikhail Khludnev >Priority: Major > Attachments: SOLR-12307.patch > > > When the ZK session expires, one loop continues spinning pointlessly, which often hurts CI > {code} > [junit4] 2>at > org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) > ~[java/:?] >[junit4] 2>at > org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
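[Editorial note] The fix the issue asks for amounts to making the retry loop distinguish a dead session from a transient failure. A hypothetical sketch (this is not Solr's actual OverseerTriggerThread code; all names here are illustrative):

```java
// Hypothetical sketch: a refresh loop that retries transient failures but
// exits as soon as the ZooKeeper session is expired, since no retry can
// succeed against a dead session.
public class TriggerLoopSketch {
    static class SessionExpiredException extends Exception {}

    interface ConfigSource { void refresh() throws Exception; }

    static class TriggerLoop {
        int attempts = 0;

        // Returns true if the loop stopped because the session expired.
        boolean run(ConfigSource config, int maxTransientRetries) {
            while (attempts < maxTransientRetries) {
                attempts++;
                try {
                    config.refresh();
                    return false; // refreshed successfully; keep running
                } catch (SessionExpiredException e) {
                    return true;  // session is gone: stop spinning
                } catch (Exception e) {
                    // transient failure: retry
                }
            }
            return false;
        }
    }

    public static void main(String[] args) {
        TriggerLoop loop = new TriggerLoop();
        boolean expired = loop.run(() -> { throw new SessionExpiredException(); }, 10);
        // One attempt, then exit: no endless spin on an expired session.
        System.out.println(expired + " after " + loop.attempts + " attempt(s)");
    }
}
```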
[jira] [Updated] (SOLR-12209) add Paging Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mosh updated SOLR-12209: Attachment: 0001-added-skip-and-limit-stream-decorators.patch > add Paging Streaming Expression > --- > > Key: SOLR-12209 > URL: https://issues.apache.org/jira/browse/SOLR-12209 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: mosh >Priority: Major > > Currently the closest streaming expression that allows some sort of > pagination is top. > I propose we add a new streaming expression, which is based on the > RankedStream class, to add offset to the stream. Currently it can only be done > in code by reading the stream until the desired offset is reached. > The new expression will be used as such: > {{paging(rows=3, search(collection1, q="*:*", qt="/export", > fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", > start=100)}} > {{this will offset the returned stream by 100 documents}} > > [~joel.bernstein] what do you think? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12209) add Paging Streaming Expression
[ https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mosh updated SOLR-12209: Attachment: (was: 0001-added-skip-and-limit-stream-decorators.patch) > add Paging Streaming Expression > --- > > Key: SOLR-12209 > URL: https://issues.apache.org/jira/browse/SOLR-12209 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: mosh >Priority: Major > > Currently the closest streaming expression that allows some sort of > pagination is top. > I propose we add a new streaming expression, which is based on the > RankedStream class, to add offset to the stream. Currently it can only be done > in code by reading the stream until the desired offset is reached. > The new expression will be used as such: > {{paging(rows=3, search(collection1, q="*:*", qt="/export", > fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", > start=100)}} > {{this will offset the returned stream by 100 documents}} > > [~joel.bernstein] what do you think? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-7.3 - Build # 59 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.3/59/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [Overseer] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:850) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2068) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at java.lang.Thread.run(Thread.java:748) Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! 
[Overseer] org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.cloud.Overseer at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42) at org.apache.solr.cloud.Overseer.start(Overseer.java:545) at org.apache.solr.cloud.OverseerElectionContext.runLeaderProcess(ElectionContext.java:850) at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170) at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135) at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:307) at org.apache.solr.cloud.LeaderElector.retryElection(LeaderElector.java:393) at org.apache.solr.cloud.ZkController.rejoinOverseerElection(ZkController.java:2068) at org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:331) at java.lang.Thread.run(Thread.java:748) at __randomizedtesting.SeedInfo.seed([CFB061D4AC91DB3D]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:301) at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ZkControllerTest Error Message: 1 thread leaked from SUITE scope at org.apache.solr.cloud.ZkControllerTest: 1)
[jira] [Comment Edited] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462409#comment-16462409 ] Bruno Roustant edited comment on LUCENE-8292 at 5/3/18 1:08 PM: Another option would be to modify the TermsEnum.seekExact() method and make it final, or have the javadoc be explicit that it should not be overridden. (though I don't like this option) was (Author: bruno.roustant): Another option would be to modify the TermsEnum.seekExact() method and make it final, or have the javadoc be explicit that it should not be overridden. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462409#comment-16462409 ] Bruno Roustant commented on LUCENE-8292: Another option would be to modify the TermsEnum.seekExact() method and make it final, or have the javadoc be explicit that it should not be overridden. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-8142) Should codecs expose raw impacts?
[ https://issues.apache.org/jira/browse/LUCENE-8142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand resolved LUCENE-8142. -- Resolution: Fixed Fix Version/s: master (8.0) > Should codecs expose raw impacts? > - > > Key: LUCENE-8142 > URL: https://issues.apache.org/jira/browse/LUCENE-8142 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Adrien Grand >Priority: Minor > Fix For: master (8.0) > > Attachments: LUCENE-8142.patch > > > Follow-up of LUCENE-4198. Currently, call-sites of TermsEnum.impacts provide > a SimScorer so that the maximum score for the block can be computed. Should > ImpactsEnum instead return the (freq,norm) pairs and let callers deal with > max score computation? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462394#comment-16462394 ] Bruno Roustant edited comment on LUCENE-8292 at 5/3/18 1:03 PM: When looking at TermsEnum API, what I understand is that seekExact() defaults to calling seekCeil(), but if needed (not for correctness but for performance consideration) we can override it to have a specialized seek that searches only the exact term and does not have to position to the next term if not found. This may have an impact for some TermsEnum extensions (a really noticeable impact in my case, that's why I noticed this issue). To me the current behavior of FilterTermsEnum is not correct with regard to TermsEnum API. (And I noticed that AssertingLeafReader overrides seekExact()). Adding these two methods in FilterTermsEnum fixes correctness, even if I agree it makes more room for bugs. was (Author: bruno.roustant): When looking at TermsEnum API, what I understand is that seekExact() defaults to calling seekCeil(), but if needed (not for correctness but for performance consideration) we can override it to have a specialized seek that searches only the exact term and does not have to position to the next term if not found. This may have an impact for some TermsEnum extensions (a really noticeable impact in my case, that's why I noticed this issue). To me the current behavior of FilterTermsEnum is not correct with regard to TermsEnum API. (And I noticed that AssertingLeafReader overrides seekExact()). Adding this two methods in FilterTermsEnum fixes correctness, even if I agree it makes more room for bugs. 
> Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462394#comment-16462394 ] Bruno Roustant commented on LUCENE-8292: When looking at the TermsEnum API, what I understand is that seekExact() defaults to calling seekCeil(), but if needed (not for correctness but for performance consideration) we can override it to have a specialized seek that searches only the exact term and does not have to position to the next term if not found. This may have an impact for some TermsEnum extensions (a really noticeable impact in my case, that's why I noticed this issue). To me the current behavior of FilterTermsEnum is not correct with regard to the TermsEnum API. (And I noticed that AssertingLeafReader overrides seekExact()). Adding these two methods in FilterTermsEnum fixes correctness, even if I agree it makes more room for bugs. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible for the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
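Bruno's point can be reproduced with a minimal stand-in model in plain Java (all class names below are hypothetical illustrations, not Lucene's real API): a base enum whose seekExact() defaults to seekCeil(), a wrapped enum with a specialized seekExact() fast path, and a filter wrapper that does not delegate seekExact(). The fast path is silently bypassed until the wrapper forwards the call.

```java
// Stand-in for the TermsEnum/FilterTermsEnum contract discussed above.
// All names are hypothetical; this is not Lucene's actual API.
class BaseEnum {
    boolean seekCeil(String term) { return true; } // pretend: positioned at ceiling
    // Default implementation: seekExact is a convenience built on seekCeil.
    boolean seekExact(String term) { return seekCeil(term); }
}

class FastEnum extends BaseEnum {
    boolean fastPathUsed = false;
    @Override
    boolean seekExact(String term) { fastPathUsed = true; return true; } // specialized seek
}

// Wrapper that delegates seekCeil but NOT seekExact: the inherited default
// calls this.seekCeil(), so the wrapped enum's specialized seekExact never runs.
class NonDelegatingFilter extends BaseEnum {
    final BaseEnum in;
    NonDelegatingFilter(BaseEnum in) { this.in = in; }
    @Override
    boolean seekCeil(String term) { return in.seekCeil(term); }
}

// The proposed fix: also forward seekExact to the wrapped instance.
class DelegatingFilter extends NonDelegatingFilter {
    DelegatingFilter(BaseEnum in) { super(in); }
    @Override
    boolean seekExact(String term) { return in.seekExact(term); }
}

public class SeekExactDemo {
    public static void main(String[] args) {
        FastEnum a = new FastEnum();
        new NonDelegatingFilter(a).seekExact("foo");
        System.out.println("without delegation, fast path used: " + a.fastPathUsed);

        FastEnum b = new FastEnum();
        new DelegatingFilter(b).seekExact("foo");
        System.out.println("with delegation, fast path used: " + b.fastPathUsed);
    }
}
```

Running this prints `false` for the non-delegating wrapper and `true` once seekExact is forwarded, which is the performance regression Bruno observed.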
[jira] [Commented] (SOLR-12307) Stop endless spin java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
[ https://issues.apache.org/jira/browse/SOLR-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462386#comment-16462386 ] Mikhail Khludnev commented on SOLR-12307: - stack trace {quote} [junit4] 2> 1992793 ERROR (OverseerAutoScalingTriggerThread-72097539512664067-127.0.0.1:8983_solr-n_01) [] o.a.s.c.a.OverseerTriggerThread A ZK error has occurre d [junit4] 2> java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json [junit4] 2>at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:183) ~[java/:?] [junit4] 2>at org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) ~[java/:?] [junit4] 2>at org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) [java/:?] [junit4] 2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144] [junit4] 2> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json [junit4] 2>at org.apache.zookeeper.KeeperException.create(KeeperException.java:130) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] [junit4] 2>at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] [junit4] 2>at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] [junit4] 2>at org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:340) ~[java/:?] [junit4] 2>at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) ~[java/:?] [junit4] 2>at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:340) ~[java/:?] 
[junit4] 2>at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:176) ~[java/:?] [junit4] 2>... 3 more {quote} > Stop endless spin java.io.IOException: > org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode > = Session expired for /autoscaling.json > - > > Key: SOLR-12307 > URL: https://issues.apache.org/jira/browse/SOLR-12307 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: Mikhail Khludnev >Priority: Major > > When the ZK session expires, one loop continues spinning pointlessly, which hurts CI quite often > {code} > [junit4] 2>at > org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) > ~[java/:?] >[junit4] 2>at > org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 575 - Still Unstable!
I'll push that stop condition under https://issues.apache.org/jira/browse/SOLR-12307 On Thu, May 3, 2018 at 3:26 PM, Dawid Weisswrote: > I honestly don't know (don't know much about zookeeper). I think there > should be some kind of action to this unrecoverable situation rather > than an endless loop :) Your patch looks good to me, but I really > don't know much about that fragment of the code. > > Dawid > > On Thu, May 3, 2018 at 2:12 PM, Mikhail Khludnev wrote: > > I have the fix just for this spin in > > https://issues.apache.org/jira/secure/attachment/ > 12919074/SOLR-12200.patch > > (Although I abandoned SOLR-12200) > > > > diff --git > > a/solr/core/src/java/org/apache/solr/cloud/autoscaling/ > OverseerTriggerThread.java > > b/solr/core/src/java/org/apache/solr/cloud/autoscaling/ > OverseerTriggerThread.java > > index ece4c4c..5cb1f90 100644 > > --- > > a/solr/core/src/java/org/apache/solr/cloud/autoscaling/ > OverseerTriggerThread.java > > +++ > > b/solr/core/src/java/org/apache/solr/cloud/autoscaling/ > OverseerTriggerThread.java > > @@ -142,8 +142,14 @@ public class OverseerTriggerThread implements > Runnable, > > SolrCloseable { > > Thread.currentThread().interrupt(); > > log.warn("Interrupted", e); > > break; > > - } catch (IOException | KeeperException e) { > > + } > > + catch (IOException | KeeperException e) { > > log.error("A ZK error has occurred", e); > > +if (e.getCause()!=null && e.getCause() instanceof > > KeeperException.SessionExpiredException) { > > + log.warn("Solr cannot talk to ZK, exiting " + > > + getClass().getSimpleName() + " main queue loop", e); > > + return; > > +} > >} > > } > > > > > > I can push only this, just to stop torture Jenkins. WDYT ? > > > > On Thu, May 3, 2018 at 2:57 PM, Dawid Weiss > wrote: > >> > >> Endless loop (session expired): > >> > >>[junit4] 2> 1992793 ERROR > >> > >> (OverseerAutoScalingTriggerThread-72097539512664067-127.0.0. 
> 1:8983_solr-n_01) > >> [] o.a.s.c.a.OverseerTriggerThread A ZK error has occurre > >> d > >>[junit4] 2> java.io.IOException: > >> org.apache.zookeeper.KeeperException$SessionExpiredException: > >> KeeperErrorCode = Session expired for /autoscaling.json > >>[junit4] 2>at > >> > >> org.apache.solr.client.solrj.impl.ZkDistribStateManager. > getAutoScalingConfig(ZkDistribStateManager.java:183) > >> ~[java/:?] > >>[junit4] 2>at > >> > >> org.apache.solr.client.solrj.cloud.DistribStateManager. > getAutoScalingConfig(DistribStateManager.java:83) > >> ~[java/:?] > >>[junit4] 2>at > >> > >> org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run( > OverseerTriggerThread.java:131) > >> [java/:?] > >>[junit4] 2>at java.lang.Thread.run(Thread.java:748) > >> [?:1.8.0_144] > >>[junit4] 2> Caused by: > >> org.apache.zookeeper.KeeperException$SessionExpiredException: > >> KeeperErrorCode = Session expired for /autoscaling.json > >>[junit4] 2>at > >> org.apache.zookeeper.KeeperException.create(KeeperException.java:130) > >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] > >>[junit4] 2>at > >> org.apache.zookeeper.KeeperException.create(KeeperException.java:54) > >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] > >>[junit4] 2>at > >> org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) > >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] > >>[junit4] 2>at > >> > >> org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5( > SolrZkClient.java:340) > >> ~[java/:?] > >>[junit4] 2>at > >> > >> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation( > ZkCmdExecutor.java:60) > >> ~[java/:?] > >>[junit4] 2>at > >> org.apache.solr.common.cloud.SolrZkClient.getData( > SolrZkClient.java:340) > >> ~[java/:?] > >>[junit4] 2>at > >> > >> org.apache.solr.client.solrj.impl.ZkDistribStateManager. > getAutoScalingConfig(ZkDistribStateManager.java:176) > >> ~[java/:?] > >>[junit4] 2>... 
3 more > >> > >> > >> On Thu, May 3, 2018 at 1:37 PM, Policeman Jenkins Server > >> wrote: > >> > Error processing tokens: Error while parsing action > >> > 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' > at > >> > input position (line 79, pos 4): > >> > )"} > >> >^ > >> > > >> > java.lang.OutOfMemoryError: Java heap space > >> > > >> > > >> > - > >> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > >> > For additional commands, e-mail: dev-h...@lucene.apache.org > >> > >> - > >> To unsubscribe, e-mail:
[jira] [Created] (SOLR-12307) Stop endless spin java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
Mikhail Khludnev created SOLR-12307: --- Summary: Stop endless spin java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json Key: SOLR-12307 URL: https://issues.apache.org/jira/browse/SOLR-12307 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: Mikhail Khludnev When the ZK session expires, one loop continues spinning pointlessly, which hurts CI quite often {code} [junit4] 2>at org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) ~[java/:?] [junit4] 2>at org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
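The fix proposed on the thread for this issue boils down to inspecting the cause chain of the caught exception and bailing out of the retry loop when the ZK session is unrecoverably expired. A sketch of that cause-check logic, using plain Java stand-ins (the real ZooKeeper `KeeperException.SessionExpiredException` and the Solr classes are not assumed on the classpath here):

```java
// Stand-in for org.apache.zookeeper.KeeperException.SessionExpiredException.
class SessionExpiredException extends Exception {
    SessionExpiredException(String msg) { super(msg); }
}

public class ZkLoopDemo {
    // Mirrors the proposed catch-block logic: keep looping on transient ZK
    // errors, but exit the loop when the wrapped cause is an expired
    // (unrecoverable) session. instanceof handles a null cause safely.
    static boolean shouldExitLoop(Exception e) {
        return e.getCause() instanceof SessionExpiredException;
    }

    public static void main(String[] args) {
        Exception transientErr = new java.io.IOException("connection loss");
        Exception fatal = new java.io.IOException(
            new SessionExpiredException("KeeperErrorCode = Session expired for /autoscaling.json"));
        System.out.println("transient -> exit loop? " + shouldExitLoop(transientErr));
        System.out.println("expired   -> exit loop? " + shouldExitLoop(fatal));
    }
}
```

This matches the shape of the SOLR-12200 patch quoted elsewhere in the thread: `IOException | KeeperException` is still caught and logged, but a `SessionExpiredException` cause turns a pointless retry into a clean exit from the trigger thread's main queue loop.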
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462378#comment-16462378 ] Adrien Grand commented on LUCENE-8292: -- Indeed these methods need to be overridden explicitly if you want them to be used. In general, we do not delegate methods that have a default implementation because the default implementation is correct regardless of what the wrapper class does. Overriding these methods in FilterTermsEnum to delegate to the wrapped instance would make room for bugs by requiring more methods to be overridden for the wrapper to be correct. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21950 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21950/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger Error Message: expected:<1> but was:<2> Stack Trace: java.lang.AssertionError: expected:<1> but was:<2> at __randomizedtesting.SeedInfo.seed([BF8FB6EA0BA43E13:DC448068926B4D3E]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.junit.Assert.assertEquals(Assert.java:456) at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 13628 lines...] [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest [junit4] 2>
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462368#comment-16462368 ] Bruno Roustant commented on LUCENE-8292: 1- "Not possible to override": I was not clear. It is still possible for a delegate TermsEnum to override the seekExact() method. But it will never be called, since the FilterTermsEnum above always calls seekCeil(). 2- "Two more methods to override": You're right. Although, since normally the same code should be reusable, it should not be too tedious. I see the trappy point. > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible for the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8286) UnifiedHighlighter should support the new Weight.matches API for better match accuracy
[ https://issues.apache.org/jira/browse/LUCENE-8286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462361#comment-16462361 ] Adrien Grand commented on LUCENE-8286: -- bq. MI has things we don't need – position spans I don't know the unified highlighter well, but I would expect this information to be important to score passages? For instance if you run a sloppy phrase query, matches that have a smaller width should get a higher weight, shouldn't they? > UnifiedHighlighter should support the new Weight.matches API for better match > accuracy > -- > > Key: LUCENE-8286 > URL: https://issues.apache.org/jira/browse/LUCENE-8286 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/highlighter >Reporter: David Smiley >Priority: Major > > The new Weight.matches() API should allow the UnifiedHighlighter to more > accurately highlight some BooleanQuery patterns correctly -- see LUCENE-7903. > In addition, this API should make the job of highlighting easier, reducing > the LOC and related complexities, especially the UH's PhraseHelper. Note: > reducing/removing PhraseHelper is not a near-term goal since Weight.matches > is experimental and incomplete, and perhaps we'll discover some gaps in > flexibility/functionality. > This issue should introduce a new UnifiedHighlighter.HighlightFlag enum > option for this method of highlighting. Perhaps call it {{WEIGHT_MATCHES}}? > Longer term it could go away and it'll be implied if you specify enum values > for PHRASES & MULTI_TERM_QUERY? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 575 - Still Unstable!
I honestly don't know (don't know much about zookeeper). I think there should be some kind of action for this unrecoverable situation rather than an endless loop :) Your patch looks good to me, but I really don't know much about that fragment of the code. Dawid On Thu, May 3, 2018 at 2:12 PM, Mikhail Khludnev wrote: > I have the fix just for this spin in > https://issues.apache.org/jira/secure/attachment/12919074/SOLR-12200.patch > (Although I abandoned SOLR-12200) > > diff --git > a/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java > b/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java > index ece4c4c..5cb1f90 100644 > --- > a/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java > +++ > b/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java > @@ -142,8 +142,14 @@ public class OverseerTriggerThread implements Runnable, > SolrCloseable { > Thread.currentThread().interrupt(); > log.warn("Interrupted", e); > break; > - } catch (IOException | KeeperException e) { > + } > + catch (IOException | KeeperException e) { > log.error("A ZK error has occurred", e); > +if (e.getCause()!=null && e.getCause() instanceof > KeeperException.SessionExpiredException) { > + log.warn("Solr cannot talk to ZK, exiting " + > + getClass().getSimpleName() + " main queue loop", e); > + return; > +} >} > } > > > I can push only this, just to stop torturing Jenkins. WDYT?
> > On Thu, May 3, 2018 at 2:57 PM, Dawid Weiss wrote: >> >> Endless loop (session expired): >> >>[junit4] 2> 1992793 ERROR >> >> (OverseerAutoScalingTriggerThread-72097539512664067-127.0.0.1:8983_solr-n_01) >> [] o.a.s.c.a.OverseerTriggerThread A ZK error has occurre >> d >>[junit4] 2> java.io.IOException: >> org.apache.zookeeper.KeeperException$SessionExpiredException: >> KeeperErrorCode = Session expired for /autoscaling.json >>[junit4] 2>at >> >> org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:183) >> ~[java/:?] >>[junit4] 2>at >> >> org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) >> ~[java/:?] >>[junit4] 2>at >> >> org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) >> [java/:?] >>[junit4] 2>at java.lang.Thread.run(Thread.java:748) >> [?:1.8.0_144] >>[junit4] 2> Caused by: >> org.apache.zookeeper.KeeperException$SessionExpiredException: >> KeeperErrorCode = Session expired for /autoscaling.json >>[junit4] 2>at >> org.apache.zookeeper.KeeperException.create(KeeperException.java:130) >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] >>[junit4] 2>at >> org.apache.zookeeper.KeeperException.create(KeeperException.java:54) >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] >>[junit4] 2>at >> org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) >> ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0] >>[junit4] 2>at >> >> org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:340) >> ~[java/:?] >>[junit4] 2>at >> >> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) >> ~[java/:?] >>[junit4] 2>at >> org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:340) >> ~[java/:?] 
>>[junit4] 2>at >> >> org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:176) >> ~[java/:?] >>[junit4] 2>... 3 more >> >> >> On Thu, May 3, 2018 at 1:37 PM, Policeman Jenkins Server >> wrote: >> > Error processing tokens: Error while parsing action >> > 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at >> > input position (line 79, pos 4): >> > )"} >> >^ >> > >> > java.lang.OutOfMemoryError: Java heap space >> > >> > >> > - >> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> > For additional commands, e-mail: dev-h...@lucene.apache.org >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> > > > > -- > Sincerely yours > Mikhail Khludnev - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
[ https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462356#comment-16462356 ] Adrien Grand commented on LUCENE-8292: -- This may be a bit trappy: if someone writes a FilterTermsEnum that hides some terms, now they have 2 more methods to override for their impl to be correct. I don't understand why you are saying that it is not possible to override the behavior of these methods without your change, can you clarify? > Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods > -- > > Key: LUCENE-8292 > URL: https://issues.apache.org/jira/browse/LUCENE-8292 > Project: Lucene - Core > Issue Type: Bug > Components: core/index >Affects Versions: 7.2.1 >Reporter: Bruno Roustant >Priority: Major > Fix For: trunk > > Attachments: > 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, > LUCENE-8292.patch > > > FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many > methods. > It misses some seekExact() methods, thus it is not possible to the delegate > to override these methods to have specific behavior (unlike the TermsEnum API > which allows that). > The fix is straightforward: simply override these seekExact() methods and > delegate. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
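The trap Adrien describes can be sketched with a small stand-in model (hypothetical class names, not Lucene's real API): once the filter base class forwards seekExact() to the wrapped enum, a subclass that hides terms by overriding only seekCeil() gives inconsistent answers for the hidden terms, because the forwarded seekExact() bypasses the hiding logic entirely.

```java
import java.util.Set;
import java.util.TreeSet;

// Stand-in classes illustrating the trap; names are hypothetical, not Lucene's API.
class Terms {
    final TreeSet<String> terms = new TreeSet<>(Set.of("a", "b", "c"));
    boolean seekCeil(String t) { return terms.contains(t); } // true means "FOUND"
    boolean seekExact(String t) { return seekCeil(t); }      // default built on seekCeil
}

class DelegatingFilterTerms extends Terms {
    final Terms in;
    DelegatingFilterTerms(Terms in) { this.in = in; }
    @Override boolean seekCeil(String t) { return in.seekCeil(t); }
    // With the proposed change, the base filter forwards seekExact too.
    @Override boolean seekExact(String t) { return in.seekExact(t); }
}

// A wrapper that hides term "b" but overrides only seekCeil: seekExact now
// goes straight to the wrapped instance and still reports "b" as present.
class HidingFilter extends DelegatingFilterTerms {
    HidingFilter(Terms in) { super(in); }
    @Override boolean seekCeil(String t) { return !t.equals("b") && super.seekCeil(t); }
}

public class TrapDemo {
    public static void main(String[] args) {
        HidingFilter f = new HidingFilter(new Terms());
        System.out.println("seekCeil(\"b\")  -> " + f.seekCeil("b"));  // hidden
        System.out.println("seekExact(\"b\") -> " + f.seekExact("b")); // inconsistent!
    }
}
```

With only the inherited default (seekExact calling this.seekCeil()), the hiding wrapper would have been correct for free; with delegation, HidingFilter must also override seekExact(), which is the extra-methods-to-override burden discussed in this thread.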
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462353#comment-16462353 ] Jan Høydahl commented on SOLR-8207: --- Feel free to check out the GitHub PR and play with this yourselves, and if you wish, contribute changes, better styling, more details, whatever. I think we have approached a place where the new view is useful as is even if it is not responsive, lacks column sorting, pulls a bit too much metrics data etc. All of this can be improved in followup issues. > Modernise cloud tab on Admin UI > --- > > Key: SOLR-8207 > URL: https://issues.apache.org/jira/browse/SOLR-8207 > Project: Solr > Issue Type: Improvement > Components: Admin UI >Affects Versions: 5.3 >Reporter: Upayavira >Assignee: Jan Høydahl >Priority: Major > Attachments: node-compact.png, node-details.png, > node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png > > Time Spent: 10m > Remaining Estimate: 0h > > The various sub-tabs of the "Cloud tab" were designed before anyone was > making real use of SolrCloud, and when we didn't really know the use-cases we > would need to support. I would argue that, whilst they are pretty (and > clever) they aren't really fit for purpose (with the exception of tree view). > Issues: > * Radial view doesn't scale beyond a small number of nodes/collections > * Paging on the graph view is based on collections - so a collection with > many replicas won't be subject to pagination > * The Dump feature is kinda redundant and should be removed > * There is now a major overlap in functionality with the new Collections tab > What I'd propose is that we: > * promote the tree tab to top level > * remove the graph views and the dump tab > * add a new Nodes tab > This nodes tab would complement the collections tab - showing nodes, and > their associated replicas/collections. From this view, it would be possible > to add/remove replicas and to see the status of nodes. 
It would also be > possible to filter nodes by status: "show me only up nodes", "show me nodes > that are in trouble", "show me nodes that have leaders on them", etc. > Presumably, if we have APIs to support it, we might have a "decommission > node" option, that would ensure that no replicas on this node are leaders, > and then remove all replicas from the node, ready for it to be removed from > the cluster. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462347#comment-16462347 ]

Jan Høydahl commented on SOLR-8207:
-----------------------------------

It was quite easy to toggle details per row, so I put that in, and changed the toggle button on top to expand/collapse all nodes. This version also adds more details per node in details view:
* numDocs, deletedDocs, warmupTime per core
* total num docs and avg size/doc per node

!node-toggle-row-numdocs.png|width=900!

> Modernise cloud tab on Admin UI
> ---
>
>                 Key: SOLR-8207
>                 URL: https://issues.apache.org/jira/browse/SOLR-8207
>             Project: Solr
>          Issue Type: Improvement
>          Components: Admin UI
>    Affects Versions: 5.3
>            Reporter: Upayavira
>            Assignee: Jan Høydahl
>            Priority: Major
>         Attachments: node-compact.png, node-details.png, node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was
> making real use of SolrCloud, and when we didn't really know the use-cases we
> would need to support. I would argue that, whilst they are pretty (and
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
> * promote the tree tab to top level
> * remove the graph views and the dump tab
> * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and
> their associated replicas/collections. From this view, it would be possible
> to add/remove replicas and to see the status of nodes. It would also be
> possible to filter nodes by status: "show me only up nodes", "show me nodes
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission
> node" option, that would ensure that no replicas on this node are leaders,
> and then remove all replicas from the node, ready for it to be removed from
> the cluster.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 575 - Still Unstable!
I have the fix just for this spin in https://issues.apache.org/jira/secure/attachment/12919074/SOLR-12200.patch (although I abandoned SOLR-12200):

diff --git a/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java b/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java
index ece4c4c..5cb1f90 100644
--- a/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java
+++ b/solr/core/src/java/org/apache/solr/cloud/autoscaling/OverseerTriggerThread.java
@@ -142,8 +142,14 @@ public class OverseerTriggerThread implements Runnable, SolrCloseable {
         Thread.currentThread().interrupt();
         log.warn("Interrupted", e);
         break;
-      } catch (IOException | KeeperException e) {
+      }
+      catch (IOException | KeeperException e) {
         log.error("A ZK error has occurred", e);
+        if (e.getCause() != null && e.getCause() instanceof KeeperException.SessionExpiredException) {
+          log.warn("Solr cannot talk to ZK, exiting "
+              + getClass().getSimpleName() + " main queue loop", e);
+          return;
+        }
       }
     }

I can push only this, just to stop torturing Jenkins. WDYT?

On Thu, May 3, 2018 at 2:57 PM, Dawid Weiss wrote:
> Endless loop (session expired):
>
>    [junit4]   2> 1992793 ERROR (OverseerAutoScalingTriggerThread-72097539512664067-127.0.0.1:8983_solr-n_01) [] o.a.s.c.a.OverseerTriggerThread A ZK error has occurred
>    [junit4]   2> java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
>    [junit4]   2>    at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:183) ~[java/:?]
>    [junit4]   2>    at org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) ~[java/:?]
>    [junit4]   2>    at org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) [java/:?]
>    [junit4]   2>    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
>    [junit4]   2> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
>    [junit4]   2>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:130) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
>    [junit4]   2>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
>    [junit4]   2>    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
>    [junit4]   2>    at org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:340) ~[java/:?]
>    [junit4]   2>    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) ~[java/:?]
>    [junit4]   2>    at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:340) ~[java/:?]
>    [junit4]   2>    at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:176) ~[java/:?]
>    [junit4]   2>    ... 3 more
>
> On Thu, May 3, 2018 at 1:37 PM, Policeman Jenkins Server wrote:
> > Error processing tokens: Error while parsing action
> > 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at
> > input position (line 79, pos 4):
> > )"}
> >    ^
> >
> > java.lang.OutOfMemoryError: Java heap space

--
Sincerely yours
Mikhail Khludnev
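For anyone following along: the cause check in the patch above looks only one level deep. A standalone sketch of walking the full cause chain instead (plain JDK exception types stand in for ZooKeeper's KeeperException.SessionExpiredException here, so this compiles without any ZooKeeper dependency):

```java
import java.io.IOException;

public class CauseChain {
    // Walk the cause chain looking for a given exception type, the same
    // test the OverseerTriggerThread patch applies to detect session expiry,
    // but covering causes nested more than one level deep.
    static boolean hasCause(Throwable t, Class<? extends Throwable> type) {
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (type.isInstance(c)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Simulate an IOException wrapping a "session expired"-style cause.
        Exception expired = new IllegalStateException("session expired");
        IOException wrapped = new IOException(expired);

        System.out.println(hasCause(wrapped, IllegalStateException.class)); // true
        System.out.println(hasCause(wrapped, NumberFormatException.class)); // false
    }
}
```

Whether to check only the immediate cause (as the patch does) or the whole chain is a judgment call; deeper wrapping does occur when state-manager calls re-wrap ZK exceptions.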
[jira] [Updated] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jan Høydahl updated SOLR-8207:
------------------------------
    Attachment: node-toggle-row-numdocs.png

> Modernise cloud tab on Admin UI
> ---
>
>                 Key: SOLR-8207
>                 URL: https://issues.apache.org/jira/browse/SOLR-8207
>             Project: Solr
>          Issue Type: Improvement
>          Components: Admin UI
>    Affects Versions: 5.3
>            Reporter: Upayavira
>            Assignee: Jan Høydahl
>            Priority: Major
>         Attachments: node-compact.png, node-details.png, node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was
> making real use of SolrCloud, and when we didn't really know the use-cases we
> would need to support. I would argue that, whilst they are pretty (and
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
> * promote the tree tab to top level
> * remove the graph views and the dump tab
> * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and
> their associated replicas/collections. From this view, it would be possible
> to add/remove replicas and to see the status of nodes. It would also be
> possible to filter nodes by status: "show me only up nodes", "show me nodes
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission
> node" option, that would ensure that no replicas on this node are leaders,
> and then remove all replicas from the node, ready for it to be removed from
> the cluster.
Re: [JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 575 - Still Unstable!
Endless loop (session expired):

   [junit4]   2> 1992793 ERROR (OverseerAutoScalingTriggerThread-72097539512664067-127.0.0.1:8983_solr-n_01) [] o.a.s.c.a.OverseerTriggerThread A ZK error has occurred
   [junit4]   2> java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
   [junit4]   2>    at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:183) ~[java/:?]
   [junit4]   2>    at org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83) ~[java/:?]
   [junit4]   2>    at org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131) [java/:?]
   [junit4]   2>    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
   [junit4]   2> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json
   [junit4]   2>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:130) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
   [junit4]   2>    at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
   [junit4]   2>    at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) ~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
   [junit4]   2>    at org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:340) ~[java/:?]
   [junit4]   2>    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) ~[java/:?]
   [junit4]   2>    at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:340) ~[java/:?]
   [junit4]   2>    at org.apache.solr.client.solrj.impl.ZkDistribStateManager.getAutoScalingConfig(ZkDistribStateManager.java:176) ~[java/:?]
   [junit4]   2>    ... 3 more

On Thu, May 3, 2018 at 1:37 PM, Policeman Jenkins Server wrote:
> Error processing tokens: Error while parsing action
> 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at
> input position (line 79, pos 4):
> )"}
>    ^
>
> java.lang.OutOfMemoryError: Java heap space
[jira] [Resolved] (LUCENE-8290) Keep soft deletes in sync with on-disk DocValues
[ https://issues.apache.org/jira/browse/LUCENE-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simon Willnauer resolved LUCENE-8290.
-------------------------------------
    Resolution: Fixed

> Keep soft deletes in sync with on-disk DocValues
>
>                 Key: LUCENE-8290
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8290
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 7.4, master (8.0)
>            Reporter: Simon Willnauer
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>
>         Attachments: LUCENE-8290.patch
>
>
> Today we pass on the doc values update to the PendingDeletes
> when it's applied. This might cause issues with a retention policy
> merge policy that will see a deleted document but not its value on
> disk.
> This change moves back the PendingDeletes callback to flush time
> in order to be consistent with what is actually updated on disk.
>
> This change also makes sure we write values to disk on flush that
> are in the reader pool as well as extra best effort checks to drop
> fully deleted segments on flush, commit and getReader.
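In case it helps readers unfamiliar with the feature: a soft delete marks a document as deleted by writing a doc-values field rather than physically removing it, so a retention merge policy can still read the document's values, which is why the delete marker and the on-disk values must be flushed together. A toy, self-contained model of the idea (plain Java, not Lucene's actual API; the real entry points are IndexWriterConfig.setSoftDeletesField and IndexWriter.softUpdateDocument):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SoftDeleteModel {
    // Each doc keeps a per-field value map; a marker field flags deletion
    // without removing the document, mirroring Lucene's doc-values approach.
    static final String SOFT_DELETES_FIELD = "__soft_deletes";
    final List<Map<String, Object>> docs = new ArrayList<>();

    int addDocument(Map<String, Object> fields) {
        docs.add(new HashMap<>(fields));
        return docs.size() - 1;
    }

    // Soft delete: the doc stays on "disk"; only the marker field changes.
    void softDelete(int docId) {
        docs.get(docId).put(SOFT_DELETES_FIELD, 1L);
    }

    boolean isLive(int docId) {
        return !docs.get(docId).containsKey(SOFT_DELETES_FIELD);
    }

    // A retention policy can still read values of soft-deleted docs, so the
    // marker and values must describe the same point in time.
    Object storedValue(int docId, String field) {
        return docs.get(docId).get(field);
    }

    public static void main(String[] args) {
        SoftDeleteModel index = new SoftDeleteModel();
        int id = index.addDocument(Map.of("title", "doc-1"));
        index.softDelete(id);
        System.out.println(index.isLive(id));               // false
        System.out.println(index.storedValue(id, "title")); // doc-1
    }
}
```

The bug fixed here is exactly the gap this model hides: if the marker is applied when the update arrives but the values only reach disk at flush, a merge policy sees a deleted doc whose value is missing.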
[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI
[ https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462317#comment-16462317 ]

Jan Høydahl commented on SOLR-8207:
-----------------------------------

It's just a simple table modelled after the one in the "Suggestions" tab. Yes, it would probably be better with a per-row detail view. The collection and core references are clickable and take you to that collection in the Collections tab.

> Modernise cloud tab on Admin UI
> ---
>
>                 Key: SOLR-8207
>                 URL: https://issues.apache.org/jira/browse/SOLR-8207
>             Project: Solr
>          Issue Type: Improvement
>          Components: Admin UI
>    Affects Versions: 5.3
>            Reporter: Upayavira
>            Assignee: Jan Høydahl
>            Priority: Major
>         Attachments: node-compact.png, node-details.png, nodes-tab-real.png, nodes-tab.png
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was
> making real use of SolrCloud, and when we didn't really know the use-cases we
> would need to support. I would argue that, whilst they are pretty (and
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
> * promote the tree tab to top level
> * remove the graph views and the dump tab
> * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and
> their associated replicas/collections. From this view, it would be possible
> to add/remove replicas and to see the status of nodes. It would also be
> possible to filter nodes by status: "show me only up nodes", "show me nodes
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission
> node" option, that would ensure that no replicas on this node are leaders,
> and then remove all replicas from the node, ready for it to be removed from
> the cluster.
[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 575 - Still Unstable!
Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 79, pos 4):

)"}
   ^

java.lang.OutOfMemoryError: Java heap space
[jira] [Commented] (LUCENE-8290) Keep soft deletes in sync with on-disk DocValues
[ https://issues.apache.org/jira/browse/LUCENE-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462307#comment-16462307 ]

ASF subversion and git services commented on LUCENE-8290:
----------------------------------------------------------

Commit 8fdd3d7584bcc23442d6256cca94da0dbf2ccc10 in lucene-solr's branch refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8fdd3d7 ]

LUCENE-8290: Keep soft deletes in sync with on-disk DocValues

Today we pass on the doc values update to the PendingDeletes
when it's applied. This might cause issues with a retention policy
merge policy that will see a deleted document but not its value on
disk.
This change moves back the PendingDeletes callback to flush time
in order to be consistent with what is actually updated on disk.

This change also makes sure we write values to disk on flush that
are in the reader pool as well as extra best effort checks to drop
fully deleted segments on flush, commit and getReader.

> Keep soft deletes in sync with on-disk DocValues
>
>                 Key: LUCENE-8290
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8290
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 7.4, master (8.0)
>            Reporter: Simon Willnauer
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>
>         Attachments: LUCENE-8290.patch
>
>
> Today we pass on the doc values update to the PendingDeletes
> when it's applied. This might cause issues with a retention policy
> merge policy that will see a deleted document but not its value on
> disk.
> This change moves back the PendingDeletes callback to flush time
> in order to be consistent with what is actually updated on disk.
>
> This change also makes sure we write values to disk on flush that
> are in the reader pool as well as extra best effort checks to drop
> fully deleted segments on flush, commit and getReader.
[jira] [Commented] (LUCENE-8290) Keep soft deletes in sync with on-disk DocValues
[ https://issues.apache.org/jira/browse/LUCENE-8290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16462305#comment-16462305 ]

ASF subversion and git services commented on LUCENE-8290:
----------------------------------------------------------

Commit 591fc6627acffdc75ce88feb8a912b3225b47f9d in lucene-solr's branch refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=591fc66 ]

LUCENE-8290: Keep soft deletes in sync with on-disk DocValues

Today we pass on the doc values update to the PendingDeletes
when it's applied. This might cause issues with a retention policy
merge policy that will see a deleted document but not its value on
disk.
This change moves back the PendingDeletes callback to flush time
in order to be consistent with what is actually updated on disk.

This change also makes sure we write values to disk on flush that
are in the reader pool as well as extra best effort checks to drop
fully deleted segments on flush, commit and getReader.

> Keep soft deletes in sync with on-disk DocValues
>
>                 Key: LUCENE-8290
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8290
>             Project: Lucene - Core
>          Issue Type: Bug
>    Affects Versions: 7.4, master (8.0)
>            Reporter: Simon Willnauer
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>
>         Attachments: LUCENE-8290.patch
>
>
> Today we pass on the doc values update to the PendingDeletes
> when it's applied. This might cause issues with a retention policy
> merge policy that will see a deleted document but not its value on
> disk.
> This change moves back the PendingDeletes callback to flush time
> in order to be consistent with what is actually updated on disk.
>
> This change also makes sure we write values to disk on flush that
> are in the reader pool as well as extra best effort checks to drop
> fully deleted segments on flush, commit and getReader.