[jira] [Commented] (SOLR-12922) Facet parser plugin for json.facet aka custom facet types
[ https://issues.apache.org/jira/browse/SOLR-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664713#comment-16664713 ]

Mikhail Khludnev commented on SOLR-12922:
-----------------------------------------

[~dsmiley], a facet parser handles facet types; this plugin lets users introduce their own facet types. [~mgibney], regarding range faceting: that test parser mimics what's been discussed in other jiras about range facets over values. Potential use cases are just a wide variety of user-specific facet handling logic. I suppose there are more people who can implement their own facet handling than there are who can build a patched Solr.

> Facet parser plugin for json.facet aka custom facet types
> ---------------------------------------------------------
>
>                 Key: SOLR-12922
>                 URL: https://issues.apache.org/jira/browse/SOLR-12922
>             Project: Solr
>          Issue Type: New Feature
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: Facet Module
>        Reporter: Mikhail Khludnev
>        Priority: Minor
>     Attachments: SOLR-12922.patch, SOLR-12922.patch
>
> Why not introduce a plugin for json facet parsers? Attaching a draft patch;
> it just demonstrates the idea. The test fails, iirc. Opinions?

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
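As context for what such a plugin might enable, here is a minimal illustrative sketch of a json.facet request that uses a plugin-registered custom facet type. This is not from the attached patch: the "ranges" type name, the "price" field, and the range bounds are all hypothetical placeholders.

```python
import json

# Hypothetical example only: "ranges" is NOT a stock json.facet type;
# it stands in for a facet type that a facet parser plugin (as proposed
# in SOLR-12922) might register under json.facet.
facet_request = {
    "prices": {
        "type": "ranges",          # plugin-registered type (hypothetical)
        "field": "price",          # placeholder field name
        "ranges": [                # "range facets over values", per the discussion
            {"from": 0, "to": 10},
            {"from": 10, "to": 100},
        ],
    }
}

# The JSON body Solr would receive under the top-level "facet" key:
body = json.dumps({"query": "*:*", "facet": facet_request}, indent=2)
print(body)
```

The point of the plugin is that the handler for `"type": "ranges"` would come from user code rather than from a patched Solr build.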
[jira] [Updated] (SOLR-12922) Facet parser plugin for json.facet aka custom facet types
[ https://issues.apache.org/jira/browse/SOLR-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-12922:
------------------------------------
    Summary: Facet parser plugin for json.facet aka custom facet types  (was: Facet parser plugin for json.facet)

> Facet parser plugin for json.facet aka custom facet types
> ---------------------------------------------------------
>
>                 Key: SOLR-12922
Re: [jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
I'll get back to you on that very shortly.

- Mark

On Thu, Oct 25, 2018 at 7:51 PM Erick Erickson wrote:
> Mark:
>
> All power to you of course. Procedurally, what is most helpful? My
> impression is that there's going to be a _lot_ of reorganization going
> on and what I don't quite understand is how others can be most
> helpful. IOW, will it be one of those things where you'd prefer to
> work alone for the preliminary restructuring then have everyone else
> pitch in? Or are there things that others could do to help with the
> initial push? If the latter, how do you want to divide-and-conquer?
>
> Best,
> Erick
>
> On Thu, Oct 25, 2018 at 3:02 PM Mark Miller (JIRA) wrote:
> >
> > [ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664333#comment-16664333 ]
> >
> > Mark Miller commented on SOLR-12801:
> >
> > [~dsmiley] - I've linked both those issues.
> >
> > They are probably outside the scope of what I'd focus on in my flurry of issues, because I'm going to own getting to a finish line so to speak and that is a deep well, but that is part of why I need a lot of help - there is a lot we have done and need to continue to do in terms of simplifying test development.
> >
> > I'm focusing more directly on the test failure rate issue here, but everything is really directly influencing that.
> >
> > My plan is to kind of be the supernova in the center of addressing the flaky tests, but I'll burn out long before I address everything we would like to be in a really ideal test land situation. What everyone else has been doing around tests and is currently doing is still going to be hugely important.
> >
> > > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
> > >
> > >                 Key: SOLR-12801
> > >                 URL: https://issues.apache.org/jira/browse/SOLR-12801
> > >             Project: Solr
> > >          Issue Type: Task
> > >  Security Level: Public (Default Security Level. Issues are Public)
> > >        Reporter: Mark Miller
> > >        Assignee: Mark Miller
> > >        Priority: Critical
> > >
> > > A single issue to counteract the single issue adding tons of annotations, the continued addition of new flaky tests, and the continued addition of flakiness to existing tests.
> > > Lots more to come.

--
- Mark

about.me/markrmiller
[GitHub] lucene-solr pull request #484: solr 7.5 suggest The recommended result is em...
GitHub user lunxianG opened a pull request:

    https://github.com/apache/lucene-solr/pull/484

    solr 7.5 suggest: the suggested result is empty

    In Solr 7.5 the suggester returns empty results; with this change you can specify your own dictionary.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/apache/lucene-solr jira/solr-12730

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/lucene-solr/pull/484.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #484

commit 3d91b8e27736b6a7456d710a467a7c723205d47a
Author: Andrzej Bialecki
Date:   2018-10-24T11:28:21Z

    Initial patch.
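For readers hitting empty suggester results, a minimal solrconfig.xml sketch of a file-backed suggester dictionary is shown below. This is not taken from the PR; the component name, suggester name, and file name are placeholders, though `name`, `lookupImpl`, `dictionaryImpl`, and `sourceLocation` are standard Solr suggester parameters.

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <!-- FileDictionaryFactory reads suggestions from a flat file
         (one suggestion per line, optionally tab-separated weight)
         instead of building the dictionary from an indexed field -->
    <str name="dictionaryImpl">FileDictionaryFactory</str>
    <str name="sourceLocation">suggestions.txt</str>
  </lst>
</searchComponent>
```

With a configuration like this, the suggester no longer depends on index contents, which is one way to rule out an empty dictionary as the cause of empty results.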
[JENKINS] Lucene-Solr-repro - Build # 1783 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1783/

[...truncated 33 lines...]
[repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2901/consoleText

[repro] Revision: 8d109393492924cdde9663b9b9c4da00daaae433

[repro] Repro line: ant test -Dtestcase=TestTlogReplica -Dtests.method=testKillLeader -Dtests.seed=29FA8A4C6F0BDF8C -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testVersionsAreReturned -Dtests.seed=B41FC1DA37B3017D -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=vi -Dtests.timezone=America/Indiana/Tell_City -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testParallelUpdateQTime -Dtests.seed=B41FC1DA37B3017D -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=vi -Dtests.timezone=America/Indiana/Tell_City -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: f33be7a172d7b4596530d8cb925ba6dd1f1f53f0
[repro] git fetch
[repro] git checkout 8d109393492924cdde9663b9b9c4da00daaae433
[...truncated 2 lines...]
[repro] git merge --ff-only
[...truncated 1 lines...]
[repro] ant clean
[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestTlogReplica
[repro]    solr/solrj
[repro]       CloudSolrClientTest
[repro] ant compile-test
[...truncated 3567 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestTlogReplica" -Dtests.showOutput=onerror -Dtests.seed=29FA8A4C6F0BDF8C -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk -Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[...truncated 128 lines...]
[repro] ant compile-test
[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror -Dtests.seed=B41FC1DA37B3017D -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=vi -Dtests.timezone=America/Indiana/Tell_City -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
[...truncated 1125 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestTlogReplica
[repro]   1/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout f33be7a172d7b4596530d8cb925ba6dd1f1f53f0
[...truncated 2 lines...]
[repro] Exiting with code 256
[...truncated 6 lines...]
[JENKINS] Lucene-Solr-BadApples-master-Linux (32bit/jdk1.8.0_172) - Build # 112 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/112/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseParallelGC

2 tests failed.

FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime

Error Message:
Error from server at https://127.0.0.1:43715/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update. HTTP ERROR 404. Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:43715/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update. HTTP ERROR 404. Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)
	at __randomizedtesting.SeedInfo.seed([85D8C62F0C85E636:6B00BDBE42FA33A5]:0)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemProp
[jira] [Commented] (SOLR-12866) Reproducing TestLocalFSCloudBackupRestore and TestHdfsCloudBackupRestore failures
[ https://issues.apache.org/jira/browse/SOLR-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664648#comment-16664648 ]

Varun Thacker commented on SOLR-12866:
--------------------------------------

I'll work on this in the next couple of days! Sorry it took me longer to get to this.

> Reproducing TestLocalFSCloudBackupRestore and TestHdfsCloudBackupRestore
> failures
> ------------------------------------------------------------------------
>
>                 Key: SOLR-12866
>                 URL: https://issues.apache.org/jira/browse/SOLR-12866
>             Project: Solr
>          Issue Type: Task
>  Security Level: Public (Default Security Level. Issues are Public)
>        Reporter: Steve Rowe
>        Assignee: Varun Thacker
>        Priority: Major
>
> From [https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/185/],
> both tests failed 10/10 iterations for me on branch_7x with the seed:
> {noformat}
> Checking out Revision 37fdcb02d87ec44293ec4942c75a3cb709c45418
> (refs/remotes/origin/branch_7x)
> [...]
>    [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestLocalFSCloudBackupRestore -Dtests.method=test -Dtests.seed=3CD4284489C09DB4 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk-MK -Dtests.timezone=Pacific/Kiritimati -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
>    [junit4] FAILURE 10.8s J2 | TestLocalFSCloudBackupRestore.test <<<
>    [junit4]    > Throwable #1: java.lang.AssertionError: Node 127.0.0.1:43864_solr has 3 replicas. Expected num replicas : 2.
> state:
>    [junit4]    > DocCollection(backuprestore_restored//collections/backuprestore_restored/state.json/9)={
>    [junit4]    >   "pullReplicas":0,
>    [junit4]    >   "replicationFactor":1,
>    [junit4]    >   "shards":{
>    [junit4]    >     "shard2":{
>    [junit4]    >       "range":"0-7fff",
>    [junit4]    >       "state":"active",
>    [junit4]    >       "replicas":{"core_node62":{
>    [junit4]    >           "core":"backuprestore_restored_shard2_replica_n61",
>    [junit4]    >           "base_url":"https://127.0.0.1:43864/solr",
>    [junit4]    >           "node_name":"127.0.0.1:43864_solr",
>    [junit4]    >           "state":"active",
>    [junit4]    >           "type":"NRT",
>    [junit4]    >           "force_set_state":"false",
>    [junit4]    >           "leader":"true"}},
>    [junit4]    >       "stateTimestamp":"1539459703266853250"},
>    [junit4]    >     "shard1_1":{
>    [junit4]    >       "range":"c000-",
>    [junit4]    >       "state":"active",
>    [junit4]    >       "replicas":{"core_node64":{
>    [junit4]    >           "core":"backuprestore_restored_shard1_1_replica_n63",
>    [junit4]    >           "base_url":"https://127.0.0.1:43864/solr",
>    [junit4]    >           "node_name":"127.0.0.1:43864_solr",
>    [junit4]    >           "state":"active",
>    [junit4]    >           "type":"NRT",
>    [junit4]    >           "force_set_state":"false",
>    [junit4]    >           "leader":"true"}},
>    [junit4]    >       "stateTimestamp":"1539459703266887720"},
>    [junit4]    >     "shard1_0":{
>    [junit4]    >       "range":"8000-bfff",
>    [junit4]    >       "state":"active",
>    [junit4]    >       "replicas":{"core_node66":{
>    [junit4]    >           "core":"backuprestore_restored_shard1_0_replica_n65",
>    [junit4]    >           "base_url":"https://127.0.0.1:43864/solr",
>    [junit4]    >           "node_name":"127.0.0.1:43864_solr",
>    [junit4]    >           "state":"active",
>    [junit4]    >           "type":"NRT",
>    [junit4]    >           "force_set_state":"false",
>    [junit4]    >           "leader":"true"}},
>    [junit4]    >       "stateTimestamp":"1539459703266910800"}},
>    [junit4]    >   "router":{
>    [junit4]    >     "name":"compositeId",
>    [junit4]    >     "field":"shard_s"},
>    [junit4]    >   "maxShardsPerNode":"-1",
>    [junit4]    >   "autoAddReplicas":"false",
>    [junit4]    >   "nrtReplicas":1,
>    [junit4]    >   "tlogReplicas":0}
>    [junit4]    > 	at __randomizedtesting.SeedInfo.seed([3CD4284489C09DB4:B480179E273CF04C]:0)
>    [junit4]    > 	at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.lambda$testBackupAndRestore$1(AbstractCloudBackupRestoreTestCase.java:339)
>    [junit4]    > 	at java.util.HashMap.forEach(HashMap.java:1289)
>    [junit4]    > 	at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:338)
>    [junit4]    > 	at org.apache.solr.cloud.api.collections.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:144)
>    [junit4]    > 	at org.apache.solr.cloud.api.collections.TestLocalFSCloudBackupRestore.test(TestLocalFSCloudBackupRestore.java:64)
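To make the quoted assertion concrete, here is an illustrative sketch (not Solr test code) of the per-node replica count the test verifies. The cluster-state excerpt is trimmed from the DocCollection dump above; only the fields needed for the count are kept.

```python
# Trimmed from the failure's cluster state: all three shards of the
# restored collection placed their single replica on the same node.
state = {
    "shards": {
        "shard2":   {"replicas": {"core_node62": {"node_name": "127.0.0.1:43864_solr"}}},
        "shard1_1": {"replicas": {"core_node64": {"node_name": "127.0.0.1:43864_solr"}}},
        "shard1_0": {"replicas": {"core_node66": {"node_name": "127.0.0.1:43864_solr"}}},
    }
}

# Count replicas per node, the quantity the test asserts on.
replicas_per_node = {}
for shard in state["shards"].values():
    for replica in shard["replicas"].values():
        node = replica["node_name"]
        replicas_per_node[node] = replicas_per_node.get(node, 0) + 1

# One node holds 3 replicas where the test expected at most 2 --
# exactly the AssertionError in the quoted failure.
print(replicas_per_node)
```

This shows why the failure points at replica placement rather than at backup/restore data integrity.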
[jira] [Assigned] (SOLR-12866) Reproducing TestLocalFSCloudBackupRestore and TestHdfsCloudBackupRestore failures
[ https://issues.apache.org/jira/browse/SOLR-12866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Thacker reassigned SOLR-12866:
------------------------------------
    Assignee: Varun Thacker

> Reproducing TestLocalFSCloudBackupRestore and TestHdfsCloudBackupRestore
> failures
> ------------------------------------------------------------------------
>
>                 Key: SOLR-12866
>                 URL: https://issues.apache.org/jira/browse/SOLR-12866
>             Project: Solr
>          Issue Type: Task
>  Security Level: Public (Default Security Level. Issues are Public)
>        Reporter: Steve Rowe
>        Assignee: Varun Thacker
>        Priority: Major
[jira] [Commented] (SOLR-12928) TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
[ https://issues.apache.org/jira/browse/SOLR-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664644#comment-16664644 ]

David Smiley commented on SOLR-12928:
-------------------------------------

Thank you; I would appreciate it if you beast this test class. It contains two test methods: testSliceRouting() (this issue) and test() (SOLR-12929), which is currently badapple'd.

> TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
> --------------------------------------------------------------------
>
>                 Key: SOLR-12928
>                 URL: https://issues.apache.org/jira/browse/SOLR-12928
>             Project: Solr
>          Issue Type: Test
>  Security Level: Public (Default Security Level. Issues are Public)
>      Components: SolrCloud
>        Reporter: David Smiley
>        Priority: Major
>     Attachments: testSliceRouting b23054.log.zip
>
> org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest#testSliceRouting
> fails 1% of time:
> [http://fucit.org/solr-jenkins-reports/failure-report.html]
[GitHub] lucene-solr pull request #483: Log Delete Query Processor custom solr compon...
Github user tirthmehta1994 commented on a diff in the pull request:

    https://github.com/apache/lucene-solr/pull/483#discussion_r228394789

    --- Diff: solr/server/resources/log4j2.xml ---
    @@ -67,6 +67,10 @@
    --- End diff --

    Yup, it should have been LogUpdateProcessorFactory, since the changes are planned in the LogUpdateProcessorFactory class itself. I have made the changes accordingly.
[GitHub] lucene-solr pull request #483: Log Delete Query Processor custom solr compon...
Github user tirthmehta1994 commented on a diff in the pull request:

    https://github.com/apache/lucene-solr/pull/483#discussion_r228394242

    --- Diff: solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java ---
    @@ -187,12 +190,23 @@ public void finish() throws IOException {
           log.info(getLogStringAndClearRspToLog());
         }
    +    if (deleteLog.isInfoEnabled()) {
    --- End diff --

    Any request that contains a delete.
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2979 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2979/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC

44 tests failed.

FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned

Error Message:
Error from server at http://127.0.0.1:36139/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update. HTTP ERROR 404. Problem accessing /solr/collection1_shard2_replica_n3/update. Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:36139/solr/collection1_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/collection1_shard2_replica_n3/update. HTTP ERROR 404. Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update (Powered by Jetty 9.4.11.v20180605)
	at __randomizedtesting.SeedInfo.seed([3E1323C79B2D1689:C6D5DAF208348E41]:0)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
	at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
	at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237)
	at org.apache.solr.client.solrj.impl.CloudSolrClientTest.testVersionsAreReturned(CloudSolrClientTest.java:725)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
[GitHub] lucene-solr issue #477: Block Expensive Queries custom component
Github user tirthmehta1994 commented on the issue:

    https://github.com/apache/lucene-solr/pull/477

    Hi @vthacker, I have made some changes; please do have a look.
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 877 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/877/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

9 tests failed.

FAILED:  org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader
null
Live Nodes: [127.0.0.1:49593_solr, 127.0.0.1:51953_solr, 127.0.0.1:58275_solr]
Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"raceDeleteReplica_false_shard1_replica_n1",
          "base_url":"http://127.0.0.1:62994/solr",
          "node_name":"127.0.0.1:62994_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node6":{
          "core":"raceDeleteReplica_false_shard1_replica_n5",
          "base_url":"http://127.0.0.1:62994/solr",
          "node_name":"127.0.0.1:62994_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:49593_solr, 127.0.0.1:51953_solr, 127.0.0.1:58275_solr]
Last available state: [...same DocCollection state as in the Error Message above...]
	at __randomizedtesting.SeedInfo.seed([2309F56A56D0DF54:491F94BA3E22959E]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
	at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334)
	at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
	at com.carrotsearch.r
[jira] [Commented] (SOLR-12868) Request forwarding for v2 API is broken
[ https://issues.apache.org/jira/browse/SOLR-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664526#comment-16664526 ] ASF subversion and git services commented on SOLR-12868: Commit 329252fb9e5fbf0f8bba64cc320e34de4b83fa81 in lucene-solr's branch refs/heads/branch_7x from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=329252f ] SOLR-12868: Request forwarding for v2 API is broken > Request forwarding for v2 API is broken > --- > > Key: SOLR-12868 > URL: https://issues.apache.org/jira/browse/SOLR-12868 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud, v2 API >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul >Priority: Major > Fix For: 7.6, master (8.0) > > > I was working with Noble Paul to investigate test failures seen in SOLR-12806 > where we found this issue. Due to a bug, replicas of a collection weren't > spread evenly so there were some nodes which did not have any replicas at > all. In such cases, when a v2 API call hits an empty node, it is not > forwarded to the right path on the remote node causing test failures. > e.g. a call to {{/c/collection/_introspect}} is forwarded as > {{http://127.0.0.1:63326/solr/collection1/_introspect?method=POST&wt=javabin&version=2&command=}} > and {{/c/collection1/abccdef}} is forwarded as > {{http://127.0.0.1:63326/solr/collection1/abccdef}} > In summary, a remote query for v2 API from an empty node is converted to a v1 > style call which may not be a valid path. We should forward v2 API calls > as-is without changing the paths. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12868) Request forwarding for v2 API is broken
[ https://issues.apache.org/jira/browse/SOLR-12868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664524#comment-16664524 ] ASF subversion and git services commented on SOLR-12868: Commit f33be7a172d7b4596530d8cb925ba6dd1f1f53f0 in lucene-solr's branch refs/heads/master from [~noble.paul] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f33be7a ] SOLR-12868: Request forwarding for v2 API is broken > Request forwarding for v2 API is broken > --- > > Key: SOLR-12868 > URL: https://issues.apache.org/jira/browse/SOLR-12868 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud, v2 API >Reporter: Shalin Shekhar Mangar >Assignee: Noble Paul >Priority: Major > Fix For: 7.6, master (8.0) > > > I was working with Noble Paul to investigate test failures seen in SOLR-12806 > where we found this issue. Due to a bug, replicas of a collection weren't > spread evenly so there were some nodes which did not have any replicas at > all. In such cases, when a v2 API call hits an empty node, it is not > forwarded to the right path on the remote node causing test failures. > e.g. a call to {{/c/collection/_introspect}} is forwarded as > {{http://127.0.0.1:63326/solr/collection1/_introspect?method=POST&wt=javabin&version=2&command=}} > and {{/c/collection1/abccdef}} is forwarded as > {{http://127.0.0.1:63326/solr/collection1/abccdef}} > In summary, a remote query for v2 API from an empty node is converted to a v1 > style call which may not be a valid path. We should forward v2 API calls > as-is without changing the paths. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
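[Editorial note] The SOLR-12868 description above says a v2 call hitting an empty node is rewritten into a v1-style `/solr/<collection>/<rest>` URL, which may not be a valid path. The sketch below is a hypothetical illustration of that rewrite and of the proposed fix (forward the path unchanged); the function names and logic are assumptions for illustration, not Solr's actual forwarding code.

```python
def forward_buggy(v2_path: str, remote: str) -> str:
    """Rewrite a v2 path like /c/<collection>/<rest> into a v1-style
    /solr/<collection>/<rest> URL -- the faulty behavior the issue reports."""
    parts = v2_path.lstrip("/").split("/", 2)  # e.g. ["c", "collection1", "_introspect"]
    collection = parts[1]
    rest = parts[2] if len(parts) > 2 else ""
    return "http://%s/solr/%s/%s" % (remote, collection, rest)

def forward_fixed(v2_path: str, remote: str) -> str:
    """Forward the v2 path to the remote node unchanged, as the issue proposes."""
    return "http://%s%s" % (remote, v2_path)
```

With the inputs from the report, `forward_buggy("/c/collection1/_introspect", "127.0.0.1:63326")` produces `http://127.0.0.1:63326/solr/collection1/_introspect` (query parameters omitted here), matching the bad URL quoted above, while `forward_fixed` leaves `/c/collection1/_introspect` intact.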
[jira] [Commented] (SOLR-12928) TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
[ https://issues.apache.org/jira/browse/SOLR-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664482#comment-16664482 ] Erick Erickson commented on SOLR-12928: --- David: I have a spare machine if you'd like me to beast this test to gather failures and/or try patches. I suppose I'll beast it tonight just to see if I can repro since you're working on it. Oh, and what's the difference between this one and SOLR-12929? Erick > TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time > > > Key: SOLR-12928 > URL: https://issues.apache.org/jira/browse/SOLR-12928 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Priority: Major > Attachments: testSliceRouting b23054.log.zip > > > org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest#testSliceRouting > fails 1% of time: > [http://fucit.org/solr-jenkins-reports/failure-report.html]
Re: [jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
Mark: All power to you of course. Procedurally, what is most helpful? My impression is that there's going to be a _lot_ of reorganization going on and what I don't quite understand is how others can be most helpful. IOW, will it be one of those things where you'd prefer to work alone for the preliminary restructuring then have everyone else pitch in? Or are there things that others could do to help with the initial push? If the latter, how do you want to divide-and-conquer? Best, Erick On Thu, Oct 25, 2018 at 3:02 PM Mark Miller (JIRA) wrote: > > > [ > https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664333#comment-16664333 > ] > > Mark Miller commented on SOLR-12801: > > > [~dsmiley] - I've linked both those issues. > > They are probably outside the scope of what I'd focus on in my flurry of > issues, because I'm going to own getting to a finish line so to speak and > that is a deep well, but that is part of why I need a lot of help - there is > a lot we have done and need to continue to do in terms of simplifying test > development. > > I'm focusing more directly on the test failure rate issue here, but > everything is really directly influencing that. > > My plan is to kind of be the super nova in the center of addressing the flaky > tests, but I'll burn out long before I address everything we would like to be > in a really ideal test land situation. What everyone else has been doing > around tests and is currently doing is still going to be hugely important. > > > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > > test development. > > > > > > Key: SOLR-12801 > > URL: https://issues.apache.org/jira/browse/SOLR-12801 > > Project: Solr > > Issue Type: Task > > Security Level: Public(Default Security Level. 
Issues are Public) > >Reporter: Mark Miller > >Assignee: Mark Miller > >Priority: Critical > > > > A single issue to counteract the single issue adding tons of annotations, > > the continued addition of new flakey tests, and the continued addition of > > flakiness to existing tests. > > Lots more to come. > > > > -- > This message was sent by Atlassian JIRA > (v7.6.3#76005) > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664477#comment-16664477 ] Varun Thacker commented on SOLR-12931: -- Fair enough. Move all 3 classes to - org.apache.solr.search ? > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[JENKINS] Lucene-Solr-repro - Build # 1780 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1780/ [...truncated 32 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/975/consoleText [repro] Revision: dbdc2547fd6303a2a768aea538bebe8130a16f7e [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=raceConditionOnDeleteAndRegisterReplica -Dtests.seed=AE05E79B7C9B3838 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-EG -Dtests.timezone=America/Edmonton -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=deleteLiveReplicaTest -Dtests.seed=AE05E79B7C9B3838 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-EG -Dtests.timezone=America/Edmonton -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=DeleteReplicaTest -Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=AE05E79B7C9B3838 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-EG -Dtests.timezone=America/Edmonton -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=IndexSizeTriggerTest -Dtests.method=testMaxOps -Dtests.seed=AE05E79B7C9B3838 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=lv-LV -Dtests.timezone=Africa/Khartoum -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testParallelUpdateQTime -Dtests.seed=BF72387DECEE95A3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=he -Dtests.timezone=Asia/Rangoon -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 8d109393492924cdde9663b9b9c4da00daaae433 [repro] git fetch [repro] git checkout dbdc2547fd6303a2a768aea538bebe8130a16f7e [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] 
[repro] Test suites by module: [repro]solr/solrj [repro] CloudSolrClientTest [repro]solr/core [repro] DeleteReplicaTest [repro] IndexSizeTriggerTest [repro] ant compile-test [...truncated 2716 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror -Dtests.seed=BF72387DECEE95A3 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=he -Dtests.timezone=Asia/Rangoon -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 501 lines...] [repro] Setting last failure code to 256 [repro] ant compile-test [...truncated 1352 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 -Dtests.class="*.DeleteReplicaTest|*.IndexSizeTriggerTest" -Dtests.showOutput=onerror -Dtests.seed=AE05E79B7C9B3838 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-EG -Dtests.timezone=America/Edmonton -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [...truncated 142 lines...] [repro] Failures: [repro] 0/5 failed: org.apache.solr.cloud.DeleteReplicaTest [repro] 0/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest [repro] 1/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest [repro] git checkout 8d109393492924cdde9663b9b9c4da00daaae433 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
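[Editorial note] The Lucene-Solr-repro job above works by scraping the "Repro line:" entries out of the upstream Jenkins console log, checking out the reported revision, and rerunning each `ant test` command several times. The snippet below is a simplified, assumed illustration of that extraction step, not the project's actual repro script.

```python
import re

def extract_repro_lines(console_text: str) -> list:
    """Return every 'ant test ...' command that follows a 'Repro line:' tag
    in a Jenkins console log; each carries the seed, locale, and timezone
    needed to re-run the failure faithfully."""
    pattern = re.compile(r"Repro line:\s*(ant test .+)")
    return [m.group(1).strip() for m in pattern.finditer(console_text)]
```

Feeding this the `consoleText` of a failed build yields one command per failing test, which can then be rerun (e.g. five times, as in the `-Dtests.dups=5` invocations above) to decide whether the failure reproduces.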
[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664405#comment-16664405 ] Anshum Gupta commented on SOLR-12931: - It might confuse users in terms of too many packages to look at and figure out where things are. I would like to consolidate them but not in their own package, that possibly then gets split into unit vs integration tests. > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664399#comment-16664399 ] Varun Thacker commented on SOLR-12931: -- Nothing else would be added in the future, but it helps new devs understand where a feature's tests live. Today the same feature's tests are spread across three packages. Do you think isolating packages for dedicated features is a bad idea? Overkill? > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2978 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2978/ Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC 30 tests failed. FAILED: org.apache.solr.cloud.SplitShardTest.doTest Error Message: Error from server at https://127.0.0.1:41113/solr: Could not find collection : splitshardtest-collection Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:41113/solr: Could not find collection : splitshardtest-collection at __randomizedtesting.SeedInfo.seed([24FC63FFF0AB5EFE:83B8DB5B9D104D47]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.cloud.SplitShardTest.doTest(SplitShardTest.java:64) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664383#comment-16664383 ] Anshum Gupta commented on SOLR-12931: - I'm not sure if we should move this test to its own package. What else do you see becoming a part of the package in the longer run? Just trying to see what I'm missing here :) > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[jira] [Commented] (LUCENE-8524) Nori (Korean) analyzer tokenization issues
[ https://issues.apache.org/jira/browse/LUCENE-8524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664372#comment-16664372 ] Lucene/Solr QA commented on LUCENE-8524: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 24s{color} | {color:red} nori in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 3m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | lucene.analysis.ko.dict.TestTokenInfoDictionary | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | LUCENE-8524 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12945190/LUCENE-8524.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 8d10939 | | ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 | | Default Java | 1.8.0_172 | | unit | https://builds.apache.org/job/PreCommit-LUCENE-Build/112/artifact/out/patch-unit-lucene_analysis_nori.txt | | Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/112/testReport/ | | modules | C: lucene/analysis/nori U: lucene/analysis/nori | | Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/112/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Nori (Korean) analyzer tokenization issues > -- > > Key: LUCENE-8524 > URL: https://issues.apache.org/jira/browse/LUCENE-8524 > Project: Lucene - Core > Issue Type: Bug > Components: modules/analysis >Reporter: Trey Jones >Priority: Major > Attachments: LUCENE-8524.patch > > > I opened this originally as an [Elastic > bug|https://github.com/elastic/elasticsearch/issues/34283#issuecomment-426940784], > but was asked to re-file it here. (Sorry for the poor formatting. > "pre-formatted" isn't behaving.) 
> *Elastic version* > { > "name" : "adOS8gy", > "cluster_name" : "elasticsearch", > "cluster_uuid" : "GVS7gpVBQDGwtHl3xnJbLw", > "version" : { > "number" : "6.4.0", > "build_flavor" : "default", > "build_type" : "deb", > "build_hash" : "595516e", > "build_date" : "2018-08-17T23:18:47.308994Z", > "build_snapshot" : false, > "lucene_version" : "7.4.0", > "minimum_wire_compatibility_version" : "5.6.0", > "minimum_index_compatibility_version" : "5.0.0" > }, > "tagline" : "You Know, for Search" > } > *Plugins installed:* [analysis-icu, analysis-nori] > *JVM version:* > openjdk version "1.8.0_181" > OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1~deb9u1-b13) > OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode) > *OS version:* > Linux vagrantes6 4.9.0-6-amd64 #1 SMP Debian 4.9.82-1+deb9u3 (2018-03-02) > x86_64 GNU/Linux > *Description of the problem including expected versus actual behavior:* > I've uncovered a number of oddities in tokenization in the Nori analyzer. All > examples are from [Korean Wikipedia|https://ko.wikipedia.org/] or [Korean > Wiktionary|https://ko.wiktionary.org/] (including non-CJK examples). In rough > order of importance: > A. Tokens are split on different character POS types (which seem to not quite
[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664371#comment-16664371 ] Varun Thacker commented on SOLR-12931: -- Context for the question: I started looking at SOLR-12906 and DelayingSearchComponent caught my eye as it was under the ( org.apache.solr.search ) package. After spending a couple of minutes I figured this has to do with tests around ExitableDirectoryReader. Let's use ExitableDirectoryReader as the feature in context that we wanted to test - we wrote a test for single core and then a cloud test. I think this is a typical pattern where the single core test can be more exhaustive, but we want to write a cloud test to make sure it works in the distributed case as well. Both are integration tests. DelayingSearchComponent is the helper class and is in the ( org.apache.solr.search ) package. CloudExitableDirectoryReaderTest is the test that uses this and is in the ( org.apache.solr.cloud ) package. ExitableDirectoryReaderTest is the single core test for this in the ( org.apache.solr.core ) package. Proposing that we move these three classes to their own package as a first pass. When we make a clear distinction of mock vs integration tests we could move this package under integration. > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[jira] [Commented] (LUCENE-8534) Another case of Polygon tessellator going into an infinite loop
[ https://issues.apache.org/jira/browse/LUCENE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664370#comment-16664370 ] Lucene/Solr QA commented on LUCENE-8534: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 56s{color} | {color:green} sandbox in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 18s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | LUCENE-8534 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12945606/LUCENE-8534.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 8d10939 | | ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 | | Default Java | 1.8.0_172 | | Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/111/testReport/ | | modules | C: lucene/sandbox U: lucene/sandbox | | Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/111/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Another case of Polygon tessellator going into an infinite loop > --- > > Key: LUCENE-8534 > URL: https://issues.apache.org/jira/browse/LUCENE-8534 > Project: Lucene - Core > Issue Type: Bug > Components: modules/sandbox >Reporter: Ignacio Vera >Priority: Major > Attachments: LUCENE-8534.patch, LUCENE-8534.patch, LUCENE-8534.patch, > bigPolygon.wkt, image-2018-10-19-12-25-07-849.png > > > Related to LUCENE-8454, another case where tesselator never returns when > processing a polygon. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12931) Move Solr'ExitableDirectoryReader
Varun Thacker created SOLR-12931: Summary: Move Solr'ExitableDirectoryReader Key: SOLR-12931 URL: https://issues.apache.org/jira/browse/SOLR-12931 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Reporter: Varun Thacker
[jira] [Updated] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package
[ https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-12931: - Summary: Move Solr's ExitableDirectoryReader test to it's own package (was: Move Solr'ExitableDirectoryReader ) > Move Solr's ExitableDirectoryReader test to it's own package > > > Key: SOLR-12931 > URL: https://issues.apache.org/jira/browse/SOLR-12931 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Priority: Major >
[JENKINS] Lucene-Solr-Tests-master - Build # 2901 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2901/ 3 tests failed. FAILED: org.apache.solr.cloud.TestTlogReplica.testKillLeader Error Message: Replica core_node3 not up to date after 10 seconds expected:<1> but was:<0> Stack Trace: java.lang.AssertionError: Replica core_node3 not up to date after 10 seconds expected:<1> but was:<0> at __randomizedtesting.SeedInfo.seed([29FA8A4C6F0BDF8C:60EC7EF80DB04BDA]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.TestTlogReplica.waitForNumDocsInAllReplicas(TestTlogReplica.java:777) at org.apache.solr.cloud.TestTlogReplica.waitForNumDocsInAllReplicas(TestTlogReplica.java:765) at org.apache.solr.cloud.TestTlogReplica.doReplaceLeader(TestTlogReplica.java:386) at org.apache.solr.cloud.TestTlogReplica.testKillLeader(TestTlogReplica.java:324) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.r
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664333#comment-16664333 ] Mark Miller commented on SOLR-12801: [~dsmiley] - I've linked both those issues. They are probably outside the scope of what I'd focus on in my flurry of issues, because I'm going to own getting to a finish line so to speak and that is a deep well, but that is part of why I need a lot of help - there is a lot we have done and need to continue to do in terms of simplifying test development. I'm focusing more directly on the test failure rate issue here, but everything is really directly influencing that. My plan is to kind of be the supernova at the center of addressing the flaky tests, but I'll burn out long before I address everything we would like to be in a really ideal test land situation. What everyone else has been doing around tests and is currently doing is still going to be hugely important. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flaky tests, and the continued addition of > flakiness to existing tests. > Lots more to come. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 192 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/192/ 1 tests failed. FAILED: org.apache.solr.cloud.TestAuthenticationFramework.testBasics Error Message: Error from server at https://127.0.0.1:42279/solr/testcollection_shard1_replica_n3: Expected mime type application/octet-stream but got text/html.Error 404 Can not find: /solr/testcollection_shard1_replica_n3/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason: Can not find: /solr/testcollection_shard1_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:42279/solr/testcollection_shard1_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 Can not find: /solr/testcollection_shard1_replica_n3/update HTTP ERROR 404 Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason: Can not find: /solr/testcollection_shard1_replica_n3/updatehttp://eclipse.org/jetty";>Powered by Jetty:// 9.4.11.v20180605 at __randomizedtesting.SeedInfo.seed([DA848156A08B285A:E75C2F7A9865762A]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:237) at org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:125) at org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:76) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664330#comment-16664330 ] Varun Thacker commented on SOLR-12057: -- Hi Amrit, Some feedback on CdcrUpdateProcessor * Can we add some javadocs as to what this update processor wants to achieve? * Do we still need to override versionAdd / versionDelete / versionDeleteByQuery? * It would be nice to add some basic docs to the {{filterParams}} method to indicate what it's trying to filter etc. On CdcrReplicaTypesTest * {{//.withProperty("solr.directoryFactory", "solr.StandardDirectoryFactory")}} - Can we remove this comment? * Is {{testTlogReplica}} meant to only have tlog replicas? The create collection uses a combination of nrtReplicas and tlogReplicas so I'm trying to understand the motivation here * "Not really, we can remove this safely, from, all tests; 2 sec sleep is for loading the Cdcr components and avoiding potentially few retries." - You mentioned this but the patch still has a 2s delay * {{int batchSize = (TEST_NIGHTLY ? 100 : 10);}} - does batchSize represent numBatches? 100 seems to be the batch size in the inner loop From a design perspective: Given the improvements you've made with the patch, are we in a position to roll up this block from CdcrUpdateProcessor into DistributedUpdateProcessor? If yes then we would get CDCR to work even without them having to add an UpdateProcessor? We could keep CdcrUpdateProcessor as is for backward compat but remove references to it from the docs {code:java} if (params.get(CDCR_UPDATE) != null) { result.set(CDCR_UPDATE, ""); result.set(CommonParams.VERSION_FIELD, params.get(CommonParams.VERSION_FIELD)); }{code} > CDCR does not replicate to Collections with TLOG Replicas > - > > Key: SOLR-12057 > URL: https://issues.apache.org/jira/browse/SOLR-12057 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Webster Homer >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, > SOLR-12057.patch, SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, > cdcr-fail-with-tlog-pull.patch > > > We created a collection using TLOG replicas in our QA clouds. > We have a locally hosted solrcloud with 2 nodes, all our collections have 2 > shards. We use CDCR to replicate the collections from this environment to 2 > data centers hosted in Google cloud. This seems to work fairly well for our > collections with NRT replicas. However the new TLOG collection has problems. > > The google cloud solrclusters have 4 nodes each (3 separate Zookeepers). 2 > shards per collection with 2 replicas per shard. > > We never see data show up in the cloud collections, but we do see tlog files > show up on the cloud servers. I can see that all of the servers have cdcr > started, buffers are disabled. > The cdcr source configuration is: > > "requestHandler":{"/cdcr":{ > "name":"/cdcr", > "class":"solr.CdcrRequestHandler", > "replica":[ > { > > "zkHost":"[xxx-mzk01.sial.com:2181|http://xxx-mzk01.sial.com:2181/],[xxx-mzk02.sial.com:2181|http://xxx-mzk02.sial.com:2181/],[xxx-mzk03.sial.com:2181/solr|http://xxx-mzk03.sial.com:2181/solr]";, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}, > { > > "zkHost":"[-mzk01.sial.com:2181|http://-mzk01.sial.com:2181/],[-mzk02.sial.com:2181|http://-mzk02.sial.com:2181/],[-mzk03.sial.com:2181/solr|http://-mzk03.sial.com:2181/solr]";, > "source":"b2b-catalog-material-180124T", > "target":"b2b-catalog-material-180124T"}], > "replicator":{ > "threadPoolSize":4, > "schedule":500, > "batchSize":250}, > "updateLogSynchronizer":\{"schedule":6 > > The target configurations in the 2 clouds are the same: > "requestHandler":{"/cdcr":{ "name":"/cdcr", > "class":"solr.CdcrRequestHandler", 
"buffer":{"defaultState":"disabled"}}} > > All of our collections have a timestamp field, index_date. In the source > collection all the records have a date of 2/28/2018 but the target > collections have a latest date of 1/26/2018 > > I don't see cdcr errors in the logs, but we use logstash to search them, and > we're still perfecting that. > > We have a number of similar collections that behave correctly. This is the > only collection that is a TLOG collection. It appears that CDCR doesn't > support TLOG collections. > > It looks like the data is getting to the target servers. I see tlog files > with the right timestamps.
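The {code} block quoted in the CDCR review comment above can be illustrated with a small self-contained sketch. This is only an illustration of the pass-through idea, not Solr code: a plain `Map` stands in for Solr's `SolrParams`/`ModifiableSolrParams`, and the two constant values are assumptions for the sake of runnability.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the CDCR_UPDATE pass-through that the quoted
// CdcrUpdateProcessor block performs. A Map replaces Solr's SolrParams /
// ModifiableSolrParams so the sketch is runnable on its own; the constant
// values below are assumptions, not verified Solr constants.
public class CdcrParamFilterSketch {
    static final String CDCR_UPDATE = "cdcr.update";
    static final String VERSION_FIELD = "_version_";

    // If the incoming request is flagged as a CDCR update, forward only the
    // CDCR flag and the original version so the target preserves versions;
    // everything else is dropped from the forwarded params.
    static Map<String, String> filterParams(Map<String, String> params) {
        Map<String, String> result = new HashMap<>();
        if (params.get(CDCR_UPDATE) != null) {
            result.put(CDCR_UPDATE, "");
            result.put(VERSION_FIELD, params.get(VERSION_FIELD));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> incoming = new HashMap<>();
        incoming.put(CDCR_UPDATE, "");
        incoming.put(VERSION_FIELD, "1234");
        incoming.put("unrelated", "x");
        System.out.println(filterParams(incoming));
    }
}
```

Folding logic of this shape into DistributedUpdateProcessor, as suggested above, would make the behavior unconditional on configuring an extra update processor, with the flag check keeping it a no-op for non-CDCR requests.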
[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664323#comment-16664323 ] Mark Miller commented on SOLR-12930: [~ctargett], [~hossman] - either of you know how to or have the power to get us a Solr Developer space in the cwiki? > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664319#comment-16664319 ] Anshum Gupta commented on SOLR-12930: - That sounds reasonable (y) > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664315#comment-16664315 ] Mark Miller edited comment on SOLR-12930 at 10/25/18 9:33 PM: -- Yeah, the bulk of the good stuff will probably come down the road, but we could use really even just the basics for what we have now. When it's time to write a Solr or SolrCloud test as a new dev, it's really quite challenging to understand anything or get started. You just find something and copy and push off in some direction. Even now, those that know how to write good tests, what style of SolrCloud test to write, that you have to beast tests, how to beast them etc, have a lot to share in terms of best practices with less knowledgeable or newer devs I'd bet. What unit and integration tests are, why we need both, how to write both etc (it's going to get easier, but you can still do some of this today) So a lot to do later, but a ton we could do now. was (Author: markrmil...@gmail.com): Yeah, the bulk of the good stuff will probably come down the road, but we could use really even just the basics for what we have now. When it's time to write a Solr or SolrCloud test as a new dev, it's really quite challenging to understand anything or get started. You just find something and copy and push off in some direction. Even know, those that know how to right good tests, what style of SolrCloud test to write, that you have to beast tests, how to beast them etc. What a unit and integration tests is, why we need both, how to write both etc (it's going to get easier, but you can still do some of this today) So a lot to do later, but a ton we could do now. > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. 
Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664315#comment-16664315 ] Mark Miller commented on SOLR-12930: Yeah, the bulk of the good stuff will probably come down the road, but we could use really even just the basics for what we have now. When it's time to write a Solr or SolrCloud test as a new dev, it's really quite challenging to understand anything or get started. You just find something and copy and push off in some direction. Even now, those that know how to write good tests, what style of SolrCloud test to write, that you have to beast tests, how to beast them etc. What unit and integration tests are, why we need both, how to write both etc (it's going to get easier, but you can still do some of this today) So a lot to do later, but a ton we could do now. > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Tests
Yeah, there has been a lot of chipping for a very long time :) Many valiant attempts have been made to get ahold of this problem. I've been part of some of them. It's like zombie whack-a-mole. That is why you could almost read my email as, oh, again the tests, sure, that sounds exciting, no one has ever tried to fix the tests before, see you next year. But this time is different. If I get even a quarter of the promised help I've already heard about, we will get to solid tests this time, and we will add support to enforce we stay there. I'm already further along the path than it looks, but that will become more clear soon. - Mark On Thu, Oct 25, 2018 at 11:55 AM Cassandra Targett wrote: > Hopefully they don't mind too much that I'm speaking for them, but on > behalf of myself and the 7 other committers that make up the Solr Team at > Lucidworks, we are also in and eager to help. > > We've been trying to chip away at the problem for the past year+, but we > need a systemic change to really make it better. I will do what I can to > delay or defer some of our Lucidworks planned tasks to free up as much time > for people as possible. > > On Thu, Oct 25, 2018 at 1:35 AM Anshum Gupta wrote: > >> +1 Mark! I’m in. >> >> Anshum >> >> >> On Oct 24, 2018, at 11:05 PM, Mark Miller wrote: >> >> My boss has indicated I'm going to get a little time to spend on >> addressing our test situation. Turns out the current downward trend >> situation is a little worrying to some that count on the project down the >> road ;) >> >> We can finally dig out though, I promise. >> >> It's going to be a little bit before I'm clear and gathering steam, but >> I've got this figured out. I'm finally playing checkers instead of rock / >> paper / scissors. >> >> I'm going to eat a lot of work to get there and I promise I can get >> there, but I'm going to need some help. 
>> >> If you have an interest in changing our test situation, and I know some >> of you do, please join me as I invite others to help out more in a short >> time. >> >> Beyond some help with code effort though, please get involved in >> discussions and JIRA issues around changing our test culture and process. >> We need to turn this corner as a group, we need to come together on some >> new behaviors, we need to generate large enough buy-in to do something >> lasting. >> >> You all need to learn 'ant beast' (I'm fixing it to work *much* better) >> and my beasting test script gist (each has its place). I'm going to force >> it on you even if you don't ;) >> >> >> https://gist.githubusercontent.com/markrmiller/dbdb792216dc98b018ad/raw/cd084d01f405fa271af4b72a2848dc87374a1f71/gistfile1.sh >> >> For a long time I put most of my attention towards SolrCloud stability >> and scale. From 2012-2015 or so, that felt like the biggest threat to the >> project while also getting relatively little attention. I felt like it was >> 50/50 in 2014 that I'd even be talking to anyone about SolrCloud today. >> That hurdle is done and over with. The biggest threat to Solr and SolrCloud >> now is our tests. >> >> I'll be working from this umbrella issue: >> >> https://issues.apache.org/jira/browse/SOLR-12801 : Fix the tests, remove >> BadApples and AwaitsFix annotations, improve env for test development. >> >> Join the discussion. >> >> - Mark >> -- >> - Mark >> about.me/markrmiller >> >> >> -- - Mark about.me/markrmiller
[jira] [Commented] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664306#comment-16664306 ] Anshum Gupta commented on SOLR-12930: - [~markrmil...@gmail.com] - we'd do this once we've done or decided on how/what to do with most of the other stuff, right? i.e. decided and split the unit/integration tests, fix the framework etc. > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12930) Add great developer documentation for writing tests.
Mark Miller created SOLR-12930: -- Summary: Add great developer documentation for writing tests. Key: SOLR-12930 URL: https://issues.apache.org/jira/browse/SOLR-12930 Project: Solr Issue Type: Sub-task Security Level: Public (Default Security Level. Issues are Public) Reporter: Mark Miller -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12930) Add great developer documentation for writing tests.
[ https://issues.apache.org/jira/browse/SOLR-12930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-12930: --- Component/s: Tests > Add great developer documentation for writing tests. > > > Key: SOLR-12930 > URL: https://issues.apache.org/jira/browse/SOLR-12930 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664300#comment-16664300 ] Mark Miller commented on SOLR-12801: bq. I suppose it doesn't fit within the scope of this issue though. I think it's very much in scope! TestHarness would be great to lose IMO. Also in scope is finishing moving cloud tests off the old inheritance pattern and using MiniSolrCloudCluster. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flaky tests, and the continued addition of > flakiness to existing tests. > Lots more to come. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12921) Separate Solr unit tests and integration tests.
[ https://issues.apache.org/jira/browse/SOLR-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664294#comment-16664294 ] Mark Miller commented on SOLR-12921: bq. WDYT about changing precommit I think to start we just want some separation. You want to be able to glance at a commit and understand if it's adding unit tests, integration tests, or both. There are shades of gray, but we all deal with that color constantly. I'd prefer we had it so separate that you did ant test or ant integration test, but that's not so important. I think it would be most useful and least intrusive to just separate them in the source tree initially. It would not have to be done all in one shot. One simple idea would be that unit tests stay in the current package and integration tests go in currentpackage.integrationtest. I'm not sure what is best. I just know the current situation where everyone mostly just keeps adding huge integration tests means that when they sometimes fail, no one looks at them. We have a huge unit test deficit. This won't fix that, but it's part of a larger solution you can see taking shape in linked JIRAs. Much better out-of-the-box mocks will be another piece. > Separate Solr unit tests and integration tests. > --- > > Key: SOLR-12921 > URL: https://issues.apache.org/jira/browse/SOLR-12921 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > > We basically just have "tests" now. We should have separate locations for > unit and integration tests and new work should have a good reason to not > include both. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
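The package convention floated in the comment above (unit tests stay in the current package, integration tests move to a `.integrationtest` subpackage) could be recognized mechanically by build tooling with nothing more than a name check. The sketch below is only an illustration of that idea; the class and package names are made up and nothing like this exists in the Solr build:

```java
// Illustrative only: classify a test class as unit vs integration purely by
// the hypothetical ".integrationtest" subpackage convention suggested above.
// A build could use such a predicate to run "ant test" and "ant
// integration-test" over disjoint sets without annotations.
public class TestKindSketch {
    static boolean isIntegrationTest(String fullyQualifiedClassName) {
        return fullyQualifiedClassName.contains(".integrationtest.");
    }

    public static void main(String[] args) {
        // Hypothetical class names, used only to demonstrate the split.
        System.out.println(isIntegrationTest("org.apache.solr.cloud.integrationtest.RecoveryIT"));
        System.out.println(isIntegrationTest("org.apache.solr.cloud.RecoveryStrategyTest"));
    }
}
```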
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795: Attachment: (was: SOLR-12795.patch) > Introduce 'limit' parameter in FacetStream. > --- > > Key: SOLR-12795 > URL: https://issues.apache.org/jira/browse/SOLR-12795 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Amrit Sarkar >Priority: Major > Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch > > > Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc > says about this parameter - The number of buckets to include. This value is > applied to each dimension. > Now let's say we create a facet stream with 3 nested facets. For example > "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. > FacetStream would return 10 results to us for this facet expression while the > total number of unique values is 1000 (10*10*10) > The API should have a separate parameter "limit" which limits the number of > tuples (say 500) while bucketSizeLimit should be used to specify the size of > each bucket in the JSON Facet API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
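The arithmetic in the SOLR-12795 description above (3 nested dimensions with bucketSizeLimit 10 giving 10*10*10 = 1000 possible tuples) can be sketched in a few lines. This is plain combinatorics illustrating why bucketSizeLimit compounds across nested facet dimensions, not FacetStream code:

```java
// Illustration of why a per-dimension bucketSizeLimit compounds: with d
// nested facet dimensions, the worst-case tuple count is bucketSizeLimit^d,
// which motivates a separate overall "limit" parameter on the stream.
public class FacetTupleCountSketch {
    static long maxTuples(int bucketSizeLimit, int dimensions) {
        long total = 1;
        for (int i = 0; i < dimensions; i++) {
            total *= bucketSizeLimit;
        }
        return total;
    }

    public static void main(String[] args) {
        // year_i,month_i,day_i with bucketSizeLimit=10 -> up to 1000 tuples
        System.out.println(maxTuples(10, 3));
    }
}
```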
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23093 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23093/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC

55 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message: 14 threads leaked from SUITE scope at org.apache.solr.client.solrj.io.stream.StreamDecoratorTest. The leaked threads fall into a few groups (stack traces abridged):
1) ZooKeeper client SendThread, id=1981 (TEST-StreamDecoratorTest.testParallelHavingStream-seed#[A035ADB66C5FCE8E]-SendThread(127.0.0.1:37371)): TIMED_WAITING in Thread.sleep via StaticHostProvider.next and ClientCnxn$SendThread.startConnect/run
2) zkConnectionManagerCallback pool threads, ids 309 and 1983: WAITING in LinkedBlockingQueue.take inside ThreadPoolExecutor.getTask
3) HttpClient "Connection evictor" threads, ids 1980, 321, 1986, 1987, 324, 312: TIMED_WAITING in Thread.sleep via IdleConnectionEvictor
4) ZooKeeper client EventThread, id=308 (TEST-StreamDecoratorTest.testExecutorStream-seed#[A035ADB66C5FCE8E]-EventThread): WAITING in LinkedBlockingQueue.take inside ClientCnxn$EventThread.run
(remainder of the thread dump truncated in the original message)
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795:
Attachment: SOLR-12795.patch
> Introduce 'limit' parameter in FacetStream.
> -------------------------------------------
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
> Issue Type: Sub-task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: streaming expressions
> Reporter: Amrit Sarkar
> Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc says about this parameter: "The number of buckets to include. This value is applied to each dimension."
> Now let's say we create a facet stream with 3 nested facets, for example "year_i,month_i,day_i", and provide 10 as the bucketSizeLimit.
> FacetStream would return 10 results for this facet expression, while the total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of tuples (say 500), while bucketSizeLimit should be used to specify the size of each bucket in the JSON Facet API.
--
This message was sent by Atlassian JIRA (v7.6.3#76005)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
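To make the distinction in the issue concrete, here is a small Python sketch (illustrative only; the parameter names mirror the proposal, not a shipped API) of how a per-dimension bucketSizeLimit differs from an overall tuple limit:

```python
from itertools import islice, product

def facet_tuples(dims, bucket_size_limit, limit=None):
    """Simulate nested facet expansion: bucket_size_limit caps the
    buckets kept per dimension; limit (the proposed parameter) caps
    the total number of tuples returned across all dimensions."""
    # Pretend each dimension has many distinct values; keep the top N per dim.
    per_dim = [range(bucket_size_limit) for _ in dims]
    tuples = product(*per_dim)          # cross product of the kept buckets
    if limit is not None:
        tuples = islice(tuples, limit)  # overall cap, independent of per-dim cap
    return list(tuples)

# 3 nested dimensions, 10 buckets each -> 10*10*10 = 1000 tuples total,
# while a separate limit=500 truncates the stream at 500 tuples.
all_tuples = facet_tuples(["year_i", "month_i", "day_i"], 10)
capped = facet_tuples(["year_i", "month_i", "day_i"], 10, limit=500)
print(len(all_tuples), len(capped))  # 1000 500
```

This is why a single bucketSizeLimit cannot express "give me at most 500 tuples": it only controls the fan-out at each nesting level.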
[jira] [Commented] (SOLR-12921) Separate Solr unit tests and integration tests.
[ https://issues.apache.org/jira/browse/SOLR-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664290#comment-16664290 ] Mark Miller commented on SOLR-12921:
bq. There's all shades of gray i
Everything has shades of gray; can't let that bog you down. A unit test is a test that exercises a particular piece of code, is very fast, and often involves mocks. An integration test spins up the whole system and tests something by running everything pretty much for real. Unit tests should be very easy to debug; integration tests can be very difficult to debug. Both are valuable, and both should be added for new features. The problem is that there is a mix of people not understanding that or being too lazy to care. We need to introduce structure that teaches people how to write proper tests.
> Separate Solr unit tests and integration tests.
> -----------------------------------------------
>
> Key: SOLR-12921
> URL: https://issues.apache.org/jira/browse/SOLR-12921
> Project: Solr
> Issue Type: Sub-task
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Tests
> Reporter: Mark Miller
> Priority: Major
>
> We basically just have "tests" now. We should have separate locations for unit and integration tests, and new work should have a good reason not to include both.
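As an illustration of the distinction drawn above (a sketch in Python rather than Solr's Java, with hypothetical names): a unit test isolates one piece of code behind a mock, while an integration test would stand up the real system end to end.

```python
import unittest
from unittest.mock import MagicMock

# Hypothetical unit under test: formats a doc count fetched from a client.
def count_summary(client, collection):
    return f"{collection}: {client.doc_count(collection)} docs"

class CountSummaryUnitTest(unittest.TestCase):
    """Unit test: no server, no network -- the client is a mock, so the
    test is fast and a failure points straight at count_summary."""
    def test_formats_count(self):
        client = MagicMock()
        client.doc_count.return_value = 42
        self.assertEqual(count_summary(client, "books"), "books: 42 docs")
        client.doc_count.assert_called_once_with("books")

# An integration test, by contrast, would start a real cluster
# (e.g. Solr plus ZooKeeper), index documents, and query through the
# full stack -- slower, and a failure can originate anywhere in it.

if __name__ == "__main__":
    unittest.main()
```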
[jira] [Commented] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664258#comment-16664258 ] Amrit Sarkar commented on SOLR-12795:
Completed tests [StreamingTest], but I feel they are extremely light and can be hardened.
[jira] [Updated] (SOLR-12795) Introduce 'limit' parameter in FacetStream.
[ https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12795:
Attachment: SOLR-12795.patch
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: SOLR-12057.patch
> CDCR does not replicate to Collections with TLOG Replicas
> ---------------------------------------------------------
>
> Key: SOLR-12057
> URL: https://issues.apache.org/jira/browse/SOLR-12057
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: CDCR
> Affects Versions: 7.2
> Reporter: Webster Homer
> Assignee: Varun Thacker
> Priority: Major
> Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, cdcr-fail-with-tlog-pull.patch, cdcr-fail-with-tlog-pull.patch
>
> We created a collection using TLOG replicas in our QA clouds. We have a locally hosted SolrCloud with 2 nodes; all our collections have 2 shards. We use CDCR to replicate the collections from this environment to 2 data centers hosted in Google Cloud. This seems to work fairly well for our collections with NRT replicas; however, the new TLOG collection has problems.
>
> The Google Cloud Solr clusters have 4 nodes each (3 separate ZooKeepers), 2 shards per collection with 2 replicas per shard.
>
> We never see data show up in the cloud collections, but we do see tlog files show up on the cloud servers. I can see that all of the servers have CDCR started and buffers are disabled.
>
> The CDCR source configuration is:
>
> "requestHandler":{"/cdcr":{
>   "name":"/cdcr",
>   "class":"solr.CdcrRequestHandler",
>   "replica":[
>     {"zkHost":"xxx-mzk01.sial.com:2181,xxx-mzk02.sial.com:2181,xxx-mzk03.sial.com:2181/solr",
>      "source":"b2b-catalog-material-180124T",
>      "target":"b2b-catalog-material-180124T"},
>     {"zkHost":"-mzk01.sial.com:2181,-mzk02.sial.com:2181,-mzk03.sial.com:2181/solr",
>      "source":"b2b-catalog-material-180124T",
>      "target":"b2b-catalog-material-180124T"}],
>   "replicator":{
>     "threadPoolSize":4,
>     "schedule":500,
>     "batchSize":250},
>   "updateLogSynchronizer":{"schedule":6
>
> The target configurations in the 2 clouds are the same:
> "requestHandler":{"/cdcr":{"name":"/cdcr", "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}}
>
> All of our collections have a timestamp field, index_date. In the source collection all the records have a date of 2/28/2018, but the target collections have a latest date of 1/26/2018.
>
> I don't see CDCR errors in the logs, but we use Logstash to search them, and we're still perfecting that.
>
> We have a number of similar collections that behave correctly. This is the only collection that is a TLOG collection. It appears that CDCR doesn't support TLOG collections.
>
> It looks like the data is getting to the target servers. I see tlog files with the right timestamps. Looking at the timestamps on the documents in the collection, none of the data appears to have been loaded. In the solr.log I see lots of /cdcr messages: action=LASTPROCESSEDVERSION, action=COLLECTIONCHECKPOINT, and action=SHARDCHECKPOINT.
>
> No errors.
>
> Target collections' autoCommit is set to 6 I tried sending a commit explicitly; no difference. CDCR is uploading data, but no new data appears in the collection.
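For reference when debugging reports like the one above, the buffer and checkpoint actions mentioned (DISABLEBUFFER, COLLECTIONCHECKPOINT, etc.) are exposed through the collection's /cdcr endpoint. A small Python sketch of building those monitoring requests; the host is a placeholder:

```python
from urllib.parse import urlencode

def cdcr_url(base, collection, action, **params):
    """Build a CDCR API request URL for a given action
    (e.g. QUEUES, ERRORS, DISABLEBUFFER, COLLECTIONCHECKPOINT)."""
    query = urlencode({"action": action, "wt": "json", **params})
    return f"{base}/solr/{collection}/cdcr?{query}"

base = "http://localhost:8983"          # placeholder host
coll = "b2b-catalog-material-180124T"   # collection from the report

# Inspect the replication queues on the source cluster:
print(cdcr_url(base, coll, "QUEUES"))
# Check for replication errors recorded by the replicator threads:
print(cdcr_url(base, coll, "ERRORS"))
```

Checking QUEUES and ERRORS on the source side is often the quickest way to tell whether updates are being forwarded or silently piling up.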
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: (was: SOLR-12057.patch)
[jira] [Commented] (SOLR-12922) Facet parser plugin for json.facet
[ https://issues.apache.org/jira/browse/SOLR-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664236#comment-16664236 ] David Smiley commented on SOLR-12922:
Could a comment be posted to explain what this means? It's not apparent from the title & description. Query parsers produce a Lucene Query. What documents would the Query match here, pertaining to faceting?
> Facet parser plugin for json.facet
> ----------------------------------
>
> Key: SOLR-12922
> URL: https://issues.apache.org/jira/browse/SOLR-12922
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Facet Module
> Reporter: Mikhail Khludnev
> Priority: Minor
> Attachments: SOLR-12922.patch, SOLR-12922.patch
>
> Why not introduce a plugin for JSON facet parsers? Attaching a draft patch; it just demonstrates the thing. Test fails, IIRC. Opinions?
[jira] [Commented] (SOLR-12922) Facet parser plugin for json.facet
[ https://issues.apache.org/jira/browse/SOLR-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664234#comment-16664234 ] Michael Gibney commented on SOLR-12922:
This is interesting. Could the behavior of the proof-of-concept {{FuncRangeFacetParser}} be achieved with stock range faceting? Regardless, it's a good proof of concept, but curiosity has me wondering about additional use cases that might help illustrate some of the potential uses/implications of the proposed plugin extension. Also, I just uploaded a slightly modified patch with a passing test.
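To ground the discussion of custom facet types, here is a sketch of what a json.facet request carrying a plugin-supplied type might look like. Everything below is hypothetical: the "func_range" type and its parameters illustrate the proposal, not an existing Solr API; Python is used only to show the payload shape.

```python
import json

# Hypothetical json.facet request: "func_range" stands in for a facet
# type registered by a custom facet parser plugin. Stock Solr only
# understands built-in types such as "terms", "range", and "query".
facet_request = {
    "prices": {
        "type": "func_range",          # hypothetical plugin-registered type
        "field": "price_f",
        "ranges": [                    # parameters the plugin would parse
            {"from": 0, "to": 50},
            {"from": 50, "to": 100},
        ],
    }
}

payload = json.dumps({"query": "*:*", "facet": facet_request})
print(payload)
```

The point of the proposed plugin is precisely that the facet module would hand the unrecognized "type" value to a registered parser instead of rejecting it.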
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: SOLR-12057.patch
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: (was: SOLR-12057.patch)
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: SOLR-12057.patch
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057:
Attachment: (was: SOLR-12057.patch)
[jira] [Commented] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664231#comment-16664231 ] Amrit Sarkar commented on SOLR-12057: - Thanks Varun; polished the patch as per feedback and created SOLR-12917 to create a framework for related CDCR tests and avoid redundancy.
[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas
[ https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-12057: Attachment: SOLR-12057.patch
[jira] [Updated] (SOLR-12922) Facet parser plugin for json.facet
[ https://issues.apache.org/jira/browse/SOLR-12922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Gibney updated SOLR-12922: -- Attachment: SOLR-12922.patch
[jira] [Commented] (LUCENE-6354) Add minChildren and maxChildren options to ToParentBlockJoinQuery
[ https://issues.apache.org/jira/browse/LUCENE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664223#comment-16664223 ] Ben Weisburd commented on LUCENE-6354: -- It would be great if this could be merged to unblock https://github.com/elastic/elasticsearch/issues/10043 > Add minChildren and maxChildren options to ToParentBlockJoinQuery > - > > Key: LUCENE-6354 > URL: https://issues.apache.org/jira/browse/LUCENE-6354 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Martijn van Groningen >Priority: Major > Attachments: LUCENE-6354.patch, LUCENE-6354.patch, LUCENE-6354.patch, > LUCENE-6354.patch > > > This effectively allows ignoring parent documents with too few matching child documents via the minChildren option, or with too many matching child documents via the maxChildren option.
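The min/maxChildren semantics described in the issue amount to a count filter over matching children per parent. The following is an illustrative model only, not the Lucene implementation (which operates on parent/child doc-id blocks during scoring):

```python
def filter_parents(parent_child_matches, min_children=1, max_children=None):
    # Keep a parent only if its number of matching child documents falls in
    # [min_children, max_children]; max_children=None means no upper bound.
    # Input is a hypothetical mapping of parent id -> matching-child count.
    keep = []
    for parent, n in parent_child_matches.items():
        if n < min_children:
            continue
        if max_children is not None and n > max_children:
            continue
        keep.append(parent)
    return keep
```

With min_children=1 and max_children=3, a parent with zero matching children or with five matching children would be dropped, while a parent with two is kept.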
[JENKINS] Lucene-Solr-repro - Build # 1778 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/1778/ [...truncated 32 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-Tests-master/2900/consoleText [repro] Revision: 26e14986af7aa60b72940f611f63b2a50fbb9980 [repro] Repro line: ant test -Dtestcase=CloudSolrClientTest -Dtests.method=testParallelUpdateQTime -Dtests.seed=7B32C9F4972944D7 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=be -Dtests.timezone=America/Guatemala -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: 8d109393492924cdde9663b9b9c4da00daaae433 [repro] git fetch [repro] git checkout 26e14986af7aa60b72940f611f63b2a50fbb9980 [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]solr/solrj [repro] CloudSolrClientTest [repro] ant compile-test [...truncated 2703 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror -Dtests.seed=7B32C9F4972944D7 -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=be -Dtests.timezone=America/Guatemala -Dtests.asserts=true -Dtests.file.encoding=US-ASCII [...truncated 1542 lines...] [repro] Setting last failure code to 256 [repro] Failures: [repro] 3/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest [repro] git checkout 8d109393492924cdde9663b9b9c4da00daaae433 [...truncated 2 lines...] [repro] Exiting with code 256 [...truncated 6 lines...] - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664221#comment-16664221 ] Amrit Sarkar commented on SOLR-7964: - If we can change the Priority to *Major* and include *master* in the affected versions, we may get it included in the next version. > suggest.highlight=true does not work when using context filter query > > > Key: SOLR-7964 > URL: https://issues.apache.org/jira/browse/SOLR-7964 > Project: Solr > Issue Type: Improvement > Components: Suggester >Affects Versions: 5.4 >Reporter: Arcadius Ahouansou >Priority: Minor > Labels: suggester > Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch > > > When using the new suggester context filtering query param > {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param > {{suggest.highlight=true}} has no effect.
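A request exercising the bug combines the two parameters named in the issue; this sketch only builds such a query string, with the dictionary name and context filter value being hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical suggester request combining context filtering with highlighting;
# per this issue, suggest.highlight has no effect once the filter is present.
params = {
    "suggest": "true",
    "suggest.dictionary": "mySuggester",                # hypothetical dictionary name
    "suggest.q": "elec",
    "suggest.contextFilterQuery": "ctx_field:books",    # hypothetical context filter
    "suggest.highlight": "true",                        # ignored when the filter is set
}
query_string = urlencode(params)
```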
[jira] [Created] (SOLR-12929) TimeRoutedAliasUpdateProcessorTest.test debug and un-badapple
David Smiley created SOLR-12929: --- Summary: TimeRoutedAliasUpdateProcessorTest.test debug and un-badapple Key: SOLR-12929 URL: https://issues.apache.org/jira/browse/SOLR-12929 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley TimeRoutedAliasUpdateProcessorTest.test was BadApple'd recently. We need to debug this test. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1163 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1163/ No tests ran. Build Log: [...truncated 23412 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2436 links (1988 relative) to 3199 anchors in 248 files [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked [untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
[jira] [Commented] (SOLR-12902) Block Expensive Queries custom Solr component
[ https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664209#comment-16664209 ] Varun Thacker commented on SOLR-12902: -- Thanks Anshum! In the approach Tirth has taken, it's a search component, so to configure what needs to be blocked one would configure the "defaults" section of a request handler. Would this model be consistent with the related hooks that you're referring to? > Block Expensive Queries custom Solr component > - > > Key: SOLR-12902 > URL: https://issues.apache.org/jira/browse/SOLR-12902 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tirth Rajen Mehta >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Added a Block Expensive Queries custom Solr component ( > [https://github.com/apache/lucene-solr/pull/477] ) : > * This search component can be plugged into your SearchHandler if you would like to block some well-known expensive queries. > * The queries that are currently blocked and failed by the component are deep-pagination queries, as they are known to consume a lot of memory and CPU. These are: > ** queries with a start offset greater than the configured maxStartOffset config parameter value > ** queries with a rows param value greater than the configured maxRowsFetch config parameter value
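The two deep-pagination checks listed above can be sketched in a few lines. The parameter names mirror the component's config (maxStartOffset, maxRowsFetch); the limit values and everything else here are illustrative, not the component's actual code or defaults:

```python
def is_blocked(params, max_start_offset=1000, max_rows_fetch=500):
    # Block a query whose start offset or rows value exceeds the configured
    # limits; params is a dict of raw request parameters.
    start = int(params.get("start", 0))
    rows = int(params.get("rows", 10))
    if start > max_start_offset:
        return True, "start=%d exceeds maxStartOffset=%d" % (start, max_start_offset)
    if rows > max_rows_fetch:
        return True, "rows=%d exceeds maxRowsFetch=%d" % (rows, max_rows_fetch)
    return False, None
```

In Solr terms the limits would live in the component's configuration in solrconfig.xml, and a blocked request would fail with an error rather than return a tuple.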
[jira] [Commented] (SOLR-12903) Query Source Tracker custom Solr component
[ https://issues.apache.org/jira/browse/SOLR-12903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664201#comment-16664201 ] Varun Thacker commented on SOLR-12903: -- Hi Tirth, What's the motivation behind this? Is it to make sure only clients that have access to the secret values can query the system? > Query Source Tracker custom Solr component > -- > > Key: SOLR-12903 > URL: https://issues.apache.org/jira/browse/SOLR-12903 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tirth Rajen Mehta >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > Added a Query Source Tracker custom Solr component (https://github.com/apache/lucene-solr/pull/478) : > * This component can be configured for a RequestHandler for query requests. > * This component mandates that clients pass in a "qi" request parameter with a valid value which is configured in the SearchComponent definition in the solrconfig.xml file. > * It fails the query if the "qi" parameter is missing or if the value passed in is invalid. This behavior of failing the queries can be controlled by the failQueries config parameter. > * It also collects the rate-per-sec metric per unique "qi" value.
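The behavior described in the bullets, validate a "qi" parameter and count requests per source, can be modeled concisely. Names mirror the description (qi, failQueries), but the implementation is an illustrative sketch, not the component's code:

```python
from collections import Counter

class QuerySourceTracker:
    # Require a valid 'qi' request parameter and count requests per source;
    # fail_queries controls whether a missing/invalid qi rejects the request.
    def __init__(self, valid_qi, fail_queries=True):
        self.valid_qi = set(valid_qi)
        self.fail_queries = fail_queries
        self.counts = Counter()  # stand-in for the per-qi rate metric

    def check(self, params):
        # Return True if the request should proceed.
        qi = params.get("qi")
        ok = qi in self.valid_qi
        if ok:
            self.counts[qi] += 1
        return ok or not self.fail_queries
```

This also makes Varun's question concrete: the valid_qi set acts as a shared secret between configured clients and the server, so the scheme is attribution (and light gating), not real authentication.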
[GitHub] lucene-solr pull request #483: Log Delete Query Processor custom solr compon...
Github user vthacker commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/483#discussion_r228304003

--- Diff: solr/core/src/java/org/apache/solr/update/processor/LogUpdateProcessorFactory.java ---
@@ -187,12 +190,23 @@ public void finish() throws IOException {
       log.info(getLogStringAndClearRspToLog());
     }
+    if (deleteLog.isInfoEnabled()) {
--- End diff --

What if it's a mixed set of commands, like this example: http://lucene.apache.org/solr/guide/7_5/uploading-data-with-index-handlers.html#sending-json-update-commands Is the goal here to log any request that contains a delete, or to log only deletes to a separate file? The final condition check here will need to vary based on that.

--- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
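The distinction the review raises, a request that merely contains a delete versus one that consists only of deletes, can be sketched as:

```python
def delete_logging_mode(commands):
    # Given the command names of one update request (e.g. from the JSON update
    # format: 'add', 'delete', 'commit'), report whether the request contains
    # any delete and whether it is deletes only -- the two possible conditions
    # the patch's final check could key on.
    has_delete = any(c == "delete" for c in commands)
    only_deletes = has_delete and all(c == "delete" for c in commands)
    return has_delete, only_deletes
```

A mixed request such as add + delete + commit satisfies the first condition but not the second, which is exactly the case the linked JSON update-commands example produces.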
[jira] [Updated] (SOLR-12928) TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
[ https://issues.apache.org/jira/browse/SOLR-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12928: Attachment: testSliceRouting b23054.log.zip > TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time > > > Key: SOLR-12928 > URL: https://issues.apache.org/jira/browse/SOLR-12928 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Priority: Major > Attachments: testSliceRouting b23054.log.zip > > > org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest#testSliceRouting > fails 1% of time: > [http://fucit.org/solr-jenkins-reports/failure-report.html] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12928) TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
[ https://issues.apache.org/jira/browse/SOLR-12928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664193#comment-16664193 ] David Smiley commented on SOLR-12928: - I looked at a recent failure... env: {noformat} Started by upstream project "Lucene-Solr-master-Linux" build number 23054 originally caused by: [ScriptTrigger] Groovy Expression evaluation to true. (log) [EnvInject] - Loading node environment variables. [EnvInject] - Preparing an environment for the build. [EnvInject] - Keeping Jenkins system variables. [EnvInject] - Keeping Jenkins build variables. [EnvInject] - Evaluating the Groovy script content Using Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseG1GC [EnvInject] - Injecting contributions. Building on master in workspace /var/lib/jenkins/workspace/Lucene-Solr-7.x-Linux Fetching changes from the remote Git repository Cleaning workspace Checking out Revision 2f61f96bfae9d97e3536305e49865433e28737c2 (refs/remotes/origin/branch_7x) Commit message: "SOLR-10981: Support for stream.url or stream.file pointing to gzipped data" No emails were triggered. Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 [description-setter] Description set: Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseG1GC Setting ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2 [Lucene-Solr-7.x-Linux] $ /var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/bin/ant "-Dargs=-XX:+UseCompressedOops -XX:+UseG1GC" jenkins-hourly Buildfile: /home/jenkins/workspace/Lucene-Solr-7.x-Linux/build.xml {noformat} And took out the logs for this particular test. 
I noticed an SSL problem:
{noformat}
Caused by: javax.net.ssl.SSLException: Received fatal alert: internal_error
   [junit4]   2> at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:129)
   [junit4]   2> at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
   [junit4]   2> at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
   [junit4]   2> at java.base/sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:279)
   [junit4]   2> at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:181)
   [junit4]   2> at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:164)
{noformat}
CC [~gus_heck]
[jira] [Commented] (SOLR-12902) Block Expensive Queries custom Solr component
[ https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664187#comment-16664187 ] Anshum Gupta commented on SOLR-12902: - [~varunthacker] I'll take a look. I just glanced through this and it seemed like a decent starting point. I've worked on a bunch of things on the update/core/schema side, but that is different w.r.t. where it hooks in. Whatever we do, we should think about how we expose the settings and have them all seem connected on both the query and update sides.
[jira] [Created] (SOLR-12928) TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time
David Smiley created SOLR-12928: --- Summary: TimeRoutedAliasUpdateProcessorTest testSliceRouting fails 1% of time Key: SOLR-12928 URL: https://issues.apache.org/jira/browse/SOLR-12928 Project: Solr Issue Type: Test Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest#testSliceRouting fails 1% of time: [http://fucit.org/solr-jenkins-reports/failure-report.html] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #483: Log Delete Query Processor custom solr compon...
Github user vthacker commented on a diff in the pull request: https://github.com/apache/lucene-solr/pull/483#discussion_r228301278

--- Diff: solr/server/resources/log4j2.xml ---
@@ -67,6 +67,10 @@
+
+
--- End diff --

Is LogDeleteQueryProcessorFactory defined somewhere? Or should the class name be `LogUpdateProcessorFactory`?

--- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12927) Ref Guide: Upgrade Notes for 7.6
Cassandra Targett created SOLR-12927: Summary: Ref Guide: Upgrade Notes for 7.6 Key: SOLR-12927 URL: https://issues.apache.org/jira/browse/SOLR-12927 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: documentation Reporter: Cassandra Targett Assignee: Cassandra Targett Fix For: 7.6 Add Upgrade Notes from CHANGES and any other relevant changes worth mentioning. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12902) Block Expensive Queries custom Solr component
[ https://issues.apache.org/jira/browse/SOLR-12902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664178#comment-16664178 ] Varun Thacker commented on SOLR-12902: -- Hi Tirth, I've left a few comments on the PR. I think continuing the review on the PR would be the best approach. Future ideas could be blocking facet requests with rows=-1, wildcards in the middle (you've already mentioned leading wildcards in the docs), etc. [~anshumg] @Tomas do you guys have any thoughts on this approach?
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2977 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2977/ Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 34 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest Error Message: Could not find collection : AutoscalingHistoryHandlerTest_collection Stack Trace: org.apache.solr.common.SolrException: Could not find collection : AutoscalingHistoryHandlerTest_collection at __randomizedtesting.SeedInfo.seed([F6166BFD0E7E00F5]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:835)
[GitHub] lucene-solr issue #477: Block Expensive Queries custom component
Github user vthacker commented on the issue: https://github.com/apache/lucene-solr/pull/477 Hi Tirth, It would be great if we could have a test case for this. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #477: Block Expensive Queries custom component
Github user vthacker commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/477#discussion_r228296662

--- Diff: solr/core/src/java/org/apache/solr/search/BlockExpensiveQueries.java ---
@@ -0,0 +1,99 @@
+package org.apache.solr.search;
+
+import java.io.IOException;
+
+import org.apache.lucene.analysis.Analyzer;
+import org.apache.lucene.analysis.util.TokenFilterFactory;
+import org.apache.solr.analysis.ReversedWildcardFilterFactory;
+import org.apache.solr.analysis.TokenizerChain;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.handler.component.ResponseBuilder;
+import org.apache.solr.handler.component.SearchComponent;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.search.SortSpec;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * This search component can be plugged into your SearchHandler if you would like to block some
+ * well-known expensive queries. The queries currently blocked and failed by this component are
+ * deep-pagination queries, as they are known to consume a lot of memory and CPU:
+ *
+ *   - queries with a start offset greater than the configured maxStartOffset config parameter value
+ *   - queries with a rows param value greater than the configured maxRowsFetch config parameter value
+ *
+ * In the future we would also like to extend this component to prevent:
+ *
+ *   - facet pivot queries, controlled by a config param
+ *   - regular facet queries, controlled by a config param
+ *   - queries with a wildcard in the prefix if the field does not have ReversedWildcardFilterFactory configured
+ */
+public class BlockExpensiveQueries extends SearchComponent {
+
+  private static final Logger LOG = LoggerFactory.getLogger(BlockExpensiveQueries.class);
+
+  private int maxStartOffset = 1;
+  private int maxRowsFetch = 1000;
+  private NamedList initParams;
+
+  @Override
+  @SuppressWarnings("unchecked")
+  public void init(NamedList args) {
+    LOG.info("Loading the BlockExpensiveQueries component");
+    super.init(args);
+    this.initParams = args;
+
+    if (args != null) {
+      Object o = args.get("defaults");
+      if (o != null && o instanceof NamedList) {
+        maxStartOffset = (Integer) ((NamedList) o).get("maxStartOffset");
+        maxRowsFetch = (Integer) ((NamedList) o).get("maxRowsFetch");
+        LOG.info("Using maxStartOffset={}, maxRowsFetch={}", maxStartOffset, maxRowsFetch);
+      }
+    } else {
+      LOG.info("Using default values, maxStartOffset={}, maxRowsFetch={}", maxStartOffset, maxRowsFetch);
+    }
+  }
+
+  @Override
+  public void prepare(ResponseBuilder rb) throws IOException {
+    SolrQueryRequest req = rb.req;
+    SolrQueryResponse rsp = rb.rsp;
+    SortSpec sortSpec = rb.getSortSpec();
+    int offset = sortSpec.getOffset();
+    int count = sortSpec.getCount();
+    LOG.info("Query offset={}, rows={}", offset, count);
+
+    // check cursorMark here if we would like to allow deep pagination with cursor-mark queries
+    boolean isDistributed = req.getParams().getBool("distrib", true);
+    if (isDistributed) {
+      String cursorMarkMsg = "Queries with high \"start\" or high \"rows\" parameters are a performance problem in Solr. "
+          + "If you really have a use-case for such queries, consider using \"cursors\" for pagination of results. "
+          + "Refer: https://lucene.apache.org/solr/guide/pagination-of-results.html.";
+      if (offset > maxStartOffset) {
+        throw new IOException(String.format("The start=%s value exceeded the max offset allowed value of %s. %s",
--- End diff --

Maybe this should be a SolrException with BAD_REQUEST as the error code? So something like `throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "error message")`
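The start/rows guard that the BlockExpensiveQueries component performs in prepare() is simple enough to sketch outside of Solr. The following standalone Java class is an illustrative model only — the class and method names are hypothetical and it is not part of the Solr API; per vthacker's suggestion, the real component would surface a violation as a SolrException with BAD_REQUEST rather than a plain error string:

```java
/**
 * A standalone sketch (hypothetical, not Solr API) of the deep-pagination
 * guard logic under review: reject requests whose start offset or rows
 * value exceeds the configured limits.
 */
public class DeepPagingGuard {
    private final int maxStartOffset;
    private final int maxRowsFetch;

    public DeepPagingGuard(int maxStartOffset, int maxRowsFetch) {
        this.maxStartOffset = maxStartOffset;
        this.maxRowsFetch = maxRowsFetch;
    }

    /** Returns null when the request is allowed, or an error message when it should be rejected. */
    public String check(int offset, int rows) {
        if (offset > maxStartOffset) {
            return String.format("The start=%d value exceeded the max offset allowed value of %d.",
                    offset, maxStartOffset);
        }
        if (rows > maxRowsFetch) {
            return String.format("The rows=%d value exceeded the max rows allowed value of %d.",
                    rows, maxRowsFetch);
        }
        return null; // request is within limits
    }
}
```

In the actual component, a non-null result would translate to throwing a 400-level exception before the query executes.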
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 852 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/852/ Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.component.PhrasesIdentificationComponentTest Error Message: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001\init-core-data-001 C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001\init-core-data-001 C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001: java.nio.file.DirectoryNotEmptyException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.handler.component.PhrasesIdentificationComponentTest_BE8E603D610A04C5-001 at 
__randomizedtesting.SeedInfo.seed([BE8E603D610A04C5]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:844) Build Log: [...truncated 1940 lines...] [junit4] JVM J0: stderr was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\temp\junit4-J0-20181025_170122_476166421905204857327.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J0: EOF [...truncated 3 lines...] 
[junit4] JVM J1: stderr was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\temp\junit4-J1-20181025_170122_4771838786431062091.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J1: EOF [...truncated 316 lines...] [junit4] JVM J1: stderr was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\test-framework\test\temp\junit4-J1-20181025_170700_0128090105000625339413.syserr [junit4] >>> JVM J1 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release. [junit4] <<< JVM J1: EOF [...truncated 3 lines...] [junit4] JVM J0: stderr was not empty, see: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\test-framework\test\temp\junit4-J0-20181025_170700_0124907239359584395256.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 6
[JENKINS] Lucene-Solr-Tests-7.x - Build # 975 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/975/ 5 tests failed. FAILED: org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica Error Message: Expected new active leader null Live Nodes: [127.0.0.1:40256_solr, 127.0.0.1:42327_solr, 127.0.0.1:42695_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n2", "base_url":"http://127.0.0.1:46433/solr";, "node_name":"127.0.0.1:46433_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"http://127.0.0.1:46433/solr";, "node_name":"127.0.0.1:46433_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected new active leader null Live Nodes: [127.0.0.1:40256_solr, 127.0.0.1:42327_solr, 127.0.0.1:42695_solr] Last available state: DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/14)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node4":{ "core":"raceDeleteReplica_false_shard1_replica_n2", "base_url":"http://127.0.0.1:46433/solr";, "node_name":"127.0.0.1:46433_solr", "state":"down", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node6":{ "core":"raceDeleteReplica_false_shard1_replica_n5", "base_url":"http://127.0.0.1:46433/solr";, "node_name":"127.0.0.1:46433_solr", "state":"down", "type":"NRT", "force_set_state":"false", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", 
"tlogReplicas":"0"} at __randomizedtesting.SeedInfo.seed([AE05E79B7C9B3838:C413864B146972F2]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:334) at org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:230) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.jav
[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664137#comment-16664137 ] Jigar Shah commented on SOLR-7964: -- +1 Waiting for this fix to be included from 6.5 version. Any plans to include in 7.x version. Many Thanks! J > suggest.highlight=true does not work when using context filter query > > > Key: SOLR-7964 > URL: https://issues.apache.org/jira/browse/SOLR-7964 > Project: Solr > Issue Type: Improvement > Components: Suggester >Affects Versions: 5.4 >Reporter: Arcadius Ahouansou >Priority: Minor > Labels: suggester > Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch > > > When using the new suggester context filtering query param > {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param > {{suggest.highlight=true}} has no effect. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-12926) TransactionLog version consistency with doc's _version_
[ https://issues.apache.org/jira/browse/SOLR-12926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley updated SOLR-12926: Attachment: SOLR-12926.patch > TransactionLog version consistency with doc's _version_ > --- > > Key: SOLR-12926 > URL: https://issues.apache.org/jira/browse/SOLR-12926 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Priority: Major > Attachments: SOLR-12926.patch > > > In the TransactionLog I see that there's some metadata for the document -- > it's ID and a version (a long). Should the \_version\_ in the document be > the same as this metadata (which gets there via UpdateCommand.getVersion ? > Sometimes the doc doesn't have a version field so lets assume it's 0 (same as > UpdateCommand's default). I added an assertion on write() that checks they > are consistent and I found one test that failed (metadata=0, > doc=1615316737550450688) > {{org.apache.solr.cloud.MigrateRouteKeyTest#multipleShardMigrateTest}} > * So should they always be consistent? If so... > * We should assert this (I'll attach a quick 'n dirty patch of this) > * Document UpdateCommand.getVersion > * > org.apache.solr.handler.component.RealTimeGetComponent#getInputDocumentFromTlog > is too complicated in taking AtomicLong as an "out" parameter. If the > caller wants the version, they should get it themselves from the document > like any normal field. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664112#comment-16664112 ] Tim Allison commented on SOLR-12423: Thank you [~ctargett]! > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Assignee: Erick Erickson >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12423.patch > > Time Spent: 50m > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.
[ https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664080#comment-16664080 ] David Smiley commented on SOLR-12801: - I've been interested in making tests easier to maintain and write -- SOLR-11872 (managed SolrClient instead of TestHarness stuff) I suppose it doesn't fit within the scope of this issue though. One relationship however is the idea that you could run tests globally but indicate you only want to run tests that, say, can work for standalone (not SolrCloud) and/or which can use just one shard. Many tests could be either-or -- the test doesn't fundamentally care either way. With some adjustments in that issue, they could be written that way. I did this for a specific client but it could have been upstreamed to Solr. Another FYI that I think is hugely important to test maintenance is SOLR-10229 concerning preventing proliferation of one-off test files. The "how" is debatable but the goal is important. > Fix the tests, remove BadApples and AwaitsFix annotations, improve env for > test development. > > > Key: SOLR-12801 > URL: https://issues.apache.org/jira/browse/SOLR-12801 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller >Assignee: Mark Miller >Priority: Critical > > A single issue to counteract the single issue adding tons of annotations, the > continued addition of new flakey tests, and the continued addition of > flakiness to existing tests. > Lots more to come. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23092 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23092/ Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 39 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest Error Message: Could not find collection : AutoscalingHistoryHandlerTest_collection Stack Trace: org.apache.solr.common.SolrException: Could not find collection : AutoscalingHistoryHandlerTest_collection at __randomizedtesting.SeedInfo.seed([AE1F9C760D46ECD2]:0) at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403) at org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834)
[jira] [Resolved] (SOLR-11777) eq() ValueSource (aka Function Query) ought to support strings
[ https://issues.apache.org/jira/browse/SOLR-11777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved SOLR-11777. - Resolution: Duplicate > eq() ValueSource (aka Function Query) ought to support strings > -- > > Key: SOLR-11777 > URL: https://issues.apache.org/jira/browse/SOLR-11777 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: David Smiley >Priority: Major > > The {{eq()}} (boolean equals) ValueSource (aka Function Query) ought to > support strings; it currently only supports numeric fields. > The work-around is to do something like > {{exists(query(\{!v=field:value\}))}}. That will be slow unless the field is > indexed. For DocValues-only it could be efficient but is dependent on > LUCENE-8103. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664024#comment-16664024 ] ASF subversion and git services commented on SOLR-12423: Commit 01ce3ef8ae8d2e6cc8c41fd214b6f55f7380e441 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=01ce3ef ] SOLR-12423: fix Tika version in CHANGES > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Assignee: Erick Erickson >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12423.patch > > Time Spent: 50m > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards
[ https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664023#comment-16664023 ] ASF subversion and git services commented on SOLR-5004: --- Commit f3981c850a588a97d1061bc0d68805f5f9728bf1 in lucene-solr's branch refs/heads/branch_7x from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f3981c8 ] SOLR-5004: put param names and values in monospace > Allow a shard to be split into 'n' sub-shards > - > > Key: SOLR-5004 > URL: https://issues.apache.org/jira/browse/SOLR-5004 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Affects Versions: 4.3, 4.3.1 >Reporter: Anshum Gupta >Assignee: Anshum Gupta >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, > SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch > > > As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the > parent one. Accept a parameter to split into n sub-shards. > Default it to 2 and perhaps also have an upper bound to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
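Splitting a shard into n sub-shards, as SOLR-5004 describes, amounts to dividing the parent shard's 32-bit hash range into n contiguous sub-ranges. The following is a hedged, standalone sketch of that arithmetic (hypothetical names; not Solr's actual SolrIndexSplitter code, which also handles split keys and other cases):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSplitter {
    /** An inclusive hash range [min, max]; Solr's compositeId router uses 32-bit hashes. */
    public static class Range {
        public final int min, max;
        public Range(int min, int max) { this.min = min; this.max = max; }
    }

    /** Divide [min, max] into n contiguous sub-ranges of (nearly) equal size. */
    public static List<Range> split(int min, int max, int n) {
        long span = (long) max - (long) min + 1;  // long arithmetic avoids int overflow
        List<Range> parts = new ArrayList<>();
        long start = min;
        for (int i = 1; i <= n; i++) {
            // Place boundary i at the i/n point of the span, rounding down,
            // so the sub-ranges cover the parent range exactly with no gaps.
            long end = (long) min + span * i / n - 1;
            parts.add(new Range((int) start, (int) end));
            start = end + 1;
        }
        return parts;
    }
}
```

For n=2 over the full 32-bit range this reproduces the familiar 80000000-ffffffff / 0-7fffffff halves from the default two-way split.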
[jira] [Commented] (SOLR-12926) TransactionLog version consistency with doc's _version_
[ https://issues.apache.org/jira/browse/SOLR-12926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664022#comment-16664022 ] David Smiley commented on SOLR-12926: - [~ichattopadhyaya] I see you modified {{org.apache.solr.handler.component.RealTimeGetComponent#getInputDocumentFromTlog}} to have the AtomicLong parameter for SOLR-5944 Can you shed some light here? > TransactionLog version consistency with doc's _version_ > --- > > Key: SOLR-12926 > URL: https://issues.apache.org/jira/browse/SOLR-12926 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Reporter: David Smiley >Priority: Major > > In the TransactionLog I see that there's some metadata for the document -- > it's ID and a version (a long). Should the \_version\_ in the document be > the same as this metadata (which gets there via UpdateCommand.getVersion ? > Sometimes the doc doesn't have a version field so lets assume it's 0 (same as > UpdateCommand's default). I added an assertion on write() that checks they > are consistent and I found one test that failed (metadata=0, > doc=1615316737550450688) > {{org.apache.solr.cloud.MigrateRouteKeyTest#multipleShardMigrateTest}} > * So should they always be consistent? If so... > * We should assert this (I'll attach a quick 'n dirty patch of this) > * Document UpdateCommand.getVersion > * > org.apache.solr.handler.component.RealTimeGetComponent#getInputDocumentFromTlog > is too complicated in taking AtomicLong as an "out" parameter. If the > caller wants the version, they should get it themselves from the document > like any normal field. > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-12926) TransactionLog version consistency with doc's _version_
David Smiley created SOLR-12926: --- Summary: TransactionLog version consistency with doc's _version_ Key: SOLR-12926 URL: https://issues.apache.org/jira/browse/SOLR-12926 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: David Smiley In the TransactionLog I see that there's some metadata for the document -- its ID and a version (a long). Should the \_version\_ in the document be the same as this metadata (which gets there via UpdateCommand.getVersion)? Sometimes the doc doesn't have a version field so let's assume it's 0 (same as UpdateCommand's default). I added an assertion on write() that checks they are consistent and I found one test that failed (metadata=0, doc=1615316737550450688) {{org.apache.solr.cloud.MigrateRouteKeyTest#multipleShardMigrateTest}} * So should they always be consistent? If so... * We should assert this (I'll attach a quick 'n dirty patch of this) * Document UpdateCommand.getVersion * org.apache.solr.handler.component.RealTimeGetComponent#getInputDocumentFromTlog is too complicated in taking AtomicLong as an "out" parameter. If the caller wants the version, they should get it themselves from the document like any normal field.
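The "out"-parameter complaint in the issue above lends itself to a small illustration. This is a hedged sketch, not Solr's actual code: the document is modeled as a plain field map rather than SolrInputDocument, and both method names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

public class TlogVersionSketch {

    // Style the comment criticizes: the version is smuggled out through an
    // AtomicLong side channel in addition to living in the document itself.
    static Map<String, Object> getDocWithOutParam(long storedVersion, AtomicLong versionOut) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "doc1");
        doc.put("_version_", storedVersion);
        versionOut.set(storedVersion); // caller must remember to read this separately
        return doc;
    }

    // Suggested alternative: return the document and let callers read
    // _version_ like any normal field.
    static Map<String, Object> getDoc(long storedVersion) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("id", "doc1");
        doc.put("_version_", storedVersion);
        return doc;
    }

    // 0 as the "no version" default, matching UpdateCommand's default per the issue text.
    static long versionOf(Map<String, Object> doc) {
        Object v = doc.get("_version_");
        return v == null ? 0L : (Long) v;
    }
}
```

The second style keeps the version in exactly one place, the document, which is the simplification the comment argues for.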
[jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664016#comment-16664016 ] ASF subversion and git services commented on SOLR-12423: Commit 8d109393492924cdde9663b9b9c4da00daaae433 in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d10939 ] SOLR-12423: fix Tika version in CHANGES > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Assignee: Erick Erickson >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12423.patch > > Time Spent: 50m > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards
[ https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16664015#comment-16664015 ] ASF subversion and git services commented on SOLR-5004: --- Commit 93ccdce57c85fa652efa6b328344a267ba3319fd in lucene-solr's branch refs/heads/master from [~ctargett] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=93ccdce ] SOLR-5004: put param names and values in monospace > Allow a shard to be split into 'n' sub-shards > - > > Key: SOLR-5004 > URL: https://issues.apache.org/jira/browse/SOLR-5004 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Affects Versions: 4.3, 4.3.1 >Reporter: Anshum Gupta >Assignee: Anshum Gupta >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-5004.01.patch, SOLR-5004.02.patch, > SOLR-5004.03.patch, SOLR-5004.04.patch, SOLR-5004.patch, SOLR-5004.patch > > > As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the > parent one. Accept a parameter to split into n sub-shards. > Default it to 2 and perhaps also have an upper bound to it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Tests
Hopefully they don't mind too much that I'm speaking for them, but on behalf of myself and the 7 other committers that make up the Solr Team at Lucidworks, we are also in and eager to help. We've been trying to chip away at the problem for the past year+, but we need a systemic change to really make it better. I will do what I can to delay or defer some of our Lucidworks planned tasks to free up as much time for people as possible. On Thu, Oct 25, 2018 at 1:35 AM Anshum Gupta wrote: > +1 Mark! I’m in. > > Anshum > > > On Oct 24, 2018, at 11:05 PM, Mark Miller wrote: > > My boss has indicated I'm going to get a little time to spend on > addressing our test situation. Turns out the current downward trend > situation is a little worrying to some that count on the project down the > road ;) > > We can finally dig out though, I promise. > > It's going to be a little bit before I'm clear and gathering steam, but > I've got this figured out. I'm finally playing checkers instead of rock / > paper / scissors. > > I'm going to eat a lot of work to get there and I promise I can get there, > but I'm going to need some help. > > If you have an interest in changing our test situation, and I know some of > you do, please join me as I invite others to help out more in a short time. > > Beyond some help with code effort though, please get involved in > discussions and JIRA issues around changing our test culture and process. > We need to turn this corner as a group, we need to come together on some > new behaviors, we need to generate large enough buy-in to do something > lasting. > > You all need to learn 'ant beast' (I'm fixing it to work *much* better) > and my beasting test script gist (each has their place).
I'm going to force > it on you even if you don't ;) > > > https://gist.githubusercontent.com/markrmiller/dbdb792216dc98b018ad/raw/cd084d01f405fa271af4b72a2848dc87374a1f71/gistfile1.sh > > For a long time I put most of my attention towards SolrCloud stability and > scale. From 2012-2015 or so, that felt like the biggest threat to the > project while also getting relatively little attention. I felt like it was > 50/50 in 2014 that I'd even be talking to anyone about SolrCloud today. > That hurdle is done and over with. The biggest threat to Solr and SolrCloud > now is our tests. > > I'll be working from this umbrella issue: > > https://issues.apache.org/jira/browse/SOLR-12801 : Fix the tests, remove > BadApples and AwaitsFix annotations, improve env for test development. > > Join the discussion. > > - Mark > -- > - Mark > about.me/markrmiller > > >
[jira] [Comment Edited] (SOLR-12921) Separate Solr unit tests and integration tests.
[ https://issues.apache.org/jira/browse/SOLR-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663990#comment-16663990 ] Erick Erickson edited comment on SOLR-12921 at 10/25/18 4:48 PM: - WDYT about changing precommit to forbid new files (and maybe changes too) in the current test file tree? The idea here is that if we want new work to respect the reorganization, adding that precommit check would freeze the current mixup and we could move tests into the new structure over time rather than all at once. I have no idea how practical this is frankly, and it might mean some period of significant code duplication. If it can all be done in one big shot that'd be great. It's just that trying to do it all at once seems pretty hard. A finer-grained approach would be to forbid specific directories under the tree, e.g. ...solr/core/src/test/org/apache/solr/analysis/ was (Author: erickerickson): WDYT about changing precommit to forbid new files (and maybe changes too) in the current test file tree? The idea here is that if we want new work to respect the reorganization, adding that precommit check would freeze the current mixup and we could move tests into the new structure over time rather than all at once. I have no idea how practical this is frankly, and it might mean some period of significant code duplication. If it can all be done in one big shot that'd be great. It's just that trying to do it all at once seems pretty hard. > Separate Solr unit tests and integration tests. > --- > > Key: SOLR-12921 > URL: https://issues.apache.org/jira/browse/SOLR-12921 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > > We basically just have "tests" now. We should have separate locations for > unit and integration tests and new work should have a good reason to not > include both. 
Re: [jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
None at all, thanks for catching that! On Thu, Oct 25, 2018 at 9:39 AM Cassandra Targett (JIRA) wrote: > > > [ > https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663981#comment-16663981 > ] > > Cassandra Targett commented on SOLR-12423: > -- > > This is marked as Fixed for 7.6, and the issue appears in CHANGES under 7.6, > but the version listed under "Versions of Major Components" still lists 1.18 > as the Tika version, while the same section under 8.0 lists 1.19.1. > > This seems like a simple oversight, but [~erickerickson], > [~talli...@apache.org], any specific reason why I shouldn't just fix that? > > > Upgrade to Tika 1.19.1 when available > > - > > > > Key: SOLR-12423 > > URL: https://issues.apache.org/jira/browse/SOLR-12423 > > Project: Solr > > Issue Type: Task > > Security Level: Public(Default Security Level. Issues are Public) > >Reporter: Tim Allison > >Assignee: Erick Erickson > >Priority: Major > > Fix For: 7.6, master (8.0) > > > > Attachments: SOLR-12423.patch > > > > Time Spent: 50m > > Remaining Estimate: 0h > > > > In Tika 1.19, there will be the ability to call the ForkParser and specify > > a directory of jars from which to load the classes for the Parser in the > > child processes. This will allow us to remove all of the parser > > dependencies from Solr. We’ll still need tika-core, of course, but we could > > drop tika-app.jar in the child process’ bin directory and be done with the > > upgrade... no more fiddly dependency upgrades and threat of jar hell. > > The ForkParser also protects against ooms, infinite loops and jvm crashes. > > W00t! > > This issue covers the basic upgrading to 1.19.1. For the migration to the > > ForkParser, see SOLR-11721. 
[jira] [Commented] (SOLR-12921) Separate Solr unit tests and integration tests.
[ https://issues.apache.org/jira/browse/SOLR-12921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663990#comment-16663990 ] Erick Erickson commented on SOLR-12921: --- WDYT about changing precommit to forbid new files (and maybe changes too) in the current test file tree? The idea here is that if we want new work to respect the reorganization, adding that precommit check would freeze the current mixup and we could move tests into the new structure over time rather than all at once. I have no idea how practical this is frankly, and it might mean some period of significant code duplication. If it can all be done in one big shot that'd be great. It's just that trying to do it all at once seems pretty hard. > Separate Solr unit tests and integration tests. > --- > > Key: SOLR-12921 > URL: https://issues.apache.org/jira/browse/SOLR-12921 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Priority: Major > > We basically just have "tests" now. We should have separate locations for > unit and integration tests and new work should have a good reason to not > include both. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
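The freeze-the-tree idea above can be sketched as a pure check, independent of how precommit would wire it in. This is a hypothetical illustration: the class and method names are invented, and a real implementation would more likely derive the added-file list from git history than from in-memory sets.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class FrozenTreeCheck {
    // The tree the comment proposes to freeze; the finer-grained variant it
    // mentions would use e.g. "solr/core/src/test/org/apache/solr/analysis/".
    static final String FROZEN_ROOT = "solr/core/src/test/";

    // Existing files are tolerated because they are in the recorded baseline;
    // any file added under the frozen root since then is a violation.
    static List<String> newFilesInFrozenTree(Set<String> baseline, List<String> current) {
        List<String> violations = new ArrayList<>();
        for (String path : current) {
            if (path.startsWith(FROZEN_ROOT) && !baseline.contains(path)) {
                violations.add(path);
            }
        }
        return violations;
    }
}
```

Precommit would fail when the returned list is non-empty, which is what lets the old mixup stay frozen while tests migrate to the new structure over time.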
[jira] [Comment Edited] (LUCENE-8534) Another case of Polygon tessellator going into an infinite loop
[ https://issues.apache.org/jira/browse/LUCENE-8534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663911#comment-16663911 ] Ignacio Vera edited comment on LUCENE-8534 at 10/25/18 4:39 PM: [~nknize] attached is a patch with some findings: 1.- Method Tessellator#isIntersectingPolygon seems incorrect as it only checks the intersection with the first edge. This seems to cause creation of invalid diagonals when splitting the polygon. 2.- Method Tessellator#cureLocalIntersections produces incorrect ears. I think it needs to call the method above. 3.- when splitting a polygon, it needs to be re-sorted if using Morton indexing. It seems that for doing that the linked list needs to be reset for Z values. This fixes the issues. I had a visual inspection of the tessellation and it is looking good. was (Author: ivera): @nknize, attached is a patch with some findings: 1.- Method Tessellator#isIntersectingPolygon seems incorrect as it only checks the intersection with the first edge. This seems to cause creation of invalid diagonals when splitting the polygon. 2.- Method Tessellator#cureLocalIntersections produces incorrect ears. I think it needs to call the method above. 3.- when splitting a polygon, it needs to be re-sorted if using Morton indexing. It seems that for doing that the linked list needs to be reset for Z values. This fixes the issues. I had a visual inspection of the tessellation and it is looking good. > Another case of Polygon tessellator going into an infinite loop > --- > > Key: LUCENE-8534 > URL: https://issues.apache.org/jira/browse/LUCENE-8534 > Project: Lucene - Core > Issue Type: Bug > Components: modules/sandbox >Reporter: Ignacio Vera >Priority: Major > Attachments: LUCENE-8534.patch, LUCENE-8534.patch, LUCENE-8534.patch, > bigPolygon.wkt, image-2018-10-19-12-25-07-849.png > > > Related to LUCENE-8454, another case where tessellator never returns when > processing a polygon.
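Finding 1 in the comment above (a candidate diagonal must be tested against every polygon edge, not only the first) can be illustrated with plain segment-intersection math. This is a hedged, self-contained sketch: the real Lucene Tessellator operates on a linked node list with Morton ordering, and this code only shows the all-edges loop that the comment says is missing.

```java
public class DiagonalCheck {

    // Orientation of point c relative to directed segment a -> b
    // (z-component of the cross product).
    static double orient(double ax, double ay, double bx, double by, double cx, double cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Proper (strict) crossing test; touching or collinear overlap returns
    // false, which is enough to demonstrate the all-edges loop.
    static boolean segmentsCross(double ax, double ay, double bx, double by,
                                 double cx, double cy, double dx, double dy) {
        double d1 = orient(cx, cy, dx, dy, ax, ay);
        double d2 = orient(cx, cy, dx, dy, bx, by);
        double d3 = orient(ax, ay, bx, by, cx, cy);
        double d4 = orient(ax, ay, bx, by, dx, dy);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    // The point of finding 1: test the diagonal against EVERY edge of the
    // ring before declaring it valid, not just the first edge.
    static boolean diagonalIntersectsPolygon(double[] xs, double[] ys,
                                             double ax, double ay, double bx, double by) {
        int n = xs.length;
        for (int i = 0; i < n; i++) {
            int j = (i + 1) % n;
            if (segmentsCross(ax, ay, bx, by, xs[i], ys[i], xs[j], ys[j])) {
                return true;
            }
        }
        return false;
    }
}
```

In the real Tessellator the loop would walk the node linked list rather than parallel arrays; the essential change is that every edge is visited, so a diagonal crossing a later edge is no longer accepted.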
[jira] [Commented] (SOLR-12423) Upgrade to Tika 1.19.1 when available
[ https://issues.apache.org/jira/browse/SOLR-12423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16663981#comment-16663981 ] Cassandra Targett commented on SOLR-12423: -- This is marked as Fixed for 7.6, and the issue appears in CHANGES under 7.6, but the version listed under "Versions of Major Components" still lists 1.18 as the Tika version, while the same section under 8.0 lists 1.19.1. This seems like a simple oversight, but [~erickerickson], [~talli...@apache.org], any specific reason why I shouldn't just fix that? > Upgrade to Tika 1.19.1 when available > - > > Key: SOLR-12423 > URL: https://issues.apache.org/jira/browse/SOLR-12423 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Tim Allison >Assignee: Erick Erickson >Priority: Major > Fix For: 7.6, master (8.0) > > Attachments: SOLR-12423.patch > > Time Spent: 50m > Remaining Estimate: 0h > > In Tika 1.19, there will be the ability to call the ForkParser and specify a > directory of jars from which to load the classes for the Parser in the child > processes. This will allow us to remove all of the parser dependencies from > Solr. We’ll still need tika-core, of course, but we could drop tika-app.jar > in the child process’ bin directory and be done with the upgrade... no more > fiddly dependency upgrades and threat of jar hell. > The ForkParser also protects against ooms, infinite loops and jvm crashes. > W00t! > This issue covers the basic upgrading to 1.19.1. For the migration to the > ForkParser, see SOLR-11721. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Tests
Well, winter is coming on with long cold nights. What better thing to do than hunch over my computer working with tests? +1, I'll certainly add what I can. On Wed, Oct 24, 2018 at 11:35 PM Anshum Gupta wrote: > > +1 Mark! I’m in. > > Anshum > > > On Oct 24, 2018, at 11:05 PM, Mark Miller wrote: > > My boss has indicated I'm going to get a little time to spend on addressing > our test situation. Turns out the current downward trend situation is a > little worrying to some that count on the project down the road ;) > > We can finally dig out though, I promise. > > It's going to be a little bit before I'm clear and gathering steam, but I've > got this figured out. I'm finally playing checkers instead of rock / paper / > scissors. > > I'm going to eat a lot of work to get there and I promise I can get there, > but I'm going to need some help. > > If you have an interest in changing our test situation, and I know some of > you do, please join me as I invite others to help out more in a short time. > > Beyond some help with code effort though, please get involved in discussions > and JIRA issues around changing our test culture and process. We need to turn > this corner as a group, we need to come together on some new behaviors, we > need to generate large enough buy-in to do something lasting. > > You all need to learn 'ant beast' (I'm fixing it to work *much* better) and > my beasting test script gist (each has their place). I'm going to force it on > you even if you don't ;) > > https://gist.githubusercontent.com/markrmiller/dbdb792216dc98b018ad/raw/cd084d01f405fa271af4b72a2848dc87374a1f71/gistfile1.sh > > For a long time I put most of my attention towards SolrCloud stability and > scale. From 2012-2015 or so, that felt like the biggest threat to the project > while also getting relatively little attention. I felt like it was 50/50 in > 2014 that I'd even be talking to anyone about SolrCloud today. That hurdle is > done and over with.
The biggest threat to Solr and SolrCloud now is our tests. > > I'll be working from this umbrella issue: > > https://issues.apache.org/jira/browse/SOLR-12801 : Fix the tests, remove > BadApples and AwaitsFix annotations, improve env for test development. > > Join the discussion. > > - Mark > -- > - Mark > about.me/markrmiller > > - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Solr Tests
This situation has irritated me to no end and "something to help tests" is on my short list of things to contribute, along with "streaming expression stuff", "whatever happened to 3191" and of course continuing to help with TRA's. Count me in. On Thu, Oct 25, 2018, 2:35 AM Anshum Gupta wrote: > +1 Mark! I’m in. > > Anshum > > > On Oct 24, 2018, at 11:05 PM, Mark Miller wrote: > > My boss has indicated I'm going to get a little time to spend on > addressing our test situation. Turns out the current downward trend > situation is a little worrying to some that count on the project down the > road ;) > > We can finally dig out though, I promise. > > It's going to be a little bit before I'm clear and gathering steam, but > I've got this figured out. I'm finally playing checkers instead of rock / > paper / scissors. > > I'm going to eat a lot of work to get there and I promise I can get there, > but I'm going to need some help. > > If you have an interest in changing our test situation, and I know some of > you do, please join me as I invite others to help out more in a short time. > > Beyond some help with code effort though, please get involved in > discussions and JIRA issues around changing our test culture and process. > We need to turn this corner as a group, we need to come together on some > new behaviors, we need to generate large enough buy-in to do something > lasting. > > You all need to learn 'ant beast' (I'm fixing it to work *much* better) > and my beasting test script gist (each has their place). I'm going to force > it on you even if you don't ;) > > > https://gist.githubusercontent.com/markrmiller/dbdb792216dc98b018ad/raw/cd084d01f405fa271af4b72a2848dc87374a1f71/gistfile1.sh > > For a long time I put most of my attention towards SolrCloud stability and > scale. From 2012-2015 or so, that felt like the biggest threat to the > project while also getting relatively little attention.
I felt like it was > 50/50 in 2014 that I'd even be talking to anyone about SolrCloud today. > That hurdle is done and over with. The biggest threat to Solr and SolrCloud > now is our tests. > > I'll be working from this umbrella issue: > > https://issues.apache.org/jira/browse/SOLR-12801 : Fix the tests, remove > BadApples and AwaitsFix annotations, improve env for test development. > > Join the discussion. > > - Mark > -- > - Mark > about.me/markrmiller > > >