[jira] [Assigned] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint
[ https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shalin Shekhar Mangar reassigned SOLR-13674:
--------------------------------------------
    Assignee: Shalin Shekhar Mangar

> NodeAddedTrigger does not support configuration of replica type hint
> --------------------------------------------------------------------
>
>                Key: SOLR-13674
>                URL: https://issues.apache.org/jira/browse/SOLR-13674
>            Project: Solr
>         Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>  Affects Versions: 7.6
>           Reporter: Irena Shaigorodsky
>           Assignee: Shalin Shekhar Mangar
>           Priority: Major
>         Time Spent: 10m
> Remaining Estimate: 0h
>
> The current code in org.apache.solr.cloud.autoscaling.ComputePlanAction#getNodeAddedSuggester
> only sets the COLL_SHARD hint; as a result, any added replica will be an NRT one.
> Our current setup has TLOG nodes on physical hardware and PULL nodes on k8s
> that are recycled periodically. An attempt to add those will bring the nodes
> into the cluster as NRT replicas.
> The root cause is org.apache.solr.client.solrj.cloud.autoscaling.AddReplicaSuggester#tryEachNode,
> which expects to find the REPLICATYPE hint and defaults to NRT.

--
This message was sent by Atlassian JIRA (v7.6.14#76016)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
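The defaulting behavior the issue describes can be sketched as follows. This is a minimal, self-contained illustration, not the actual Solr classes: the `Hint` enum, `suggestedType` method, and map-based hint store are hypothetical stand-ins that only model the "missing REPLICATYPE hint falls back to NRT" logic.

```java
import java.util.EnumMap;
import java.util.Map;

// Stand-in for the suggester's hint handling: if the caller sets only
// COLL_SHARD (as ComputePlanAction#getNodeAddedSuggester does), the
// suggested replica silently falls back to NRT.
public class ReplicaTypeHintSketch {
    enum Hint { COLL_SHARD, REPLICATYPE }
    enum ReplicaType { NRT, TLOG, PULL }

    static ReplicaType suggestedType(Map<Hint, Object> hints) {
        // Models the default in AddReplicaSuggester#tryEachNode:
        // no REPLICATYPE hint means NRT, regardless of cluster setup.
        Object t = hints.get(Hint.REPLICATYPE);
        return t instanceof ReplicaType ? (ReplicaType) t : ReplicaType.NRT;
    }

    public static void main(String[] args) {
        Map<Hint, Object> hints = new EnumMap<>(Hint.class);
        hints.put(Hint.COLL_SHARD, "coll1/shard1");    // the only hint set today
        System.out.println(suggestedType(hints));      // prints NRT (the bug)

        hints.put(Hint.REPLICATYPE, ReplicaType.PULL); // the proposed fix
        System.out.println(suggestedType(hints));      // prints PULL
    }
}
```

The fix discussed in this issue amounts to populating that second hint from trigger configuration rather than leaving it unset.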
[jira] [Commented] (SOLR-13674) NodeAddedTrigger does not support configuration of replica type hint
[ https://issues.apache.org/jira/browse/SOLR-13674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899345#comment-16899345 ]

Irena Shaigorodsky commented on SOLR-13674:
-------------------------------------------

https://github.com/apache/lucene-solr/pull/821
[GitHub] [lucene-solr] ishaigor opened a new pull request #821: SOLR-13674: Add replica type property to NodeAddedTrigger
ishaigor opened a new pull request #821: SOLR-13674: Add replica type property to NodeAddedTrigger
URL: https://github.com/apache/lucene-solr/pull/821

# Description

Auto-scaled nodes are always added as NRT replicas, because there is no way to specify the desired replica type in the trigger. For a TLOG/PULL cluster that does not work.

# Solution

Added a replica type property to the NodeAddedTrigger, to be used with the preferredOperation 'ADDREPLICA'. The resolved replica type is set as a suggester hint so that the AddReplicaSuggester can make use of it, similarly to the PolicyHelper. I have also updated branch_7x and branch_8x in my repository, but I am not sure of the process to merge those.

# Tests

Added a test that, when the policy allows for PULL replicas, a node_added_trigger configured with replica type PULL adds the expected replica type.

# Checklist

Please review the following and check all that apply:

- [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability.
- [ ] I have created a Jira issue and added the issue ID to my pull request title.
- [ ] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute.
- [ ] I have developed this patch against the `master` branch.
- [ ] I have run `ant precommit` and the appropriate test suite.
- [ ] I have added tests for my changes.
- [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
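With the property the PR describes, a trigger configuration would look roughly like the following. This is a sketch against Solr's autoscaling `set-trigger` command; `replicaType` is the property this PR proposes, so check the merged change and Ref Guide for the final spelling before relying on it.

```json
{
  "set-trigger": {
    "name": "node_added_trigger",
    "event": "nodeAdded",
    "waitFor": "5s",
    "enabled": true,
    "preferredOperation": "ADDREPLICA",
    "replicaType": "PULL"
  }
}
```

The JSON would be POSTed to the cluster's autoscaling endpoint; without the `replicaType` line, the suggester's NRT default described in SOLR-13674 applies.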
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-12.0.1) - Build # 24489 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24489/
Java: 64bit/jdk-12.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

6 tests failed.

FAILED: org.apache.solr.cloud.rule.RulesTest.doIntegrationTest

Error Message:
Timeout occurred while waiting response from server at: https://127.0.0.1:44695/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while waiting response from server at: https://127.0.0.1:44695/solr
	at __randomizedtesting.SeedInfo.seed([5A686D09F1CECDC3:BF5B2A88EDBA3FC1]:0)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
	at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
	at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
	at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228)
	at org.apache.solr.cloud.MiniSolrCloudCluster.deleteAllCollections(MiniSolrCloudCluster.java:547)
	at org.apache.solr.cloud.rule.RulesTest.removeCollections(RulesTest.java:69)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:996)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1918 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1918/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
	at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
	at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
	at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
	at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
	at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
	at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
	at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
	at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
	at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
	at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
	at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
	at hudson.remoting.UserRequest.perform(UserRequest.java:212)
	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
	at hudson.remoting.Request$2.run(Request.java:369)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	... 4 more
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
Use of asf-git feature branches
(Continuing discussion on list instead of in arbitrary Jira)

My original concern was that pushing your feature branches to ASF git adds much noise to the mailing lists and should be used sparingly, for issues where we expect co-authoring.

> What we need is avoid notifications for commits to all the JIRA branches

+1, if you can pull that off, that would help. But if we start pushing hundreds of Jira branches it clutters things up, so we should remember to delete those branches after merge.

> Using a feature branch is good for collaboration. Every committer
> automatically had access to your branch

That should also be possible on GitHub PRs now (there's a checkbox), but I have not tried it yet.

Jan Høydahl

> On 3 Aug 2019, at 01:13, Noble Paul (JIRA) wrote:
>
> [ https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899269#comment-16899269 ]
>
> Noble Paul commented on SOLR-13677:
> -----------------------------------
>
> What we need is avoid notifications for commits to all the JIRA branches.
> Using a feature branch is good for collaboration. Every committer
> automatically had access to your branch
>
>> All Metrics Gauges should be unregistered by the objects that registered them
>> -----------------------------------------------------------------------------
>>
>>                Key: SOLR-13677
>>                URL: https://issues.apache.org/jira/browse/SOLR-13677
>>            Project: Solr
>>         Issue Type: Improvement
>>     Security Level: Public (Default Security Level. Issues are Public)
>>         Components: metrics
>>           Reporter: Noble Paul
>>           Priority: Major
>>         Time Spent: 10m
>> Remaining Estimate: 0h
>>
>> The life cycle of Metrics producers is managed by the core (mostly). So, if
>> the lifecycle of an object is different from that of the core itself, these
>> objects will never be unregistered from the metrics registry. This will lead
>> to memory leaks.
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.14#76016)
[JENKINS] Lucene-Solr-Tests-master - Build # 3482 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3482/

All tests passed

Build Log:
[...truncated 64499 lines...]
-ecj-javadoc-lint-tests:
    [mkdir] Created dir: /tmp/ecj128669299
 [ecj-lint] Compiling 48 source files to /tmp/ecj128669299
 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] ----------
 [ecj-lint] 1. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 23)
 [ecj-lint] 	import javax.naming.NamingException;
 [ecj-lint] The type javax.naming.NamingException is not accessible
 [ecj-lint] ----------
 [ecj-lint] 2. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 28)
 [ecj-lint] 	public class MockInitialContextFactory implements InitialContextFactory {
 [ecj-lint] The type MockInitialContextFactory must implement the inherited abstract method InitialContextFactory.getInitialContext(Hashtable)
 [ecj-lint] ----------
 [ecj-lint] 3. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 30)
 [ecj-lint] 	private final javax.naming.Context context;
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] ----------
 [ecj-lint] 4. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33)
 [ecj-lint] 	context = mock(javax.naming.Context.class);
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] ----------
 [ecj-lint] 5. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33)
 [ecj-lint] 	context = mock(javax.naming.Context.class);
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] ----------
 [ecj-lint] 6. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 36)
 [ecj-lint] 	when(context.lookup(anyString())).thenAnswer(invocation -> objects.get(invocation.getArgument(0)));
 [ecj-lint] context cannot be resolved
 [ecj-lint] ----------
 [ecj-lint] 7. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 38)
 [ecj-lint] 	} catch (NamingException e) {
 [ecj-lint] NamingException cannot be resolved to a type
 [ecj-lint] ----------
 [ecj-lint] 8. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 45)
 [ecj-lint] 	public javax.naming.Context getInitialContext(Hashtable env) {
 [ecj-lint] The type javax.naming.Context is not accessible
 [ecj-lint] ----------
 [ecj-lint] 9. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 46)
 [ecj-lint] 	return context;
 [ecj-lint] context cannot be resolved to a variable
 [ecj-lint] ----------
 [ecj-lint] 9 problems (9 errors)

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:634: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/build.xml:101: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build.xml:651: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/common-build.xml:479: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/common-build.xml:2015: The following error occurred while executing this line:
[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them
[ https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899269#comment-16899269 ]

Noble Paul commented on SOLR-13677:
-----------------------------------

What we need is to avoid notifications for commits to all the JIRA branches. Using a feature branch is good for collaboration. Every committer automatically has access to your branch.
[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them
[ https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899268#comment-16899268 ]

Jan Høydahl commented on SOLR-13677:
------------------------------------

Noble, is there a reason why you push your feature branch to Apache git instead of keeping it in your own fork and opening a PR when it's ready? Do you expect there to be collaboration? The reason I ask is that it adds some "noise" to the lists for every push, merge, etc.
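The leak SOLR-13677 describes, where a gauge registered by a short-lived object is never removed, so the registry keeps the object alive forever, can be sketched as follows. This is a simplified stand-in registry, not the Dropwizard Metrics API Solr actually uses; the `CacheComponent` class and metric names are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Stand-in for a metrics registry. The point of SOLR-13677: any object whose
// lifetime is shorter than the core's must remove its own gauges, otherwise
// the registry keeps a strong reference to it (via the lambda) forever.
public class GaugeLifecycleSketch {
    static final Map<String, Supplier<Integer>> REGISTRY = new ConcurrentHashMap<>();

    static class CacheComponent implements AutoCloseable {
        private final String metricName;

        CacheComponent(String name) {
            this.metricName = "cache." + name + ".size";
            // Registering a gauge captures 'this' context in the lambda...
            REGISTRY.put(metricName, () -> 42);
        }

        @Override
        public void close() {
            // ...so the registering object must unregister it, or it leaks.
            REGISTRY.remove(metricName);
        }
    }

    public static void main(String[] args) {
        try (CacheComponent c = new CacheComponent("filterCache")) {
            System.out.println(REGISTRY.size()); // prints 1 while the component lives
        }
        System.out.println(REGISTRY.size());     // prints 0 after close(): no leak
    }
}
```

Skipping the `REGISTRY.remove` call in `close()` is precisely the "never unregistered from the metrics registry" failure mode the issue warns about.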
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5: 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899263#comment-16899263 ]

Jan Høydahl commented on SOLR-13672:
------------------------------------

PR reviews welcome. Hope to merge next week.

PS. There's an email thread on the zk list about why that "membership:" line exists, which may perhaps lead to a ZK Jira, but in any case the parsing logic is now more robust. Earlier we aborted everything if a line did not have a separator; now we instead log it and continue.

> Admin UI/ZK Status page: Zookeeper 3.5: 4lw.commands.whitelist error
> --------------------------------------------------------------------
>
>                Key: SOLR-13672
>                URL: https://issues.apache.org/jira/browse/SOLR-13672
>            Project: Solr
>         Issue Type: Bug
>     Security Level: Public (Default Security Level. Issues are Public)
>         Components: SolrCloud
>  Affects Versions: 8.2
>           Reporter: Jörn Franke
>           Priority: Major
>        Attachments: SOLR-13672.patch, zk-status.png
>
>         Time Spent: 10m
> Remaining Estimate: 0h
>
> After upgrading to Solr 8.2 and Zookeeper 3.5.5, one sees the following error
> in the Admin UI / Cloud / ZkStatus:
> *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper
> configuration file."*
> Aside from the UI, the Solr Cloud nodes seem to work perfectly normal.
> This issue only occurs with ZooKeeper ensembles. It does not appear if one
> Zookeeper standalone instance is used.
> We tried the 4lw.commands.whitelist with wildcard * and "mntr,conf,ruok"
> (with and without spaces).

--
This message was sent by Atlassian JIRA (v7.6.14#76016)
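For reference, the setting the error message points at lives in each ensemble member's `zoo.cfg`; the commands listed are the four-letter-word commands the reporter mentions trying. A minimal example (the exact command list a given Solr version polls may differ):

```
# zoo.cfg — allow the four-letter-word commands the Solr Admin UI ZK Status page uses
4lw.commands.whitelist=mntr,conf,ruok

# Or, less restrictively, allow all four-letter-word commands:
# 4lw.commands.whitelist=*
```

As the issue notes, even with the whitelist set correctly the "membership:" line from `conf` output tripped up Solr's response parsing on ensembles, which is what the linked patch makes more tolerant.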
[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kevin Risden updated SOLR-6305:
-------------------------------
    Resolution: Fixed
        Status: Resolved (was: Patch Available)

> Ability to set the replication factor for index files created by HDFSDirectoryFactory
> --------------------------------------------------------------------------------------
>
>                Key: SOLR-6305
>                URL: https://issues.apache.org/jira/browse/SOLR-6305
>            Project: Solr
>         Issue Type: Improvement
>         Components: Hadoop Integration, hdfs
>        Environment: hadoop-2.2.0
>           Reporter: Timothy Potter
>           Assignee: Kevin Risden
>           Priority: Major
>            Fix For: 8.3
>
>        Attachments: 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, SOLR-6305.patch
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different
> replication factor than the configured DFS default because it uses:
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication
> factor for the Solr directories to a lower value than the default. I realize
> this might reduce the chance of data locality, but since Solr cores each have
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the
> replication factor, but that's a one-time shot and doesn't affect new files
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3; I'm setting it to 1 just as an example.
> Then I added some more docs to coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the replication factor from their
> parent directory.
> Not sure if we need to go as far as allowing a different replication factor
> per collection, but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through
> this using the Configuration object, but nothing jumped out at me... and the
> implementation of getServerDefaults(path) is just:
> {{public FsServerDefaults getServerDefaults(Path p) throws IOException {
>     return getServerDefaults();
> }}}
> Path is ignored ;-)

--
This message was sent by Atlassian JIRA (v7.6.14#76016)
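The "Path is ignored" observation above is the crux of the fix the commits describe ("replication from filesystem defaults, not from server defaults"). A simplified, self-contained model of the distinction follows; the method names and the map-based per-path lookup are illustrative stand-ins, not Hadoop's real APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Why SOLR-6305 mattered: a server-wide default lookup that ignores its path
// argument can never honor per-directory replication settings, while a
// path-aware lookup can.
public class ReplicationDefaultsSketch {
    static final short CLUSTER_DEFAULT = 3;

    // Models getServerDefaults(Path): one global answer, path ignored.
    static short serverDefaults(String ignoredPath) {
        return CLUSTER_DEFAULT;
    }

    // Models a path-aware lookup that honors per-directory configuration.
    static short fileSystemDefaults(Map<String, Short> perPath, String path) {
        return perPath.getOrDefault(path, CLUSTER_DEFAULT);
    }

    public static void main(String[] args) {
        Map<String, Short> perPath = new HashMap<>();
        // Models the operator having run: hadoop fs -setrep -R 1 solr49/coll1
        perPath.put("solr49/coll1", (short) 1);

        System.out.println(serverDefaults("solr49/coll1"));              // prints 3: setting ignored
        System.out.println(fileSystemDefaults(perPath, "solr49/coll1")); // prints 1: setting honored
    }
}
```

This mirrors the reporter's experiment: after `setrep -R 1`, newly written segment files still came back with replication 3 because the writer consulted the path-blind default.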
[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899257#comment-16899257 ]

ASF subversion and git services commented on SOLR-6305:
-------------------------------------------------------

Commit 901f381c617233e1613421134178bd3559c3a58d in lucene-solr's branch refs/heads/master from Boris Pasko
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=901f381 ]

SOLR-6305: Replication from filesystem defaults, not from server defaults

Signed-off-by: Kevin Risden
[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899258#comment-16899258 ] ASF subversion and git services commented on SOLR-6305: --- Commit 858b97a14453d56a636b6150891e7bcfbe01fd69 in lucene-solr's branch refs/heads/branch_8x from Boris Pasko [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=858b97a ] SOLR-6305: Replication from filesysem defaults, not from server defaults Signed-off-by: Kevin Risden > Ability to set the replication factor for index files created by > HDFSDirectoryFactory > - > > Key: SOLR-6305 > URL: https://issues.apache.org/jira/browse/SOLR-6305 > Project: Solr > Issue Type: Improvement > Components: Hadoop Integration, hdfs > Environment: hadoop-2.2.0 >Reporter: Timothy Potter >Assignee: Kevin Risden >Priority: Major > Fix For: 8.3 > > Attachments: > 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, > SOLR-6305.patch > > > HdfsFileWriter doesn't allow us to create files in HDFS with a different > replication factor than the configured DFS default because it uses: > {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}} > Since we have two forms of replication going on when using > HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication > factor for the Solr directories to a lower value than the default. I realize > this might reduce the chance of data locality but since Solr cores each have > their own path in HDFS, we should give operators the option to reduce it. > My original thinking was to just use Hadoop setrep to customize the > replication factor, but that's a one-time shot and doesn't affect new files > created. 
For instance, I did: > {{hadoop fs -setrep -R 1 solr49/coll1}} > My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an > example > Then added some more docs to the coll1 and did: > {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}} > 3 <-- should be 1 > So it looks like new files don't inherit the repfact from their parent > directory. > Not sure if we need to go as far as allowing different replication factor per > collection but that should be considered if possible. > I looked at the Hadoop 2.2.0 code to see if there was a way to work through > this using the Configuration object but nothing jumped out at me ... and the > implementation for getServerDefaults(path) is just: > public FsServerDefaults getServerDefaults(Path p) throws IOException { > return getServerDefaults(); > } > Path is ignored ;-) -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
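The fix discussed above amounts to preferring an explicitly configured replication factor over the namenode's server-wide default. As a rough illustration of that fallback pattern (not Solr's actual code), the sketch below uses a plain `Properties` object to stand in for Hadoop's `Configuration`; the helper name `resolveReplication` is hypothetical, though `dfs.replication` is the real HDFS config key:

```java
import java.util.Properties;

// Minimal sketch of "configured value wins, else server default".
// Properties stands in for Hadoop's Configuration; resolveReplication
// is a hypothetical helper, not a method in Solr or Hadoop.
public class ReplicationFactorSketch {

    // Returns the configured "dfs.replication" if present and valid;
    // otherwise falls back to the server default (which is effectively
    // all HdfsFileWriter used before the fix).
    static short resolveReplication(Properties conf, short serverDefault) {
        String v = conf.getProperty("dfs.replication");
        if (v == null) {
            return serverDefault;
        }
        try {
            short r = Short.parseShort(v.trim());
            return r > 0 ? r : serverDefault;
        } catch (NumberFormatException e) {
            return serverDefault;
        }
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // No override configured: server default (3) applies.
        System.out.println(resolveReplication(conf, (short) 3));
        // Operator sets a lower per-directory factor, as in the setrep example.
        conf.setProperty("dfs.replication", "1");
        System.out.println(resolveReplication(conf, (short) 3));
    }
}
```

This mirrors why `hadoop fs -setrep` alone is not enough: new files are created with whatever factor the writer passes at create time, so the writer has to consult the configuration itself rather than `getServerDefaults(path)`, which ignores the path.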
[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 24488 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24488/ Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseG1GC All tests passed Build Log: [...truncated 7672 lines...] [junit4] JVM J0: stdout was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp/junit4-J0-20190802_222952_0867846234130197395661.sysout [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] # To suppress the following error report, specify this argument [junit4] # after -XX: or in .hotspotrc: SuppressErrorAt=/loopPredicate.cpp:315 [junit4] # [junit4] # A fatal error has been detected by the Java Runtime Environment: [junit4] # [junit4] # Internal Error (/home/buildbot/worker/jdk13-linux/build/src/hotspot/share/opto/loopPredicate.cpp:315), pid=25085, tid=25122 [junit4] # assert(dom_r->unique_ctrl_out()->is_Call()) failed: unc expected [junit4] # [junit4] # JRE version: OpenJDK Runtime Environment (13.0) (fastdebug build 13-testing+0-builds.shipilev.net-openjdk-jdk13-b9-20190621-jdk-1326) [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 13-testing+0-builds.shipilev.net-openjdk-jdk13-b9-20190621-jdk-1326, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64) [junit4] # Problematic frame: [junit4] # V [libjvm.so+0x119f2cd] PhaseIdealLoop::clone_loop_predicates_fix_mem(ProjNode*, ProjNode*, PhaseIdealLoop*, PhaseIterGVN*)+0x12d [junit4] # [junit4] # No core dump will be written. Core dumps have been disabled. 
To enable core dumping, try "ulimit -c unlimited" before starting Java again [junit4] # [junit4] # An error report file with more information is saved as: [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0/hs_err_pid25085.log [junit4] # [junit4] # Compiler replay data is saved as: [junit4] # /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0/replay_pid25085.log [junit4] # [junit4] # If you would like to submit a bug report, please visit: [junit4] # http://bugreport.java.com/bugreport/crash.jsp [junit4] # [junit4] Current thread is 25122 [junit4] Dumping core ... [junit4] <<< JVM J0: EOF [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp/junit4-J0-20190802_222952_08616220303801019669064.syserr [junit4] >>> JVM J0 emitted unexpected output (verbatim) [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp -- output truncated [junit4] <<< JVM J0: EOF [...truncated 26 lines...] 
[junit4] ERROR: JVM J0 ended with an exception, command line: /home/jenkins/tools/java/64bit/jdk-13-ea+shipilev-fastdebug/bin/java -XX:+UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-master-Linux/heapdumps -ea -esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=1FF57B8EB6F2D56B -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perMethod -Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp -Djava.io.tmpdir=./temp -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene -Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/clover/db -Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/tests.policy -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-master-Linux -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/J0 -Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/facet/test/temp -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dfile.encoding=ISO-8859-1 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false -classpath
[JENKINS] Lucene-Solr-Tests-8.x - Build # 340 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/340/ 1 tests failed. FAILED: org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth Error Message: must have failed Stack Trace: java.lang.AssertionError: must have failed at __randomizedtesting.SeedInfo.seed([FDA60C7ADE0CA54D:41C87A687A5F2637]:0) at org.junit.Assert.fail(Assert.java:88) at org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:206) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Build Log: [...truncated 15955 lines...] [junit4] Suite: org.apache.solr.security.BasicAuthIntegrationTest [junit4] 2> 4091752 INFO (TEST-BasicAuthIntegrationTest.testBasicAuth-seed#[FDA60C7ADE0CA54D]) [ ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 3 servers in
[jira] [Commented] (SOLR-13678) ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent zkCallback thread on props watcher
[ https://issues.apache.org/jira/browse/SOLR-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899236#comment-16899236 ] Hoss Man commented on SOLR-13678: - AFAICT CollectionPropsWatcher isn't used internally by solr anywhere, so this issue will only impact solr clients that explicitly register their own watchers. /cc [~tomasflobbe] & [~prusko] and linking to SOLR-11960 where this was introduced. > ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent > zkCallback thread on props watcher > -- > > Key: SOLR-13678 > URL: https://issues.apache.org/jira/browse/SOLR-13678 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Priority: Major > Attachments: collectionpropswatcher-deadlock-jstack.txt > > > while investigating an (unrelated) test bug in CollectionPropsTest I > discovered a deadlock situation that can occur when calling > {{ZkStateReader.removeCollectionPropsWatcher()}} if a zkCallback thread tries > to concurrently fire the watchers set on the collection props. > {{ZkStateReader.removeCollectionPropsWatcher()}} is itself called when a > {{CollectionPropsWatcher.onStateChanged()}} impl returns "true" -- meaning > that IIUC any usage of {{CollectionPropsWatcher}} could potentially result in > this type of deadlock situation. 
> {noformat} > "TEST-CollectionPropsTest.testReadWriteCached-seed#[D3C6921874D1CFEB]" #15 > prio=5 os_prio=0 cpu=567.78ms elapsed=682.12s tid=0x7 > fa5e8343800 nid=0x3f61 waiting for monitor entry [0x7fa62d222000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.solr.common.cloud.ZkStateReader.lambda$removeCollectionPropsWatcher$20(ZkStateReader.java:2001) > - waiting to lock <0xe6207500> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.solr.common.cloud.ZkStateReader$$Lambda$617/0x0001006c1840.apply(Unknown > Source) > at > java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1932) > - locked <0xeb9156b8> (a > java.util.concurrent.ConcurrentHashMap$Node) > at > org.apache.solr.common.cloud.ZkStateReader.removeCollectionPropsWatcher(ZkStateReader.java:1994) > at > org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:125) > ... > "zkCallback-88-thread-2" #213 prio=5 os_prio=0 cpu=14.06ms elapsed=672.65s > tid=0x7fa6041bf000 nid=0x402f waiting for monitor ent > ry [0x7fa5b8f39000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1923) > - waiting to lock <0xeb9156b8> (a > java.util.concurrent.ConcurrentHashMap$Node) > at > org.apache.solr.common.cloud.ZkStateReader$PropsNotification.(ZkStateReader.java:2262) > at > org.apache.solr.common.cloud.ZkStateReader.notifyPropsWatchers(ZkStateReader.java:2243) > at > org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.refreshAndWatch(ZkStateReader.java:1458) > - locked <0xe6207500> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.process(ZkStateReader.java:1440) > at > org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor.lambda$process$1(SolrZkClient.java:838) > at > 
org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor$$Lambda$253/0x0001004a4440.run(Unknown > Source) > at > java.util.concurrent.Executors$RunnableAdapter.call(java.base@11.0.3/Executors.java:515) > at > java.util.concurrent.FutureTask.run(java.base@11.0.3/FutureTask.java:264) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$140/0x000100308c40.run(Unknown > Source) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.3/ThreadPoolExecutor.java:1128) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.3/ThreadPoolExecutor.java:628) > at java.lang.Thread.run(java.base@11.0.3/Thread.java:834) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
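The two stack traces above are a classic ABBA lock-ordering deadlock: `ConcurrentHashMap.compute()` holds an internal per-bin lock while running its remapping function, so one thread holding an external monitor and entering `compute()` can deadlock with another thread that is inside `compute()` on the same key and trying to take that monitor. The standalone sketch below (plain JDK, not Solr code; the key name and thread roles are illustrative) reproduces the hazard with daemon threads and a join timeout so it can be observed without hanging the JVM:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

// Standalone illustration of the hazard in the thread dump above:
// mixing ConcurrentHashMap.compute() (which holds a per-bin lock while
// the remapping function runs) with an external monitor, in opposite
// acquisition orders on two threads.
public class ComputeDeadlockDemo {

    static boolean demo() {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("props", 0);                       // ensure the bin node exists
        Object monitor = new Object();
        CountDownLatch monitorHeld = new CountDownLatch(1);

        // T1 (plays removeCollectionPropsWatcher): takes the external
        // monitor, then calls compute() -> blocks on the bin lock.
        Thread t1 = new Thread(() -> {
            synchronized (monitor) {
                monitorHeld.countDown();
                sleep(200);                        // let T2 get inside compute()
                map.compute("props", (k, v) -> v + 1);
            }
        });

        // T2 (plays the zkCallback notify path): enters compute() with the
        // bin lock held, then tries to take the monitor held by T1.
        Thread t2 = new Thread(() -> map.compute("props", (k, v) -> {
            await(monitorHeld);                    // make sure T1 owns the monitor
            synchronized (monitor) {
                return v + 1;
            }
        }));

        t1.setDaemon(true);                        // let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t2.start();
        t1.start();
        try {
            t1.join(1000);
            t2.join(1000);
        } catch (InterruptedException ignored) {
        }
        return t1.isAlive() && t2.isAlive();       // both still blocked => deadlocked
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }

    static void await(CountDownLatch l) {
        try { l.await(); } catch (InterruptedException ignored) {}
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "deadlocked" : "completed");
    }
}
```

The usual cure is to never invoke foreign code (watcher callbacks, removal logic) while holding the map's internal lock, e.g. by computing the notification outside `compute()` or by replacing the monitor/compute combination with a single explicit lock with a fixed ordering.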
[jira] [Updated] (SOLR-13678) ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent zkCallback thread on props watcher
[ https://issues.apache.org/jira/browse/SOLR-13678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated SOLR-13678: Attachment: collectionpropswatcher-deadlock-jstack.txt Status: Open (was: Open) Attaching the full jstack output that I captured from observing this during a run of {{CollectionPropsTest.testReadWriteCached}} (ie: the source of the snippet included in the summary) Please note that I captured this threaddump while in the process of testing some unrelated changes to other methods in {{CollectionPropsTest}} -- I believe all of my local changes to that test class at the time this thread dump was captured were to code that appeared farther down in the test file than any line numbers that might be mentioned in this threaddump, so all line numbers should be accurate on master circa ~ 52b5ec8068, but I'm not 100% certain. The key thing to focus on is the line numbers and callstack for the non-test code. I am 100% certain I had no local changes to {{CollectionPropsTest.testReadWriteCached}} itself, or any non-test code. > ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent > zkCallback thread on props watcher > -- > > Key: SOLR-13678 > URL: https://issues.apache.org/jira/browse/SOLR-13678 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Hoss Man >Priority: Major > Attachments: collectionpropswatcher-deadlock-jstack.txt > > > while investigating an (unrelated) test bug in CollectionPropsTest I > discovered a deadlock situation that can occur when calling > {{ZkStateReader.removeCollectionPropsWatcher()}} if a zkCallback thread tries > to concurrently fire the watchers set on the collection props. 
> {{ZkStateReader.removeCollectionPropsWatcher()}} is itself called when a > {{CollectionPropsWatcher.onStateChanged()}} impl returns "true" -- meaning > that IIUC any usage of {{CollectionPropsWatcher}} could potentially result in > this type of deadlock situation. > {noformat} > "TEST-CollectionPropsTest.testReadWriteCached-seed#[D3C6921874D1CFEB]" #15 > prio=5 os_prio=0 cpu=567.78ms elapsed=682.12s tid=0x7 > fa5e8343800 nid=0x3f61 waiting for monitor entry [0x7fa62d222000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > org.apache.solr.common.cloud.ZkStateReader.lambda$removeCollectionPropsWatcher$20(ZkStateReader.java:2001) > - waiting to lock <0xe6207500> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.solr.common.cloud.ZkStateReader$$Lambda$617/0x0001006c1840.apply(Unknown > Source) > at > java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1932) > - locked <0xeb9156b8> (a > java.util.concurrent.ConcurrentHashMap$Node) > at > org.apache.solr.common.cloud.ZkStateReader.removeCollectionPropsWatcher(ZkStateReader.java:1994) > at > org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:125) > ... 
> "zkCallback-88-thread-2" #213 prio=5 os_prio=0 cpu=14.06ms elapsed=672.65s > tid=0x7fa6041bf000 nid=0x402f waiting for monitor ent > ry [0x7fa5b8f39000] >java.lang.Thread.State: BLOCKED (on object monitor) > at > java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1923) > - waiting to lock <0xeb9156b8> (a > java.util.concurrent.ConcurrentHashMap$Node) > at > org.apache.solr.common.cloud.ZkStateReader$PropsNotification.(ZkStateReader.java:2262) > at > org.apache.solr.common.cloud.ZkStateReader.notifyPropsWatchers(ZkStateReader.java:2243) > at > org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.refreshAndWatch(ZkStateReader.java:1458) > - locked <0xe6207500> (a > java.util.concurrent.ConcurrentHashMap) > at > org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.process(ZkStateReader.java:1440) > at > org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor.lambda$process$1(SolrZkClient.java:838) > at > org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor$$Lambda$253/0x0001004a4440.run(Unknown > Source) > at > java.util.concurrent.Executors$RunnableAdapter.call(java.base@11.0.3/Executors.java:515) > at > java.util.concurrent.FutureTask.run(java.base@11.0.3/FutureTask.java:264) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) > at > org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$140/0x000100308c40.run(Unknown > Source) > at >
[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 169 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/169/ No tests ran. Build Log: [...truncated 25 lines...] ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data' at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119) at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20) at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21) at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239) at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294) at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176) at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990) at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ... 4 more java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899229#comment-16899229 ] Lucene/Solr QA commented on SOLR-6305: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 31m 59s{color} | {color:green} core in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 36s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-6305 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12976549/SOLR-6305.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / ee0fd49244 | | ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/519/testReport/ | | modules | C: solr solr/core U: solr | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/519/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Ability to set the replication factor for index files created by > HDFSDirectoryFactory > - > > Key: SOLR-6305 > URL: https://issues.apache.org/jira/browse/SOLR-6305 > Project: Solr > Issue Type: Improvement > Components: Hadoop Integration, hdfs > Environment: hadoop-2.2.0 >Reporter: Timothy Potter >Assignee: Kevin Risden >Priority: Major > Fix For: 8.3 > > Attachments: > 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, > SOLR-6305.patch > > > HdfsFileWriter doesn't allow us to create files in HDFS with a different > replication factor than the configured DFS default because it uses: > {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}} > Since we have two forms of replication going on when using > HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication > factor for the Solr directories to a lower value than the default. 
I realize > this might reduce the chance of data locality but since Solr cores each have > their own path in HDFS, we should give operators the option to reduce it. > My original thinking was to just use Hadoop setrep to customize the > replication factor, but that's a one-time shot and doesn't affect new files > created. For instance, I did: > {{hadoop fs -setrep -R 1 solr49/coll1}} > My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an > example > Then added some more docs to the coll1 and did: > {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}} > 3 <-- should be 1 > So it looks like new files don't inherit the repfact from their parent > directory. > Not sure if we need to go as far as allowing different replication factor per > collection but that should be considered if possible. > I looked at the Hadoop 2.2.0 code to see if there was a way to work through > this using the Configuration
[jira] [Created] (SOLR-13678) ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent zkCallback thread on props watcher
Hoss Man created SOLR-13678: --- Summary: ZkStateReader.removeCollectionPropsWatcher can deadlock with concurrent zkCallback thread on props watcher Key: SOLR-13678 URL: https://issues.apache.org/jira/browse/SOLR-13678 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Hoss Man while investigating an (unrelated) test bug in CollectionPropsTest I discovered a deadlock situation that can occur when calling {{ZkStateReader.removeCollectionPropsWatcher()}} if a zkCallback thread tries to concurrently fire the watchers set on the collection props. {{ZkStateReader.removeCollectionPropsWatcher()}} is itself called when a {{CollectionPropsWatcher.onStateChanged()}} impl returns "true" -- meaning that IIUC any usage of {{CollectionPropsWatcher}} could potentially result in this type of deadlock situation. {noformat} "TEST-CollectionPropsTest.testReadWriteCached-seed#[D3C6921874D1CFEB]" #15 prio=5 os_prio=0 cpu=567.78ms elapsed=682.12s tid=0x7 fa5e8343800 nid=0x3f61 waiting for monitor entry [0x7fa62d222000] java.lang.Thread.State: BLOCKED (on object monitor) at org.apache.solr.common.cloud.ZkStateReader.lambda$removeCollectionPropsWatcher$20(ZkStateReader.java:2001) - waiting to lock <0xe6207500> (a java.util.concurrent.ConcurrentHashMap) at org.apache.solr.common.cloud.ZkStateReader$$Lambda$617/0x0001006c1840.apply(Unknown Source) at java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1932) - locked <0xeb9156b8> (a java.util.concurrent.ConcurrentHashMap$Node) at org.apache.solr.common.cloud.ZkStateReader.removeCollectionPropsWatcher(ZkStateReader.java:1994) at org.apache.solr.cloud.CollectionPropsTest.testReadWriteCached(CollectionPropsTest.java:125) ... 
"zkCallback-88-thread-2" #213 prio=5 os_prio=0 cpu=14.06ms elapsed=672.65s tid=0x7fa6041bf000 nid=0x402f waiting for monitor ent ry [0x7fa5b8f39000] java.lang.Thread.State: BLOCKED (on object monitor) at java.util.concurrent.ConcurrentHashMap.compute(java.base@11.0.3/ConcurrentHashMap.java:1923) - waiting to lock <0xeb9156b8> (a java.util.concurrent.ConcurrentHashMap$Node) at org.apache.solr.common.cloud.ZkStateReader$PropsNotification.(ZkStateReader.java:2262) at org.apache.solr.common.cloud.ZkStateReader.notifyPropsWatchers(ZkStateReader.java:2243) at org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.refreshAndWatch(ZkStateReader.java:1458) - locked <0xe6207500> (a java.util.concurrent.ConcurrentHashMap) at org.apache.solr.common.cloud.ZkStateReader$PropsWatcher.process(ZkStateReader.java:1440) at org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor.lambda$process$1(SolrZkClient.java:838) at org.apache.solr.common.cloud.SolrZkClient$ProcessWatchWithExecutor$$Lambda$253/0x0001004a4440.run(Unknown Source) at java.util.concurrent.Executors$RunnableAdapter.call(java.base@11.0.3/Executors.java:515) at java.util.concurrent.FutureTask.run(java.base@11.0.3/FutureTask.java:264) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$140/0x000100308c40.run(Unknown Source) at java.util.concurrent.ThreadPoolExecutor.runWorker(java.base@11.0.3/ThreadPoolExecutor.java:1128) at java.util.concurrent.ThreadPoolExecutor$Worker.run(java.base@11.0.3/ThreadPoolExecutor.java:628) at java.lang.Thread.run(java.base@11.0.3/Thread.java:834) {noformat} -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-5381) Split Clusterstate and scale
[ https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul closed SOLR-5381. > Split Clusterstate and scale > - > > Key: SOLR-5381 > URL: https://issues.apache.org/jira/browse/SOLR-5381 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 5.0 > > Original Estimate: 2,016h > Remaining Estimate: 2,016h > > clusterstate.json is a single point of contention for all components in > SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes > because there are too many updates and too many nodes need to be notified of > the changes. As the number of nodes goes up, the size of clusterstate.json keeps > growing and it will soon exceed the limit imposed by ZK. > The first step is to store the shards information in separate nodes and each > node can just listen to the shard node it belongs to. We may also need to > split each collection into its own node, with clusterstate.json just > holding the names of the collections. > This is an umbrella issue
[GitHub] [lucene-solr] noblepaul opened a new pull request #820: SOLR-13677: All Metrics Gauges should be unregistered by the objects that registered them
noblepaul opened a new pull request #820: SOLR-13677: All Metrics Gauges should be unregistered by the objects that registered them URL: https://github.com/apache/lucene-solr/pull/820 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-5381) Split Clusterstate and scale
[ https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul resolved SOLR-5381. -- Resolution: Fixed Fix Version/s: 5.0 > Split Clusterstate and scale > - > > Key: SOLR-5381 > URL: https://issues.apache.org/jira/browse/SOLR-5381 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Fix For: 5.0 > > Original Estimate: 2,016h > Remaining Estimate: 2,016h > > clusterstate.json is a single point of contention for all components in > SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes > because there are too many updates and too many nodes need to be notified of > the changes. As the number of nodes goes up, the size of clusterstate.json keeps > growing and it will soon exceed the limit imposed by ZK. > The first step is to store the shards information in separate nodes and each > node can just listen to the shard node it belongs to. We may also need to > split each collection into its own node, with clusterstate.json just > holding the names of the collections. > This is an umbrella issue
[jira] [Commented] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them
[ https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899211#comment-16899211 ] ASF subversion and git services commented on SOLR-13677: Commit 3ce75aac49c79a023a9f1519badfe769e6a8f797 in lucene-solr's branch refs/heads/jira/SOLR-13677 from Noble Paul [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ce75aa ] SOLR-13677: initial commit > All Metrics Gauges should be unregistered by the objects that registered them > - > > Key: SOLR-13677 > URL: https://issues.apache.org/jira/browse/SOLR-13677 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Noble Paul >Priority: Major > > The life cycle of Metrics producers are managed by the core (mostly). So, if > the lifecycle of the object is different from that of the core itself, these > objects will never be unregistered from the metrics registry. This will lead > to memory leaks -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-master - Build # 3481 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3481/ 1 tests failed. FAILED: org.apache.solr.cloud.OverseerTest.testShardLeaderChange Error Message: Captured an uncaught exception in thread: Thread[id=1006, name=OverseerCollectionConfigSetProcessor-72136146633228291-127.0.0.1:35959_solr-n_01, state=RUNNABLE, group=Overseer collection creation process.] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=1006, name=OverseerCollectionConfigSetProcessor-72136146633228291-127.0.0.1:35959_solr-n_01, state=RUNNABLE, group=Overseer collection creation process.] at __randomizedtesting.SeedInfo.seed([825195C9337607B1:5C02123E29EEF240]:0) Caused by: org.apache.solr.common.AlreadyClosedException at __randomizedtesting.SeedInfo.seed([825195C9337607B1]:0) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:69) at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:337) at org.apache.solr.cloud.OverseerTaskProcessor.amILeader(OverseerTaskProcessor.java:425) at org.apache.solr.cloud.OverseerTaskProcessor.run(OverseerTaskProcessor.java:156) at java.base/java.lang.Thread.run(Thread.java:834) Build Log: [...truncated 12925 lines...] 
[junit4] Suite: org.apache.solr.cloud.OverseerTest [junit4] 2> 303086 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 Created dataDir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J0/temp/solr.cloud.OverseerTest_825195C9337607B1-001/data-dir-33-001 [junit4] 2> 303086 WARN (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=18 numCloses=18 [junit4] 2> 303086 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 303087 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, clientAuth=0.0/0.0) [junit4] 2> 303088 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> 303088 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 303089 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 303089 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer Starting server [junit4] 2> 303189 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.ZkTestServer start zk server on port:35959 [junit4] 2> 303189 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:35959 [junit4] 2> 303189 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 35959 [junit4] 2> 303192 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 
303197 INFO (zkConnectionManagerCallback-337-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 303197 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 303207 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 303214 INFO (zkConnectionManagerCallback-339-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 303214 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 303214 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 initCore [junit4] 2> 303214 INFO (SUITE-OverseerTest-seed#[825195C9337607B1]-worker) [ ] o.a.s.SolrTestCaseJ4 initCore end [junit4] 2> 303221 INFO (TEST-OverseerTest.testShardLeaderChange-seed#[825195C9337607B1]) [ ] o.a.s.SolrTestCaseJ4 ###Starting testShardLeaderChange [junit4] 2> 303399 INFO (Thread-155) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 303405 INFO (zkConnectionManagerCallback-343-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 303406 INFO (Thread-155) [ ] o.a.s.c.c.ConnectionManager Client is connected
[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.3) - Build # 960 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/960/ Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.search.facet.TestJsonFacets.testErrors {p0=STREAM} Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([D8987511DDB3CC4B:EBF5D885F243398]:0) at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.solr.search.facet.TestJsonFacets.doTestErrors(TestJsonFacets.java:3163) at org.apache.solr.search.facet.TestJsonFacets.testErrors(TestJsonFacets.java:3150) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) Build Log: [...truncated 14007 lines...] [junit4] Suite: org.apache.solr.search.facet.TestJsonFacets [junit4] 2> 573133 INFO (SUITE-TestJsonFacets-seed#[D8987511DDB3CC4B]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity
[jira] [Commented] (SOLR-13667) Add upper, lower, trim and split Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899188#comment-16899188 ] ASF subversion and git services commented on SOLR-13667: Commit 669b2fb0e200a023dcfe2a90a0ce0440b2b2a996 in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=669b2fb ] SOLR-13667: Add upper, lower, trim and split Stream Evaluators > Add upper, lower, trim and split Stream Evaluators > -- > > Key: SOLR-13667 > URL: https://issues.apache.org/jira/browse/SOLR-13667 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Priority: Major > Attachments: SOLR-13667.patch, SOLR-13667.patch > > > The upper and lower Stream Evaluators will convert strings to upper and lower > case. The trim Stream Evaluator will trim whitespace from strings and the > split Stream Evaluator will split a string by a delimiter regex. > These functions will operate on both strings and lists of strings. These are > useful functions for cleaning data during the loading process. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
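Based on the issue description, usage of the new evaluators inside a streaming expression might look like the following sketch (illustrative only; the exact parameter names, especially for split's delimiter, are not confirmed here):

```
select(
  search(collection1, q="*:*", fl="id,title_s", sort="id asc"),
  upper(title_s) as title_upper,
  lower(title_s) as title_lower,
  trim(title_s) as title_trimmed,
  split(title_s, delim=" ") as title_tokens
)
```

Because the evaluators operate on both strings and lists of strings, the same expression shapes apply when a tuple field holds a multi-valued string.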
[jira] [Commented] (SOLR-13667) Add upper, lower, trim and split Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899189#comment-16899189 ] ASF subversion and git services commented on SOLR-13667: Commit c69548d39f1e793e6c3e7869e819c0f729cd48f7 in lucene-solr's branch refs/heads/branch_8x from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c69548d ] SOLR-13667: Fix precommit > Add upper, lower, trim and split Stream Evaluators > -- > > Key: SOLR-13667 > URL: https://issues.apache.org/jira/browse/SOLR-13667 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Priority: Major > Attachments: SOLR-13667.patch, SOLR-13667.patch > > > The upper and lower Stream Evaluators will convert strings to upper and lower > case. The trim Stream Evaluator will trim whitespace from strings and the > split Stream Evaluator will split a string by a delimiter regex. > These functions will operate on both strings and lists of strings. These are > useful functions for cleaning data during the loading process. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899184#comment-16899184 ] Jörn Franke commented on SOLR-13672: Thanks for the quick check; I was about to send the ZK information when you commented on the issue. > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > Attachments: SOLR-13672.patch, zk-status.png > > Time Spent: 10m > Remaining Estimate: 0h > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper > configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly normal. > This issue only occurs with ZooKeeper ensembles. It does not appear if one > Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and "mntr,conf,ruok" > (with and without spaces).
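For reference, the whitelist the reporter describes lives in each ensemble member's zoo.cfg; it must be set on every ZooKeeper server (and each server restarted) for the Admin UI's four-letter-word checks to succeed:

```
# zoo.cfg -- allow the four-letter-word commands the Solr ZK Status page issues
4lw.commands.whitelist=mntr,conf,ruok

# or, to allow all commands:
# 4lw.commands.whitelist=*
```

ZooKeeper 3.5 disables these commands by default, which is why the error only appears after the upgrade.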
[jira] [Commented] (SOLR-13667) Add upper, lower, trim and split Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899183#comment-16899183 ] ASF subversion and git services commented on SOLR-13667: Commit ee0fd492444907de763183214d69df11e3284d83 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ee0fd49 ] SOLR-13667: Fix precommit > Add upper, lower, trim and split Stream Evaluators > -- > > Key: SOLR-13667 > URL: https://issues.apache.org/jira/browse/SOLR-13667 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Priority: Major > Attachments: SOLR-13667.patch, SOLR-13667.patch > > > The upper and lower Stream Evaluators will convert strings to upper and lower > case. The trim Stream Evaluator will trim whitespace from strings and the > split Stream Evaluator will split a string by a delimiter regex. > These functions will operate on both strings and lists of strings. These are > useful functions for cleaning data during the loading process. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13667) Add upper, lower, trim and split Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899182#comment-16899182 ] ASF subversion and git services commented on SOLR-13667: Commit 03a39666c0bd7969e267332fb282f1ba5f7a0866 in lucene-solr's branch refs/heads/master from Joel Bernstein [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03a3966 ] SOLR-13667: Add upper, lower, trim and split Stream Evaluators > Add upper, lower, trim and split Stream Evaluators > -- > > Key: SOLR-13667 > URL: https://issues.apache.org/jira/browse/SOLR-13667 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Priority: Major > Attachments: SOLR-13667.patch, SOLR-13667.patch > > > The upper and lower Stream Evaluators will convert strings to upper and lower > case. The trim Stream Evaluator will trim whitespace from strings and the > split Stream Evaluator will split a string by a delimiter regex. > These functions will operate on both strings and lists of strings. These are > useful functions for cleaning data during the loading process. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] magibney commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences
magibney commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#discussion_r310278050 ## File path: solr/core/src/java/org/apache/solr/handler/component/AffinityReplicaListTransformer.java ## @@ -0,0 +1,122 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.handler.component; + +import java.lang.invoke.MethodHandles; +import java.util.Arrays; +import java.util.Comparator; +import java.util.List; +import java.util.ListIterator; +import org.apache.solr.common.cloud.Replica; +import org.apache.solr.common.params.SolrParams; +import org.apache.solr.common.util.Hash; +import org.apache.solr.request.SolrQueryRequest; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Allows better caching by establishing deterministic evenly-distributed replica routing preferences according to + * either explicitly configured hash routing parameter, or the hash of a query parameter (configurable, usually related + * to the main query). 
+ */ +public class AffinityReplicaListTransformer implements ReplicaListTransformer { + + private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); + + private final int routingDividend; + + public AffinityReplicaListTransformer(String hashVal) { +this.routingDividend = Math.abs(Hash.lookup3ycs(hashVal, 0, hashVal.length(), 0)); + } + + public AffinityReplicaListTransformer(int routingDividend) { +this.routingDividend = routingDividend; + } + + /** + * + * @param dividendParam int param to be used directly for mod-based routing + * @param hashParam String param to be hashed into an int for mod-based routing + * @param req the request from which param values will be drawn + * @return null if specified routing vals are not able to be parsed properly + */ + public static ReplicaListTransformer getInstance(String dividendParam, String hashParam, SolrQueryRequest req) { +SolrParams params = req.getOriginalParams(); +String dividendVal; +if (dividendParam != null && (dividendVal = params.get(dividendParam)) != null && !dividendVal.isEmpty()) { + try { +return new AffinityReplicaListTransformer(Integer.parseInt(dividendVal)); Review comment: That would certainly be more succinct. I think the intention here was to be relaxed about parsing (i.e., if you get a garbage dividend param, fall back to hashing the hashParam -- the assumption being that we're doing our best to honor the user's request for _stability_, one way or another). Given that approach, recovering from SolrParams.getInt(String) would have required catching a SolrException, which felt weird to me for some reason. Assuming a change to SolrParams.getInt(String), do you think it makes sense to just be strict, or to be forgiving and catch/log the SolrException that could result from a non-integer dividendParam? (I'm generally in favor of strictness, but in this case it feels a little arbitrary, since param in question doesn't affect the ability to return accurate results ...) 
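The relaxed parsing discussed in the review above can be sketched as a standalone helper (hypothetical code, not the actual AffinityReplicaListTransformer; Solr hashes with Hash.lookup3ycs rather than String.hashCode): try the explicit integer dividend first, and fall back to hashing when it is absent or malformed, so the user's request for stability is honored one way or another.

```java
public class RelaxedDividendParse {

    // Resolve the routing dividend: prefer the explicit int param, fall back to
    // a non-negative hash of the hash param when parsing fails or no value is given.
    static int routingDividend(String dividendVal, String hashVal) {
        if (dividendVal != null && !dividendVal.isEmpty()) {
            try {
                return Integer.parseInt(dividendVal);
            } catch (NumberFormatException e) {
                // Garbage dividend: fall through to the hash-based route rather
                // than failing the request.
            }
        }
        // Mask the sign bit so the mod-based routing index is never negative.
        return hashVal.hashCode() & 0x7fffffff;
    }

    public static void main(String[] args) {
        System.out.println(routingDividend("42", "q"));   // 42
        // A malformed dividend falls back to the same hash as a missing one.
        System.out.println(routingDividend("garbage", "q") == routingDividend(null, "q")); // true
    }
}
```

The strict alternative debated in the review would instead surface the malformed param as an error; the sketch above shows only the forgiving variant.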
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899173#comment-16899173 ] Jan Høydahl commented on SOLR-13672: The screenshot shows another error where the whitelist is not configured for one of the zk hosts. The ruok command is the first we try, so when it fails we fail the whole host. We now explicitly skip the 'membership:' line. We are not parsing a config file but the text response from a socket call. It's a mess: some commands respond with tab-separated lines, others with =-separated :) > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > Attachments: SOLR-13672.patch, zk-status.png > > Time Spent: 10m > Remaining Estimate: 0h > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper > configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly normal. > This issue only occurs with ZooKeeper ensembles. It does not appear if one > Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and "mntr,conf,ruok" > (with and without spaces).
[GitHub] [lucene-solr] magibney commented on issue #677: SOLR-13257: support for stable replica routing preferences
magibney commented on issue #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#issuecomment-517823813 Indeed, I'll update the docs under the ShardHandlerFactory section. Thanks! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] magibney commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences
magibney commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#discussion_r310274283 ## File path: solr/core/src/java/org/apache/solr/handler/component/AffinityReplicaListTransformerFactory.java ## @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.handler.component; + +import org.apache.solr.common.params.CommonParams; +import org.apache.solr.common.params.ShardParams; +import org.apache.solr.common.util.NamedList; +import org.apache.solr.request.SolrQueryRequest; + +/** + * Review comment: Yes, that makes sense. I'll make `AffinityReplicaListTransformer` package-private (like `AffinityReplicaListTransformerFactory`, and `ShufflingReplicaListTransformer`). I'll flesh out the javadocs too. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)
cpoerschke commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch) URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-517808063 > I rebased on top of the current master ... Thanks @diegoceccarelli! > ... had to change the returned type of serializeOneSearchGroup from Object[] to Object because ... Yes, that sounds right. The `serializeOneSearchGroup` factoring out in master was scoped on the master code at the time only i.e. it did not anticipate what would be needed subsequently e.g. with the `SkipSecondStepSearchResultResultTransformer` here. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them
[ https://issues.apache.org/jira/browse/SOLR-13677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-13677: -- Component/s: metrics > All Metrics Gauges should be unregistered by the objects that registered them > - > > Key: SOLR-13677 > URL: https://issues.apache.org/jira/browse/SOLR-13677 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Noble Paul >Priority: Major > > The life cycle of Metrics producers are managed by the core (mostly). So, if > the lifecycle of the object is different from that of the core itself, these > objects will never be unregistered from the metrics registry. This will lead > to memory leaks -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13677) All Metrics Gauges should be unregistered by the objects that registered them
Noble Paul created SOLR-13677: - Summary: All Metrics Gauges should be unregistered by the objects that registered them Key: SOLR-13677 URL: https://issues.apache.org/jira/browse/SOLR-13677 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Noble Paul The life cycle of Metrics producers is managed by the core (mostly). So, if the lifecycle of the object is different from that of the core itself, these objects will never be unregistered from the metrics registry. This will lead to memory leaks.
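The leak described above can be illustrated with a simplified stand-in registry (plain Java, not Solr's actual SolrMetricManager API; all names here are illustrative): a gauge is a closure over its owning object, so unless the owner removes the gauge when its own lifecycle ends, the registry keeps the owner reachable forever.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified stand-in for a metrics registry of gauges (hypothetical names,
// not Solr's SolrMetricManager): the object that registers a gauge must also
// unregister it on close, or the registry pins the object in memory.
public class GaugeRegistryDemo {
    static final Map<String, Supplier<Object>> registry = new HashMap<>();

    static class SearcherLike implements AutoCloseable {
        final String gaugeName = "searcher.numDocs";
        final int numDocs = 42;

        SearcherLike() {
            // The lambda captures `this` via the numDocs field, so the
            // registry now holds a strong reference to this object.
            registry.put(gaugeName, () -> numDocs);
        }

        @Override
        public void close() {
            registry.remove(gaugeName); // unregister, breaking the reference
        }
    }

    static int registrySizeAfterClose() {
        try (SearcherLike s = new SearcherLike()) {
            // while open, the gauge is registered
        }
        return registry.size();
    }

    public static void main(String[] args) {
        System.out.println(registrySizeAfterClose()); // 0: gauge removed on close
    }
}
```

If `close()` omitted the `remove` call, the registry entry (and the object behind it) would outlive the core, which is exactly the leak the issue describes.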
[jira] [Commented] (SOLR-13257) Enable replica routing affinity for better cache usage
[ https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899135#comment-16899135 ] Christine Poerschke commented on SOLR-13257: Thanks [~tomasflobbe] for the ping! I left some comments on the pull requests but have no concerns, it's a very elegant code solution indeed, thank you [~mgibney] for developing and contributing it. > Enable replica routing affinity for better cache usage > -- > > Key: SOLR-13257 > URL: https://issues.apache.org/jira/browse/SOLR-13257 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Michael Gibney >Assignee: Tomás Fernández Löbbe >Priority: Minor > Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, > SOLR-13257.patch > > Time Spent: 2h 50m > Remaining Estimate: 0h > > For each shard in a distributed request, Solr currently routes each request > randomly via > [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java] > to a particular replica. In setups with replication factor >1, this normally > results in a situation where subsequent requests (which one would hope/expect > to leverage cached results from previous related requests) end up getting > routed to a replica that hasn't seen any related requests. > The problem can be replicated by issuing a relatively expensive query (maybe > containing common terms?). The first request initializes the > {{queryResultCache}} on the consulted replicas. If replication factor >1 and > there are a sufficient number of shards, subsequent requests will likely be > routed to at least one replica that _hasn't_ seen the query before. The > replicas with uninitialized caches become a bottleneck, and from the client's > perspective, many subsequent requests appear not to benefit from caching at > all. 
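The fix the issue describes routes each query deterministically rather than shuffling. A minimal sketch of that idea (hypothetical names, not the actual AffinityReplicaListTransformer code): hash a stable value such as the query string and rotate the replica list by `hash mod size`, so the same query always prefers the same replica and hits its warm `queryResultCache`.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of mod-based replica affinity (illustrative only): the preferred
// replica for a given query string is stable across requests, while different
// queries still spread evenly across replicas.
public class AffinityRoutingSketch {
    static List<String> preferredOrder(List<String> replicas, String q) {
        List<String> ordered = new ArrayList<>(replicas);
        // floorMod keeps the offset non-negative even for negative hash codes.
        int offset = Math.floorMod(q.hashCode(), ordered.size());
        Collections.rotate(ordered, -offset); // preferred replica moves to front
        return ordered;
    }

    public static void main(String[] args) {
        List<String> replicas = List.of("r1", "r2", "r3");
        // Same query yields the same ordering on every request.
        System.out.println(preferredOrder(replicas, "common terms query")
                .equals(preferredOrder(replicas, "common terms query"))); // true
    }
}
```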
[GitHub] [lucene-solr] cpoerschke commented on issue #677: SOLR-13257: support for stable replica routing preferences
cpoerschke commented on issue #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#issuecomment-517804537 > ... Also, we should probably add some documentation on how to configure the replicaRouting in solr.xml Might the https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/format-of-solr-xml.adoc#the-shardhandlerfactory-element section be a suitable place?
[GitHub] [lucene-solr] brjeter commented on issue #508: Simplified JAVA_VER_NUM to utilize single expr execution
brjeter commented on issue #508: Simplified JAVA_VER_NUM to utilize single expr execution URL: https://github.com/apache/lucene-solr/pull/508#issuecomment-517801789 Yes, I just got stuck on this for a while in our Maven build in CI. It'd be nice to have this fixed.
[GitHub] [lucene-solr] LinkMJB commented on issue #508: Simplified JAVA_VER_NUM to utilize single expr execution
LinkMJB commented on issue #508: Simplified JAVA_VER_NUM to utilize single expr execution URL: https://github.com/apache/lucene-solr/pull/508#issuecomment-517797067 Any chance we can get this reviewed and merged? This PR has been rotting for quite a while now, and it is a small change.
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences
cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#discussion_r310237271 ## File path: solr/core/src/java/org/apache/solr/handler/component/AffinityReplicaListTransformerFactory.java ## @@ -0,0 +1,66 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.handler.component; + +import org.apache.solr.common.params.CommonParams; +import org.apache.solr.common.params.ShardParams; +import org.apache.solr.common.util.NamedList; +import org.apache.solr.request.SolrQueryRequest; + +/** + * Review comment: `AffinityReplicaListTransformer` being public but `AffinityReplicaListTransformerFactory` here not being public jumps out as surprising; I would have expected them both to have the same visibility, or potentially for the factory to have more visibility than the class itself. And perhaps the javadocs could mention, e.g., the defaulting-to-q-param behaviour. What do you think? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
[GitHub] [lucene-solr] diegoceccarelli edited a comment on issue #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping
diegoceccarelli edited a comment on issue #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping URL: https://github.com/apache/lucene-solr/pull/819#issuecomment-517788448 Please note: I ran precommit but it failed; I'm not sure it's caused by my change. I'll look into it tomorrow. ``` [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) ```
[GitHub] [lucene-solr] diegoceccarelli commented on issue #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping
diegoceccarelli commented on issue #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping URL: https://github.com/apache/lucene-solr/pull/819#issuecomment-517788448 Please note: I ran precommit but it failed; I'm not sure it's caused by my change. I'll look into it tomorrow.
[GitHub] [lucene-solr] diegoceccarelli opened a new pull request #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping
diegoceccarelli opened a new pull request #819: SOLR-13676: Reduce log verbosity in TestDistributedGrouping URL: https://github.com/apache/lucene-solr/pull/819 using ignoreException # Description SOLR-13404 added a test that expects Solr to fail if grouping is called with group.offset < 0. When the test runs it succeeds, but the whole stack trace is printed out in the logs. # Solution This small patch avoids the stack trace by using ignoreException. I also replaced an `assertTrue` with a more specific check. # Tests This patch improves an existing test. # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [x] I have developed this patch against the `master` branch. - [-] I have run `ant precommit` and the appropriate test suite. - [x] I have added tests for my changes. - [...] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences
cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#discussion_r310231462 ## File path: solr/core/src/java/org/apache/solr/handler/component/AffinityReplicaListTransformer.java ## @@ -0,0 +1,122 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.handler.component; + +import java.lang.invoke.MethodHandles; +import java.util.Arrays; +import java.util.Comparator; +import java.util.List; +import java.util.ListIterator; +import org.apache.solr.common.cloud.Replica; +import org.apache.solr.common.params.SolrParams; +import org.apache.solr.common.util.Hash; +import org.apache.solr.request.SolrQueryRequest; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Allows better caching by establishing deterministic evenly-distributed replica routing preferences according to + * either explicitly configured hash routing parameter, or the hash of a query parameter (configurable, usually related + * to the main query). 
+ */ +public class AffinityReplicaListTransformer implements ReplicaListTransformer { + + private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); + + private final int routingDividend; + + public AffinityReplicaListTransformer(String hashVal) { +this.routingDividend = Math.abs(Hash.lookup3ycs(hashVal, 0, hashVal.length(), 0)); + } + + public AffinityReplicaListTransformer(int routingDividend) { +this.routingDividend = routingDividend; + } + + /** + * + * @param dividendParam int param to be used directly for mod-based routing + * @param hashParam String param to be hashed into an int for mod-based routing + * @param req the request from which param values will be drawn + * @return null if specified routing vals are not able to be parsed properly + */ + public static ReplicaListTransformer getInstance(String dividendParam, String hashParam, SolrQueryRequest req) { +SolrParams params = req.getOriginalParams(); +String dividendVal; +if (dividendParam != null && (dividendVal = params.get(dividendParam)) != null && !dividendVal.isEmpty()) { + try { +return new AffinityReplicaListTransformer(Integer.parseInt(dividendVal)); Review comment: Might the `Integer SolrParams.getInt(String)` method be an alternative way of parsing the `dividendParam`? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
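The review comment above suggests `SolrParams.getInt(String)` as an alternative to the hand-rolled `params.get` plus `Integer.parseInt`. A simplified stand-in for that contract (plain Java, not the actual SolrParams class): return `null` when the parameter is absent, and fail loudly rather than silently on malformed input.

```java
import java.util.Map;

// Simplified stand-in for the SolrParams.getInt(String) contract mentioned in
// the review comment: null when the param is absent, the parsed Integer
// otherwise; a malformed value raises an exception instead of being ignored.
public class GetIntSketch {
    static Integer getInt(Map<String, String> params, String name) {
        String val = params.get(name);
        if (val == null) {
            return null; // absent param: caller can fall back to another hint
        }
        try {
            return Integer.valueOf(val);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("bad int param " + name, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(getInt(Map.of("routingDividend", "7"), "routingDividend")); // 7
        System.out.println(getInt(Map.of(), "routingDividend")); // null
    }
}
```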
[jira] [Created] (SOLR-13676) Reduce log verbosity in TestDistributedGrouping using ignoreException
Diego Ceccarelli created SOLR-13676: --- Summary: Reduce log verbosity in TestDistributedGrouping using ignoreException Key: SOLR-13676 URL: https://issues.apache.org/jira/browse/SOLR-13676 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: Diego Ceccarelli SOLR-13404 added a test that expects Solr to fail if grouping is called with {{group.offset < 0}}. When the test runs it succeeds, but the whole stack trace is printed out in the logs. This small patch avoids the stack trace by using {{ignoreException}}.
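The "more specific check" the patch mentions can be sketched generically (hypothetical `search` method, not the actual TestDistributedGrouping code): instead of a broad `assertTrue`, the test asserts that the expected exception is thrown and that its message names the offending parameter.

```java
// Generic sketch of asserting on an expected failure (illustrative names
// only): a negative group.offset must be rejected with a message that
// identifies the parameter, rather than being checked by a vague assertTrue.
public class NegativeOffsetCheck {
    // Stand-in for issuing a grouped search request.
    static void search(int groupOffset) {
        if (groupOffset < 0) {
            throw new IllegalArgumentException("'group.offset' parameter cannot be negative");
        }
    }

    static boolean rejectsNegativeOffset() {
        try {
            search(-1);
            return false; // should not get here: the call must fail
        } catch (IllegalArgumentException expected) {
            // Specific check: the failure mentions the offending parameter.
            return expected.getMessage().contains("group.offset");
        }
    }

    public static void main(String[] args) {
        System.out.println(rejectsNegativeOffset()); // true
    }
}
```

In the real test, Solr's test framework additionally suppresses the expected stack trace from the logs via `ignoreException`, which is the verbosity reduction this issue is about.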
[jira] [Commented] (SOLR-13257) Enable replica routing affinity for better cache usage
[ https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899041#comment-16899041 ] Tomás Fernández Löbbe commented on SOLR-13257: -- Sorry [~mgibney], I've been quite busy these days. Code looks good, I'll merge soon unless there are any concerns ([~cpoerschke]?) > Enable replica routing affinity for better cache usage > -- > > Key: SOLR-13257 > URL: https://issues.apache.org/jira/browse/SOLR-13257 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Michael Gibney >Assignee: Tomás Fernández Löbbe >Priority: Minor > Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, > SOLR-13257.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > For each shard in a distributed request, Solr currently routes each request > randomly via > [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java] > to a particular replica. In setups with replication factor >1, this normally > results in a situation where subsequent requests (which one would hope/expect > to leverage cached results from previous related requests) end up getting > routed to a replica that hasn't seen any related requests. > The problem can be replicated by issuing a relatively expensive query (maybe > containing common terms?). The first request initializes the > {{queryResultCache}} on the consulted replicas. If replication factor >1 and > there are a sufficient number of shards, subsequent requests will likely be > routed to at least one replica that _hasn't_ seen the query before. The > replicas with uninitialized caches become a bottleneck, and from the client's > perspective, many subsequent requests appear not to benefit from caching at > all. 
[jira] [Commented] (LUCENE-8944) "I am authorized to contribute" wording in the Pull Request Template
[ https://issues.apache.org/jira/browse/LUCENE-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899040#comment-16899040 ] Christine Poerschke commented on LUCENE-8944: - Here are some thoughts that occurred to me this week: * "code" could be broadened out to "changes" to include code as well as documentation and test contributions. * "ASF" could be expanded to "Apache Software Foundation" or "Apache Software Foundation (ASF)". * Could we sign-post contributors to further information that may be helpful in an "I don't know if I'm authorized or not" scenario? * How should we provide helpful feedback for pull requests where the "I am authorized" checklist item is unchecked? Is an unchecked checklist item different from a pull request that does not have the checklist or the checklist item? If we accept pull requests without the checklist item, then is it perhaps not strictly necessary to have the checklist item? ** I've looked around a little in the Apache documentation and at some other Apache projects' pull request templates but found no obvious answer to this. Perhaps a [Legal Discuss|https://issues.apache.org/jira/projects/LEGAL/issues] issue could be opened, but I first wanted to ask at the project level here. What do others think? > "I am authorized to contribute" wording in the Pull Request Template > > > Key: LUCENE-8944 > URL: https://issues.apache.org/jira/browse/LUCENE-8944 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Christine Poerschke >Priority: Minor > > This ticket is to consider potential revisions to one of the checklist items > in the [pull request > template|https://github.com/apache/lucene-solr/blob/master/.github/PULL_REQUEST_TEMPLATE.md] > -- its current wording is: > bq. \[ \] I am authorized to contribute this code to the ASF and have removed > any code I do not have a license to distribute.
[jira] [Created] (LUCENE-8944) "I am authorized to contribute" wording in the Pull Request Template
Christine Poerschke created LUCENE-8944: --- Summary: "I am authorized to contribute" wording in the Pull Request Template Key: LUCENE-8944 URL: https://issues.apache.org/jira/browse/LUCENE-8944 Project: Lucene - Core Issue Type: Improvement Reporter: Christine Poerschke This ticket is to consider potential revisions to one of the checklist items in the [pull request template|https://github.com/apache/lucene-solr/blob/master/.github/PULL_REQUEST_TEMPLATE.md] -- its current wording is: bq. \[ \] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute.
[jira] [Assigned] (SOLR-13257) Enable replica routing affinity for better cache usage
[ https://issues.apache.org/jira/browse/SOLR-13257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe reassigned SOLR-13257: Assignee: Tomás Fernández Löbbe > Enable replica routing affinity for better cache usage > -- > > Key: SOLR-13257 > URL: https://issues.apache.org/jira/browse/SOLR-13257 > Project: Solr > Issue Type: New Feature > Components: SolrCloud >Reporter: Michael Gibney >Assignee: Tomás Fernández Löbbe >Priority: Minor > Attachments: AffinityShardHandlerFactory.java, SOLR-13257.patch, > SOLR-13257.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > For each shard in a distributed request, Solr currently routes each request > randomly via > [ShufflingReplicaListTransformer|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/handler/component/ShufflingReplicaListTransformer.java] > to a particular replica. In setups with replication factor >1, this normally > results in a situation where subsequent requests (which one would hope/expect > to leverage cached results from previous related requests) end up getting > routed to a replica that hasn't seen any related requests. > The problem can be replicated by issuing a relatively expensive query (maybe > containing common terms?). The first request initializes the > {{queryResultCache}} on the consulted replicas. If replication factor >1 and > there are a sufficient number of shards, subsequent requests will likely be > routed to at least one replica that _hasn't_ seen the query before. The > replicas with uninitialized caches become a bottleneck, and from the client's > perspective, many subsequent requests appear not to benefit from caching at > all. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-13666) Pull Request Template to sign-post to the Solr Ref Guide source
[ https://issues.apache.org/jira/browse/SOLR-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke resolved SOLR-13666. Resolution: Done Assignee: Christine Poerschke Fix Version/s: master (9.0) > Pull Request Template to sign-post to the Solr Ref Guide source > --- > > Key: SOLR-13666 > URL: https://issues.apache.org/jira/browse/SOLR-13666 > Project: Solr > Issue Type: Wish >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Fix For: master (9.0) > > Time Spent: 20m > Remaining Estimate: 0h
[GitHub] [lucene-solr] asfgit closed pull request #814: SOLR-13666: Pull Request Template to sign-post to the Solr Ref Guide source
asfgit closed pull request #814: SOLR-13666: Pull Request Template to sign-post to the Solr Ref Guide source URL: https://github.com/apache/lucene-solr/pull/814
[jira] [Commented] (SOLR-13666) Pull Request Template to sign-post to the Solr Ref Guide source
[ https://issues.apache.org/jira/browse/SOLR-13666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899030#comment-16899030 ] ASF subversion and git services commented on SOLR-13666: Commit e2440d06d8950f41352a5656db6289683c0bd9ee in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e2440d0 ] SOLR-13666: pull request template now sign-posts to Solr Reference Guide source (Closes #814 PR.) > Pull Request Template to sign-post to the Solr Ref Guide source > --- > > Key: SOLR-13666 > URL: https://issues.apache.org/jira/browse/SOLR-13666 > Project: Solr > Issue Type: Wish >Reporter: Christine Poerschke >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h
[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-6305: --- Fix Version/s: 8.3 > Ability to set the replication factor for index files created by > HDFSDirectoryFactory > - > > Key: SOLR-6305 > URL: https://issues.apache.org/jira/browse/SOLR-6305 > Project: Solr > Issue Type: Improvement > Components: Hadoop Integration, hdfs > Environment: hadoop-2.2.0 >Reporter: Timothy Potter >Assignee: Kevin Risden >Priority: Major > Fix For: 8.3 > > Attachments: > 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, > SOLR-6305.patch > > > HdfsFileWriter doesn't allow us to create files in HDFS with a different > replication factor than the configured DFS default because it uses: > {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}} > Since we have two forms of replication going on when using > HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication > factor for the Solr directories to a lower value than the default. I realize > this might reduce the chance of data locality but since Solr cores each have > their own path in HDFS, we should give operators the option to reduce it. > My original thinking was to just use Hadoop setrep to customize the > replication factor, but that's a one-time shot and doesn't affect new files > created. For instance, I did: > {{hadoop fs -setrep -R 1 solr49/coll1}} > My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an > example > Then added some more docs to the coll1 and did: > {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}} > 3 <-- should be 1 > So it looks like new files don't inherit the repfact from their parent > directory. > Not sure if we need to go as far as allowing different replication factor per > collection but that should be considered if possible. 
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through > this using the Configuration object but nothing jumped out at me ... and the > implementation for getServerDefaults(path) is just: > public FsServerDefaults getServerDefaults(Path p) throws IOException { > return getServerDefaults(); > } > Path is ignored ;-)
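The core idea of the requested change can be sketched without the Hadoop dependency (all names below, including the `solr.hdfs.replication.factor` key, are hypothetical stand-ins, not the patch's actual API): prefer an explicitly configured replication factor over the filesystem's server default, which is what `getServerDefaults(path)` currently hard-wires.

```java
import java.util.Map;

// Hypothetical sketch of the fix's decision logic: an explicitly configured
// per-directory replication factor wins; otherwise fall back to the DFS
// server default (today's behavior via fileSystem.getServerDefaults(path)).
public class ReplicationFactorChooser {
    static short chooseReplication(Map<String, String> directoryConfig, short serverDefault) {
        String configured = directoryConfig.get("solr.hdfs.replication.factor"); // illustrative key
        if (configured == null || configured.isEmpty()) {
            return serverDefault; // no override configured
        }
        return Short.parseShort(configured);
    }

    public static void main(String[] args) {
        // Configured override of 1 beats the DFS default of 3.
        System.out.println(chooseReplication(Map.of("solr.hdfs.replication.factor", "1"), (short) 3)); // 1
        // No override: keep the server default.
        System.out.println(chooseReplication(Map.of(), (short) 3)); // 3
    }
}
```

The chosen value would then be passed to the HDFS file-creation call that accepts an explicit replication factor, rather than letting new files silently inherit the DFS default, which is why `hadoop fs -setrep` alone does not stick for newly written segments.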
[jira] [Commented] (SOLR-5381) Split Clusterstate and scale
[ https://issues.apache.org/jira/browse/SOLR-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899026#comment-16899026 ] David Smiley commented on SOLR-5381: Shall we mark this closed now? I see this as done, save for one item. And I know a ton of work has been done on improving Overseer efficiency since 2013. CC [~noble.paul] > Split Clusterstate and scale > - > > Key: SOLR-5381 > URL: https://issues.apache.org/jira/browse/SOLR-5381 > Project: Solr > Issue Type: Improvement > Components: SolrCloud >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Major > Original Estimate: 2,016h > Remaining Estimate: 2,016h > > clusterstate.json is a single point of contention for all components in > SolrCloud. It would be hard to scale SolrCloud beyond a few thousand nodes > because there are too many updates and too many nodes need to be notified of > the changes. As the number of nodes goes up, the size of clusterstate.json keeps > going up and it will soon exceed the limit imposed by ZK. > The first step is to store the shards information in separate nodes and each > node can just listen to the shard node it belongs to. We may also need to > split each collection into its own node, with clusterstate.json just > holding the names of the collections. > This is an umbrella issue
[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-6305: --- Attachment: SOLR-6305.patch > Ability to set the replication factor for index files created by > HDFSDirectoryFactory > - > > Key: SOLR-6305 > URL: https://issues.apache.org/jira/browse/SOLR-6305 > Project: Solr > Issue Type: Improvement > Components: Hadoop Integration, hdfs > Environment: hadoop-2.2.0 >Reporter: Timothy Potter >Assignee: Kevin Risden >Priority: Major > Attachments: > 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch, > SOLR-6305.patch > > > HdfsFileWriter doesn't allow us to create files in HDFS with a different > replication factor than the configured DFS default because it uses: > {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}} > Since we have two forms of replication going on when using > HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication > factor for the Solr directories to a lower value than the default. I realize > this might reduce the chance of data locality but since Solr cores each have > their own path in HDFS, we should give operators the option to reduce it. > My original thinking was to just use Hadoop setrep to customize the > replication factor, but that's a one-time shot and doesn't affect new files > created. For instance, I did: > {{hadoop fs -setrep -R 1 solr49/coll1}} > My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an > example > Then added some more docs to the coll1 and did: > {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}} > 3 <-- should be 1 > So it looks like new files don't inherit the repfact from their parent > directory. > Not sure if we need to go as far as allowing different replication factor per > collection but that should be considered if possible. 
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through > this using the Configuration object but nothing jumped out at me ... and the > implementation for getServerDefaults(path) is just: > public FsServerDefaults getServerDefaults(Path p) throws IOException { > return getServerDefaults(); > } > Path is ignored ;-) -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
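The attachment name above ("…by-reading-the-r….patch") suggests the eventual fix reads an explicit replication factor instead of relying on getServerDefaults(path). A minimal, self-contained sketch of that resolution logic; the property name `solr.hdfs.replication.factor` and the class below are hypothetical illustrations, not Solr's or Hadoop's actual API:

```java
import java.util.Map;

// Hypothetical sketch: resolve the HDFS replication factor for new index
// files from an explicit setting, falling back to the server default that
// fileSystem.getServerDefaults(path) would have returned. The property
// name is an assumption for illustration, not Solr's real key.
public class ReplicationFactorResolver {
    static final String PROP = "solr.hdfs.replication.factor";

    static short resolve(Map<String, String> conf, short serverDefault) {
        String v = conf.get(PROP);
        if (v == null || v.isEmpty()) {
            return serverDefault; // unchanged behavior when nothing is configured
        }
        short rep = Short.parseShort(v.trim());
        if (rep < 1) {
            throw new IllegalArgumentException("replication factor must be >= 1: " + rep);
        }
        return rep;
    }
}
```

With something like this applied at file-creation time, `hadoop fs -setrep` is no longer needed, since new files are created with the resolved factor rather than inheriting nothing from their parent directory.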
[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16899024#comment-16899024 ] Kevin Risden commented on SOLR-6305: Updated patch from [~bpasko] with commit message and CHANGES. Looking at committing soon.
[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden updated SOLR-6305: --- Status: Patch Available (was: Open)
[jira] [Assigned] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory
[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Risden reassigned SOLR-6305: -- Assignee: Kevin Risden
[GitHub] [lucene-solr] diegoceccarelli commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)
diegoceccarelli commented on issue #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch) URL: https://github.com/apache/lucene-solr/pull/300#issuecomment-517760941 @cpoerschke I rebased on top of the current master. Please note that: 1. I had to change the return type of `serializeOneSearchGroup` from `Object[]` to `Object` because `SkipSecondStepSearchResultResultTransformer` will return a `NamedList`. 2. I moved the check from `QueryComponent` cfd22cd (SOLR-12249, which validates group.offset when group.format=grouped) into `GroupingSpecification::validate`. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
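The second point above (centralizing the group.offset check in a validate() method) can be illustrated with a toy sketch; the field names and the concrete rule here are assumptions for illustration, not Solr's real GroupingSpecification:

```java
// Toy sketch of centralized grouping validation. In the PR, a check that
// previously lived in QueryComponent moves into validate(), so every code
// path that builds a grouping request hits the same rule. Field names and
// the exact condition below are assumed, not Solr's actual code.
public class GroupingSpecification {
    String format = "grouped"; // e.g. group.format=grouped or simple
    int groupOffset;

    void validate() {
        if ("grouped".equals(format) && groupOffset < 0) {
            throw new IllegalArgumentException(
                "group.offset must be >= 0 when group.format=grouped, got " + groupOffset);
        }
    }
}
```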
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898983#comment-16898983 ] Shawn Heisey commented on SOLR-13672: - I was just noticing in the screenshot that our error message says the problem was with the 'ruok' command. If it's actually the 'conf' command that's failing, maybe the error message needs a little improving. > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > Attachments: SOLR-13672.patch, zk-status.png > > Time Spent: 10m > Remaining Estimate: 0h > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper > configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly normally. > This issue only occurs with ZooKeeper ensembles. It does not > appear if one Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and > "mntr,conf,ruok" (with and without spaces).
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898980#comment-16898980 ] Shawn Heisey commented on SOLR-13672: - I checked the ZK server code for parsing a config file. That code treats it as a properties file. We might want to do the same.
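Parsing the response the way ZooKeeper parses its own config, i.e. as a Java properties file, would make a line like `membership:` come through as a key with an empty value instead of crashing the parser. A self-contained sketch of that idea (not the actual Solr code):

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Properties;

// Sketch: treat the `conf` output as a properties file, as the ZK server
// does for zoo.cfg. Properties accepts both '=' and ':' as key/value
// separators, so "membership:" parses as key "membership", empty value.
public class ZkConfParser {
    static Properties parse(String conf) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(conf));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen for an in-memory reader
        }
        return props;
    }
}
```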
[jira] [Assigned] (SOLR-13647) default solr.in.sh contains uncommented lines
[ https://issues.apache.org/jira/browse/SOLR-13647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl reassigned SOLR-13647: -- Assignee: Jan Høydahl > default solr.in.sh contains uncommented lines > - > > Key: SOLR-13647 > URL: https://issues.apache.org/jira/browse/SOLR-13647 > Project: Solr > Issue Type: Bug >Affects Versions: 8.1.1 >Reporter: John >Assignee: Jan Høydahl >Priority: Trivial > Fix For: 8.2 > > Time Spent: 50m > Remaining Estimate: 0h > > The default version of this file should be completely commented out; > ENABLE_REMOTE_JMX_OPTS had uncommented default values.
[JENKINS-EA] Lucene-Solr-8.2-Linux (64bit/jdk-13-ea+26) - Build # 509 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/509/ Java: 64bit/jdk-13-ea+26 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI Error Message: {} expected:<2> but was:<0> Stack Trace: java.lang.AssertionError: {} expected:<2> but was:<0> at __randomizedtesting.SeedInfo.seed([C42FCD4EE3CB227F:DBF8516290C0DB34]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:303) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:567) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:830) Build Log: [...truncated 14543 lines...] [junit4] Suite: org.apache.solr.cloud.AliasIntegrationTest [junit4] 2> Creating dataDir:
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898913#comment-16898913 ] Jan Høydahl commented on SOLR-13672: Sorry, did not see your patch since I was working on PR [GitHub Pull Request #818|https://github.com/apache/lucene-solr/pull/818]. The "membership: " line looks like a header line for the server.N lines that follow. Or it could possibly be that it is supposed to have a value and they used the wrong separator. Anyway, my PR explicitly skips that line and prints warnings for other unknown/malformed lines found in the future. I decided to also make visible a few of the new configs in the UI. Here's how the page now looks if it finds two good zk servers and one which is not configured with a whitelist: !zk-status.png!
[jira] [Updated] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated SOLR-13672: --- Attachment: zk-status.png
[jira] [Updated] (SOLR-13667) Add upper, lower, trim and split Stream Evaluators
[ https://issues.apache.org/jira/browse/SOLR-13667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-13667: -- Attachment: SOLR-13667.patch > Add upper, lower, trim and split Stream Evaluators > -- > > Key: SOLR-13667 > URL: https://issues.apache.org/jira/browse/SOLR-13667 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Priority: Major > Attachments: SOLR-13667.patch, SOLR-13667.patch > > > The upper and lower Stream Evaluators will convert strings to upper and lower > case. The trim Stream Evaluator will trim whitespace from strings and the > split Stream Evaluator will split a string by a delimiter regex. > These functions will operate on both strings and lists of strings. These are > useful functions for cleaning data during the loading process.
[jira] [Updated] (SOLR-13675) Allow zplot to visualize 2D cluster centroids
[ https://issues.apache.org/jira/browse/SOLR-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein updated SOLR-13675: -- Summary: Allow zplot to visualize 2D cluster centroids (was: All zplot to visualize 2D cluster centroids) > Allow zplot to visualize 2D cluster centroids > - > > Key: SOLR-13675 > URL: https://issues.apache.org/jira/browse/SOLR-13675 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Reporter: Joel Bernstein >Assignee: Joel Bernstein >Priority: Major > > Currently zplot can visualize 2D clusters in Apache Zeppelin. This ticket > will allow zplot to plot 2D cluster centroids as well.
[jira] [Created] (SOLR-13675) All zplot to visualize 2D cluster centroids
Joel Bernstein created SOLR-13675: - Summary: All zplot to visualize 2D cluster centroids Key: SOLR-13675 URL: https://issues.apache.org/jira/browse/SOLR-13675 Project: Solr Issue Type: New Feature Security Level: Public (Default Security Level. Issues are Public) Components: streaming expressions Reporter: Joel Bernstein
[jira] [Assigned] (SOLR-13675) All zplot to visualize 2D cluster centroids
[ https://issues.apache.org/jira/browse/SOLR-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Bernstein reassigned SOLR-13675: - Assignee: Joel Bernstein
[GitHub] [lucene-solr] janhoy opened a new pull request #818: SOLR-13672: Zk Status page now parses response from Zookeeper 3.5.5 correctly
janhoy opened a new pull request #818: SOLR-13672: Zk Status page now parses response from Zookeeper 3.5.5 correctly URL: https://github.com/apache/lucene-solr/pull/818 # Description Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitellist error Turned out that there is a new line `membership:` in the `conf` 4lw response for a quorum that does not follow the `key=value` format, so our parsing crashed. # Solution * Be more lenient when parsing zk response and disregard the known `membership:` line * Do not stop parsing when one error occurs, but continue reading response from other ZK hosts, gathering up errors in the errors array to display on top # Tests Added a new test that mocks the raw response for `ruok`, `mntr` and `conf` from zk, so we can test how the handler parses the response and maps them to error messages etc. # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [x] I have added tests for my changes. - [ ] I have added documentation for the Ref Guide (for Solr changes only).
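The lenient strategy the PR describes (skip the known `membership:` line, collect warnings for other malformed lines, keep going) can be sketched as follows; this is an illustrative standalone version, not the code in the PR itself:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative lenient parser for the ZooKeeper `conf` 4lw response.
public class LenientConfParser {
    static Map<String, String> parse(String response, List<String> warnings) {
        Map<String, String> conf = new LinkedHashMap<>();
        for (String raw : response.split("\n")) {
            String line = raw.trim();
            if (line.isEmpty() || line.equals("membership:")) {
                continue; // known non key=value header emitted by ZK 3.5 quorums
            }
            int eq = line.indexOf('=');
            if (eq < 1) {
                // don't abort the whole status page; surface it as a warning
                warnings.add("Unexpected line in conf response: " + line);
                continue;
            }
            conf.put(line.substring(0, eq), line.substring(eq + 1));
        }
        return conf;
    }
}
```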
[jira] [Comment Edited] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery
[ https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898859#comment-16898859 ] Christoph Goller edited comment on LUCENE-8943 at 8/2/19 12:39 PM: --- Why is this an issue? Because IDFs of SpanOrQuery and MultiPhraseQuery can get gigantic, meaning that such queries have an unexpectedly high impact on the final score. was (Author: gol...@detego-software.de): Why is this an issue? Because IDFs of SpanOrQuery and MultiPhraseQuery can get gigantic, meaning that such queries get an unexpectedly high impact on the final score. > Incorrect IDF in MultiPhraseQuery and SpanOrQuery > - > > Key: LUCENE-8943 > URL: https://issues.apache.org/jira/browse/LUCENE-8943 > Project: Lucene - Core > Issue Type: Bug > Components: core/query/scoring >Affects Versions: 8.0 >Reporter: Christoph Goller >Priority: Major > > I recently stumbled across a very old bug in the IDF computation for > MultiPhraseQuery and SpanOrQuery. > BM25Similarity and TFIDFSimilarity / ClassicSimilarity have a method for > combining IDF values from more than one term / TermStatistics. > I mean the method: > Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics > termStats[]) > It simply adds up the IDFs from all termStats[]. > This method is used e.g. in PhraseQuery where it makes sense. If we assume > that for the phrase "New York" the occurrences of both words are independent, > we can multiply their probabilities, and since IDFs are logarithmic we add them > up. Seems to be a reasonable approximation. However, this method is also used > to add up the IDFs of all terms in a MultiPhraseQuery as can be seen in: > Similarity.SimScorer getStats(IndexSearcher searcher) > A MultiPhraseQuery is actually a PhraseQuery with alternatives at individual > positions. IDFs of alternative terms for one position should not be added up.
> Instead we should use the minimum value as an approximation because this > corresponds to the docFreq of the most frequent term and we know that this is > a lower bound for the docFreq for this position. > In SpanOrQuery we have the same problem. It uses buildSimWeight(...) from > SpanWeight and adds up all IDFs of all OR-clauses. > If my arguments are not convincing, look at SynonymQuery / SynonymWeight in > the constructor: > SynonymWeight(Query query, IndexSearcher searcher, ScoreMode scoreMode, float > boost) > A SynonymQuery is also a kind of OR-query and it uses the maximum of the > docFreq of all its alternative terms. I think this is how it should be.
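A small numeric illustration of the inflation described above, using the BM25 idf formula idf = ln(1 + (N - df + 0.5) / (df + 0.5)); the document counts here are invented for illustration:

```java
// Summing idfs of alternative terms at one position lets a rare alternative
// dominate; taking the idf of the most frequent alternative (max docFreq,
// hence min idf), as SynonymQuery effectively does, gives the lower bound
// argued for above.
public class IdfCombination {
    static double idf(long docCount, long docFreq) {
        return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }

    public static void main(String[] args) {
        long n = 1_000_000;
        double common = idf(n, 100_000); // frequent alternative, idf around 2.3
        double rare = idf(n, 50);        // rare alternative, idf around 9.9
        System.out.printf("sum=%.1f min=%.1f%n", common + rare, Math.min(common, rare));
    }
}
```

Here the summed idf is several times the idf of the common alternative, even though a document matching only the common term is a perfectly valid match for the MultiPhraseQuery position.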
[JENKINS] Lucene-Solr-Tests-master - Build # 3479 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3479/ All tests passed Build Log: [...truncated 63992 lines...] -ecj-javadoc-lint-src: [mkdir] Created dir: /tmp/ecj816699443 [ecj-lint] Compiling 1284 source files to /tmp/ecj816699443 [ecj-lint] Processing annotations [ecj-lint] Annotations processed [ecj-lint] Processing annotations [ecj-lint] No elements to process [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. WARNING in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java (at line 219) [ecj-lint] return (NamedList) new JavaBinCodec(resolver).unmarshal(in); [ecj-lint]^^ [ecj-lint] Resource leak: '' is never closed [ecj-lint] -- [ecj-lint] -- [ecj-lint] 2. WARNING in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 788) [ecj-lint] throw new UnsupportedOperationException("must add at least 1 node first"); [ecj-lint] ^^ [ecj-lint] Resource leak: 'queryRequest' is not closed at this location [ecj-lint] -- [ecj-lint] 3. WARNING in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/cloud/autoscaling/sim/SimCloudManager.java (at line 794) [ecj-lint] throw new UnsupportedOperationException("must add at least 1 node first"); [ecj-lint] ^^ [ecj-lint] Resource leak: 'queryRequest' is not closed at this location [ecj-lint] -- [ecj-lint] -- [ecj-lint] 4. 
ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 19) [ecj-lint] import javax.naming.Context; [ecj-lint] [ecj-lint] The type javax.naming.Context is not accessible [ecj-lint] -- [ecj-lint] 5. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 20) [ecj-lint] import javax.naming.InitialContext; [ecj-lint]^^^ [ecj-lint] The type javax.naming.InitialContext is not accessible [ecj-lint] -- [ecj-lint] 6. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 21) [ecj-lint] import javax.naming.NamingException; [ecj-lint] [ecj-lint] The type javax.naming.NamingException is not accessible [ecj-lint] -- [ecj-lint] 7. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 22) [ecj-lint] import javax.naming.NoInitialContextException; [ecj-lint]^^ [ecj-lint] The type javax.naming.NoInitialContextException is not accessible [ecj-lint] -- [ecj-lint] 8. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 776) [ecj-lint] Context c = new InitialContext(); [ecj-lint] ^^^ [ecj-lint] Context cannot be resolved to a type [ecj-lint] -- [ecj-lint] 9. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 776) [ecj-lint] Context c = new InitialContext(); [ecj-lint] ^^ [ecj-lint] InitialContext cannot be resolved to a type [ecj-lint] -- [ecj-lint] 10. 
ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 779) [ecj-lint] } catch (NoInitialContextException e) { [ecj-lint] ^ [ecj-lint] NoInitialContextException cannot be resolved to a type [ecj-lint] -- [ecj-lint] 11. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/core/SolrResourceLoader.java (at line 781) [ecj-lint] } catch (NamingException e) { [ecj-lint] ^^^ [ecj-lint] NamingException cannot be resolved to a type [ecj-lint] -- [ecj-lint] -- [ecj-lint] 12. WARNING in
[jira] [Commented] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery
[ https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898859#comment-16898859 ] Christoph Goller commented on LUCENE-8943: -- Why is this an issue? Because IDFs of SpanOrQuery and MultiPhraseQuery can get gigantic, meaning that such queries get an unexpectedly high impact on the final score. > Incorrect IDF in MultiPhraseQuery and SpanOrQuery > - > > Key: LUCENE-8943 > URL: https://issues.apache.org/jira/browse/LUCENE-8943 > Project: Lucene - Core > Issue Type: Bug > Components: core/query/scoring >Affects Versions: 8.0 >Reporter: Christoph Goller >Priority: Major > > I recently stumbled across a very old bug in the IDF computation for > MultiPhraseQuery and SpanOrQuery. > BM25Similarity and TFIDFSimilarity / ClassicSimilarity have a method for > combining IDF values from more than one term / TermStatistics. > I mean the method: > Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics > termStats[]) > It simply adds up the IDFs from all termStats[]. > This method is used e.g. in PhraseQuery where it makes sense. If we assume > that for the phrase "New York" the occurrences of both words are independent, > we can multiply their probabilities and since IDFs are logarithmic we add them > up. Seems to be a reasonable approximation. However, this method is also used > to add up the IDFs of all terms in a MultiPhraseQuery as can be seen in: > Similarity.SimScorer getStats(IndexSearcher searcher) > A MultiPhraseQuery is actually a PhraseQuery with alternatives at individual > positions. IDFs of alternative terms for one position should not be added up. > Instead we should use the minimum value as an approximation because this > corresponds to the docFreq of the most frequent term and we know that this is > a lower bound for the docFreq for this position. > In SpanOrQuery we have the same problem. It uses buildSimWeight(...) from > SpanWeight and adds up all IDFs of all OR-clauses. 
> If my arguments are not convincing, look at SynonymQuery / SynonymWeight in > the constructor: > SynonymWeight(Query query, IndexSearcher searcher, ScoreMode scoreMode, float > boost) > A SynonymQuery is also a kind of OR-query and it uses the maximum of the > docFreq of all its alternative terms. I think this is how it should be.
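The effect described in this report is easy to see with concrete numbers. The following standalone sketch is not Lucene code: the BM25 idf formula matches Lucene's BM25Similarity, but the class name and the example docFreq values are invented for illustration. It compares the summed IDF of three alternatives at one MultiPhraseQuery position with the IDF derived from the most frequent alternative (max docFreq), which is the bound SynonymQuery effectively uses:

```java
// Hypothetical demo, not Lucene source. Shows numerically why summing the
// per-term IDFs of the alternatives at one position overstates that
// position's rarity, while the IDF of the most frequent alternative is a
// sound upper bound (its docFreq lower-bounds the position's docFreq).
public class IdfCombinationDemo {
    // BM25 idf: log(1 + (N - df + 0.5) / (df + 0.5))
    static double idf(long docFreq, long docCount) {
        return Math.log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5));
    }

    public static void main(String[] args) {
        long docCount = 1_000_000;
        long[] docFreqs = {200_000, 150_000, 120_000}; // three alternatives at one position

        double summed = 0;
        long maxDf = 0;
        for (long df : docFreqs) {
            summed += idf(df, docCount);
            maxDf = Math.max(maxDf, df);
        }
        double bounded = idf(maxDf, docCount);

        System.out.printf("summed IDF (current behaviour): %.3f%n", summed);
        System.out.printf("IDF of most frequent term:      %.3f%n", bounded);
    }
}
```

With these (assumed) statistics the summed IDF comes out several times larger than the bounded one, which is the "gigantic IDF" effect the comment above complains about.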
[jira] [Updated] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-13672: Attachment: SOLR-13672.patch > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > Attachments: SOLR-13672.patch > > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting > in zookeeper configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly > normally. > This issue only occurs with ZooKeeper ensembles. It does not > appear if one Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and > "mntr,conf,ruok" (with and without spaces).
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898853#comment-16898853 ] Shawn Heisey commented on SOLR-13672: - I think I would call this a bug in ZK. But since getting a fix from them could take a long time, we need to tackle this in Solr. There are probably two ways to handle this. 1) Look for the = separator, and if not found, use the : separator. 2) Treat the conf output as a .properties file and let Java parse it for us. I'm attaching a patch that takes the first approach. > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting > in zookeeper configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly > normally. > This issue only occurs with ZooKeeper ensembles. It does not > appear if one Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and > "mntr,conf,ruok" (with and without spaces).
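The first approach above can be sketched as follows. This is a hypothetical illustration, not the attached patch: the class and method names are invented, and a real fix would live in Solr's ZkStatus response parsing. Each line is split on the first `=`, falling back to `:` when no `=` is present, so ZooKeeper 3.5's bare `membership:` line no longer derails the parse:

```java
// Hypothetical sketch of approach (1): tolerate ZooKeeper 3.5 `conf` output
// where most lines are key=value but a bare "membership:" header uses ':'.
import java.util.LinkedHashMap;
import java.util.Map;

public class ConfOutputParser {
    static Map<String, String> parse(String confOutput) {
        Map<String, String> props = new LinkedHashMap<>();
        for (String line : confOutput.split("\n")) {
            line = line.trim();
            if (line.isEmpty()) continue;
            int sep = line.indexOf('=');
            if (sep < 0) sep = line.indexOf(':'); // fall back to ':' separator
            if (sep < 0) continue;                // no separator at all: skip the line
            props.put(line.substring(0, sep).trim(), line.substring(sep + 1).trim());
        }
        return props;
    }

    public static void main(String[] args) {
        // "server.1=..." contains both '=' and ':'; '=' wins because it is checked first.
        String sample = "clientPort=2181\nmembership:\nserver.1=zoo1:2888:3888:participant;0.0.0.0:2181";
        System.out.println(parse(sample));
    }
}
```

Approach (2), feeding the output to java.util.Properties, would also survive the `membership:` line, since Properties accepts both `=` and `:` as key/value separators.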
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 73 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/73/ No tests ran. Build Log: [...truncated 25 lines...] ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data' at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119) at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20) at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21) at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239) at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294) at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176) at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990) at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ... 4 more java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at
[jira] [Created] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery
Christoph Goller created LUCENE-8943: Summary: Incorrect IDF in MultiPhraseQuery and SpanOrQuery Key: LUCENE-8943 URL: https://issues.apache.org/jira/browse/LUCENE-8943 Project: Lucene - Core Issue Type: Bug Components: core/query/scoring Affects Versions: 8.0 Reporter: Christoph Goller I recently stumbled across a very old bug in the IDF computation for MultiPhraseQuery and SpanOrQuery. BM25Similarity and TFIDFSimilarity / ClassicSimilarity have a method for combining IDF values from more than one term / TermStatistics. I mean the method: Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics termStats[]) It simply adds up the IDFs from all termStats[]. This method is used e.g. in PhraseQuery where it makes sense. If we assume that for the phrase "New York" the occurrences of both words are independent, we can multiply their probabilities and since IDFs are logarithmic we add them up. Seems to be a reasonable approximation. However, this method is also used to add up the IDFs of all terms in a MultiPhraseQuery as can be seen in: Similarity.SimScorer getStats(IndexSearcher searcher) A MultiPhraseQuery is actually a PhraseQuery with alternatives at individual positions. IDFs of alternative terms for one position should not be added up. Instead we should use the minimum value as an approximation because this corresponds to the docFreq of the most frequent term and we know that this is a lower bound for the docFreq for this position. In SpanOrQuery we have the same problem. It uses buildSimWeight(...) from SpanWeight and adds up all IDFs of all OR-clauses. If my arguments are not convincing, look at SynonymQuery / SynonymWeight in the constructor: SynonymWeight(Query query, IndexSearcher searcher, ScoreMode scoreMode, float boost) A SynonymQuery is also a kind of OR-query and it uses the maximum of the docFreq of all its alternative terms. I think this is how it should be. 
[GitHub] [lucene-solr] atris opened a new pull request #817: SOLR-13655: Upgrade Collections.unmodifiableSet to Set.of and Set.copyOf
atris opened a new pull request #817: SOLR-13655: Upgrade Collections.unmodifiableSet to Set.of and Set.copyOf URL: https://github.com/apache/lucene-solr/pull/817 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1917 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1917/ 1 tests failed. FAILED: org.apache.solr.cloud.rule.RulesTest.doIntegrationTest Error Message: Should have found shard1 w/2 active replicas + shard2 w/1 active replica Timeout waiting to see state for collection=rulesColl :DocCollection(rulesColl//collections/rulesColl/state.json/19)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":null, "state":"active", "replicas":{ "core_node2":{ "core":"rulesColl_shard1_replica_n1", "base_url":"http://127.0.0.1:34773/solr", "node_name":"127.0.0.1:34773_solr", "state":"active", "type":"NRT", "force_set_state":"false"}, "core_node4":{ "core":"rulesColl_shard1_replica_n3", "base_url":"http://127.0.0.1:44752/solr", "node_name":"127.0.0.1:44752_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}}, "shard2":{ "range":null, "state":"active", "replicas":{ "core_node7":{ "core":"rulesColl_shard2_replica_n5", "base_url":"http://127.0.0.1:36112/solr", "node_name":"127.0.0.1:36112_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node8":{ "core":"rulesColl_shard2_replica_n6", "base_url":"http://127.0.0.1:36356/solr", "node_name":"127.0.0.1:36356_solr", "state":"active", "type":"NRT", "force_set_state":"false"}, "core_node10":{ "core":"rulesColl_shard2_replica_n9", "base_url":"http://127.0.0.1:42642/solr", "node_name":"127.0.0.1:42642_solr", "state":"active", "type":"NRT", "force_set_state":"false"}}}}, "router":{"name":"implicit"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "snitch":[{"class":"ImplicitSnitch"}], "nrtReplicas":"2", "tlogReplicas":"0", "rule":[ {"cores":"<4"}, { "node":"*", "replica":"<2"}, {"freedisk":">1"}]} Live Nodes: [127.0.0.1:34773_solr, 127.0.0.1:36112_solr, 127.0.0.1:36356_solr, 127.0.0.1:42642_solr, 127.0.0.1:44752_solr] Last available state: DocCollection(rulesColl//collections/rulesColl/state.json/19)={ "pullReplicas":"0", 
"replicationFactor":"2", "shards":{ "shard1":{ "range":null, "state":"active", "replicas":{ "core_node2":{ "core":"rulesColl_shard1_replica_n1", "base_url":"http://127.0.0.1:34773/solr", "node_name":"127.0.0.1:34773_solr", "state":"active", "type":"NRT", "force_set_state":"false"}, "core_node4":{ "core":"rulesColl_shard1_replica_n3", "base_url":"http://127.0.0.1:44752/solr", "node_name":"127.0.0.1:44752_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}}}, "shard2":{ "range":null, "state":"active", "replicas":{ "core_node7":{ "core":"rulesColl_shard2_replica_n5", "base_url":"http://127.0.0.1:36112/solr", "node_name":"127.0.0.1:36112_solr", "state":"active", "type":"NRT", "force_set_state":"false", "leader":"true"}, "core_node8":{ "core":"rulesColl_shard2_replica_n6", "base_url":"http://127.0.0.1:36356/solr", "node_name":"127.0.0.1:36356_solr", "state":"active", "type":"NRT", "force_set_state":"false"}, "core_node10":{ "core":"rulesColl_shard2_replica_n9", "base_url":"http://127.0.0.1:42642/solr", "node_name":"127.0.0.1:42642_solr", "state":"active", "type":"NRT", "force_set_state":"false"}}}}, "router":{"name":"implicit"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "snitch":[{"class":"ImplicitSnitch"}], "nrtReplicas":"2", "tlogReplicas":"0", "rule":[ {"cores":"<4"}, { "node":"*", "replica":"<2"}, {"freedisk":">1"}]} Stack Trace: java.lang.AssertionError: Should have found shard1 w/2 active replicas + shard2 w/1 active replica Timeout waiting to see state for collection=rulesColl :DocCollection(rulesColl//collections/rulesColl/state.json/19)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{ "shard1":{ "range":null, "state":"active", "replicas":{ "core_node2":{ "core":"rulesColl_shard1_replica_n1", "base_url":"http://127.0.0.1:34773/solr", "node_name":"127.0.0.1:34773_solr", "state":"active", "type":"NRT", "force_set_state":"false"}, "core_node4":{
[JENKINS] Lucene-Solr-8.2-Linux (32bit/jdk1.8.0_201) - Build # 508 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.2-Linux/508/ Java: 32bit/jdk1.8.0_201 -server -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.autoscaling.sim.TestSnapshotCloudManager.testSimulatorFromSnapshot Error Message: expected:<[/, /aliases.json, /autoscaling, /autoscaling.json, /autoscaling/events, /autoscaling/events/.auto_add_replicas, /autoscaling/events/.scheduled_maintenance, /autoscaling/nodeAdded, /autoscaling/nodeLost, /collections, /collections/.system, /collections/.system/counter, /collections/.system/leader_elect, /collections/.system/leaders, /collections/.system/state.json, /collections/.system/terms, /collections/.system/terms/shard1, /configs, /configs/.system, /configs/.system/managed-schema, /configs/.system/schema.xml.bak, /configs/.system/solrconfig.xml, /configs/_default, /configs/_default/lang, /configs/_default/lang/contractions_ca.txt, /configs/_default/lang/contractions_fr.txt, /configs/_default/lang/contractions_ga.txt, /configs/_default/lang/contractions_it.txt, /configs/_default/lang/hyphenations_ga.txt, /configs/_default/lang/stemdict_nl.txt, /configs/_default/lang/stoptags_ja.txt, /configs/_default/lang/stopwords_ar.txt, /configs/_default/lang/stopwords_bg.txt, /configs/_default/lang/stopwords_ca.txt, /configs/_default/lang/stopwords_cz.txt, /configs/_default/lang/stopwords_da.txt, /configs/_default/lang/stopwords_de.txt, /configs/_default/lang/stopwords_el.txt, /configs/_default/lang/stopwords_en.txt, /configs/_default/lang/stopwords_es.txt, /configs/_default/lang/stopwords_et.txt, /configs/_default/lang/stopwords_eu.txt, /configs/_default/lang/stopwords_fa.txt, /configs/_default/lang/stopwords_fi.txt, /configs/_default/lang/stopwords_fr.txt, /configs/_default/lang/stopwords_ga.txt, /configs/_default/lang/stopwords_gl.txt, /configs/_default/lang/stopwords_hi.txt, /configs/_default/lang/stopwords_hu.txt, /configs/_default/lang/stopwords_hy.txt, /configs/_default/lang/stopwords_id.txt, 
/configs/_default/lang/stopwords_it.txt, /configs/_default/lang/stopwords_ja.txt, /configs/_default/lang/stopwords_lv.txt, /configs/_default/lang/stopwords_nl.txt, /configs/_default/lang/stopwords_no.txt, /configs/_default/lang/stopwords_pt.txt, /configs/_default/lang/stopwords_ro.txt, /configs/_default/lang/stopwords_ru.txt, /configs/_default/lang/stopwords_sv.txt, /configs/_default/lang/stopwords_th.txt, /configs/_default/lang/stopwords_tr.txt, /configs/_default/lang/userdict_ja.txt, /configs/_default/managed-schema, /configs/_default/params.json, /configs/_default/protwords.txt, /configs/_default/solrconfig.xml, /configs/_default/stopwords.txt, /configs/_default/synonyms.txt, /configs/conf, /configs/conf/schema.xml, /configs/conf/solrconfig.xml, /live_nodes, /overseer, /overseer/async_ids, /overseer/collection-map-completed, /overseer/collection-map-failure, /overseer/collection-map-running, /overseer/collection-queue-work, /overseer/queue, /overseer/queue-work, /overseer_elect, /overseer_elect/election, /overseer_elect/election/72105189068898311-127.0.0.1:38021_solr-n_00, /overseer_elect/election/72105189068898314-127.0.0.1:37561_solr-n_01, /overseer_elect/election/72105189068898317-127.0.0.1:38957_solr-n_02, /overseer_elect/leader, /security.json, /solr.xml]> but was:<[/, /aliases.json, /autoscaling, /autoscaling.json, /autoscaling/events, /autoscaling/events/.auto_add_replicas, /autoscaling/events/.scheduled_maintenance, /autoscaling/events/.scheduled_maintenance/qn-00, /autoscaling/nodeAdded, /autoscaling/nodeLost, /collections, /collections/.system, /collections/.system/counter, /collections/.system/leader_elect, /collections/.system/leaders, /collections/.system/state.json, /collections/.system/terms, /collections/.system/terms/shard1, /configs, /configs/.system, /configs/.system/managed-schema, /configs/.system/schema.xml.bak, /configs/.system/solrconfig.xml, /configs/_default, /configs/_default/lang, /configs/_default/lang/contractions_ca.txt, 
/configs/_default/lang/contractions_fr.txt, /configs/_default/lang/contractions_ga.txt, /configs/_default/lang/contractions_it.txt, /configs/_default/lang/hyphenations_ga.txt, /configs/_default/lang/stemdict_nl.txt, /configs/_default/lang/stoptags_ja.txt, /configs/_default/lang/stopwords_ar.txt, /configs/_default/lang/stopwords_bg.txt, /configs/_default/lang/stopwords_ca.txt, /configs/_default/lang/stopwords_cz.txt, /configs/_default/lang/stopwords_da.txt, /configs/_default/lang/stopwords_de.txt, /configs/_default/lang/stopwords_el.txt, /configs/_default/lang/stopwords_en.txt, /configs/_default/lang/stopwords_es.txt, /configs/_default/lang/stopwords_et.txt, /configs/_default/lang/stopwords_eu.txt, /configs/_default/lang/stopwords_fa.txt, /configs/_default/lang/stopwords_fi.txt, /configs/_default/lang/stopwords_fr.txt, /configs/_default/lang/stopwords_ga.txt,
[jira] [Commented] (SOLR-13672) Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error
[ https://issues.apache.org/jira/browse/SOLR-13672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898708#comment-16898708 ] Jan Høydahl commented on SOLR-13672: Ok, I reproduced it here. Apparently 3.5.5 has a new format for the response to the `conf` 4lw command, e.g.:
{noformat}
clientPort=2181
secureClientPort=-1
dataDir=/data/version-2
dataDirSize=201326640
dataLogDir=/datalog/version-2
dataLogSize=582
tickTime=2000
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=4
serverId=1
initLimit=5
syncLimit=2
electionAlg=3
electionPort=3888
quorumPort=2888
peerType=0
membership:
server.1=zoo1:2888:3888:participant;0.0.0.0:2181
server.2=zoo2:2888:3888:participant;0.0.0.0:2181
server.3=zoo3:2888:3888:participant;0.0.0.0:2181
version=0
Connection closed by foreign host.
{noformat}
Obviously the parsing logic expects strictly {{key=value}} for each line, but there is a line here that is only {{membership:}}, which I believe causes a parsing error in {{ZookeeperStatusHandler}}. > Admin UI/ZK Status page: Zookeeper 3.5 : 4lw.commands.whitelist error > -- > > Key: SOLR-13672 > URL: https://issues.apache.org/jira/browse/SOLR-13672 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrCloud >Affects Versions: 8.2 >Reporter: Jörn Franke >Priority: Major > > After upgrading to Solr 8.2 and Zookeeper 3.5.5 one sees the following error > in the Admin UI / Cloud / ZkStatus: > *"Errors: - membership: Check 4lw.commands.whitelist setting > in zookeeper configuration file."* > Aside from the UI, the Solr Cloud nodes seem to work perfectly > normally. > This issue only occurs with ZooKeeper ensembles. It does not > appear if one Zookeeper standalone instance is used. > We tried the 4lw.commands.whitelist with wildcard * and > "mntr,conf,ruok" (with and without spaces).
[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11.0.3) - Build # 24485 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/24485/ Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseParallelGC 2 tests failed. FAILED: org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest.testRandomNRT Error Message: Captured an uncaught exception in thread: Thread[id=91, name=Thread-75, state=RUNNABLE, group=TGRP-AnalyzingInfixSuggesterTest] Stack Trace: com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=91, name=Thread-75, state=RUNNABLE, group=TGRP-AnalyzingInfixSuggesterTest] at __randomizedtesting.SeedInfo.seed([CD74F0D62C7F75ED:695AFE6B74A0A951]:0) Caused by: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([CD74F0D62C7F75ED]:0) at java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896) at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061) at java.base/java.util.HashMap.putVal(HashMap.java:633) at java.base/java.util.HashMap.putIfAbsent(HashMap.java:1057) at org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:292) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:817) at org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:190) at org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:247) at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:321) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.cache(LRUQueryCache.java:684) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:816) at org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(MultiTermQueryConstantScoreWrapper.java:195) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:819) at org.apache.lucene.search.BooleanWeight.optionalBulkScorer(BooleanWeight.java:190) at 
org.apache.lucene.search.BooleanWeight.booleanScorer(BooleanWeight.java:247) at org.apache.lucene.search.BooleanWeight.bulkScorer(BooleanWeight.java:321) at org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.bulkScorer(LRUQueryCache.java:819) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:719) at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:511) at org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:660) at org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggester.lookup(AnalyzingInfixSuggester.java:468) at org.apache.lucene.search.suggest.analyzing.AnalyzingInfixSuggesterTest$LookupThread.run(AnalyzingInfixSuggesterTest.java:533) FAILED: org.apache.solr.search.facet.TestCloudJSONFacetSKG.testRandom Error Message: Error from server at http://127.0.0.1:38419/solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection: Error from server at null: Expected mime type application/octet-stream but got text/html.Error 500 Server Error HTTP ERROR 500 Problem accessing /solr/org.apache.solr.search.facet.TestCloudJSONFacetSKG_collection_shard2_replica_n2/select. 
Reason: Server Error Caused by: java.lang.AssertionError at java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1896) at java.base/java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2061) at java.base/java.util.HashMap.putVal(HashMap.java:633) at java.base/java.util.HashMap.put(HashMap.java:607) at org.apache.solr.search.LRUCache.put(LRUCache.java:201) at org.apache.solr.search.SolrCacheHolder.put(SolrCacheHolder.java:46) at org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1449) at org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:568) at org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1484) at org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:398) at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:305) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.core.SolrCore.execute(SolrCore.java:2581) at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:780) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:165) at