[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk-11.0.3) - Build # 245 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/245/
Java: 64bit/jdk-11.0.3 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2036 lines...]
   [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/core/test/temp/junit4-J1-20190810_032258_6687873972815784767945.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF
   [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/core/test/temp/junit4-J0-20190810_032258_6678278824079539119404.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF
[...truncated 5 lines...]
   [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/core/test/temp/junit4-J2-20190810_032258_6671385448772256935858.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF
[...truncated 304 lines...]
   [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/test-framework/test/temp/junit4-J0-20190810_033524_3552937124486609648036.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF
   [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/test-framework/test/temp/junit4-J2-20190810_033524_3567562009133575028473.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF
[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/test-framework/test/temp/junit4-J1-20190810_033524_35518003450206616373216.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF
[...truncated 1094 lines...]
   [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20190810_033711_1572439639318589782031.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF
   [junit4] JVM J0: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20190810_033711_1575695727316970143014.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF
   [junit4] JVM J1: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20190810_033711_1576930770392735520980.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF
[...truncated 244 lines...]
   [junit4] JVM J2: stderr was not empty, see: /home/jenkins/workspace/Lucene-Solr-BadApples-master-Linux/lucene/build/analysis/icu/test/temp/junit4-J2-20190810_034002_35418116263536099357368.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim)
   [junit4] OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J2: EOF
[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see:
[jira] [Updated] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Munendra S N updated SOLR-13680:
    Issue Type: Improvement (was: Bug)

> Close Resources Properly
>
>                 Key: SOLR-13680
>                 URL: https://issues.apache.org/jira/browse/SOLR-13680
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>    Affects Versions: 8.2
>            Reporter: Furkan KAMACI
>            Assignee: Munendra S N
>            Priority: Major
>             Fix For: 8.3
>
>         Attachments: SOLR-13680.patch, SOLR-13680.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> Files, streams, or connections that implement the Closeable or AutoCloseable interface should be closed after use.

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
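The idiom the issue asks for is Java's try-with-resources. A minimal, self-contained sketch of that pattern (this is illustrative only, not the SOLR-13680 patch; the class and method names are made up for the example):

```java
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseResourcesExample {

    // Write a line to a temp file and read it back. Both streams implement
    // Closeable, so try-with-resources closes them automatically, even if
    // an exception is thrown mid-block -- the behavior the issue asks for.
    static String roundTrip(String line) {
        try {
            Path tmp = Files.createTempFile("solr-example", ".txt");
            try (BufferedWriter w = Files.newBufferedWriter(tmp)) {
                w.write(line);
            } // w.close() has already run here; no explicit finally needed
            try (BufferedReader r = Files.newBufferedReader(tmp)) {
                return r.readLine();
            } finally {
                Files.delete(tmp);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("hello"));
    }
}
```

When several resources are declared in one `try (...)` header, they are closed in reverse order of declaration, which is why the pattern also replaces nested try/finally chains.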
[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 992 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/992/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseSerialGC

1 tests failed.

FAILED:  org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.testRandom

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:38065/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[https://127.0.0.1:38065/solr/org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain_collection]
	at __randomizedtesting.SeedInfo.seed([2FD4CF08C9F9EA8E:5D98EA0778995CFD]:0)
	at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:345)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897)
	at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829)
	at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
	at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
	at org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:504)
	at org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:512)
	at org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.assertFacetCountsAreCorrect(TestCloudJSONFacetJoinDomain.java:462)
	at org.apache.solr.search.facet.TestCloudJSONFacetJoinDomain.testRandom(TestCloudJSONFacetJoinDomain.java:401)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1925 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1925/

No tests ran.

Build Log:
[...truncated 25 lines...]
ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data
org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server
svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data'
	at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112)
	at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352)
	at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702)
	at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113)
	at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035)
	at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119)
	at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11)
	at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
	at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21)
	at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239)
	at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
	at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176)
	at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134)
	at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168)
	at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017)
	at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990)
	at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086)
	at hudson.remoting.UserRequest.perform(UserRequest.java:212)
	at hudson.remoting.UserRequest.perform(UserRequest.java:54)
	at hudson.remoting.Request$2.run(Request.java:369)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:744)
Caused by: java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.net.Socket.connect(Socket.java:589)
	at org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	... 4 more
java.net.ConnectException: Connection refused
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345)
	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
[jira] [Commented] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904297#comment-16904297 ]

ASF subversion and git services commented on SOLR-13682:

Commit a1712fdd58e936bf3460b3062c464a8a1fff1bff in lucene-solr's branch refs/heads/jira/SOLR-13682 from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a1712fd ]

SOLR-13682: with testcase

> command line option to export data to a file
>
>                 Key: SOLR-13682
>                 URL: https://issues.apache.org/jira/browse/SOLR-13682
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>            Priority: Major
>
> Example:
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into a file called {{gettingstarted.json}}.
> Additional options are:
> * {{format}}: {{jsonl}} (default) or {{javabin}}
> * {{out}}: export file name
> * {{query}}: a custom query; default is {{*:*}}
> * {{fields}}: a comma-separated list of fields to be exported
> * {{limit}}: number of docs; default is 100, pass {{-1}} to export all the docs
>
> h2. Importing using {{curl}}
> Importing a JSON file:
> {code:java}
> curl -X POST -d @gettingstarted.json http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
> {code}
> Importing a javabin format file:
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:7574/solr/gettingstarted/update?commit=true
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
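The {{limit}} option described in the issue (default 100, -1 for everything) boils down to a paged fetch loop. A library-free sketch of that loop, with integer stand-ins for documents and made-up names (this is illustrative only, not SOLR-13682's actual code):

```java
import java.util.ArrayList;
import java.util.List;

public class ExportLoop {

    // Page through `source` pageSize docs at a time, stopping at `limit`;
    // limit == -1 mirrors the issue's "export all the docs" behavior.
    static List<Integer> export(List<Integer> source, int pageSize, int limit) {
        List<Integer> out = new ArrayList<>();
        int cursor = 0;                 // stand-in for a deep-paging cursor
        while (cursor < source.size()) {
            int end = Math.min(cursor + pageSize, source.size());
            for (int i = cursor; i < end; i++) {
                if (limit != -1 && out.size() >= limit) {
                    return out;         // honor the limit (default 100 in the issue)
                }
                out.add(source.get(i));
            }
            cursor = end;               // advance to the next page
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(export(List.of(1, 2, 3, 4, 5), 2, -1)); // all five docs
        System.out.println(export(List.of(1, 2, 3, 4, 5), 2, 3));  // capped at 3
    }
}
```

Against a real cluster the cursor would be Solr's cursorMark deep-paging parameter rather than a list index, but the control flow is the same.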
[jira] [Updated] (SOLR-13687) Enable the bin/solr script to accept a solr url to run commands
[ https://issues.apache.org/jira/browse/SOLR-13687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul updated SOLR-13687:
    Description:
The problem we have today with our {{bin/solr}} script is that we have to run it from one of the nodes where Solr is running. This is a security issue because usually only admins are allowed to log in to a machine where Solr is running. If you have multiple clusters running on that host, we don't know which one the script is going to use. It is much easier to write a simple script that works over a URL, and the user has no ambiguity as to how it works. You can just unpack a Solr distribution to your local machine and start using the script without bothering to install Solr.

The following commands can easily be executed remotely. These commands can accept the base URL of any Solr node in the cluster and perform the operation:
* healthcheck
* create
* create_core
* create_collection
* delete, version
* config
* autoscaling

  was:
The following commands can easily be executed remotely. These commands can accept the base URL of any Solr node in the cluster and perform the operation:
* healthcheck
* create
* create_core
* create_collection
* delete, version
* config
* autoscaling

> Enable the bin/solr script to accept a solr url to run commands
>
>                 Key: SOLR-13687
>                 URL: https://issues.apache.org/jira/browse/SOLR-13687
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Priority: Major
>
> The problem we have today with our {{bin/solr}} script is that we have to run it from one of the nodes where Solr is running. This is a security issue because usually only admins are allowed to log in to a machine where Solr is running. If you have multiple clusters running on that host, we don't know which one the script is going to use. It is much easier to write a simple script that works over a URL, and the user has no ambiguity as to how it works. You can just unpack a Solr distribution to your local machine and start using the script without bothering to install Solr.
> The following commands can easily be executed remotely. These commands can accept the base URL of any Solr node in the cluster and perform the operation:
> * healthcheck
> * create
> * create_core
> * create_collection
> * delete, version
> * config
> * autoscaling

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
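"Works over a url" here means plain HTTP against a node's admin APIs, which is why no local install is needed. A hedged sketch of what a remote healthcheck reduces to (the base URL and collection name are placeholders, and this is not the proposed bin/solr implementation):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class RemoteHealthCheck {

    // Build the ping URL for a core/collection on an arbitrary node.
    static String pingUrl(String baseUrl, String collection) {
        return baseUrl + "/" + collection + "/admin/ping?wt=json";
    }

    // Issue the request; this needs a running Solr node, so it is not
    // called from main() here. HTTP 200 means the core reports healthy.
    static int ping(String baseUrl, String collection) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(pingUrl(baseUrl, collection)).openConnection();
        conn.setRequestMethod("GET");
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        // Any node's base URL works -- nothing has to be installed locally.
        System.out.println(pingUrl("http://localhost:8983/solr", "gettingstarted"));
    }
}
```

The point of the issue is exactly this property: the client only needs a URL, so there is no ambiguity about which local cluster a command targets.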
[jira] [Comment Edited] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904287#comment-16904287 ]

Noble Paul edited comment on SOLR-13682 at 8/10/19 1:21 AM:

The problem we have today with our {{bin/solr}} script is that we have to run it from one of the nodes where Solr is running. This is a security issue because usually only admins are allowed to log in to a machine where Solr is running. If you have multiple clusters running on that host, we don't know which one the script is going to use. It is much easier to write a simple script that works over a URL, and the user has no ambiguity as to how it works. You can just unpack a Solr distribution to your local machine and start using the script without bothering to install Solr.

I've opened SOLR-13687

was (Author: noble.paul):
The problem we have today with our {{bin/solr}} script is that we have to run it from one of the nodes where Solr is running. This is a security issue because usually only admins are allowed to log in to a machine where Solr is running. If you have multiple clusters running on that host, we don't know which one the script is going to use. It is much easier to write a simple script that works over a URL, and the user has no ambiguity as to how it works. You can just unpack a Solr distribution to your local machine and start using the script without bothering to install Solr.

> command line option to export data to a file
>
>                 Key: SOLR-13682
>                 URL: https://issues.apache.org/jira/browse/SOLR-13682
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>            Priority: Major
>
> Example:
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into a file called {{gettingstarted.json}}.
> Additional options are:
> * {{format}}: {{jsonl}} (default) or {{javabin}}
> * {{out}}: export file name
> * {{query}}: a custom query; default is {{*:*}}
> * {{fields}}: a comma-separated list of fields to be exported
> * {{limit}}: number of docs; default is 100, pass {{-1}} to export all the docs
>
> h2. Importing using {{curl}}
> Importing a JSON file:
> {code:java}
> curl -X POST -d @gettingstarted.json http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
> {code}
> Importing a javabin format file:
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:7574/solr/gettingstarted/update?commit=true
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13687) Enable the bin/solr script to accept a solr url to run commands
Noble Paul created SOLR-13687:

             Summary: Enable the bin/solr script to accept a solr url to run commands
                 Key: SOLR-13687
                 URL: https://issues.apache.org/jira/browse/SOLR-13687
             Project: Solr
          Issue Type: Bug
      Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Noble Paul

The following commands can easily be executed remotely. These commands can accept the base URL of any Solr node in the cluster and perform the operation:
* healthcheck
* create
* create_core
* create_collection
* delete, version
* config
* autoscaling

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904287#comment-16904287 ]

Noble Paul commented on SOLR-13682:

The problem we have today with our {{bin/solr}} script is that we have to run it from one of the nodes where Solr is running. This is a security issue because usually only admins are allowed to log in to a machine where Solr is running. If you have multiple clusters running on that host, we don't know which one the script is going to use. It is much easier to write a simple script that works over a URL, and the user has no ambiguity as to how it works. You can just unpack a Solr distribution to your local machine and start using the script without bothering to install Solr.

> command line option to export data to a file
>
>                 Key: SOLR-13682
>                 URL: https://issues.apache.org/jira/browse/SOLR-13682
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>            Priority: Major
>
> Example:
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into a file called {{gettingstarted.json}}.
> Additional options are:
> * {{format}}: {{jsonl}} (default) or {{javabin}}
> * {{out}}: export file name
> * {{query}}: a custom query; default is {{*:*}}
> * {{fields}}: a comma-separated list of fields to be exported
> * {{limit}}: number of docs; default is 100, pass {{-1}} to export all the docs
>
> h2. Importing using {{curl}}
> Importing a JSON file:
> {code:java}
> curl -X POST -d @gettingstarted.json http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
> {code}
> Importing a javabin format file:
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:7574/solr/gettingstarted/update?commit=true
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-12.0.1) - Build # 5281 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5281/
Java: 64bit/jdk-12.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14388 lines...]
   [junit4] JVM J1: stdout was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/temp/junit4-J1-20190809_233203_0035687177204178715046.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim)
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] # SIGFPE (0x8) at pc=0x7fff89b32143, pid=81339, tid=25859
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (12.0.1+12) (build 12.0.1+12)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (12.0.1+12, mixed mode, sharing, tiered, compressed oops, serial gc, bsd-amd64)
   [junit4] # Problematic frame:
   [junit4] # [thread 237999 also had an error]
   [junit4] C  [libsystem_kernel.dylib+0x11143]  __commpage_gettimeofday+0x43
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1/hs_err_pid81339.log
   [junit4] Compiled method (nm) 2749963 2791 n 0 jdk.internal.misc.Unsafe::park (native)
   [junit4] total in heap [0x000113828910,0x000113828c90] = 896
   [junit4] relocation [0x000113828a88,0x000113828ab8] = 48
   [junit4] main code [0x000113828ac0,0x000113828c90] = 464
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] # https://github.com/AdoptOpenJDK/openjdk-build/issues
   [junit4] #
   [junit4] <<< JVM J1: EOF
[...truncated 1278 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: /Users/jenkins/tools/java/64bit/jdk-12.0.1/bin/java -XX:+UseCompressedOops -XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/heapdumps -ea -esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=E4826BC4066D2E48 -Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random -Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=9.0.0 -Dtests.cleanthreads=perClass -Djava.util.logging.config.file=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/logging.properties -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp -Djava.io.tmpdir=./temp -Dcommon.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene -Dclover.db.dir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/clover/db -Djava.security.policy=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/tools/junit4/solr-tests.policy -Dtests.LUCENE_VERSION=9.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -Dtests.src.home=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX -Djava.security.egd=file:/dev/./urandom -Djunit4.childvm.cwd=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J1 -Djunit4.tempDir=/Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/temp -Djunit4.childvm.id=1 -Djunit4.childvm.count=2 -Dfile.encoding=ISO-8859-1 -Dtests.disableHdfs=true -Djava.security.manager=org.apache.lucene.util.TestSecurityManager -Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false -classpath
[GitHub] [lucene-solr] MarcusSorealheis edited a comment on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin. [WIP]
MarcusSorealheis edited a comment on issue #805: SOLR-13649 change the default behavior of the basic authentication plugin. [WIP]
URL: https://github.com/apache/lucene-solr/pull/805#issuecomment-519736169

@janhoy I've appended Work In Progress to this PR's title because I'm facing a strange build error. It does not seem related:

```
[ecj-lint] --
[ecj-lint] 1. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 23)
[ecj-lint] 	import javax.naming.NamingException;
[ecj-lint] The type javax.naming.NamingException is not accessible
[ecj-lint] --
[ecj-lint] 2. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 28)
[ecj-lint] 	public class MockInitialContextFactory implements InitialContextFactory {
[ecj-lint] 	^
[ecj-lint] The type MockInitialContextFactory must implement the inherited abstract method InitialContextFactory.getInitialContext(Hashtable)
[ecj-lint] --
[ecj-lint] 3. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 30)
[ecj-lint] 	private final javax.naming.Context context;
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 4. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33)
[ecj-lint] 	context = mock(javax.naming.Context.class);
[ecj-lint] 	^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 5. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33)
[ecj-lint] 	context = mock(javax.naming.Context.class);
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 6. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 36)
[ecj-lint] 	when(context.lookup(anyString())).thenAnswer(invocation -> objects.get(invocation.getArgument(0)));
[ecj-lint] 	^^^
[ecj-lint] context cannot be resolved
[ecj-lint] --
[ecj-lint] 7. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 38)
[ecj-lint] 	} catch (NamingException e) {
[ecj-lint] 	^^^
[ecj-lint] NamingException cannot be resolved to a type
[ecj-lint] --
[ecj-lint] 8. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 45)
[ecj-lint] 	public javax.naming.Context getInitialContext(Hashtable env) {
[ecj-lint] The type javax.naming.Context is not accessible
[ecj-lint] --
[ecj-lint] 9. ERROR in /Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 46)
[ecj-lint] 	return context;
[ecj-lint] 	^^^
[ecj-lint] context cannot be resolved to a variable
[ecj-lint] --
[ecj-lint] 9 problems (9 errors)

BUILD FAILED
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/build.xml:101: The following error occurred while executing this line:
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/build.xml:651: The following error occurred while executing this line:
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/solr/common-build.xml:479: The following error occurred while executing this line:
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2015: The following error occurred while executing this line:
/Users/marcussorealheis/github.com/marcussorealheis/lucene-solr/lucene/common-build.xml:2048: Compile failed; see the compiler error output for details.
```

I also saw similar errors on the lucene-dev mailing list:
[jira] [Created] (SOLR-13686) Decouple Autoscaling triggers from the actions they execute
Megan Carey created SOLR-13686: -- Summary: Decouple Autoscaling triggers from the actions they execute Key: SOLR-13686 URL: https://issues.apache.org/jira/browse/SOLR-13686 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: AutoScaling Affects Versions: 8.0 Reporter: Megan Carey

Each of the SolrCloud Autoscaling triggers has default aboveOp/belowOp actions, but in some cases the trigger is becoming too tightly coupled with its associated actions. This could be considered an abstraction violation, as the trigger's compute and execute actions should be separate from the trigger itself. My proposal is to separate all action-specific configs out of the existing triggers, and instead do the following:
# Require that all trigger actions have a Validator, which ensures that the properties map contains valid values
# During trigger configuration, pass in a properties map (essentially a JSON blob), an action name, and its associated validator
## Run the validator against the given properties to ensure that the trigger can run without encountering exceptions

For example, we would make the IndexSizeTrigger action-agnostic, and remove all shard split parameters from the trigger. When we configure the trigger, we could instead pass in the desired action (e.g. shard split), the parameters for that action (e.g. splitByPrefix, splitFuzz, etc.) in a map, and a validator for that action (e.g. code to ensure that the parameters passed in have valid values; such checks are currently hard-coded into the trigger configuration). -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
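The numbered proposal above (pass an action name, a properties map, and a validator at trigger-configuration time) can be sketched as follows. This is an illustrative sketch only; ActionValidator and TriggerConfig are hypothetical names, not actual Solr autoscaling APIs:

```java
import java.util.Map;

// Hypothetical names throughout; not actual Solr autoscaling APIs.
interface ActionValidator {
  // Returns an error message describing the first invalid property, or null if valid.
  String validate(Map<String, Object> properties);
}

class TriggerConfig {
  final String actionName;
  final Map<String, Object> properties;

  // The validator runs at configuration time, so a trigger with bad
  // action properties fails fast instead of throwing during execution.
  TriggerConfig(String actionName, Map<String, Object> properties, ActionValidator validator) {
    String error = validator.validate(properties);
    if (error != null) {
      throw new IllegalArgumentException(
          "Invalid config for action '" + actionName + "': " + error);
    }
    this.actionName = actionName;
    this.properties = properties;
  }
}
```

With this shape, IndexSizeTrigger would carry no shard-split parameters itself; a split action's validator would check splitByPrefix, splitFuzz, etc. in the properties map.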
[GitHub] [lucene-solr] megancarey opened a new pull request #826: SOLR-13399: add splitByPrefix configuration to IndexSizeTrigger
megancarey opened a new pull request #826: SOLR-13399: add splitByPrefix configuration to IndexSizeTrigger URL: https://github.com/apache/lucene-solr/pull/826

# Description
I've added the `splitByPrefix` parameter to the IndexSizeTrigger, so that the trigger is able to execute shard splits by prefix. I also added test cases to validate the split configs baked into this trigger. Plus, I made a small change to the splitByPrefix code for splitting by ID, and cleaned up the tests a bit.

# Solution
The solution I took to address this problem models the convention of JIRA-12942, which adds the split configs directly into the trigger config. The change I made to the splitByPrefix code doesn't modify functionality at all; the implementation just uses BytesRef.bytesEquals instead of manually comparing byte arrays.

# Tests
I added additional checks to IndexSizeTriggerTest.testTrigger, which validates that the default split configs are added to the trigger config. I also added a check to IndexSizeTriggerTest.testSplitConfig that validates the splitByPrefix override. Finally, I added a test case that attempts trigger configuration with invalid configs, and fails.

# Checklist
Please review the following and check all that apply:
- [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability.
- [ ] I have created a Jira issue and added the issue ID to my pull request title. - I'm piggybacking on existing JIRA SOLR-13399
- [x] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute.
- [x] I have developed this patch against the `master` branch.
- [x] I have run `ant precommit` and the appropriate test suite.
- [x] I have added tests for my changes. 
- [x] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
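The byte-comparison cleanup mentioned in the Solution section above (BytesRef.bytesEquals instead of a hand-written loop) can be illustrated without Lucene on the classpath. Here Arrays.equals with ranges (Java 9+) plays the role that BytesRef.bytesEquals plays for BytesRef slices; the class and method names are illustrative:

```java
import java.util.Arrays;

class ByteSliceCompare {

  // The hand-written style being replaced: compare two byte slices element by element.
  static boolean manualEquals(byte[] a, int aOff, int aLen, byte[] b, int bOff, int bLen) {
    if (aLen != bLen) {
      return false;
    }
    for (int i = 0; i < aLen; i++) {
      if (a[aOff + i] != b[bOff + i]) {
        return false;
      }
    }
    return true;
  }

  // The replacement style: a single ranged-equality call expresses the same check.
  static boolean rangeEquals(byte[] a, int aOff, int aLen, byte[] b, int bOff, int bLen) {
    return Arrays.equals(a, aOff, aOff + aLen, b, bOff, bOff + bLen);
  }
}
```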
[jira] [Commented] (SOLR-9658) Caches should have an optional way to clean if idle for 'x' mins
[ https://issues.apache.org/jira/browse/SOLR-9658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904200#comment-16904200 ] Hoss Man commented on SOLR-9658:

* i should have noticed/mentioned this in the last patch but: any method (including your new {{markAndSweepByIdleTime()}}) that expects to be called only when markAndSweepLock is already held should really start with {{assert markAndSweepLock.isHeldByCurrentThread();}}
* this patch still seems to modify TestJavaBinCodec unnecessarily? (now that you re-added the backcompat constructor)
* i don't really think it's a good idea to add these {{CacheListener}} / {{EvictionListener}} APIs at this point w/o a lot more consideration of their lifecycle / usage
** I know you introduced them in response to my suggestion to add hooks for monitoring in tests, but they don't _currently_ seem more useful in the tests than some of the specific suggestions i made before (more comments on this below), and the APIs don't seem to be thought through enough to be generally useful later w/o a lot of re-working...
*** Examples: if the point of creating {{CacheListener}} now is to be able to add more methods/hooks to it later, then why is only {{EvictionListener}} passed down to the {{ConcurrentXXXCache}} impls instead of the entire {{CacheListener}}?
*** And why are there 2 distinct {{EvictionListener}} interfaces, instead of just a common one?
** ... so it would probably be safer/cleaner to avoid adding these APIs now since there are simpler alternatives available for the tests?
* Re: "...plus adding support for artificially "advancing" the time" ... this seems overly complex?
** None of the suggestions i made for improving the reliability/coverage of the test require faking the "now" clock: just being able to insert synthetic entries into the cache with artificially old timestamps – which could be done by refactoring out the middle of {{put(...)}} into a new {{putCacheEntry(CacheEntry ...)}} method that would let the (test) caller set an arbitrary {{lastAccessed}} value...
{code:java}
/**
 * Usable by tests to create synthetic cache entries, also called by {@link #put}
 * @lucene.internal
 */
public CacheEntry putCacheEntry(CacheEntry e) {
  // note: assumes CacheEntry exposes its key; the original sketch referenced an unresolved 'key'
  CacheEntry oldCacheEntry = map.put(e.key, e);
  int currentSize;
  if (oldCacheEntry == null) {
    currentSize = stats.size.incrementAndGet();
    ramBytes.addAndGet(e.ramBytesUsed() + HASHTABLE_RAM_BYTES_PER_ENTRY); // added key + value + entry
  } else {
    currentSize = stats.size.get();
    ramBytes.addAndGet(-oldCacheEntry.ramBytesUsed());
    ramBytes.addAndGet(e.ramBytesUsed());
  }
  if (islive) {
    stats.putCounter.increment();
  } else {
    stats.nonLivePutCounter.increment();
  }
  return oldCacheEntry;
}
{code}
** ...that way tests could "set up" a cache containing arbitrary entries (of arbitrary size, with arbitrary create/access times that could be from weeks in the past) and then very precisely inspect the results of the cache after calling {{markAndSweep()}}
*** or some other new {{triggerCleanupIfNeeded()}} method that can encapsulate all of the existing {{// Check if we need to clear out old entries from the cache ...}} logic currently at the end of {{put()}}
* In general, i really think testing of functionality like this should focus on testing "what exactly happens when markAndSweep() is called on a cache containing a very specific set of values?" independent from "does markAndSweep() get called eventually & automatically if i configure maxIdleTime?"
** the former can be tested w/o the need of any cleanup threads or faking the TimeSource
** the latter can be tested w/o the need of a {{CacheListener}} or {{EvictionListener}} API (or a fake TimeSource) – just create an anonymous subclass of {{ConcurrentXXXCache}} whose markAndSweep() method decrements a CountDownLatch that the test thread is waiting on
** isolating the testing of these different concepts not only makes it easier to test more complex aspects of how {{markAndSweep()}} is expected to work (ie: "assert exactly which entries are removed if the sum of the sizes == X == (ramUpperWatermark + Y) but the two smallest entries (whose total size = Y + 1) are the only ones with an accessTime older than the idleTime") but also makes it easier to understand & debug failures down the road -if- _when_ they happen.
*** as things stand in your patch, -if- _when_ the "did not evict entries in time" assert (eventually) trips in a future jenkins build, we won't immediately be able to tell (w/o added logging) if that's because of a bug in the {{CleanupThread}} that prevented it from calling {{markAndSweep()}}; or a bug in {{SimTimeSource.advanceMs()}}; or a bug somewhere in the cache that prevented {{markAndSweep()}} from recognizing those entries were old; or just a heavily loaded VM CPU
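The anonymous-subclass-plus-CountDownLatch idea suggested above can be sketched as follows; CacheBase is a stand-in for the real ConcurrentXXXCache, and all names here are hypothetical:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// CacheBase stands in for the real ConcurrentXXXCache; only the test hook matters here.
class CacheBase {
  void markAndSweep() {
    // eviction logic would live here in the real cache
  }
}

class LatchProbe {
  // Returns true if markAndSweep() was observed within the timeout.
  static boolean sweepObserved(long timeoutMs) {
    CountDownLatch swept = new CountDownLatch(1);
    CacheBase cache = new CacheBase() {
      @Override
      void markAndSweep() {
        super.markAndSweep();
        swept.countDown(); // signal the waiting test thread that cleanup ran
      }
    };
    try {
      // In a real test the cache's cleanup thread would call this; invoke directly here.
      cache.markAndSweep();
      return swept.await(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      return false;
    }
  }
}
```

This isolates "did cleanup run?" from "did cleanup evict the right entries?", with no listener API or fake TimeSource required.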
[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 440 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/440/ All tests passed Build Log: [...truncated 64487 lines...] -ecj-javadoc-lint-tests: [mkdir] Created dir: /tmp/ecj1605861363 [ecj-lint] Compiling 48 source files to /tmp/ecj1605861363 [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar [ecj-lint] invalid Class-Path header in manifest of jar file: /home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar [ecj-lint] -- [ecj-lint] 1. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 23) [ecj-lint] import javax.naming.NamingException; [ecj-lint] [ecj-lint] The type javax.naming.NamingException is not accessible [ecj-lint] -- [ecj-lint] 2. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 28) [ecj-lint] public class MockInitialContextFactory implements InitialContextFactory { [ecj-lint] ^ [ecj-lint] The type MockInitialContextFactory must implement the inherited abstract method InitialContextFactory.getInitialContext(Hashtable) [ecj-lint] -- [ecj-lint] 3. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 30) [ecj-lint] private final javax.naming.Context context; [ecj-lint] [ecj-lint] The type javax.naming.Context is not accessible [ecj-lint] -- [ecj-lint] 4. 
ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33) [ecj-lint] context = mock(javax.naming.Context.class); [ecj-lint] ^^^ [ecj-lint] context cannot be resolved to a variable [ecj-lint] -- [ecj-lint] 5. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 33) [ecj-lint] context = mock(javax.naming.Context.class); [ecj-lint] [ecj-lint] The type javax.naming.Context is not accessible [ecj-lint] -- [ecj-lint] 6. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 36) [ecj-lint] when(context.lookup(anyString())).thenAnswer(invocation -> objects.get(invocation.getArgument(0))); [ecj-lint] ^^^ [ecj-lint] context cannot be resolved [ecj-lint] -- [ecj-lint] 7. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 38) [ecj-lint] } catch (NamingException e) { [ecj-lint] ^^^ [ecj-lint] NamingException cannot be resolved to a type [ecj-lint] -- [ecj-lint] 8. ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 45) [ecj-lint] public javax.naming.Context getInitialContext(Hashtable env) { [ecj-lint] [ecj-lint] The type javax.naming.Context is not accessible [ecj-lint] -- [ecj-lint] 9. 
ERROR in /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/MockInitialContextFactory.java (at line 46) [ecj-lint] return context; [ecj-lint] ^^^ [ecj-lint] context cannot be resolved to a variable [ecj-lint] -- [ecj-lint] 9 problems (9 errors) BUILD FAILED /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:643: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/build.xml:101: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/build.xml:651: The following error occurred while executing this line: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-Tests-master/solr/common-build.xml:479: The following error occurred while executing this line:
[JENKINS] Lucene-Solr-8.x-Windows (32bit/jdk1.8.0_201) - Build # 390 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/390/ Java: 32bit/jdk1.8.0_201 -client -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test Error Message: Timeout occurred while waiting response from server at: http://127.0.0.1:64210 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occurred while waiting response from server at: http://127.0.0.1:64210 at __randomizedtesting.SeedInfo.seed([7BC4CFF7793FB67F:F390F02DD7C3DB87]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:667) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245) at org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368) at org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1128) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:897) at org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:829) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:228) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:338) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1080) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
[GitHub] [lucene-solr] cpoerschke commented on issue #738: SOLR-13532: Fix for non-recovering cores due to low timeouts
cpoerschke commented on issue #738: SOLR-13532: Fix for non-recovering cores due to low timeouts URL: https://github.com/apache/lucene-solr/pull/738#issuecomment-520039117 Hello. https://issues.apache.org/jira/browse/SOLR-13532 is closed now, so could this PR be closed manually too? Sometimes pull requests get auto-closed by the bot when it 'sees' the right phrase in the commit messages, but it looks like in this case the trigger phrase was not used and/or there was no commit for the `branch_7x` branch, which is the target of the PR.
[GitHub] [lucene-solr] cpoerschke commented on issue #737: SOLR-13532: Fix for non-recovering cores due to low timeouts
cpoerschke commented on issue #737: SOLR-13532: Fix for non-recovering cores due to low timeouts URL: https://github.com/apache/lucene-solr/pull/737#issuecomment-520038871 Hello. https://issues.apache.org/jira/browse/SOLR-13532 is closed now, so could this PR be closed manually too? Sometimes pull requests get auto-closed by the bot when it 'sees' the right phrase in the commit messages, but it looks like in this case the trigger phrase was not used.
[GitHub] [lucene-solr] cpoerschke commented on issue #736: SOLR-13532: Fix for non-recovering cores due to low timeouts
cpoerschke commented on issue #736: SOLR-13532: Fix for non-recovering cores due to low timeouts URL: https://github.com/apache/lucene-solr/pull/736#issuecomment-520038784 Hello. https://issues.apache.org/jira/browse/SOLR-13532 is closed now, so could this PR be closed manually too? Sometimes pull requests get auto-closed by the bot when it 'sees' the right phrase in the commit messages, but it looks like in this case the trigger phrase was not used.
[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences
cpoerschke commented on a change in pull request #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#discussion_r312612736 ## File path: solr/core/src/java/org/apache/solr/handler/component/HttpShardHandlerFactory.java ## @@ -230,7 +288,23 @@ public void init(PluginInfo info) { this.accessPolicy = getParameter(args, INIT_FAIRNESS_POLICY, accessPolicy,sb); this.whitelistHostChecker = new WhitelistHostChecker(args == null? null: (String) args.get(INIT_SHARDS_WHITELIST), !getDisableShardsWhitelist()); log.info("Host whitelist initialized: {}", this.whitelistHostChecker); - + +this.httpListenerFactory = new InstrumentedHttpListenerFactory(this.metricNameStrategy); +int connectionTimeout = getParameter(args, HttpClientUtil.PROP_CONNECTION_TIMEOUT, +HttpClientUtil.DEFAULT_CONNECT_TIMEOUT, sb); +int maxConnectionsPerHost = getParameter(args, HttpClientUtil.PROP_MAX_CONNECTIONS_PER_HOST, +HttpClientUtil.DEFAULT_MAXCONNECTIONSPERHOST, sb); +int soTimeout = getParameter(args, HttpClientUtil.PROP_SO_TIMEOUT, +HttpClientUtil.DEFAULT_SO_TIMEOUT, sb); + +this.defaultClient = new Http2SolrClient.Builder() +.connectionTimeout(connectionTimeout) +.idleTimeout(soTimeout) +.maxConnectionsPerHost(maxConnectionsPerHost).build(); +this.defaultClient.addListenerFactory(this.httpListenerFactory); +this.loadbalancer = new LBHttp2SolrClient(defaultClient); +initReplicaListTransformers(getParameter(args, "replicaRouting", null, sb)); + log.debug("created with {}",sb); Review comment: You clarify in the https://github.com/apache/lucene-solr/pull/677/commits/d82dc54fd625762008a5f8b7c69bcc4ee4f57203 commit message that the re-ordering here is because `sb` is being logged here but then it's also subsequently still used. I wonder if an alternative change could be to 'relocate' the logging statement to the end of the `init` method? 
The logging of a 'created' message in the middle of init seems surprising from a code comprehension perspective, but also, just after the `log.debug` there is the `r.setSeed` call, and (theoretically at least) moving code from 'after' to 'before' that might make a difference.
[GitHub] [lucene-solr] cpoerschke commented on issue #677: SOLR-13257: support for stable replica routing preferences
cpoerschke commented on issue #677: SOLR-13257: support for stable replica routing preferences URL: https://github.com/apache/lucene-solr/pull/677#issuecomment-520028607 Thanks @magibney for updating the pull request and splitting the updates into logical commit units, I found that really helpful when incrementally reviewing.
[jira] [Commented] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904127#comment-16904127 ] Lucene/Solr QA commented on SOLR-13680: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 10s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 37m 53s{color} | {color:green} core in the patch passed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | SOLR-13680 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12977165/SOLR-13680.patch | | Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns | | uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | ant | | Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh | | git revision | master / 2e5c554fea | | ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 | | Default Java | LTS | | Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/525/testReport/ | | modules | C: solr/core U: solr/core | | Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/525/console | | Powered by | Apache Yetus 0.7.0 http://yetus.apache.org | This message was automatically generated. > Close Resources Properly > > > Key: SOLR-13680 > URL: https://issues.apache.org/jira/browse/SOLR-13680 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Furkan KAMACI >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13680.patch, SOLR-13680.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Files, streams or connections which implements Closeable or AutoCloseable > interface should be closed after use. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-8.x - Build # 381 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/381/ 4 tests failed. FAILED: org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI Error Message: should be a routed alias Stack Trace: java.lang.AssertionError: should be a routed alias at __randomizedtesting.SeedInfo.seed([30E8C6B55840D36C:2F3F5A992B4B2A27]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.solr.cloud.AliasIntegrationTest.testClusterStateProviderAPI(AliasIntegrationTest.java:315) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: org.apache.solr.common.cloud.TestCollectionStateWatchers.testLiveNodeChangesTriggerWatches Error Message: Error starting up MiniSolrCloudCluster Stack Trace: java.lang.Exception: Error starting up MiniSolrCloudCluster at __randomizedtesting.SeedInfo.seed([36661843AFCF02D1:6D44E6DB950C0033]:0) at
[jira] [Issue Comment Deleted] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suryakant Jadhav updated SOLR-9952: --- Comment: was deleted (was: Hi, I am trying to configure Solr with S3. Could you please guide me step by step configuration for setting this up. Can you see if we can install Solr keeping S3 as storage(neither OS file system not hdfs). Best Regards, Suryakant ) > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Issue Comment Deleted] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Suryakant Jadhav updated SOLR-9952: --- Comment: was deleted (was: Hi Alexey, I am trying to configure Solr4.10.3 with S3. Could you please guide me step by step configuration for setting this up. ) > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 74 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/74/ No tests ran. Build Log: [...truncated 25 lines...] ERROR: Failed to check out http://svn.apache.org/repos/asf/lucene/test-data org.tmatesoft.svn.core.SVNException: svn: E175002: connection refused by the server svn: E175002: OPTIONS request failed on '/repos/asf/lucene/test-data' at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:112) at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:96) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:765) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:352) at org.tmatesoft.svn.core.internal.io.dav.http.HTTPConnection.request(HTTPConnection.java:340) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.performHttpRequest(DAVConnection.java:910) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.exchangeCapabilities(DAVConnection.java:702) at org.tmatesoft.svn.core.internal.io.dav.DAVConnection.open(DAVConnection.java:113) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.openConnection(DAVRepository.java:1035) at org.tmatesoft.svn.core.internal.io.dav.DAVRepository.getLatestRevision(DAVRepository.java:164) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.getRevisionNumber(SvnNgRepositoryAccess.java:119) at org.tmatesoft.svn.core.internal.wc2.SvnRepositoryAccess.getLocations(SvnRepositoryAccess.java:178) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgRepositoryAccess.createRepositoryFor(SvnNgRepositoryAccess.java:43) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:831) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:26) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:11) at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20) at 
org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:21) at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1239) at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294) at hudson.scm.subversion.CheckoutUpdater$SubversionUpdateTask.perform(CheckoutUpdater.java:133) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:176) at hudson.scm.subversion.UpdateUpdater$TaskImpl.perform(UpdateUpdater.java:134) at hudson.scm.subversion.WorkspaceUpdater$UpdateTask.delegateTo(WorkspaceUpdater.java:168) at hudson.scm.SubversionSCM$CheckOutTask.perform(SubversionSCM.java:1041) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:1017) at hudson.scm.SubversionSCM$CheckOutTask.invoke(SubversionSCM.java:990) at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3086) at hudson.remoting.UserRequest.perform(UserRequest.java:212) at hudson.remoting.UserRequest.perform(UserRequest.java:54) at hudson.remoting.Request$2.run(Request.java:369) at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Caused by: java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:589) at 
org.tmatesoft.svn.core.internal.util.SVNSocketConnection.run(SVNSocketConnection.java:57) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ... 4 more java.net.ConnectException: Connection refused at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:345) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at
[JENKINS] Lucene-Solr-repro - Build # 3508 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-repro/3508/ [...truncated 28 lines...] [repro] Jenkins log URL: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/175/consoleText [repro] Revision: 2677ee2955062f91074c759daf953b2ebcd39b6c [repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt [repro] Repro line: ant test -Dtestcase=TestDemoParallelLeafReader -Dtests.method=testRandomMultipleSchemaGens -Dtests.seed=626AD41089E8E3CA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-ZA -Dtests.timezone=America/Caracas -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=TestDemoParallelLeafReader -Dtests.method=testBasic -Dtests.seed=626AD41089E8E3CA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-ZA -Dtests.timezone=America/Caracas -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=TestXYPolygonShapeQueries -Dtests.method=testRandomBig -Dtests.seed=3067D6243ABBB092 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar -Dtests.timezone=America/Louisville -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=AliasIntegrationTest -Dtests.method=testClusterStateProviderAPI -Dtests.seed=870CE3B36FCD5A31 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=ar-EG 
-Dtests.timezone=America/Porto_Acre -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=HdfsAutoAddReplicasIntegrationTest -Dtests.method=testSimple -Dtests.seed=870CE3B36FCD5A31 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=sr-CS -Dtests.timezone=America/Ensenada -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=ShardSplitTest -Dtests.method=test -Dtests.seed=870CE3B36FCD5A31 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=de-GR -Dtests.timezone=America/Fort_Wayne -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] Repro line: ant test -Dtestcase=SolrJmxReporterCloudTest -Dtests.seed=870CE3B36FCD5A31 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-IN -Dtests.timezone=America/Thunder_Bay -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [repro] git rev-parse --abbrev-ref HEAD [repro] git rev-parse HEAD [repro] Initial local git branch/revision: e59f41b6712b4feb9b810b34108a43281c33e515 [repro] git fetch [repro] git checkout 2677ee2955062f91074c759daf953b2ebcd39b6c [...truncated 2 lines...] [repro] git merge --ff-only [...truncated 1 lines...] [repro] ant clean [...truncated 6 lines...] [repro] Test suites by module: [repro]lucene/core [repro] TestDemoParallelLeafReader [repro]lucene/sandbox [repro] TestXYPolygonShapeQueries [repro]solr/core [repro] AliasIntegrationTest [repro] ShardSplitTest [repro] SolrJmxReporterCloudTest [repro] HdfsAutoAddReplicasIntegrationTest [repro] ant compile-test [...truncated 213 lines...] 
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestDemoParallelLeafReader" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=626AD41089E8E3CA -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-ZA -Dtests.timezone=America/Caracas -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [...truncated 101 lines...] [repro] ant compile-test [...truncated 252 lines...] [repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 -Dtests.class="*.TestXYPolygonShapeQueries" -Dtests.showOutput=onerror -Dtests.multiplier=2
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904057#comment-16904057 ] Jan Høydahl commented on SOLR-9952: --- Kevin told you in last comment that you need to ask such questions on solr-u...@lucene.apache.org list, not here in Jira. > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11.0.3) - Build # 8077 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/8077/ Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.ActionThrottleTest.testBasics Error Message: 994ms Stack Trace: java.lang.AssertionError: 994ms at __randomizedtesting.SeedInfo.seed([951DC3CA0A4797B9:A8C56DE632A9C9C9]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.assertTrue(Assert.java:41) at org.apache.solr.cloud.ActionThrottleTest.testBasics(ActionThrottleTest.java:87) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.base/java.lang.Thread.run(Thread.java:834) Build Log: [...truncated 13632 lines...] [junit4] Suite: org.apache.solr.cloud.ActionThrottleTest [junit4] 2> 1347744 INFO (SUITE-ActionThrottleTest-seed#[951DC3CA0A4797B9]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> 1347745 INFO
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16904006#comment-16904006 ] Suryakant Jadhav commented on SOLR-9952: Hi, I am trying to configure Solr with S3. Could you please guide me through the step-by-step configuration for setting this up? Also, can you see whether we can install Solr with S3 as storage (neither the OS file system nor HDFS)? Best Regards, Suryakant > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8943) Incorrect IDF in MultiPhraseQuery and SpanOrQuery
[ https://issues.apache.org/jira/browse/LUCENE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903968#comment-16903968 ] Jim Ferenczi commented on LUCENE-8943: -- I don't think we can realistically approximate the doc freq of phrases, especially if you consider more than 2 terms. The issue with the score difference of "wifi" (single term) vs "wi fi" (multiple terms) is more a synonym issue, where the association between these terms is made at search time. Currently BM25 similarity sums the idf values, but this was done to limit the difference with the classic (tf-idf) similarity. The other similarities take a simpler approach that just sums the score of each term that appears in the query, like a boolean query would do (see MultiSimilarity). It's difficult to pick one approach over the other here, but the context is important. For single-term synonyms (terms that appear at the same position) we have the SynonymQuery, which is used to blend the scores of such terms. I tend to agree that the MultiPhraseQuery should take the same approach so that each position can score once instead of per term. However, it is difficult to expand this strategy to variable-length multi-word synonyms. We could try a specialized MultiWordsSynonymQuery that would apply some strategy (an approximation of the doc count like you propose, or anything that makes sense here ;) ) to make sure that all variations are scored the same. Does this make sense? > Incorrect IDF in MultiPhraseQuery and SpanOrQuery > - > > Key: LUCENE-8943 > URL: https://issues.apache.org/jira/browse/LUCENE-8943 > Project: Lucene - Core > Issue Type: Bug > Components: core/query/scoring >Affects Versions: 8.0 >Reporter: Christoph Goller >Priority: Major > > I recently stumbled across a very old bug in the IDF computation for > MultiPhraseQuery and SpanOrQuery. > BM25Similarity and TFIDFSimilarity / ClassicSimilarity have a method for > combining IDF values from more than one term / TermStatistics.
> I mean the method: > Explanation idfExplain(CollectionStatistics collectionStats, TermStatistics > termStats[]) > It simply adds up the IDFs from all termStats[]. > This method is used e.g. in PhraseQuery, where it makes sense. If we assume > that for the phrase "New York" the occurrences of both words are independent, > we can multiply their probabilities, and since IDFs are logarithmic we add them > up. Seems to be a reasonable approximation. However, this method is also used > to add up the IDFs of all terms in a MultiPhraseQuery, as can be seen in: > Similarity.SimScorer getStats(IndexSearcher searcher) > A MultiPhraseQuery is actually a PhraseQuery with alternatives at individual > positions. IDFs of alternative terms for one position should not be added up. > Instead we should use the minimum value as an approximation, because this > corresponds to the docFreq of the most frequent term, and we know that this is > a lower bound for the docFreq for this position. > In SpanOrQuery we have the same problem. It uses buildSimWeight(...) from > SpanWeight and adds up all IDFs of all OR-clauses. > If my arguments are not convincing, look at SynonymQuery / SynonymWeight in > the constructor: > SynonymWeight(Query query, IndexSearcher searcher, ScoreMode scoreMode, float > boost) > A SynonymQuery is also a kind of OR-query, and it uses the maximum of the > docFreq of all its alternative terms. I think this is how it should be. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
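To make the scoring concern in LUCENE-8943 concrete, here is a small stand-alone sketch (not Lucene code; the document frequencies are hypothetical) that compares the current sum-of-IDFs behavior for alternatives at one phrase position against the proposed minimum, using the BM25 idf formula log(1 + (N - df + 0.5) / (df + 0.5)):

```java
// Illustration only: summing per-term IDFs for alternatives at one phrase
// position inflates the score relative to a single equivalent term, while
// taking the minimum idf (the most frequent alternative) stays bounded.
public class IdfCombinationExample {

    // BM25 idf: log(1 + (N - df + 0.5) / (df + 0.5))
    static double idf(long df, long n) {
        return Math.log(1 + (n - df + 0.5) / (df + 0.5));
    }

    public static void main(String[] args) {
        long n = 1_000_000;                 // total docs (hypothetical)
        long dfWifi = 50_000;               // df("wifi"), hypothetical
        long dfWi = 60_000, dfFi = 55_000;  // df("wi"), df("fi"), hypothetical

        double single = idf(dfWifi, n);
        // Sum-of-IDFs behavior described in the issue: roughly doubles the
        // idf contribution compared to the single term "wifi".
        double summed = idf(dfWi, n) + idf(dfFi, n);
        // Proposed alternative: minimum idf, i.e. the idf of the most
        // frequent alternative, since the union docFreq of the position is
        // at least the largest individual docFreq.
        double minimum = Math.min(idf(dfWi, n), idf(dfFi, n));

        System.out.printf("single=%.3f summed=%.3f min=%.3f%n", single, summed, minimum);
    }
}
```

With these (made-up) frequencies the summed value is close to twice the single-term idf, while the minimum stays just below it, which is the inflation the issue describes.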
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903955#comment-16903955 ] Kevin Risden commented on SOLR-9952: [~suryakant.jadhav] - this is the wrong place to ask. Use the solr-user mailing list for questions [1]. Solr 4.10.3 is old and most likely will not work backing up to S3. [1] https://lucene.apache.org/solr/community.html#mailing-lists-irc > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-9952) S3BackupRepository
[ https://issues.apache.org/jira/browse/SOLR-9952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903952#comment-16903952 ] Suryakant Jadhav commented on SOLR-9952: Hi Alexey, I am trying to configure Solr 4.10.3 with S3. Could you please guide me through the step-by-step configuration for setting this up? > S3BackupRepository > -- > > Key: SOLR-9952 > URL: https://issues.apache.org/jira/browse/SOLR-9952 > Project: Solr > Issue Type: New Feature > Components: Backup/Restore >Reporter: Mikhail Khludnev >Priority: Major > Attachments: > 0001-SOLR-9952-Added-dependencies-for-hadoop-amazon-integ.patch, > 0002-SOLR-9952-Added-integration-test-for-checking-backup.patch, Running Solr > on S3.pdf, core-site.xml.template > > > I'd like to have a backup repository implementation allows to snapshot to AWS > S3 -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903939#comment-16903939 ] Furkan KAMACI commented on SOLR-13680: -- Thanks for the review [~munendrasn] > Close Resources Properly > > > Key: SOLR-13680 > URL: https://issues.apache.org/jira/browse/SOLR-13680 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Furkan KAMACI >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13680.patch, SOLR-13680.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Files, streams or connections which implements Closeable or AutoCloseable > interface should be closed after use. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903901#comment-16903901 ] Furkan KAMACI commented on SOLR-13680: -- Sure, I'll! > Close Resources Properly > > > Key: SOLR-13680 > URL: https://issues.apache.org/jira/browse/SOLR-13680 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Furkan KAMACI >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13680.patch, SOLR-13680.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Files, streams or connections which implements Closeable or AutoCloseable > interface should be closed after use. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903900#comment-16903900 ] Munendra S N commented on SOLR-13680: - [^SOLR-13680.patch] Tests were failing due to change in {{ManagedSchema}} so, I have removed them. Stream closing is already handled [here|https://github.com/apache/lucene-solr/blob/2e5c554fea0aea1dfeecb22f03f18fb78cd4/solr/core/src/java/org/apache/solr/schema/ManagedIndexSchema.java#L142]. If the stream is closed early, managed-schema won't be persisted locally which causes test failures > Close Resources Properly > > > Key: SOLR-13680 > URL: https://issues.apache.org/jira/browse/SOLR-13680 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Furkan KAMACI >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13680.patch, SOLR-13680.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Files, streams or connections which implements Closeable or AutoCloseable > interface should be closed after use. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
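[Editor's note] The pattern under discussion in SOLR-13680 is Java's try-with-resources: a {{Closeable}}/{{AutoCloseable}} acquired in the resource clause is closed automatically even when an exception is thrown, while a stream whose lifetime extends past the current method (like the managed-schema stream referenced above) must be left to its real owner rather than closed early. A minimal, self-contained sketch, not taken from the patch itself (file contents and names are illustrative only):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseDemo {
    // try-with-resources: the reader is closed even if readLine() throws.
    static String firstLine(Path p) throws IOException {
        try (BufferedReader r = Files.newBufferedReader(p)) {
            return r.readLine();
        } // r.close() runs here automatically
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("close-demo", ".txt");
        Files.write(p, "hello\n".getBytes());
        System.out.println(firstLine(p)); // prints: hello
        Files.delete(p);
    }
}
```

The flip side, which caused the test failures mentioned above: if a method closes a stream it merely received as an argument, a caller that still needs that stream afterwards will fail, so ownership should sit in exactly one place.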
[jira] [Updated] (SOLR-13680) Close Resources Properly
[ https://issues.apache.org/jira/browse/SOLR-13680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-13680: Attachment: SOLR-13680.patch > Close Resources Properly > > > Key: SOLR-13680 > URL: https://issues.apache.org/jira/browse/SOLR-13680 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 8.2 >Reporter: Furkan KAMACI >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13680.patch, SOLR-13680.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Files, streams or connections which implements Closeable or AutoCloseable > interface should be closed after use. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8767) DisjunctionMaxQuery do not work well when multiple search term+mm+query fields with different fieldType.
[ https://issues.apache.org/jira/browse/LUCENE-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903882#comment-16903882 ] Chongchen Chen commented on LUCENE-8767: Hi, [~ZhongHua]. on master branch, I cannot reproduce your problem. Here's my patch that tries to reproduce your problem. [^a.diff] you can run that test. you will find that the parsedQuery is correct. Is there something wrong in my patch? > DisjunctionMaxQuery do not work well when multiple search term+mm+query > fields with different fieldType. > > > Key: LUCENE-8767 > URL: https://issues.apache.org/jira/browse/LUCENE-8767 > Project: Lucene - Core > Issue Type: Bug > Components: core/queryparser >Affects Versions: 7.3 > Environment: Solr: 7.3.1 > Backup: > FieldType for name field: > omitNorms="true"> > > > words="stopwords.txt" enablePositionIncrements="true" /> > generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0" > splitOnCaseChange="0" preserveOriginal="1" splitOnNumerics="0"/> > > protected="protwords.txt" /> > > > > FieldType for partNumber field: > omitNorms="true"> > > > > > > >Reporter: ZhongHua Wu >Priority: Critical > Labels: patch > Attachments: a.diff > > > When multiple fields in query fields came from different fieldType, > especially one from KeywordTokenizerFactory, another from > WhitespaceTokenizerFactory, then the generated parse query could not honor > synonyms and mm, which hit incorrect documents. The following is my detail: > # We use Solr 7.3.1 > # Our qf=name^10 partNumber_ntk, while fieldType of name use > solr.WhitespaceTokenizerFactory and solr.WordDelimiterFilterFactory, while > partNumber_ntk is not tokenized and use solr.KeywordTokenizerFactory > # mm=2<3 4<5 6<-80%25 > # The search term is versatil sundress, while 'versatile' and 'testing' are > synonyms, we have documents named " Versatil Empire Waist Sundress" which > should be hit, but failed. 
> # We test same query on Solr 5.5.4, it works fine, it do not work on Solr > 7.3.1. > q= > (Versatil%20testing)%20sundress=name=edismax=2<3 4<5 > 6<-80%25=name^10%20partNumber_ntk=true=xml=100 > parsedQuery: > +(DisjunctionMaxQueryname:versatil name:test)~2)^10.0 | > partNumber_ntk:versatil testing)) DisjunctionMaxQuery(((name:sundress)^10.0 | > partNumber_ntk:sundress)))~2 > Which seems it incorrect parse name to: name:versatil name:test > If I change the query fields to same fieldType, for example,shortDescription > is in same fieldType of name: > q=(Versatil%20testing)%20sundress=name=edismax=2<3 4<5 > 6<-80%25=name^10%20shortDescription=true=xml=100 > ParsedQuery: > +((DisjunctionMaxQuery(((name:versatil)^10.0 | shortDescription:versatil)) > DisjunctionMaxQuery(((name:test)^10.0 | shortDescription:test))) > DisjunctionMaxQuery(((name:sundress)^10.0 | shortDescription:sundress)))~2 > which hits correctly. > Could someone check this or tell us a quick workaround? Now it have big > impact on customer. > Thanks in advance! The following is backup information: > > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
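[Editor's note] As background for the mm behavior in this report: the edismax minimum-should-match spec (here {{2<3 4<5 6<-80%}}, URL-decoded from the query string) is a list of conditional clauses where {{n<v}} means "if there are more than n optional clauses, v of them must match", and a negative percentage means that share of the clauses may be missing. The sketch below is written from those documented semantics, as an approximation of what Solr's {{SolrPluginUtils.calculateMinShouldMatch}} computes, not a copy of it:

```java
public class MinShouldMatch {
    // Evaluate an mm spec such as "2<3 4<5 6<-80%" for a given number of
    // optional clauses. Conditional parts are assumed to be in ascending
    // order of their bounds, as in typical Solr configurations.
    static int calculate(String spec, int optionalClauses) {
        int result = optionalClauses; // default: all optional clauses must match
        for (String part : spec.trim().split("\\s+")) {
            int lt = part.indexOf('<');
            if (lt >= 0) {
                int upperBound = Integer.parseInt(part.substring(0, lt));
                if (optionalClauses <= upperBound) return result;
                result = simple(part.substring(lt + 1), optionalClauses);
            } else {
                result = simple(part, optionalClauses);
            }
        }
        return result;
    }

    // A simple value: a count ("3"), a percentage ("75%"), or a negative
    // form meaning "this many / this share may be missing".
    static int simple(String s, int n) {
        if (s.endsWith("%")) {
            int pct = Integer.parseInt(s.substring(0, s.length() - 1));
            int calc = (n * pct) / 100;
            return pct < 0 ? n + calc : calc;
        }
        int val = Integer.parseInt(s);
        return val < 0 ? n + val : val;
    }

    public static void main(String[] args) {
        String mm = "2<3 4<5 6<-80%";
        System.out.println(calculate(mm, 2));  // 2: at most 2 clauses, all required
        System.out.println(calculate(mm, 3));  // 3: more than 2 clauses, 3 required
        System.out.println(calculate(mm, 10)); // 2: 80% of 10 clauses may be missing
    }
}
```

This also shows why the analysis chain matters in the report above: mm counts top-level optional clauses, so whether the terms end up as separate DisjunctionMaxQuery clauses or as one keyword-tokenized clause changes what {{~2}} applies to.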
[jira] [Updated] (LUCENE-8767) DisjunctionMaxQuery do not work well when multiple search term+mm+query fields with different fieldType.
[ https://issues.apache.org/jira/browse/LUCENE-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chongchen Chen updated LUCENE-8767: --- Attachment: a.diff > DisjunctionMaxQuery do not work well when multiple search term+mm+query > fields with different fieldType. > > > Key: LUCENE-8767 > URL: https://issues.apache.org/jira/browse/LUCENE-8767 > Project: Lucene - Core > Issue Type: Bug > Components: core/queryparser >Affects Versions: 7.3 > Environment: Solr: 7.3.1 > Backup: > FieldType for name field: > omitNorms="true"> > > > words="stopwords.txt" enablePositionIncrements="true" /> > generateNumberParts="0" catenateWords="1" catenateNumbers="1" catenateAll="0" > splitOnCaseChange="0" preserveOriginal="1" splitOnNumerics="0"/> > > protected="protwords.txt" /> > > > > FieldType for partNumber field: > omitNorms="true"> > > > > > > >Reporter: ZhongHua Wu >Priority: Critical > Labels: patch > Attachments: a.diff > > > When multiple fields in query fields came from different fieldType, > especially one from KeywordTokenizerFactory, another from > WhitespaceTokenizerFactory, then the generated parse query could not honor > synonyms and mm, which hit incorrect documents. The following is my detail: > # We use Solr 7.3.1 > # Our qf=name^10 partNumber_ntk, while fieldType of name use > solr.WhitespaceTokenizerFactory and solr.WordDelimiterFilterFactory, while > partNumber_ntk is not tokenized and use solr.KeywordTokenizerFactory > # mm=2<3 4<5 6<-80%25 > # The search term is versatil sundress, while 'versatile' and 'testing' are > synonyms, we have documents named " Versatil Empire Waist Sundress" which > should be hit, but failed. > # We test same query on Solr 5.5.4, it works fine, it do not work on Solr > 7.3.1. 
> q= > (Versatil%20testing)%20sundress=name=edismax=2<3 4<5 > 6<-80%25=name^10%20partNumber_ntk=true=xml=100 > parsedQuery: > +(DisjunctionMaxQueryname:versatil name:test)~2)^10.0 | > partNumber_ntk:versatil testing)) DisjunctionMaxQuery(((name:sundress)^10.0 | > partNumber_ntk:sundress)))~2 > Which seems it incorrect parse name to: name:versatil name:test > If I change the query fields to same fieldType, for example,shortDescription > is in same fieldType of name: > q=(Versatil%20testing)%20sundress=name=edismax=2<3 4<5 > 6<-80%25=name^10%20shortDescription=true=xml=100 > ParsedQuery: > +((DisjunctionMaxQuery(((name:versatil)^10.0 | shortDescription:versatil)) > DisjunctionMaxQuery(((name:test)^10.0 | shortDescription:test))) > DisjunctionMaxQuery(((name:sundress)^10.0 | shortDescription:sundress)))~2 > which hits correctly. > Could someone check this or tell us a quick workaround? Now it have big > impact on customer. > Thanks in advance! The following is backup information: > > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13399) compositeId support for shard splitting
[ https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903868#comment-16903868 ] ASF subversion and git services commented on SOLR-13399: Commit 0fa9cb54c7c5ceefc9a709f3fbe753db9ab39f97 in lucene-solr's branch refs/heads/branch_8x from Yonik Seeley [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0fa9cb5 ] SOLR-13399: fix splitByPrefix default to be false > compositeId support for shard splitting > --- > > Key: SOLR-13399 > URL: https://issues.apache.org/jira/browse/SOLR-13399 > Project: Solr > Issue Type: New Feature >Reporter: Yonik Seeley >Assignee: Yonik Seeley >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13399.patch, SOLR-13399.patch, > SOLR-13399_testfix.patch, SOLR-13399_useId.patch, > ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt > > > Shard splitting does not currently have a way to automatically take into > account the actual distribution (number of documents) in each hash bucket > created by using compositeId hashing. > We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* > command that would look at the number of docs sharing each compositeId prefix > and use that to create roughly equal sized buckets by document count rather > than just assuming an equal distribution across the entire hash range. > Like normal shard splitting, we should bias against splitting within hash > buckets unless necessary (since that leads to larger query fanout.) . Perhaps > this warrants a parameter that would control how much of a size mismatch is > tolerable before resorting to splitting within a bucket. > *allowedSizeDifference*? > To more quickly calculate the number of docs in each bucket, we could index > the prefix in a different field. Iterating over the terms for this field > would quickly give us the number of docs in each (i.e lucene keeps track of > the doc count for each term already.) Perhaps the implementation could be a > flag on the *id* field... 
something like *indexPrefixes* and poly-fields that > would cause the indexing to be automatically done and alleviate having to > pass in an additional field during indexing and during the call to > *SPLITSHARD*. This whole part is an optimization though and could be split > off into its own issue if desired. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-13399) compositeId support for shard splitting
[ https://issues.apache.org/jira/browse/SOLR-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903866#comment-16903866 ] ASF subversion and git services commented on SOLR-13399: Commit 2e5c554fea0aea1dfeecb22f03f18fb78cd4 in lucene-solr's branch refs/heads/master from Yonik Seeley [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2e5c554 ] SOLR-13399: fix splitByPrefix default to be false > compositeId support for shard splitting > --- > > Key: SOLR-13399 > URL: https://issues.apache.org/jira/browse/SOLR-13399 > Project: Solr > Issue Type: New Feature >Reporter: Yonik Seeley >Assignee: Yonik Seeley >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-13399.patch, SOLR-13399.patch, > SOLR-13399_testfix.patch, SOLR-13399_useId.patch, > ShardSplitTest.master.seed_AE04B5C9BA6E9A4.log.txt > > > Shard splitting does not currently have a way to automatically take into > account the actual distribution (number of documents) in each hash bucket > created by using compositeId hashing. > We should probably add a parameter *splitByPrefix* to the *SPLITSHARD* > command that would look at the number of docs sharing each compositeId prefix > and use that to create roughly equal sized buckets by document count rather > than just assuming an equal distribution across the entire hash range. > Like normal shard splitting, we should bias against splitting within hash > buckets unless necessary (since that leads to larger query fanout.) . Perhaps > this warrants a parameter that would control how much of a size mismatch is > tolerable before resorting to splitting within a bucket. > *allowedSizeDifference*? > To more quickly calculate the number of docs in each bucket, we could index > the prefix in a different field. Iterating over the terms for this field > would quickly give us the number of docs in each (i.e lucene keeps track of > the doc count for each term already.) Perhaps the implementation could be a > flag on the *id* field... 
something like *indexPrefixes* and poly-fields that > would cause the indexing to be automatically done and alleviate having to > pass in an additional field during indexing and during the call to > *SPLITSHARD*. This whole part is an optimization though and could be split > off into its own issue if desired. > > -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
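[Editor's note] The core idea proposed in SOLR-13399 — count documents per compositeId prefix, then choose a split point between whole prefixes that balances the two halves by document count — can be sketched as follows. This is an illustration of the idea only, not Solr's SPLITSHARD implementation (which works on hash ranges, not raw id lists):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PrefixSplit {
    // Return how many prefixes belong in the first half so that the two
    // halves are as close as possible in document count. Splitting only
    // between whole prefixes avoids splitting a hash bucket (which would
    // increase query fanout, as noted in the issue).
    static int splitPoint(List<String> ids) {
        // Count documents per routing prefix (the part before '!').
        Map<String, Integer> counts = new TreeMap<>();
        for (String id : ids) {
            int bang = id.indexOf('!');
            String prefix = bang >= 0 ? id.substring(0, bang) : id;
            counts.merge(prefix, 1, Integer::sum);
        }
        int total = ids.size(), running = 0, prefixes = 0;
        int best = 0, bestDiff = Integer.MAX_VALUE;
        for (int c : counts.values()) {
            running += c;
            prefixes++;
            int diff = Math.abs(running - (total - running));
            if (diff < bestDiff) { bestDiff = diff; best = prefixes; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> ids = Arrays.asList("a!1", "a!2", "a!3", "b!1", "c!1", "c!2");
        // Prefix "a" holds 3 docs; "b" + "c" together hold 3: split after 1 prefix.
        System.out.println(splitPoint(ids)); // prints: 1
    }
}
```

The optimization discussed at the end — indexing the prefix as its own field — exists precisely so these per-prefix counts can be read from the term dictionary instead of scanning ids as this sketch does.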
[GitHub] [lucene-solr] Tdspringsteen commented on issue #777: SOLR-11724: Fix for 'Cdcr Bootstrapping does not cause ''index copying'' to follower nodes on Target' BUG
Tdspringsteen commented on issue #777: SOLR-11724: Fix for 'Cdcr Bootstrapping does not cause ''index copying'' to follower nodes on Target' BUG URL: https://github.com/apache/lucene-solr/pull/777#issuecomment-519904227 Thanks for looking into this and getting it fixed Shalin! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 989 - Failure!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/989/ Java: 64bit/jdk1.8.0_201 -XX:-UseCompressedOops -XX:+UseG1GC All tests passed Build Log: [...truncated 16326 lines...] [junit4] Suite: org.apache.solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest [junit4] 2> 2705107 INFO (SUITE-DimensionalRoutedAliasUpdateProcessorTest-seed#[A19B7527F59C7753]-worker) [ ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom [junit4] 2> 2705108 INFO (SUITE-DimensionalRoutedAliasUpdateProcessorTest-seed#[A19B7527F59C7753]-worker) [ ] o.a.s.SolrTestCaseJ4 Created dataDir: /home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/J1/temp/solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest_A19B7527F59C7753-001/data-dir-366-001 [junit4] 2> 2705108 WARN (SUITE-DimensionalRoutedAliasUpdateProcessorTest-seed#[A19B7527F59C7753]-worker) [ ] o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1 [junit4] 2> 2705108 INFO (SUITE-DimensionalRoutedAliasUpdateProcessorTest-seed#[A19B7527F59C7753]-worker) [ ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) w/NUMERIC_DOCVALUES_SYSPROP=false [junit4] 2> 2705109 INFO (SUITE-DimensionalRoutedAliasUpdateProcessorTest-seed#[A19B7527F59C7753]-worker) [ ] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN) [junit4] 2> 2705113 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.SolrTestCaseJ4 ###Starting testTimeCat [junit4] 2> 2705113 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.MiniSolrCloudCluster Starting cluster of 4 servers in 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/solr/build/solr-core/test/J1/temp/solr.update.processor.DimensionalRoutedAliasUpdateProcessorTest_A19B7527F59C7753-001/tempDir-001 [junit4] 2> 2705113 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2> 2705114 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2> 2705114 INFO (ZkTestServer Run Thread) [ ] o.a.s.c.ZkTestServer Starting server [junit4] 2> 2705214 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.ZkTestServer start zk server on port:35273 [junit4] 2> 2705214 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:35273 [junit4] 2> 2705214 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.ZkTestServer connecting to 127.0.0.1 35273 [junit4] 2> 2705215 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 2705217 INFO (zkConnectionManagerCallback-19009-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 2705217 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 2705220 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 2705221 INFO (zkConnectionManagerCallback-19011-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 2705221 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Client is connected 
to ZooKeeper [junit4] 2> 2705222 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2> 2705223 INFO (zkConnectionManagerCallback-19013-thread-1) [ ] o.a.s.c.c.ConnectionManager zkClient has connected [junit4] 2> 2705223 INFO (TEST-DimensionalRoutedAliasUpdateProcessorTest.testTimeCat-seed#[A19B7527F59C7753]) [ ] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2> 2705326 WARN (jetty-launcher-19014-thread-3) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time [junit4] 2> 2705326 WARN (jetty-launcher-19014-thread-2) [ ] o.e.j.s.AbstractConnector Ignoring deprecated socket close linger time [junit4] 2> 2705326 WARN (jetty-launcher-19014-thread-1) [ ] o.e.j.s.AbstractConnector Ignoring deprecated
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1924 - Still unstable
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1924/ 2 tests failed. FAILED: org.apache.solr.cloud.api.collections.ShardSplitTest.test Error Message: Wrong doc count on shard1_0. See SOLR-5309 expected:<472> but was:<361> Stack Trace: java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 expected:<472> but was:<361> at __randomizedtesting.SeedInfo.seed([DD7515A1D13A85C7:55212A7B7FC6E83F]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:1002) at org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:794) at org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:111) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at
[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-11.0.3) - Build # 263 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/263/ Java: 64bit/jdk-11.0.3 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.PeerSyncReplicationTest.test Error Message: expected:<154> but was:<152> Stack Trace: java.lang.AssertionError: expected:<154> but was:<152> at __randomizedtesting.SeedInfo.seed([97FF5E808F0B5A5F:1FAB615A21F737A7]:0) at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:834) at org.junit.Assert.assertEquals(Assert.java:645) at org.junit.Assert.assertEquals(Assert.java:631) at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 174 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/174/ No tests ran. Build Log: [...truncated 24869 lines...] [asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.) [java] Processed 2590 links (2119 relative) to 3408 anchors in 259 files [echo] Validated Links & Anchors via: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/ -dist-changes: [copy] Copying 4 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes package: -unpack-solr-tgz: -ensure-solr-tgz-exists: [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked [untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.3.0.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked generate-maven-artifacts: resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. -ivy-fail-disallowed-ivy-version: ivy-fail: ivy-configure: [ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml resolve: resolve: ivy-availability-check: [loadresource] Do not set property disallowed.ivy.jars.list as its length is 0. 
[jira] [Commented] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903742#comment-16903742 ]

Ishan Chattopadhyaya commented on SOLR-13682:
---

bq. I'm +1 on starting work towards making all bin/solr commands runnable externally and making the tool itself portable so you can take it with you to any machine with bash and Java.

+1

My 2 cents on -c vs. -url: I would prefer -url (which will work in all cases, local as well as an external cluster). Imagine a situation where the export is actually running against an alias, not a real collection. Having -c do the export would be misleading in that case. A URL will be valid in both cases.

> command line option to export data to a file
>
> Key: SOLR-13682
> URL: https://issues.apache.org/jira/browse/SOLR-13682
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Noble Paul
> Assignee: Noble Paul
> Priority: Major
>
> Example:
> {code:java}
> bin/solr export -url http://localhost:8983/solr/gettingstarted
> {code}
> This will export all the docs in a collection called {{gettingstarted}} into a file called {{gettingstarted.json}}.
> Additional options are:
> * {{format}}: {{jsonl}} (default) or {{javabin}}
> * {{out}}: export file name
> * {{query}}: a custom query; default is {{*:*}}
> * {{fields}}: a comma-separated list of fields to be exported
> * {{limit}}: number of docs; default is 100, pass {{-1}} to export all the docs
>
> h2. Importing using {{curl}}
>
> Importing a json file:
> {code:java}
> curl -X POST -d @gettingstarted.json http://localhost:18983/solr/gettingstarted/update/json/docs?commit=true
> {code}
> Importing a javabin format file:
> {code:java}
> curl -X POST --header "Content-Type: application/javabin" --data-binary @gettingstarted.javabin http://localhost:7574/solr/gettingstarted/update?commit=true
> {code}

--
This message was sent by Atlassian JIRA (v7.6.14#76016)

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
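The {{jsonl}} default described in the issue is simply one JSON document per line, which is what makes streaming export and import cheap. A minimal illustration of that shape (plain Python, not Solr code; the field names are made up):

```python
import json

# Two example documents, as the export tool might fetch them from a collection.
docs = [{"id": "1", "name_s": "first doc"}, {"id": "2", "name_s": "second doc"}]

# "Export": one JSON object per line -- the jsonl shape.
jsonl = "\n".join(json.dumps(d) for d in docs)

# "Import": each line parses independently, so a file can be streamed line by line.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert parsed == docs
```

Because each line is independent, a partially written export file remains importable up to the last complete line, which is a practical advantage over a single large JSON array.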
[jira] [Commented] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903691#comment-16903691 ]

Noble Paul commented on SOLR-13682:
---

bq. But that would be a new JIRA

Yes, we should support the url param for most commands.

bq. meantime I'm more thinking about being consistent with current practices.

TBH, it doesn't make sense to give multiple options when there is one sensible option. Users just look at the ref guide or they get the command help and execute it. Consistency is overrated.

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-13685) Update the leader term in ZK on the condition that the replica is still the leader
Shalin Shekhar Mangar created SOLR-13685:

Summary: Update the leader term in ZK on the condition that the replica is still the leader
Key: SOLR-13685
URL: https://issues.apache.org/jira/browse/SOLR-13685
Project: Solr
Issue Type: Improvement
Security Level: Public (Default Security Level. Issues are Public)
Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Fix For: master (9.0), 8.3

While working on SOLR-13141, I realized that ZkShardTerms.ensureTermIsHigher and related methods do a compare-and-set on the terms, but there is no guarantee that the leader is still the leader when the ZK update executes. This can potentially lead to race conditions during leader transitions. We should update the term using a ZK multi-op conditional on the current replica still being the leader. This will not change any behavior; it is only an additional safety check.

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
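The proposed safety check amounts to: bump the term only if, atomically, (a) the znode version still matches what was read and (b) the replica is still the recorded leader. A toy model of that idea (hypothetical names, not ZooKeeper client code; a real implementation would combine a check op and a setData op in a single ZooKeeper multi() transaction):

```python
class FakeTermStore:
    """Toy stand-in for a ZK znode holding shard terms (not ZooKeeper client code)."""

    def __init__(self, leader, term):
        self.leader = leader      # replica currently recorded as leader
        self.term = term
        self.version = 0          # znode version, bumped on every write

    def conditional_term_update(self, expected_version, expected_leader, new_term):
        # Mimics a zk multi(): every check must pass or nothing is applied.
        if self.version != expected_version:
            return False          # plain compare-and-set failure (exists today)
        if self.leader != expected_leader:
            return False          # the proposed extra check: caller lost leadership
        self.term = new_term
        self.version += 1
        return True


store = FakeTermStore(leader="replica1", term=1)
ok1 = store.conditional_term_update(0, "replica1", 2)    # still leader: applied
store.leader = "replica2"                                # leadership changes
ok2 = store.conditional_term_update(1, "replica1", 3)    # stale leader: rejected
```

The second update is rejected even though its version check would pass, which is exactly the race the issue describes: a bare compare-and-set cannot tell that leadership moved between the read and the write.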
[jira] [Commented] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16903666#comment-16903666 ]

Jan Høydahl commented on SOLR-13682:
---

bq. Only when you are playing with Solr you run all these from the local box.

I think a lot of users will disagree with you on this. There are also small 3-node clusters out there with a limited number of docs that need an export. Most of the current commands take -c for the collection name, and you could argue that all of those (collection management etc.) "should" be run from an external machine, but we neither document this anywhere nor make it easy for bin/solr to be run on an external machine that does not have a Solr install.

I'm +1 on starting work towards making all bin/solr commands runnable externally and making the tool itself portable so you can take it with you to any machine with bash and Java. But that would be a new JIRA, and in the meantime I'm more thinking about being consistent with current practices.

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] [lucene-solr] chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
chenkovsky commented on issue #824: LUCENE-8755: QuadPrefixTree robustness
URL: https://github.com/apache/lucene-solr/pull/824#issuecomment-519813035

I reimplemented it. Now the user can specify the version that the tree is compatible with.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13682) command line option to export data to a file
[ https://issues.apache.org/jira/browse/SOLR-13682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902778#comment-16902778 ]

Noble Paul edited comment on SOLR-13682 at 8/9/19 7:18 AM:
---

bq. Perhaps optimize for the normal case of exporting a collection in the local cluster

This is for the most common use case; the last part is the collection name. Only when you are playing with Solr do you run all these from the local box. Ideally, you will be running a cluster with a handful of nodes, and you would want to run your export on another machine where Solr is not running. TBH, we should let all the commands take in a URL. Most of the commands can be run from any node by just pointing to a Solr base URL. We just don't do it, and I would say it's bad UX.

bq. Also, consider making the default format jsonl

OK

bq. and default output stdout

That would be a bad experience; we are going to emit a few megabytes of data. We can have an extra option to do so.

was (Author: noble.paul):

bq. Perhaps optimize for the normal case of exporting a collection in the local cluster

This is for the most common use case; the last part is the collection name. Only when you are playing with Solr do you run all these from the local box. Ideally, you will be running a cluster with a handful of nodes, and you would want to run your export on another machine where Solr is not running.

bq. Also, consider making the default format jsonl

OK

bq. and default output stdout

That would be a bad experience; we are going to emit a few megabytes of data. We can have an extra option to do so.

- To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org