[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 79 - Still Failing!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/79/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 15911 lines...]
   [junit4] Suite: org.apache.solr.cloud.ShardRoutingTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.ShardRoutingTest_360C02ED1A0FF3A7-001/init-core-data-001
   [junit4]   2> 2741799 WARN  
(SUITE-ShardRoutingTest-seed#[360C02ED1A0FF3A7]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 2741799 INFO  
(SUITE-ShardRoutingTest-seed#[360C02ED1A0FF3A7]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 2741801 INFO  
(SUITE-ShardRoutingTest-seed#[360C02ED1A0FF3A7]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2741801 INFO  
(SUITE-ShardRoutingTest-seed#[360C02ED1A0FF3A7]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 2741801 INFO  
(SUITE-ShardRoutingTest-seed#[360C02ED1A0FF3A7]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2741806 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2741806 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2741806 INFO  (ZkTestServer Run Thread) [] 
o.a.s.c.ZkTestServer Starting server
   [junit4]   2> 2741907 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer start zk server on port:61230
   [junit4]   2> 2741907 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer parse host and port list: 127.0.0.1:61230
   [junit4]   2> 2741907 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer connecting to 127.0.0.1 61230
   [junit4]   2> 2741921 INFO  (zkConnectionManagerCallback-17534-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 2741925 INFO  (zkConnectionManagerCallback-17536-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 2741926 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2741930 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2741932 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2741933 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2741934 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/protwords.txt
 to /configs/conf1/protwords.txt
   [junit4]   2> 2741935 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/currency.xml
 to /configs/conf1/currency.xml
   [junit4]   2> 2741936 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/enumsConfig.xml
 to /configs/conf1/enumsConfig.xml
   [junit4]   2> 2741937 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/open-exchange-rates.json
 to /configs/conf1/open-exchange-rates.json
   [junit4]   2> 2741938 INFO  
(TEST-ShardRoutingTest.test-seed#[360C02ED1A0FF3A7]) [] 
o.a.s.c.ZkTestServer put 
/export/home/jenkins/workspace/Lucene-Solr-8.x-Solaris/solr/core/src/test-files/solr/collection1/conf/mapping-ISOLatin1Accent.txt
 to 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_201) - Build # 420 - Still Unstable!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/420/
Java: 64bit/jdk1.8.0_201 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Error from server at http://127.0.0.1:33705: Could not find collection : c8n_1x2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33705: Could not find collection : c8n_1x2
at 
__randomizedtesting.SeedInfo.seed([190E661C14EA39A8:915A59C6BA165450]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollectionRetry(AbstractFullDistribZkTestBase.java:2045)
at 
org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:214)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Commented] (SOLR-13400) Replace Observable pattern in TransientSolrCoreCache

2019-04-17 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820598#comment-16820598
 ] 

Erick Erickson commented on SOLR-13400:
---

[~thetaphi] So the current state here is that you did this for master but not 8x? I 
started working on this today and that's what I think I'm seeing.

I'll backport to 8x, if that's the case.

> Replace Observable pattern in TransientSolrCoreCache
> 
>
> Key: SOLR-13400
> URL: https://issues.apache.org/jira/browse/SOLR-13400
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrCloud
>Affects Versions: 8.0, master (9.0)
>Reporter: Uwe Schindler
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13400.patch
>
>
> Due to the change to Java 11 as the minimum version (LUCENE-8738), the usage of the 
> old java.util.Observable pattern is deprecated and cannot be used anymore in 
> Lucene/Solr.
> LUCENE-8738 added a rewritten, more type-safe implementation of the observer 
> pattern, but it looks like it is overengineered. As there is only one 
> listener registered, it would be enough to just call the method on the SolrCores 
> class (pkg-protected). On LUCENE-8738, [~erickerickson] suggested moving the 
> callback method to queue closes of cores in CoreContainer instead, so all the 
> abstractions can be removed.
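As a rough illustration of the simplification discussed above, the single registered listener can be invoked directly instead of going through an Observable. All class and method names below are invented for the sketch; they are not Solr's actual API.

```java
// Hedged sketch: replacing an Observer/Observable indirection with a direct
// callback to the one registered listener. All names here are hypothetical.
interface CoreCloseListener {
  void queueCoreClose(String coreName);
}

class TransientCacheSketch {
  // Exactly one listener exists, so no registry or notifyObservers() is needed.
  private final CoreCloseListener listener;

  TransientCacheSketch(CoreCloseListener listener) {
    this.listener = listener;
  }

  void evict(String coreName) {
    // Direct call replaces Observable.setChanged()/notifyObservers(...)
    listener.queueCoreClose(coreName);
  }
}
```

The design point is that with a single, statically known consumer, a plain (pkg-protected) method call is both simpler and type-safe.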



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13408) Cannot start/stop DaemonStream repeatedly, other API improvements

2019-04-17 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-13408.
---
   Resolution: Fixed
Fix Version/s: master (9.0)
   8.1
   7.7.2

> Cannot start/stop DaemonStream repeatedly, other API improvements
> -
>
> Key: SOLR-13408
> URL: https://issues.apache.org/jira/browse/SOLR-13408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13408.patch, SOLR-13408.patch, SOLR-13408.patch
>
>
> If I create a DaemonStream then use the API commands to stop it then start it 
> repeatedly, after the first time it's stopped/started, it cannot be stopped 
> again.
> DaemonStream.close() checks whether a local variable "closed" is true, and if 
> so does nothing. Otherwise it closes the stream then sets "closed" to true.
> However, when the stream is started again, "closed" is not set to false, 
> therefore the next time you try to stop the daemon, nothing happens and it 
> continues to run. One other consequence of this is that you can have orphan 
> threads running in the background. Say I
> {code:java}
> stop the daemon
> start it again
> create another one with the same ID
> {code}
> When the new one is created, this code is executed over in 
> StreamHandler.handleRequestBody:
> {code:java}
> daemons.remove(daemonStream.getId()).close();
> {code}
> which will not terminate the stream thread as above. Then the open() method 
> executes this:
> {code:java}
> this.streamRunner = new StreamRunner(runInterval, id);
> {code}
> leaving the thread running.
> Finally, there's an NPE if I try to start a non-existent daemon.
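The failure mode described above reduces to a few lines. Below is a minimal sketch (field names invented, not DaemonStream's actual code) showing why resetting the flag on open fixes repeated stop/start:

```java
// Hedged sketch of the reported bug: close() latches a "closed" flag, but
// without resetting it when the stream is reopened, every later close() is a
// no-op and the worker keeps running. Names are hypothetical.
class DaemonSketch {
  private boolean closed = false;
  boolean running = false;

  void open() {
    running = true;
    closed = false; // the fix: reset the latch so a later close() works again
  }

  void close() {
    if (closed) {
      return; // without the reset above, every close() after the first lands here
    }
    running = false;
    closed = true;
  }
}
```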






[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly, other API improvements

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820592#comment-16820592
 ] 

ASF subversion and git services commented on SOLR-13408:


Commit ac38f23db2133fcd16e2c81c44ca1695de2718b3 in lucene-solr's branch 
refs/heads/branch_7_7 from erick
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ac38f23 ]

SOLR-13408: Cannot start/stop DaemonStream repeatedly, other API improvements








[JENKINS] Lucene-Solr-Tests-master - Build # 3290 - Unstable

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3290/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest.stateVersionParamTest

Error Message:
Error from server at http://127.0.0.1:41077/solr/collection1: no servers 
hosting shard: shard2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41077/solr/collection1: no servers hosting 
shard: shard2
at 
__randomizedtesting.SeedInfo.seed([2278DD4918FBB937:8741D031B3E113BE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1002)
at 
org.apache.solr.client.solrj.impl.CloudHttp2SolrClientTest.stateVersionParamTest(CloudHttp2SolrClientTest.java:626)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly, other API improvements

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820577#comment-16820577
 ] 

ASF subversion and git services commented on SOLR-13408:


Commit ca1cc248e072a6ddfa8969b434c270ed7ecf88e3 in lucene-solr's branch 
refs/heads/branch_8x from erick
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ca1cc24 ]

SOLR-13408: Cannot start/stop DaemonStream repeatedly, other API improvements

(cherry picked from commit a9771a58495b319b36b32381c786d9d9801e64ea)








[jira] [Commented] (LUCENE-8736) LatLonShapePolygonQuery returning incorrect WITHIN results with shared boundaries

2019-04-17 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820574#comment-16820574
 ] 

Robert Muir commented on LUCENE-8736:
-

{quote}
The concern I have is that the behavior of excluding boundary points does not follow 
the OGC specification (pg 37, "touches" relationship), which is the standard that 
geo systems should follow, and the standard I think we should follow.
{quote}

I don't agree with that: that document is about shapes; I am talking about 
points. Again, I think it makes sense for point-in-polygon to follow the simple 
formula that everyone has used for decades. It is easy to explain that way. 

It's also easy to explain that a single point (e.g. a lat/lon coordinate) can only 
be in at most one country, rather than trying to explain how it's in two. The same 
goes for a county or state or anything else. So the partitioning and other 
properties are important for points.

This JIRA issue was supposedly about polygons, but it did a real drive-by on 
points. Now they are no longer possible to reason about, tests are failing in 
Jenkins, etc. I really think the points should just go back. 

It's fine if shapes behave differently, whatever makes sense for them.
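For context, the "simple formula that everyone has used for decades" is the even-odd (ray-casting) point-in-polygon test. The sketch below is a generic version of that rule, not Lucene's implementation; the half-open edge comparison is what tends to assign a boundary point to at most one of two polygons sharing an edge, giving the partitioning property mentioned above.

```java
// Generic even-odd ray-casting test (a sketch, not Lucene's code): cast a
// horizontal ray from the query point and count polygon-edge crossings;
// an odd count means the point is inside.
final class RayCast {
  static boolean contains(double[] lats, double[] lons, double lat, double lon) {
    boolean inside = false;
    for (int i = 0, j = lats.length - 1; i < lats.length; j = i++) {
      // Half-open comparison (strict > on one side) keeps shared vertices and
      // edges from being counted by both of two adjacent polygons.
      if ((lats[i] > lat) != (lats[j] > lat)
          && lon < (lons[j] - lons[i]) * (lat - lats[i]) / (lats[j] - lats[i]) + lons[i]) {
        inside = !inside;
      }
    }
    return inside;
  }
}
```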

> LatLonShapePolygonQuery returning incorrect WITHIN results with shared 
> boundaries
> -
>
> Key: LUCENE-8736
> URL: https://issues.apache.org/jira/browse/LUCENE-8736
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8736.patch, LUCENE-8736.patch, 
> adaptive-decoding.patch
>
>
> Triangles that are {{WITHIN}} a target polygon query that also share a 
> boundary with the polygon are incorrectly reported as {{CROSSES}} instead of 
> {{INSIDE}}. This leads to incorrect {{WITHIN}} query results, as demonstrated 
> in the following test:
> {code:java}
>   public void testWithinFailure() throws Exception {
> Directory dir = newDirectory();
> RandomIndexWriter w = new RandomIndexWriter(random(), dir);
> // test polygons:
> Polygon indexPoly1 = new Polygon(new double[] {4d, 4d, 3d, 3d, 4d}, new 
> double[] {3d, 4d, 4d, 3d, 3d});
> Polygon indexPoly2 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new 
> double[] {6d, 7d, 7d, 6d, 6d});
> Polygon indexPoly3 = new Polygon(new double[] {1d, 1d, 0d, 0d, 1d}, new 
> double[] {3d, 4d, 4d, 3d, 3d});
> Polygon indexPoly4 = new Polygon(new double[] {2d, 2d, 1d, 1d, 2d}, new 
> double[] {0d, 1d, 1d, 0d, 0d});
> // index polygons:
> Document doc;
> addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly1);
> w.addDocument(doc);
> addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly2);
> w.addDocument(doc);
> addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly3);
> w.addDocument(doc);
> addPolygonsToDoc(FIELDNAME, doc = new Document(), indexPoly4);
> w.addDocument(doc);
> // search //
> IndexReader reader = w.getReader();
> w.close();
> IndexSearcher searcher = newSearcher(reader);
> Polygon[] searchPoly = new Polygon[] {new Polygon(new double[] {4d, 4d, 
> 0d, 0d, 4d}, new double[] {0d, 7d, 7d, 0d, 0d})};
> Query q = LatLonShape.newPolygonQuery(FIELDNAME, QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(4, searcher.count(q));
> IOUtils.close(w, reader, dir);
>   }
> {code}






[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly, other API improvements

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820570#comment-16820570
 ] 

ASF subversion and git services commented on SOLR-13408:


Commit a9771a58495b319b36b32381c786d9d9801e64ea in lucene-solr's branch 
refs/heads/master from erick
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a9771a5 ]

SOLR-13408: Cannot start/stop DaemonStream repeatedly, other API improvements








[jira] [Updated] (SOLR-13408) Cannot start/stop DaemonStream repeatedly, other API improvements

2019-04-17 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13408:
--
Summary: Cannot start/stop DaemonStream repeatedly, other API improvements  
(was: Cannot start/stop DaemonStream repeatedly)







[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-11.0.2) - Build # 419 - Unstable!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/419/
Java: 64bit/jdk-11.0.2 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.lucene.geo.TestPolygon2D.testIntersectEdgeCases

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([18D071AC3A88F8C:7A789B1E1AE9420C]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.lucene.geo.TestPolygon2D.testIntersectEdgeCases(TestPolygon2D.java:246)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 85 - Unstable!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/85/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testMaxCardinality

Error Message:
Error from server at http://127.0.0.1:50248/solr: no core retrieved for 
testMaxCardinality

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:50248/solr: no core retrieved for 
testMaxCardinality
at 
__randomizedtesting.SeedInfo.seed([3AD7D5BA360E1848:4B163735472834CE]:0)
at 
org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteExecutionException.create(BaseHttpSolrClient.java:65)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:626)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.createConfigSet(RoutedAliasUpdateProcessorTest.java:115)
at 
org.apache.solr.update.processor.CategoryRoutedAliasUpdateProcessorTest.testMaxCardinality(CategoryRoutedAliasUpdateProcessorTest.java:300)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (SOLR-13238) BlobHandler generates non-padded md5

2019-04-17 Thread Jeff Walraven (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820539#comment-16820539
 ] 

Jeff Walraven commented on SOLR-13238:
--

[~janhoy] The consequence of this bug is that an incorrect md5 is generated. 
I came across it while writing a tool that uploads a plugin jar to Solr: the 
tool checks the md5 both for validity and to decide whether the file has 
changed before uploading a new jar. When using a standard md5 check (one 
that properly pads the hash), the validation fails. The difficulty with this 
bug is that it only shows up in some cases, so it was not apparent until a 
file happened to produce a hash that needs padding.

Currently, the workaround is to use the same md5 hash function on both sides of 
the validation.

> BlobHandler generates non-padded md5
> 
>
> Key: SOLR-13238
> URL: https://issues.apache.org/jira/browse/SOLR-13238
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: blobstore
>Affects Versions: 6.0, 6.6.5, 7.0, 7.6
>Reporter: Jeff Walraven
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Introduced in SOLR-6787
> The blob handler currently uses the following logic for generating/storing 
> the md5 for uploads:
> {code:java}
> MessageDigest m = MessageDigest.getInstance("MD5");
> m.update(payload.array(), payload.position(), payload.limit());
> String md5 = new BigInteger(1, m.digest()).toString(16);
> {code}
> Unfortunately, this method does not pad any md5 whose most significant 
> byte is less than 0x10. This means that on many occasions it can produce 
> an md5 hash of 31 characters instead of 32. 
> I have opened a PR with the following recommended change:
> {code:java}
> String md5 = new String(Hex.encodeHex(m.digest()));
> {code}
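The missing padding is straightforward to reproduce outside Solr. The sketch below is illustrative and stdlib-only (a manual %02x loop stands in for commons-codec's Hex.encodeHex, and the "payload-N" inputs are made up); it brute-forces an input whose digest's most significant byte falls in [0x01, 0x0f] and compares the two renderings:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Md5PaddingDemo {
    public static void main(String[] args) throws Exception {
        MessageDigest m = MessageDigest.getInstance("MD5");
        // Find an input whose digest's most significant byte is in [0x01, 0x0f],
        // i.e. one that triggers the missing-pad case described in the issue.
        byte[] digest;
        int i = 0;
        do {
            digest = m.digest(("payload-" + i++).getBytes(StandardCharsets.UTF_8));
        } while ((digest[0] & 0xff) == 0 || (digest[0] & 0xff) >= 0x10);

        String unpadded = new BigInteger(1, digest).toString(16); // BlobHandler's rendering
        StringBuilder padded = new StringBuilder();               // stand-in for Hex.encodeHex
        for (byte b : digest) padded.append(String.format("%02x", b));

        System.out.println(unpadded.length() + " vs " + padded.length()); // prints 31 vs 32
    }
}
```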






[JENKINS] Lucene-Solr-repro-Java11 - Build # 3 - Unstable

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro-Java11/3/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1310/consoleText

[repro] Revision: 3a6f2f7543352978c4602355b715c5d87be2a1bb

[repro] Ant options: -DsmokeTestRelease.java12=/home/jenkins/tools/java/latest12
[repro] Repro line:  ant test  -Dtestcase=TestIndexFileDeleter 
-Dtests.method=testExcInDecRef -Dtests.seed=801370ADF6645F -Dtests.multiplier=2 
-Dtests.locale=pa-Guru-IN -Dtests.timezone=Europe/Madrid -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
3a6f2f7543352978c4602355b715c5d87be2a1bb
[repro] git fetch
[repro] git checkout 3a6f2f7543352978c4602355b715c5d87be2a1bb

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]lucene/core
[repro]   TestIndexFileDeleter
[repro] ant compile-test

[...truncated 210 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestIndexFileDeleter" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java12=/home/jenkins/tools/java/latest12 
-Dtests.seed=801370ADF6645F -Dtests.multiplier=2 -Dtests.locale=pa-Guru-IN 
-Dtests.timezone=Europe/Madrid -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 168 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.lucene.index.TestIndexFileDeleter
[repro] git checkout 3a6f2f7543352978c4602355b715c5d87be2a1bb

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23937 - Unstable!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23937/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest

Error Message:
Timeout occurred while waiting response from server at: 
http://127.0.0.1:34871/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occurred while 
waiting response from server at: http://127.0.0.1:34871/solr
at 
__randomizedtesting.SeedInfo.seed([5F96ADE74B0BDD97:AD62BA850FAED0A4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:660)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest(LeaderVoteWaitTimeoutTest.java:155)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: nested documents performance anomaly

2019-04-17 Thread Jeff Wartes

This is more a solr-user conversation, but one other possibility is that for 
the “1M docs test” you sent 1M insert requests, and for the “1000 parent doc 
test” you sent 1000 insert requests.
Batching multiple documents into a single insert request will yield *much* 
better throughput, and the nested-doc approach essentially forces you to do 
that as a side effect of how the insert request is structured.

So basically Dale’s theory, but applied to the HTTP level instead of the 
segment level.

Some other random tips for indexing speed:

  *   For hard commits, set openSearcher=false
  *   For soft commits, set the commit interval as large as you can stand.
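In solrconfig.xml, those two settings look roughly like this (a sketch; the intervals are placeholder values to tune for your workload):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commits flush to disk but need not open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commits control visibility; make the interval as large as you can stand -->
  <autoSoftCommit>
    <maxTime>300000</maxTime>
  </autoSoftCommit>
</updateHandler>
```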

-Jeff Wartes

From: Dale Richardson 
Reply-To: "dev@lucene.apache.org" 
Date: Sunday, April 14, 2019 at 3:58 AM
To: "dev@lucene.apache.org" 
Subject: Re: nested documents performance anomaly

Hi Roi,
My understanding is that the nested relationship in Lucene is implemented by 
physically storing the child document references in the same index segment 
as the parent document reference.  For normal queries, the segment in which 
a document is stored is completely transparent to the query result, but the 
block join operator used for parent-child joins exploits this low-level 
detail to provide very fast joins between parent and child documents.  A 
trade-off of this technique is that the relevant index segment needs to be 
rewritten whenever any part of the parent-child relationship changes.

I suspect that by writing all the child documents for a parent at once, you 
are effectively batching all updates to a single index segment into a single 
update, with a corresponding increase in speed.

The constraints that apply in return for this speed boost are that you must 
have all the child documents ready to write in one go, and the index updates 
are likely done in a single transaction per parent (i.e. all or none).  I 
suspect (but have not tested) that indexing/storing 1000 child documents for 
each of 1000 parent documents one document at a time would actually be 
slower than just indexing 1 million documents one at a time.

I hope this increases your understanding of the situation.

Regards,
Dale.

From: Roi Wexler 
Sent: Sunday, 14 April 2019 6:59 AM
To: dev@lucene.apache.org
Subject: nested documents performance anomaly


Hi,
we're in the process of testing Solr for its indexing speed, which is very 
important to our application.
We've witnessed strange behavior that we wish to understand before using it: 
when we indexed 1M docs it took about 63 seconds, but when we indexed the 
same documents nested as 1000 parents with 1000 child documents each, it 
took only 27 seconds.

We know that Lucene doesn't support nested documents, since it has a flat 
object model, and we do see that it in fact indexes each of the child 
documents as a separate document.

Our tests show that we get the same results whether we index all documents 
flat (without children) or as 1000 parents with 1000 nested documents each.

Are we missing something here? Why does it behave like that? What 
constraints do child documents have, or what price do we pay for this 
better indexing speed? We're trying to establish whether this is a valid 
way to get better indexing performance.

Any help will be appreciated.




[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-04-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820461#comment-16820461
 ] 

Jan Høydahl commented on SOLR-12120:


I'll try to beast it some more. There may be a too-small timeout in the 
test, or something similar.

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.
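One possible shape for such a plugin point, sketched with hypothetical names that follow the ticket's examples (this is not Solr's final API; EventType values, AuditEvent fields, and class names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical plugin point: implementations receive AuditEvents emitted by
// the dispatch filter, independent of the authentication/authorization plugins.
abstract class AuditLoggerPlugin {
    enum EventType { AUTHENTICATED, REJECTED, ERROR }

    static class AuditEvent {
        final EventType type;
        final String resource;
        AuditEvent(EventType type, String resource) {
            this.type = type;
            this.resource = resource;
        }
    }

    abstract void audit(AuditEvent event);
}

// Trivial implementation collecting events in memory, standing in for the
// "x.y.MyAuditLogger" named in the security.json example above.
class InMemoryAuditLogger extends AuditLoggerPlugin {
    final List<AuditEvent> events = new ArrayList<>();
    @Override void audit(AuditEvent event) { events.add(event); }
}

public class AuditDemo {
    public static void main(String[] args) {
        InMemoryAuditLogger audit = new InMemoryAuditLogger();
        // ...as the dispatch filter would do at an important point:
        audit.audit(new AuditLoggerPlugin.AuditEvent(
                AuditLoggerPlugin.EventType.AUTHENTICATED, "/select"));
        System.out.println(audit.events.size()); // prints 1
    }
}
```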






[jira] [Commented] (SOLR-13409) Remove directory listings in Jetty config

2019-04-17 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820455#comment-16820455
 ] 

Jan Høydahl commented on SOLR-13409:


Let's consider 7.7.2 as well.

> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration the directory listings are enabled, 
> although not used in the admin interface. For security reasons this should be 
> disabled.
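For reference, disabling listings in Jetty is typically a one-line change to the DefaultServlet configuration; a sketch against a stock webdefault.xml (the exact file Solr ships may differ):

```xml
<servlet>
  <servlet-name>default</servlet-name>
  <servlet-class>org.eclipse.jetty.servlet.DefaultServlet</servlet-class>
  <init-param>
    <!-- Jetty's directory-listing switch; defaults to true in stock configs -->
    <param-name>dirAllowed</param-name>
    <param-value>false</param-value>
  </init-param>
</servlet>
```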






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5098 - Failure!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5098/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Could not load collection from ZK: multiunload2

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
multiunload2
at 
__randomizedtesting.SeedInfo.seed([585A050363045FFE:D00E3AD9CDF83206]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1371)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:748)
at 
org.apache.solr.common.cloud.ClusterState$CollectionRef.get(ClusterState.java:386)
at 
org.apache.solr.common.cloud.ZkStateReader.forceUpdateCollection(ZkStateReader.java:400)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testStopAndStartCoresInOneInstance(BasicDistributedZkTest.java:624)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:427)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Commented] (LUCENE-8765) OneDimensionBKDWriter valueCount validation didn't include leafCount

2019-04-17 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820431#comment-16820431
 ] 

Lucene/Solr QA commented on LUCENE-8765:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m  
8s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8765 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966252/0001-Fix-OneDimensionBKDWriter-valueCount-validation-v2.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 3a6f2f7 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
| Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/182/testReport/ |
| modules | C: lucene/core U: lucene/core |
| Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/182/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> OneDimensionBKDWriter valueCount validation didn't include leafCount
> 
>
> Key: LUCENE-8765
> URL: https://issues.apache.org/jira/browse/LUCENE-8765
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 7.5, master (9.0)
>Reporter: ZhaoYang
>Priority: Minor
> Attachments: 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation-v2.patch, 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
>
>
> {{[OneDimensionBKDWriter#add|https://github.com/jasonstack/lucene-solr/blob/branch_7_5/lucene/core/src/java/org/apache/lucene/util/bkd/BKDWriter.java#L612]}}
>  checks whether {{valueCount}} exceeds the predefined {{totalPointCount}}, but 
> {{valueCount}} is only updated once every 1024 ({{DEFAULT_MAX_POINTS_IN_LEAF_NODE}}) points.
> We should include {{leafCount}} in the validation.
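As a rough illustration of the gap described above, here is a hypothetical, simplified model (none of these names come from the actual BKDWriter code): a writer that buffers points in a leaf and only folds them into valueCount when the leaf flushes at 1024 points, so a limit check on valueCount alone lags by up to 1023 buffered points.

```java
public class BkdCountSketch {
    static final int MAX_POINTS_IN_LEAF = 1024;

    long valueCount = 0;   // updated only when a leaf flushes
    int leafCount = 0;     // points buffered in the current leaf
    final long totalPointCount;

    BkdCountSketch(long totalPointCount) {
        this.totalPointCount = totalPointCount;
    }

    /** Buggy check: ignores points still buffered in the current leaf. */
    boolean exceedsLimitBuggy() {
        return valueCount > totalPointCount;
    }

    /** Fixed check: counts buffered points too. */
    boolean exceedsLimitFixed() {
        return valueCount + leafCount > totalPointCount;
    }

    void add() {
        leafCount++;
        if (leafCount == MAX_POINTS_IN_LEAF) { // leaf is full: flush it
            valueCount += leafCount;
            leafCount = 0;
        }
    }

    public static void main(String[] args) {
        BkdCountSketch w = new BkdCountSketch(10); // promised at most 10 points
        for (int i = 0; i < 11; i++) {
            w.add(); // the 11th point exceeds the promise
        }
        System.out.println("buggy detects overflow: " + w.exceedsLimitBuggy());
        System.out.println("fixed detects overflow: " + w.exceedsLimitFixed());
    }
}
```

With 11 points and a limit of 10, no leaf has flushed yet, so the buggy check never fires, while the check that includes leafCount catches the overflow immediately.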



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13348) CollapsingQParserPlugin should not run scorer for documents not eligible for collapsing

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-13348:
---

Assignee: Ishan Chattopadhyaya

> CollapsingQParserPlugin should not run scorer for documents not eligible for 
> collapsing
> ---
>
> Key: SOLR-13348
> URL: https://issues.apache.org/jira/browse/SOLR-13348
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.0, master (9.0)
>Reporter: Andrzej Wislowski
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13348.patch
>
>
> CollapsingQParserPlugin should not run the scorer for documents that are not 
> eligible for collapsing, in cases where the score is not needed for the 
> collapsing operation itself but only for result sorting, decoration of 
> result fields, or boosting.
>  
> Performance improvement example: 2,000,000 documents collapsed by a sort 
> (without score) down to 130,000 and then sorted by score; query time 
> improved from 4300 ms to 2700 ms.
>  
> I am attaching a patch against the master branch.
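A sketch of the idea with entirely hypothetical names (the real patch works inside CollapsingQParserPlugin's collectors): request a score lazily, only for documents that win their collapse group, so documents that lose the collapse never touch the scorer at all.

```java
import java.util.HashMap;
import java.util.Map;

public class CollapseSketch {
    interface Scorer { float score(int doc); }

    static int scoreCalls = 0;

    /**
     * Collapse by group key, keeping the doc with the smallest sort value.
     * Scores are requested only for current group winners.
     */
    static Map<String, Integer> collapse(int[] docs, String[] keys, int[] sortVals,
                                         Scorer scorer, boolean needScores) {
        Map<String, Integer> winners = new HashMap<>();
        Map<String, Integer> bestSort = new HashMap<>();
        for (int i = 0; i < docs.length; i++) {
            Integer prev = bestSort.get(keys[i]);
            if (prev == null || sortVals[i] < prev) {
                bestSort.put(keys[i], sortVals[i]);
                winners.put(keys[i], docs[i]);
                if (needScores) {
                    scorer.score(docs[i]); // score only the group winner
                }
            }
            // Losing docs are never scored.
        }
        return winners;
    }

    public static void main(String[] args) {
        Scorer counting = doc -> { scoreCalls++; return 1.0f; };
        int[] docs = {0, 1, 2, 3};
        String[] keys = {"a", "a", "b", "b"};
        int[] sortVals = {5, 3, 7, 9};
        Map<String, Integer> result = collapse(docs, keys, sortVals, counting, true);
        System.out.println("winners=" + result + " scoreCalls=" + scoreCalls);
    }
}
```

In this toy run doc 3 loses its group and is never scored (3 scorer calls instead of 4); on a real index where 2,000,000 candidates collapse to 130,000, most of the scoring work disappears.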



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] danmuzi opened a new pull request #650: LUCENE-8768: Javadoc search support

2019-04-17 Thread GitBox
danmuzi opened a new pull request #650: LUCENE-8768: Javadoc search support
URL: https://github.com/apache/lucene-solr/pull/650
 
 
   This is a PR to support **"Javadoc search"** released in Java 9.
   
   **[Before - Lucene Nightly Core Module Javadoc]**
   
![javadoc-nightly](https://user-images.githubusercontent.com/14330832/56313117-c7fbe280-618c-11e9-8c47-58341959919a.png)
   
   **[After]**
   
![new-javadoc](https://user-images.githubusercontent.com/14330832/56313118-c7fbe280-618c-11e9-8305-23caaf4da2f3.png)
   
   For more information, please refer to the following JIRA link.
   (https://issues.apache.org/jira/browse/LUCENE-8768)
   
   Signed-off-by: Namgyu Kim 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1310 - Failure

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1310/

No tests ran.

Build Log:
[...truncated 23468 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2526 links (2067 relative) to 3355 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[jira] [Updated] (LUCENE-8768) Javadoc search support

2019-04-17 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8768:
---
Description: 
Javadoc search is a new feature since Java 9.
 ([https://openjdk.java.net/jeps/225])

Since the current Lucene Java version is 11, I think there is no reason not 
to use it.

It can be a great help to developers looking at API documentation.

(Elasticsearch also supports it now!
 
[https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])

 

■ Before (Lucene Nightly Core Module Javadoc)

!javadoc-nightly.png!

■ After 

*!new-javadoc.png!*

 

I'll change two lines for this.

1) change Javadoc's noindex option from true to false.
{code:java}
// common-build.xml line 182
{code}
2) add javadoc argument "--no-module-directories"
{code:java}
// common-build.xml line 2100

{code}
Currently there is an issue like the following link in JDK 11, so we need 
"--no-module-directories" option.
 ([https://bugs.openjdk.java.net/browse/JDK-8215291])

 

■ How to test

I ran +"ant javadocs-modules"+ on the lucene project and checked the Javadoc.

 

 

  was:
Javadoc search is a new feature since Java 9.
 ([https://openjdk.java.net/jeps/225])

I think there is no reason not to use it if the current Lucene Java version is 
11.

It can be a great help to developers looking at API documentation.

(The elastic search also supports it now!
 
[https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])

 

■ Before (Lucene Nightly Core Module Javadoc)

!javadoc-nightly.png!

■ After 

*!new-javadoc.png!*

 

I'll change two lines for this.

1) change Javadoc's noindex option from true to false.
{code:java}
// common-build.xml line 187
{code}
2) add javadoc argument "--no-module-directories"
{code:java}
// common-build.xml line 2283

{code}
Currently there is an issue like the following link in JDK 11, so we need 
"--no-module-directories" option.
 ([https://bugs.openjdk.java.net/browse/JDK-8215291])

 

■ How to test

I did +"ant javadocs-modules"+ on lucene project and check Javadoc.

 

 


> Javadoc search support
> --
>
> Key: LUCENE-8768
> URL: https://issues.apache.org/jira/browse/LUCENE-8768
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Namgyu Kim
>Priority: Major
> Attachments: javadoc-nightly.png, new-javadoc.png
>
>
> Javadoc search is a new feature since Java 9.
>  ([https://openjdk.java.net/jeps/225])
> I think there is no reason not to use it if the current Lucene Java version 
> is 11.
> It can be a great help to developers looking at API documentation.
> (The elastic search also supports it now!
>  
> [https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])
>  
> ■ Before (Lucene Nightly Core Module Javadoc)
> !javadoc-nightly.png!
> ■ After 
> *!new-javadoc.png!*
>  
> I'll change two lines for this.
> 1) change Javadoc's noindex option from true to false.
> {code:java}
> // common-build.xml line 182
> {code}
> 2) add javadoc argument "--no-module-directories"
> {code:java}
> // common-build.xml line 2100
>  overview="@{overview}"
> additionalparam="--no-module-directories" // NEW CODE
> packagenames="org.apache.lucene.*,org.apache.solr.*"
> ...
> maxmemory="${javadoc.maxmemory}">
> {code}
> Currently there is an issue like the following link in JDK 11, so we need 
> "--no-module-directories" option.
>  ([https://bugs.openjdk.java.net/browse/JDK-8215291])
>  
> ■ How to test
> I did +"ant javadocs-modules"+ on lucene project and check Javadoc.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8768) Javadoc search support

2019-04-17 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8768:
---
Description: 
Javadoc search is a new feature since Java 9.
 ([https://openjdk.java.net/jeps/225])

Since the current Lucene Java version is 11, I think there is no reason not 
to use it.

It can be a great help to developers looking at API documentation.

(Elasticsearch also supports it now!
 
[https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])

 

■ Before (Lucene Nightly Core Module Javadoc)

!javadoc-nightly.png!

■ After 

*!new-javadoc.png!*

 

I'll change two lines for this.

1) change Javadoc's noindex option from true to false.
{code:java}
// common-build.xml line 187
{code}
2) add javadoc argument "--no-module-directories"
{code:java}
// common-build.xml line 2283

{code}
Currently there is an issue like the following link in JDK 11, so we need 
"--no-module-directories" option.
 ([https://bugs.openjdk.java.net/browse/JDK-8215291])

 

■ How to test

I ran +"ant javadocs-modules"+ on the lucene project and checked the Javadoc.

 

 

  was:
Javadoc search is a new feature since Java 9.
([https://openjdk.java.net/jeps/225])

I think there is no reason not to use it if the current Lucene Java version is 
11.

It can be a great help to developers looking at API documentation.

(The elastic search also supports it now!
[https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])

 

*- Before (Lucene Nightly Core Module Javadoc) -*

!javadoc-nightly.png!

*- After -*

*!new-javadoc.png!*

 

I'll change two lines for this.

1) change Javadoc's noindex option from true to false.

 
{code:java}
// common-build.xml line 187
{code}
 

2) add javadoc argument "--no-module-directories"

 
{code:java}
// common-build.xml line 2283

{code}
Currently there is an issue like the following link in JDK 11, so we need 
"--no-module-directories" option.
([https://bugs.openjdk.java.net/browse/JDK-8215291])

 

*- How to test -*

I did +"ant javadocs-modules"+ on lucene project and check Javadoc.

 

 


> Javadoc search support
> --
>
> Key: LUCENE-8768
> URL: https://issues.apache.org/jira/browse/LUCENE-8768
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Namgyu Kim
>Priority: Major
> Attachments: javadoc-nightly.png, new-javadoc.png
>
>
> Javadoc search is a new feature since Java 9.
>  ([https://openjdk.java.net/jeps/225])
> I think there is no reason not to use it if the current Lucene Java version 
> is 11.
> It can be a great help to developers looking at API documentation.
> (The elastic search also supports it now!
>  
> [https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])
>  
> ■ Before (Lucene Nightly Core Module Javadoc)
> !javadoc-nightly.png!
> ■ After 
> *!new-javadoc.png!*
>  
> I'll change two lines for this.
> 1) change Javadoc's noindex option from true to false.
> {code:java}
> // common-build.xml line 187
> {code}
> 2) add javadoc argument "--no-module-directories"
> {code:java}
> // common-build.xml line 2283
>  overview="@{overview}"
> additionalparam="--no-module-directories" // NEW CODE
> packagenames="org.apache.lucene.*,org.apache.solr.*"
> ...
> maxmemory="${javadoc.maxmemory}">
> {code}
> Currently there is an issue like the following link in JDK 11, so we need 
> "--no-module-directories" option.
>  ([https://bugs.openjdk.java.net/browse/JDK-8215291])
>  
> ■ How to test
> I did +"ant javadocs-modules"+ on lucene project and check Javadoc.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8768) Javadoc search support

2019-04-17 Thread Namgyu Kim (JIRA)
Namgyu Kim created LUCENE-8768:
--

 Summary: Javadoc search support
 Key: LUCENE-8768
 URL: https://issues.apache.org/jira/browse/LUCENE-8768
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Namgyu Kim
 Attachments: javadoc-nightly.png, new-javadoc.png

Javadoc search is a new feature since Java 9.
([https://openjdk.java.net/jeps/225])

Since the current Lucene Java version is 11, I think there is no reason not 
to use it.

It can be a great help to developers looking at API documentation.

(Elasticsearch also supports it now!
[https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client/7.0.0/org/elasticsearch/client/package-summary.html])

 

*- Before (Lucene Nightly Core Module Javadoc) -*

!javadoc-nightly.png!

*- After -*

*!new-javadoc.png!*

 

I'll change two lines for this.

1) change Javadoc's noindex option from true to false.

 
{code:java}
// common-build.xml line 187
{code}
 

2) add javadoc argument "--no-module-directories"

 
{code:java}
// common-build.xml line 2283

{code}
Currently there is an issue like the following link in JDK 11, so we need 
"--no-module-directories" option.
([https://bugs.openjdk.java.net/browse/JDK-8215291])

 

*- How to test -*

I ran +"ant javadocs-modules"+ on the lucene project and checked the Javadoc.

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13408) Cannot start/stop DaemonStream repeatedly

2019-04-17 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820326#comment-16820326
 ] 

Erick Erickson commented on SOLR-13408:
---

New patch. I haven't run precommit or the full test suite yet; I want to beast 
the tests I modified first.

This patch isn't nearly as much of a code change as its size would indicate; 
much of it is tests. I did rearrange some code in StreamHandler.

[~joel.bernstein] [~krisden] [~dpgove] any comments you'd care to make are 
welcome, of course.

I'll commit this in a day or two assuming all goes well.

> Cannot start/stop DaemonStream repeatedly
> -
>
> Key: SOLR-13408
> URL: https://issues.apache.org/jira/browse/SOLR-13408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13408.patch, SOLR-13408.patch
>
>
> If I create a DaemonStream then use the API commands to stop it then start it 
> repeatedly, after the first time it's stopped/started, it cannot be stopped 
> again.
> DaemonStream.close() checks whether a local variable "closed" is true, and if 
> so does nothing. Otherwise it closes the stream then sets "closed" to true.
> However, when the stream is started again, "closed" is not set back to false, 
> so the next time you try to stop the daemon, nothing happens and it 
> continues to run. One other consequence of this is that you can have orphan 
> threads running in the background. Say I
> {code:java}
> stop the daemon
> start it again
> create another one with the same ID
> {code}
> When the new one is created, this code is executed over in 
> StreamHandler.handleRequestBody:
> {code:java}
> daemons.remove(daemonStream.getId()).close();
> {code}
> which will not terminate the stream thread as above. Then the open() method 
> executes this:
> {code:java}
> this.streamRunner = new StreamRunner(runInterval, id);
> {code}
> leaving the thread running.
> Finally, there's an NPE if I try to start a non-existent daemon.
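The flag logic described above reduces to a few lines (a hypothetical sketch, not the actual DaemonStream code); the fix is simply to reset "closed" whenever the stream is (re)opened.

```java
public class DaemonSketch {
    private boolean closed = false;
    private boolean running = false;

    public void open() {
        running = true;
        closed = false; // the fix: reset the flag when the daemon restarts
    }

    public void close() {
        if (closed) {
            return; // without the reset in open(), we'd return here forever
        }
        running = false;
        closed = true;
    }

    public boolean isRunning() { return running; }

    public static void main(String[] args) {
        DaemonSketch d = new DaemonSketch();
        d.open();
        d.close();
        d.open();   // restart
        d.close();  // with the reset in open(), this actually stops it again
        System.out.println("running after second stop: " + d.isRunning());
    }
}
```

Without the `closed = false` line in open(), the second close() would hit the early return, leaving the daemon (and its thread, in the real code) running.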



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13408) Cannot start/stop DaemonStream repeatedly

2019-04-17 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-13408:
--
Attachment: SOLR-13408.patch

> Cannot start/stop DaemonStream repeatedly
> -
>
> Key: SOLR-13408
> URL: https://issues.apache.org/jira/browse/SOLR-13408
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.7, 8.0, master (9.0)
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13408.patch, SOLR-13408.patch
>
>
> If I create a DaemonStream then use the API commands to stop it then start it 
> repeatedly, after the first time it's stopped/started, it cannot be stopped 
> again.
> DaemonStream.close() checks whether a local variable "closed" is true, and if 
> so does nothing. Otherwise it closes the stream then sets "closed" to true.
> However, when the stream is started again, "closed" is not set back to false, 
> so the next time you try to stop the daemon, nothing happens and it 
> continues to run. One other consequence of this is that you can have orphan 
> threads running in the background. Say I
> {code:java}
> stop the daemon
> start it again
> create another one with the same ID
> {code}
> When the new one is created, this code is executed over in 
> StreamHandler.handleRequestBody:
> {code:java}
> daemons.remove(daemonStream.getId()).close();
> {code}
> which will not terminate the stream thread as above. Then the open() method 
> executes this:
> {code:java}
> this.streamRunner = new StreamRunner(runInterval, id);
> {code}
> leaving the thread running.
> Finally, there's an NPE if I try to start a non-existent daemon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-12120) New plugin type AuditLoggerPlugin

2019-04-17 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-12120:
-

Jan: even after your latest fixes, 
AuditLoggerIntegrationTest.testAsyncQueueDrain is still failing in 10-15% of 
its jenkins runs... most seem to be on master (but I'm not sure if that's 
just because we have more jenkins master jobs).

(Note: AuditLoggerIntegrationTest.testAsync is also failing occasionally, but 
at a much lower rate.)

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.
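A minimal sketch of the plugin shape proposed above (all names here are illustrative assumptions based on the snippet in the description, not the final Solr API):

```java
import java.util.ArrayList;
import java.util.List;

public class AuditSketch {
    enum EventType { AUTHENTICATED, REJECTED }

    /** Hypothetical event object, mirroring SolrAuditEvent in the proposal. */
    static class AuditEvent {
        final EventType type;
        final String request;
        AuditEvent(EventType type, String request) {
            this.type = type;
            this.request = request;
        }
    }

    /** Hypothetical plugin point, independent of authn/authz plugins. */
    interface AuditLoggerPlugin {
        void audit(AuditEvent event);
    }

    /** Trivial implementation that collects events in memory. */
    static class MemoryAuditLogger implements AuditLoggerPlugin {
        final List<String> log = new ArrayList<>();
        @Override public void audit(AuditEvent e) {
            log.add(e.type + " " + e.request);
        }
    }

    public static void main(String[] args) {
        MemoryAuditLogger plugin = new MemoryAuditLogger();
        // Instrumentation point, as sketched in the issue description:
        plugin.audit(new AuditEvent(EventType.AUTHENTICATED, "/select?q=*:*"));
        System.out.println(plugin.log);
    }
}
```

The point of the design is the narrow interface: the dispatch filter only needs a single audit(event) call, so any implementation (file, syslog, queue-draining async logger) can be swapped in via security.json.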



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Metrics 4 upgrade and Ganglia reporter removal

2019-04-17 Thread Andrzej Białecki
Hi all,

I’d like to draw your attention to SOLR-12461: Upgrade Dropwizard Metrics to 
4.0.5 release.

The upgrade would've been straightforward if not for the fact that Dropwizard 
removed support for the Ganglia reporter in 4.x due to a transitive LGPL 
dependency (on remotetea).

We have to upgrade this library on master - version 3.x is not compatible with 
Java 11 (due to the internal use of sun.misc.Unsafe), so the Ganglia reporter 
on master is a goner either way. The question remains what to do about 8x.

We could stick to Metrics 3 on branch 8x in order to preserve back-compat, but 
Metrics 4 contains many important bug fixes and improvements so it’s a shame to 
have to keep using Metrics 3 for the complete lifecycle of 8x... I’m rather 
inclined to break the back-compat (ever so slightly ;) ) for the greater good - 
do the upgrade and remove Ganglia in 8x, and put a note in CHANGES to that 
effect.

What do people think about this?

—

Andrzej Białecki


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Possible blacklisting of Lucidworks network IPs

2019-04-17 Thread Robert Muir
Hi,

I believe you want to contact infrastruct...@apache.org about this. Such 
blacklisting usually happens for a reason, so it's ideal to fix the root 
cause of whatever problem caused the blacklisting in the first place, so 
that it does not happen again.

On Wed, Apr 17, 2019, 12:41 PM Eric Auensen wrote:

> Hi,
> I work for Lucidworks and users in our San Francisco office have not been
> able to access the http://lucene.apache.org/ website since ~1:45 PM PDT
> on 4/16/19. This works from other networks. A trace route yields the
> following:
>
> traceroute to lucene.apache.org (95.216.24.32), 30 hops max, 46 byte packets
>  1  64-79-115-129.static.wiline.com (64.79.115.129)  1.366 ms  3.106 ms  
> 2.108 ms
>  2  10.20.218.18 (10.20.218.18)  2.757 ms  2.117 ms  2.228 ms
>  3  10.20.218.129 (10.20.218.129)  1.124 ms  1.283 ms  1.093 ms
>  4  10.20.218.146 (10.20.218.146)  1.082 ms  1.148 ms  1.117 ms
>  5  10.1.3.2 (10.1.3.2)  1.695 ms  1.277 ms  1.113 ms
>  6  10.21.77.26 (10.21.77.26)  1.971 ms  1.665 ms  2.249 ms
>  7  10.17.119.4 (10.17.119.4)  1.835 ms  1.647 ms  2.400 ms
>  8  172.16.100.21 (172.16.100.21)  1.553 ms  1.694 ms  1.620 ms
>  9  xe-3-2-1.mpr3.sfo7.us.above.net (208.184.37.89)  1.480 ms  1.516 ms  
> 1.105 ms
> 10  ae7.cr1.sjc2.us.zip.zayo.com (64.125.30.224)  2.868 ms  3.034 ms  3.724 ms
> 11  ae27.cs1.sjc2.us.eth.zayo.com (64.125.30.230)  3.963 ms  28.723 ms  3.503 
> ms
> 12  ae9.mpr1.pao1.us.zip.zayo.com (64.125.27.189)  3.012 ms  3.153 ms  3.238 
> ms
> 13  palo-b1-link.telia.net (62.115.48.57)  2.979 ms  2.885 ms  2.861 ms
> 14  nyk-bb3-link.telia.net (62.115.114.4)  181.193 ms  nyk-bb4-link.telia.net 
> (62.115.122.37)  178.786 ms  nyk-bb3-link.telia.net (62.115.114.4)  174.646 ms
> 15  kbn-bb4-link.telia.net (80.91.254.90)  184.178 ms  kbn-bb3-link.telia.net 
> (213.155.134.51)  177.935 ms  kbn-bb4-link.telia.net (80.91.254.90)  178.485 
> ms
> 16  s-bb4-link.telia.net (62.115.139.172)  177.475 ms  s-bb3-link.telia.net 
> (62.115.139.168)  174.505 ms  177.946 ms
> 17  hls-b1-link.telia.net (80.91.246.85)  178.658 ms  174.183 ms  
> hls-b1-link.telia.net (62.115.123.31)  183.096 ms
> 18  hetzner-ic-326014-hls-b1.c.telia.net (213.248.66.77)  196.690 ms  174.061 
> ms  169.396 ms
> 19  core31.hel1.hetzner.com (213.239.224.38)  167.001 ms  
> core32.hel1.hetzner.com (213.239.224.26)  166.702 ms  169.973 ms
> 20  ex9k1.dc2.hel1.hetzner.com (213.239.224.134)  167.445 ms  
> ex9k1.dc2.hel1.hetzner.com (213.239.224.138)  170.322 ms  
> ex9k1.dc2.hel1.hetzner.com (213.239.224.134)  166.927 ms
> 21  *  *
>
> It's possible that you may have blacklisted our network IP range
> (64.79.115.130 - 64.79.115.134). Would you please check to see if we are
> blacklisted and then whitelist this range?
>
> Thanks,
> Eric
> *Eric Auensen, Senior IT Specialist*
> For IT support: https://servicedesk.lucidworks.com
> Lucidworks, Inc. 
> e: eric.auen...@lucidworks.com
> p: 415.329.6424
>


Re: Possible blacklisting of Lucidworks network IPs

2019-04-17 Thread Cassandra Targett
Since this is from one of my colleagues at Lucidworks, I’ll talk to INFRA about 
this. Sorry for the noise to the list.

Cassandra
On Apr 17, 2019, 11:41 AM -0500, Eric Auensen wrote:
> Hi,
> I work for Lucidworks and users in our San Francisco office have not been 
> able to access the http://lucene.apache.org/ website since ~1:45 PM PDT on 
> 4/16/19. This works from other networks. A trace route yields the following:
> traceroute to lucene.apache.org (95.216.24.32), 30 hops max, 46 byte packets


[jira] [Updated] (SOLR-12461) Upgrade Dropwizard Metrics to 4.0.5 release

2019-04-17 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12461:
-
Summary: Upgrade Dropwizard Metrics to 4.0.5 release  (was: Upgrade 
Dropwizard Metrics to 4.0.2 release)

> Upgrade Dropwizard Metrics to 4.0.5 release
> ---
>
> Key: SOLR-12461
> URL: https://issues.apache.org/jira/browse/SOLR-12461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12461.patch
>
>
> This version of the library contains several improvements and it's compatible 
> with Java 9. 
> However, starting from 4.0.0 metrics-ganglia is no longer available, which 
> means that if we upgrade we will have to remove the corresponding 
> {{SolrGangliaReporter}}.
> Such change is not back-compatible, so I see the following options:
> * wait with the upgrade until 8.0
> * upgrade and remove {{SolrGangliaReporter}} and describe this in the release 
> notes.
> Any other suggestions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Possible blacklisting of Lucidworks network IPs

2019-04-17 Thread Eric Auensen
Hi,
I work for Lucidworks, and users in our San Francisco office have not been
able to access the http://lucene.apache.org/ website since ~1:45 PM PDT on
4/16/19. The site is reachable from other networks. A traceroute yields the following:

traceroute to lucene.apache.org (95.216.24.32), 30 hops max, 46 byte packets
 1  64-79-115-129.static.wiline.com (64.79.115.129)  1.366 ms  3.106 ms  2.108 ms
 2  10.20.218.18 (10.20.218.18)  2.757 ms  2.117 ms  2.228 ms
 3  10.20.218.129 (10.20.218.129)  1.124 ms  1.283 ms  1.093 ms
 4  10.20.218.146 (10.20.218.146)  1.082 ms  1.148 ms  1.117 ms
 5  10.1.3.2 (10.1.3.2)  1.695 ms  1.277 ms  1.113 ms
 6  10.21.77.26 (10.21.77.26)  1.971 ms  1.665 ms  2.249 ms
 7  10.17.119.4 (10.17.119.4)  1.835 ms  1.647 ms  2.400 ms
 8  172.16.100.21 (172.16.100.21)  1.553 ms  1.694 ms  1.620 ms
 9  xe-3-2-1.mpr3.sfo7.us.above.net (208.184.37.89)  1.480 ms  1.516 ms  1.105 ms
10  ae7.cr1.sjc2.us.zip.zayo.com (64.125.30.224)  2.868 ms  3.034 ms  3.724 ms
11  ae27.cs1.sjc2.us.eth.zayo.com (64.125.30.230)  3.963 ms  28.723 ms  3.503 ms
12  ae9.mpr1.pao1.us.zip.zayo.com (64.125.27.189)  3.012 ms  3.153 ms  3.238 ms
13  palo-b1-link.telia.net (62.115.48.57)  2.979 ms  2.885 ms  2.861 ms
14  nyk-bb3-link.telia.net (62.115.114.4)  181.193 ms  nyk-bb4-link.telia.net (62.115.122.37)  178.786 ms  nyk-bb3-link.telia.net (62.115.114.4)  174.646 ms
15  kbn-bb4-link.telia.net (80.91.254.90)  184.178 ms  kbn-bb3-link.telia.net (213.155.134.51)  177.935 ms  kbn-bb4-link.telia.net (80.91.254.90)  178.485 ms
16  s-bb4-link.telia.net (62.115.139.172)  177.475 ms  s-bb3-link.telia.net (62.115.139.168)  174.505 ms  177.946 ms
17  hls-b1-link.telia.net (80.91.246.85)  178.658 ms  174.183 ms  hls-b1-link.telia.net (62.115.123.31)  183.096 ms
18  hetzner-ic-326014-hls-b1.c.telia.net (213.248.66.77)  196.690 ms  174.061 ms  169.396 ms
19  core31.hel1.hetzner.com (213.239.224.38)  167.001 ms  core32.hel1.hetzner.com (213.239.224.26)  166.702 ms  169.973 ms
20  ex9k1.dc2.hel1.hetzner.com (213.239.224.134)  167.445 ms  ex9k1.dc2.hel1.hetzner.com (213.239.224.138)  170.322 ms  ex9k1.dc2.hel1.hetzner.com (213.239.224.134)  166.927 ms
21  *  *

It's possible that our network IP range (64.79.115.130 - 64.79.115.134) has
been blacklisted. Would you please check whether we are blacklisted and, if
so, whitelist this range?

Thanks,
Eric
Eric Auensen, Senior IT Specialist
For IT support: https://servicedesk.lucidworks.com
Lucidworks, Inc. 
e: eric.auen...@lucidworks.com
p: 415.329.6424


[JENKINS] Solr-reference-guide-8.x - Build # 2294 - Failure

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-8.x/2294/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites svn-websites) in workspace /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-8.x
FATAL: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from 37.48.69.226/37.48.69.226:52020
        at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
        at hudson.remoting.Request.call(Request.java:202)
        at hudson.remoting.Channel.call(Channel.java:954)
        at hudson.FilePath.act(FilePath.java:1072)
        at hudson.FilePath.act(FilePath.java:1061)
        at org.jenkinsci.plugins.gitclient.Git.getClient(Git.java:137)
        at hudson.plugins.git.GitSCM.createClient(GitSCM.java:822)
        at hudson.plugins.git.GitSCM.createClient(GitSCM.java:813)
        at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
        at hudson.scm.SCM.checkout(SCM.java:504)
        at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
        at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
        at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
        at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
        at hudson.model.Run.execute(Run.java:1810)
        at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
        at hudson.model.ResourceController.execute(ResourceController.java:97)
        at hudson.model.Executor.run(Executor.java:429)
Caused: hudson.remoting.RequestAbortedException
        at hudson.remoting.Request.abort(Request.java:340)
        at hudson.remoting.Channel.terminate(Channel.java:1038)
        at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onReadClosed(ChannelApplicationLayer.java:209)
        at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecvClosed(ApplicationLayer.java:222)
        at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
        at org.jenkinsci.remoting.protocol.FilterLayer.onRecvClosed(FilterLayer.java:287)
        at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecvClosed(SSLEngineFilterLayer.java:172)
        at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecvClosed(ProtocolStack.java:816)
        at org.jenkinsci.remoting.protocol.NetworkLayer.onRecvClosed(NetworkLayer.java:154)
        at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:142)
        at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
        at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
        at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for Solr-reference-guide-8.x #2294
ERROR: Step ‘Publish Javadoc’ failed: no workspace for Solr-reference-guide-8.x #2294
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)
ERROR: websites1 is offline; cannot locate JDK 1.8 (latest)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-13348) CollapsingQParserPlugin should not run scorer for documents not eligible for collapsing

2019-04-17 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820244#comment-16820244
 ] 

Lucene/Solr QA commented on SOLR-13348:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green}  2m  2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 46m  1s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  9s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13348 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12963735/SOLR-13348.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 3a6f2f7 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | LTS |
|  Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/374/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/374/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> CollapsingQParserPlugin should not run scorer for documents not eligible for 
> collapsing
> ---
>
> Key: SOLR-13348
> URL: https://issues.apache.org/jira/browse/SOLR-13348
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 8.0, master (9.0)
>Reporter: Andrzej Wislowski
>Priority: Major
> Attachments: SOLR-13348.patch
>
>
> CollapsingQParserPlugin should not run scorer for documents not eligible for 
> collapsing (in cases when score is not needed for collapsing operation) but 
> only for the result sorting, decoration of result fields or boosting.
>  
> Performance improvement example:
> 2_000_000 documents collapsed by sort without score to 130_000 then sorted by 
> score improved from 4300ms to 2700ms
>  
> I am attaching patch on master branch. 
>  
>  
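The optimization described in the issue can be illustrated with a toy collapse. This is a hedged sketch under simplified assumptions, not the plugin's actual code: the class and method names below are hypothetical, and the point is only that when collapsing is driven by a sort value, the (expensive) scorer needs to run once per surviving group head rather than once per document.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of "don't score documents that won't survive collapsing".
public class LazyCollapseSketch {
    static int scorerCalls = 0;

    // Stand-in for an expensive Scorer.score() call.
    static float score(int docId) {
        scorerCalls++;
        return 1.0f / (docId + 1);
    }

    /**
     * Collapse docs by group key, keeping the doc with the highest sortValue
     * in each group, and only score the survivors.
     */
    static Map<Integer, Float> collapseThenScore(int[] docs, int[] groupKeys, long[] sortValues) {
        Map<Integer, Integer> best = new HashMap<>(); // groupKey -> index of best doc
        for (int i = 0; i < docs.length; i++) {
            Integer cur = best.get(groupKeys[i]);
            if (cur == null || sortValues[i] > sortValues[cur]) {
                best.put(groupKeys[i], i);
            }
        }
        Map<Integer, Float> result = new HashMap<>(); // docId -> score
        for (int i : best.values()) {
            result.put(docs[i], score(docs[i]));      // scorer runs per group, not per doc
        }
        return result;
    }
}
```

With six documents in two groups, the scorer runs twice instead of six times, which is the shape of the 4300 ms to 2700 ms improvement the reporter measured.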



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12461) Upgrade Dropwizard Metrics to 4.0.2 release

2019-04-17 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820216#comment-16820216
 ] 

Andrzej Bialecki  commented on SOLR-12461:
--

The upgrade is required for master because metrics-3x is not compatible with 
Java 11. For 8x the upgrade is not strictly required but version 4x contains 
many important bugfixes and optimizations.

> Upgrade Dropwizard Metrics to 4.0.2 release
> ---
>
> Key: SOLR-12461
> URL: https://issues.apache.org/jira/browse/SOLR-12461
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-12461.patch
>
>
> This version of the library contains several improvements and it's compatible 
> with Java 9. 
> However, starting from 4.0.0 metrics-ganglia is no longer available, which 
> means that if we upgrade we will have to remove the corresponding 
> {{SolrGangliaReporter}}.
> Such change is not back-compatible, so I see the following options:
> * wait with the upgrade until 8.0
> * upgrade and remove {{SolrGangliaReporter}} and describe this in the release 
> notes.
> Any other suggestions?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8765) OneDimensionBKDWriter valueCount validation didn't include leafCount

2019-04-17 Thread ZhaoYang (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820212#comment-16820212
 ] 

ZhaoYang commented on LUCENE-8765:
--

Thanks for the feedback, updated the patch with {{expectThrows}}

> OneDimensionBKDWriter valueCount validation didn't include leafCount
> 
>
> Key: LUCENE-8765
> URL: https://issues.apache.org/jira/browse/LUCENE-8765
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 7.5, master (9.0)
>Reporter: ZhaoYang
>Priority: Minor
> Attachments: 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation-v2.patch, 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
>
>
> {{[OneDimensionBKDWriter#add|https://github.com/jasonstack/lucene-solr/blob/branch_7_5/lucene/core/src/java/org/apache/lucene/util/bkd/BKDWriter.java#L612]}}
>  checks if {{valueCount}} exceeds predefined {{totalPointCount}}, but 
> {{valueCount}} is only updated for every 
> 1024({{DEFAULT_MAX_POINTS_IN_LEAF_NODE}}) points. 
> We should include {{leafCount}} for validation.
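The bug described above can be sketched with toy bookkeeping. This is a hedged illustration only: the names mirror the issue text ({{valueCount}} for points already flushed to full leaves, {{leafCount}} for points buffered in the current partial leaf), but the methods are hypothetical stand-ins, not the real {{BKDWriter}} code.

```java
public class ValueCountCheckSketch {

    // Before the fix: buffered points are invisible to the limit check,
    // so an overflow inside the last partial leaf goes undetected.
    static boolean exceedsLimitOld(long valueCount, int leafCount, long totalPointCount) {
        return valueCount > totalPointCount;
    }

    // After the fix: buffered points count toward the limit as well.
    static boolean exceedsLimitFixed(long valueCount, int leafCount, long totalPointCount) {
        return valueCount + leafCount > totalPointCount;
    }

    public static void main(String[] args) {
        // 1024 points flushed, 600 buffered, declared total of 1500:
        // the old check misses the overflow, the fixed one catches it.
        System.out.println(exceedsLimitOld(1024, 600, 1500));   // false
        System.out.println(exceedsLimitFixed(1024, 600, 1500)); // true
    }
}
```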



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13407) Reject updates sent to non-routed multi collection aliases

2019-04-17 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820214#comment-16820214
 ] 

Andrzej Bialecki  commented on SOLR-13407:
--

Here's a patch that implements this change:
 * aliases need to be checked not only for whether they refer to multiple 
collections but also for whether they are routed or non-routed. Since this 
happens both in {{BaseCloudSolrClient}} and in {{HttpSolrCall}}, the only 
common interface to use was {{ClusterStateProvider}}, so this patch extends it 
to allow access to alias properties.
 * added convenience methods for checking whether an alias is routed.
 * a few other changes to try and minimize object allocations when expanding 
aliases.
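The behavior the patch proposes might look roughly like the following. This is a hedged sketch under stated assumptions: {{isRoutedAlias}} and {{checkUpdateTarget}} are hypothetical helpers (the real patch works through {{ClusterStateProvider}}), and treating any "router.*" property as marking a routed alias is used here only for illustration.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch: reject updates addressed to a non-routed alias
// that expands to more than one collection.
public class AliasUpdateGuard {

    // Assumption for this sketch: routed aliases carry "router.*" properties.
    static boolean isRoutedAlias(Map<String, String> aliasProperties) {
        return aliasProperties.keySet().stream().anyMatch(k -> k.startsWith("router."));
    }

    static void checkUpdateTarget(List<String> collections, Map<String, String> props) {
        if (collections.size() > 1 && !isRoutedAlias(props)) {
            throw new IllegalArgumentException(
                "Update rejected: non-routed alias refers to multiple collections " + collections);
        }
    }
}
```

A single-collection alias, or a routed multi-collection alias, passes the check; a plain multi-collection alias fails fast instead of silently writing to the first collection.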

> Reject updates sent to non-routed multi collection aliases
> --
>
> Key: SOLR-13407
> URL: https://issues.apache.org/jira/browse/SOLR-13407
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13407.patch
>
>
> Spin-off from SOLR-13262.
> Currently Solr uses a convention that updates sent to multi-collection 
> aliases are applied only to the first collection on the list, which is 
> nonintuitive and may hide bugs or accidental configuration changes made 
> either in Solr or in client applications.
> This issue proposes to reject all such updates with an error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8765) OneDimensionBKDWriter valueCount validation didn't include leafCount

2019-04-17 Thread ZhaoYang (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhaoYang updated LUCENE-8765:
-
Attachment: 0001-Fix-OneDimensionBKDWriter-valueCount-validation-v2.patch

> OneDimensionBKDWriter valueCount validation didn't include leafCount
> 
>
> Key: LUCENE-8765
> URL: https://issues.apache.org/jira/browse/LUCENE-8765
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 7.5, master (9.0)
>Reporter: ZhaoYang
>Priority: Minor
> Attachments: 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation-v2.patch, 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
>
>
> {{[OneDimensionBKDWriter#add|https://github.com/jasonstack/lucene-solr/blob/branch_7_5/lucene/core/src/java/org/apache/lucene/util/bkd/BKDWriter.java#L612]}}
>  checks if {{valueCount}} exceeds predefined {{totalPointCount}}, but 
> {{valueCount}} is only updated for every 
> 1024({{DEFAULT_MAX_POINTS_IN_LEAF_NODE}}) points. 
> We should include {{leafCount}} for validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13407) Reject updates sent to non-routed multi collection aliases

2019-04-17 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13407:
-
Attachment: SOLR-13407.patch

> Reject updates sent to non-routed multi collection aliases
> --
>
> Key: SOLR-13407
> URL: https://issues.apache.org/jira/browse/SOLR-13407
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13407.patch
>
>
> Spin-off from SOLR-13262.
> Currently Solr uses a convention that updates sent to multi-collection 
> aliases are applied only to the first collection on the list, which is 
> nonintuitive and may hide bugs or accidental configuration changes made 
> either in Solr or in client applications.
> This issue proposes to reject all such updates with an error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.1

2019-04-17 Thread Adrien Grand
+1 to doing an 8.1 soon too, thanks Ishan for volunteering! I'm expecting
you'd build an RC soon after cutting the branch?

On Wed, Apr 17, 2019 at 10:08 AM Ishan Chattopadhyaya
 wrote:
>
> +1 for an 8.1 soon. I can volunteer as RM.
> Does 30 April (about two weeks from now) sound reasonable for cutting the branch?
>
> On Wed, Apr 17, 2019 at 1:14 PM Ignacio Vera  wrote:
> >
> > Hi all,
> >
> > Feature freeze for 8.0 was a long time ago (January 29th) and there is 
> > interesting stuff that has not been released yet. In Lucene in particular 
> > there is the new BKD tree strategy for segment merging, which provides a 
> > significant performance boost for high dimensions, the new Luke module, and 
> > the new query visitor API, to name a few. I see that Solr also has quite a 
> > few unreleased changes.
> >
> > I might not be able to be the release manager this time as I will be on 
> > holiday for the next few weeks, but I would like to gauge the community's 
> > interest in having a new release soonish.
> >
> > Cheers,
> >
> > Ignacio
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8765) OneDimensionBKDWriter valueCount validation didn't include leafCount

2019-04-17 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820188#comment-16820188
 ] 

Adrien Grand commented on LUCENE-8765:
--

Good catch! This patch seems to have been created off an old master checkout, 
but it is still easy to apply. One minor thing I'd like to change is to use 
expectThrows in the test rather than a try/catch block if that works for you.
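For readers unfamiliar with the idiom Adrien refers to: {{expectThrows}} wraps the try/catch-and-fail pattern in a single call that returns the caught exception for further assertions. The helper below is a minimal self-contained stand-in with the same shape, not the actual {{LuceneTestCase}} implementation.

```java
// Minimal stand-in for LuceneTestCase.expectThrows, for illustration only.
public class ExpectThrowsSketch {

    @FunctionalInterface
    interface ThrowingRunnable {
        void run() throws Throwable;
    }

    static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t); // caller can assert on the exception
            }
            throw new AssertionError("Unexpected exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getSimpleName() + " was not thrown");
    }

    public static void main(String[] args) {
        // Instead of try { ... fail(); } catch (IllegalStateException e) { ... }:
        IllegalStateException e = expectThrows(IllegalStateException.class, () -> {
            throw new IllegalStateException("totalPointCount exceeded");
        });
        System.out.println(e.getMessage());
    }
}
```

Compared with a hand-rolled try/catch block, this both fails the test when nothing is thrown and keeps the exception-type check in one place.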

> OneDimensionBKDWriter valueCount validation didn't include leafCount
> 
>
> Key: LUCENE-8765
> URL: https://issues.apache.org/jira/browse/LUCENE-8765
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 7.5, master (9.0)
>Reporter: ZhaoYang
>Priority: Minor
> Attachments: 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
>
>
> {{[OneDimensionBKDWriter#add|https://github.com/jasonstack/lucene-solr/blob/branch_7_5/lucene/core/src/java/org/apache/lucene/util/bkd/BKDWriter.java#L612]}}
>  checks if {{valueCount}} exceeds predefined {{totalPointCount}}, but 
> {{valueCount}} is only updated for every 
> 1024({{DEFAULT_MAX_POINTS_IN_LEAF_NODE}}) points. 
> We should include {{leafCount}} for validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-master #2538: POMs out of sync

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2538/

No tests ran.

Build Log:
[...truncated 18070 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:672: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:408: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:1707: The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:579: Error deploying artifact 'org.apache.lucene:lucene-core:jar': Error deploying artifact: Error transferring file

Total time: 9 minutes 5 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 3187 - Unstable

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/3187/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/73/consoleText

[repro] Revision: 16243311ba6496ab7b6070d4a1ffd981c17664f8

[repro] Ant options: -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest -Dtests.method=testSimple -Dtests.seed=8F04F081A78B20B3 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=en-US -Dtests.timezone=Zulu -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ReindexCollectionTest -Dtests.method=testReshapeReindexing -Dtests.seed=8F04F081A78B20B3 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=hu-HU -Dtests.timezone=PST8PDT -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest -Dtests.method=testBasic -Dtests.seed=8F04F081A78B20B3 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=el -Dtests.timezone=America/Noronha -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AuditLoggerIntegrationTest -Dtests.method=testAsyncQueueDrain -Dtests.seed=8F04F081A78B20B3 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=fr-BE -Dtests.timezone=Africa/Lusaka -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 3a6f2f7543352978c4602355b715c5d87be2a1bb
[repro] git fetch
[repro] git checkout 16243311ba6496ab7b6070d4a1ffd981c17664f8

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AuditLoggerIntegrationTest
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro]   ReindexCollectionTest
[repro]   SolrRrdBackendFactoryTest
[repro] ant compile-test

[...truncated 3576 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 -Dtests.class="*.AuditLoggerIntegrationTest|*.HdfsAutoAddReplicasIntegrationTest|*.ReindexCollectionTest|*.SolrRrdBackendFactoryTest" -Dtests.showOutput=onerror -Dtests.multiplier=2 -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.seed=8F04F081A78B20B3 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-8.x/test-data/enwiki.random.lines.txt -Dtests.locale=fr-BE -Dtests.timezone=Africa/Lusaka -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 2598 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.ReindexCollectionTest
[repro]   0/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   0/5 failed: org.apache.solr.security.AuditLoggerIntegrationTest
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest
[repro] git checkout 3a6f2f7543352978c4602355b715c5d87be2a1bb

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-04-17 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820152#comment-16820152
 ] 

Uwe Schindler commented on SOLR-13394:
--

Please remove the {{-XX:+AggressiveOpts}}, really!

> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13394.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] uschindler commented on a change in pull request #644: SOLR-13394: Change default GC from CMS to G1

2019-04-17 Thread GitBox
uschindler commented on a change in pull request #644: SOLR-13394: Change 
default GC from CMS to G1
URL: https://github.com/apache/lucene-solr/pull/644#discussion_r276282835
 
 

 ##
 File path: solr/bin/solr.cmd
 ##
 @@ -1167,20 +1167,14 @@ set SOLR_OPTS=%SOLR_JAVA_STACK_SIZE% %SOLR_OPTS%
 IF "%SOLR_TIMEZONE%"=="" set SOLR_TIMEZONE=UTC
 
 IF "%GC_TUNE%"=="" (
-  set GC_TUNE=-XX:NewRatio=3 ^
-   -XX:SurvivorRatio=4 ^
-   -XX:TargetSurvivorRatio=90 ^
-   -XX:MaxTenuringThreshold=8 ^
-   -XX:+UseConcMarkSweepGC ^
-   -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 ^
-   -XX:+CMSScavengeBeforeRemark ^
-   -XX:PretenureSizeThreshold=64m ^
-   -XX:+UseCMSInitiatingOccupancyOnly ^
-   -XX:CMSInitiatingOccupancyFraction=50 ^
-   -XX:CMSMaxAbortablePrecleanTime=6000 ^
-   -XX:+CMSParallelRemarkEnabled ^
-   -XX:+ParallelRefProcEnabled ^
-   -XX:-OmitStackTraceInFastThrow
+  set GC_TUNE=-XX:+UseG1GC ^
+-XX:+PerfDisableSharedMem ^
+-XX:+ParallelRefProcEnabled ^
+-XX:G1HeapRegionSize=16m ^
+-XX:MaxGCPauseMillis=250 ^
+-XX:InitiatingHeapOccupancyPercent=45 ^
+-XX:+UseLargePages ^
+-XX:+AggressiveOpts
 
 Review comment:
   Don't enable aggressive opts!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Solaris (64bit/jdk1.8.0) - Build # 78 - Failure!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Solaris/78/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionWithTlogReplicasTest.test

Error Message:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:65424 within 3 ms

Stack Trace:
java.lang.RuntimeException: org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 127.0.0.1:65424 within 3 ms
        at __randomizedtesting.SeedInfo.seed([E65DC8A62F898F66:6E09F77C8175E29E]:0)
        at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:602)
        at org.apache.solr.cloud.ZkTestServer.run(ZkTestServer.java:526)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.distribSetUp(AbstractDistribZkTestBase.java:73)
        at org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribSetUp(AbstractFullDistribZkTestBase.java:246)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1049)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: 
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:65424 within 3 ms
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:201)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:126)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:112)
at org.apache.solr.cloud.ZkTestServer.init(ZkTestServer.java:447)
at 

[GitHub] [lucene-solr] s1monw merged pull request #649: Use Map.copyOf in lucene core

2019-04-17 Thread GitBox
s1monw merged pull request #649: Use Map.copyOf in lucene core
URL: https://github.com/apache/lucene-solr/pull/649
 
 
   





[jira] [Commented] (LUCENE-8765) OneDimensionBKDWriter valueCount validation didn't include leafCount

2019-04-17 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820077#comment-16820077
 ] 

Lucene/Solr QA commented on LUCENE-8765:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m  3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 32m  
7s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8765 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966179/0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
 |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / fb28958 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/181/testReport/ |
| modules | C: lucene/core U: lucene/core |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/181/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> OneDimensionBKDWriter valueCount validation didn't include leafCount
> 
>
> Key: LUCENE-8765
> URL: https://issues.apache.org/jira/browse/LUCENE-8765
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/other
>Affects Versions: 7.5, master (9.0)
>Reporter: ZhaoYang
>Priority: Minor
> Attachments: 
> 0001-Fix-OneDimensionBKDWriter-valueCount-validation.patch
>
>
> {{[OneDimensionBKDWriter#add|https://github.com/jasonstack/lucene-solr/blob/branch_7_5/lucene/core/src/java/org/apache/lucene/util/bkd/BKDWriter.java#L612]}}
>  checks if {{valueCount}} exceeds predefined {{totalPointCount}}, but 
> {{valueCount}} is only updated for every 
> 1024({{DEFAULT_MAX_POINTS_IN_LEAF_NODE}}) points. 
> We should include {{leafCount}} for validation.
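The accounting issue above can be modeled with a tiny standalone sketch (field and method names are illustrative, not the actual BKDWriter internals): points accumulate in `leafCount` until a full leaf of 1024 entries is flushed into `valueCount`, so an overflow check on `valueCount` alone can miss up to 1023 buffered points.

```java
// Standalone model of the OneDimensionBKDWriter accounting issue; the names
// mirror BKDWriter's fields but this is an illustration, not the real code.
class BkdCountSketch {
    static final int MAX_POINTS_IN_LEAF = 1024; // DEFAULT_MAX_POINTS_IN_LEAF_NODE

    long valueCount;          // only bumped when a full leaf is flushed
    int leafCount;            // points buffered in the current, unflushed leaf
    final long totalPointCount;

    BkdCountSketch(long totalPointCount) {
        this.totalPointCount = totalPointCount;
    }

    void add() {
        // Buggy check: "valueCount > totalPointCount" ignores up to 1023
        // buffered points. The fix counts the buffered leaf as well:
        if (valueCount + leafCount + 1 > totalPointCount) {
            throw new IllegalStateException("too many points indexed");
        }
        leafCount++;
        if (leafCount == MAX_POINTS_IN_LEAF) {
            valueCount += leafCount; // flush the leaf
            leafCount = 0;
        }
    }

    public static void main(String[] args) {
        BkdCountSketch w = new BkdCountSketch(10); // capacity declared as 10
        boolean rejected = false;
        for (int i = 0; i < 11; i++) {             // try to add an 11th point
            try { w.add(); } catch (IllegalStateException e) { rejected = true; }
        }
        System.out.println(rejected); // prints "true": the buggy check would not
                                      // fire here, since valueCount is still 0
    }
}
```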



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 416 - Failure!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/416/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 5284 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/backward-codecs/test/temp/junit4-J2-20190417_123724_93715167829388789745892.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] # To suppress the following error report, specify this argument
   [junit4] # after -XX: or in .hotspotrc:  SuppressErrorAt=/split_if.cpp:116
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error 
(/home/buildbot/worker/jdkX-linux/build/src/hotspot/share/opto/split_if.cpp:116),
 pid=2531, tid=2581
   [junit4] #  Error: assert(bol->is_Bool()) failed
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (13.0) (fastdebug build 
13-testing+0-builds.shipilev.net-openjdk-jdk-b769-20190316-jdk-1312)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (fastdebug 
13-testing+0-builds.shipilev.net-openjdk-jdk-b769-20190316-jdk-1312, mixed 
mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x16658f3]  PhaseIdealLoop::split_up(Node*, Node*, 
Node*) [clone .part.44]+0xe73
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/backward-codecs/test/J2/hs_err_pid2531.log
   [junit4] [thread 2643 also had an error]
   [junit4] 
   [junit4] [timeout occurred during error reporting in step ""] after 30 s.
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] Current thread is 2581
   [junit4] Dumping core ...
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/backward-codecs/test/temp/junit4-J2-20190417_123724_93717375428245172142047.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM warning: increase O_BUFLEN in ostream.hpp 
-- output truncated
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk-13-ea+shipilev-fastdebug/bin/java 
-XX:+UseCompressedOops -XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/heapdumps -ea 
-esa --illegal-access=deny -Dtests.prefix=tests -Dtests.seed=AC2AA77705AC4DD7 
-Xmx512M -Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.1.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=8.1.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-8.x-Linux 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/backward-codecs/test/J2
 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-8.x-Linux/lucene/build/backward-codecs/test/temp
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dfile.encoding=US-ASCII 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Issue Comment Deleted] (SOLR-13350) Explore collector managers for multi-threaded search

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-13350:

Comment: was deleted

(was: Thanks [~keshareenv] for the patch. Settings look good to me; these seem 
to be based on [~elyograg]'s wiki page.

Can you please also update the Upgrade Notes section of the solr/CHANGES.txt 
with the information that users need to take notice?)

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch
>
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.
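The idea can be sketched without Lucene itself (all names below are illustrative; the real API is `CollectorManager`'s per-thread `newCollector()` plus a final `reduce()` on `IndexSearcher`): search each segment on its own thread with its own collector, then merge the partial results.

```java
import java.util.*;
import java.util.concurrent.*;

// Minimal model of the CollectorManager pattern: one "collector" per segment,
// run concurrently, then a reduce step over the partial results.
class SegmentSearchSketch {
    // Stand-in for collecting one segment: here, just its max value.
    static int maxOf(int[] segment) {
        int best = Integer.MIN_VALUE;
        for (int v : segment) best = Math.max(best, v);
        return best;
    }

    static int searchConcurrently(int[][] segments) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> partials = new ArrayList<>();
            for (int[] seg : segments) {
                partials.add(pool.submit(() -> maxOf(seg))); // newCollector() analogue
            }
            int best = Integer.MIN_VALUE;                    // reduce() analogue
            for (Future<Integer> f : partials) best = Math.max(best, f.get());
            return best;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        int[][] segments = { {3, 1, 4}, {1, 5, 9}, {2, 6} };
        System.out.println(searchConcurrently(segments)); // prints 9
    }
}
```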






[jira] [Commented] (SOLR-13394) Change default GC from CMS to G1

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820047#comment-16820047
 ] 

Ishan Chattopadhyaya commented on SOLR-13394:
-

Thanks [~keshareenv] for the patch. Settings look good to me; these seem to be 
based on Shawn Heisey's wiki page.

Can you please also update the Upgrade Notes section of the solr/CHANGES.txt 
with the information that users need to take notice?



> Change default GC from CMS to G1
> 
>
> Key: SOLR-13394
> URL: https://issues.apache.org/jira/browse/SOLR-13394
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13394.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CMS has been deprecated in new versions of Java 
> (http://openjdk.java.net/jeps/291). This issue is to switch Solr default from 
> CMS to G1.






[jira] [Commented] (SOLR-13350) Explore collector managers for multi-threaded search

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820032#comment-16820032
 ] 

Ishan Chattopadhyaya commented on SOLR-13350:
-

Thanks [~keshareenv] for the patch. Settings look good to me; these seem to be 
based on [~elyograg]'s wiki page.

Can you please also update the Upgrade Notes section of the solr/CHANGES.txt 
with the information that users need to take notice?

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch
>
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.






[jira] [Comment Edited] (SOLR-13350) Explore collector managers for multi-threaded search

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820024#comment-16820024
 ] 

Ishan Chattopadhyaya edited comment on SOLR-13350 at 4/17/19 12:20 PM:
---

Updating the patch with more functionality covered. Enabled this by default (as 
opposed to enabling it via a parameter, which is what we eventually want to do), 
and all tests pass. Still, lots of cleanup and refactoring pending.


was (Author: ichattopadhyaya):
Updating the patch with more functionality covered. Enabled this by default (as 
opposed to enabling via a parameter, which is what we want to do), and all 
tests passing.

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch
>
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.






[jira] [Commented] (SOLR-13350) Explore collector managers for multi-threaded search

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16820024#comment-16820024
 ] 

Ishan Chattopadhyaya commented on SOLR-13350:
-

Updating the patch with more functionality covered. Enabled this by default (as 
opposed to enabling via a parameter, which is what we want to do), and all 
tests passing.

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch
>
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.






[jira] [Updated] (SOLR-13350) Explore collector managers for multi-threaded search

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-13350:

Attachment: SOLR-13350.patch

> Explore collector managers for multi-threaded search
> 
>
> Key: SOLR-13350
> URL: https://issues.apache.org/jira/browse/SOLR-13350
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-13350.patch, SOLR-13350.patch
>
>
> AFAICT, SolrIndexSearcher can be used only to search all the segments of an 
> index in series. However, using CollectorManagers, segments can be searched 
> concurrently and result in reduced latency. Opening this issue to explore the 
> effectiveness of using CollectorManagers in SolrIndexSearcher from latency 
> and throughput perspective.






[jira] [Updated] (LUCENE-8755) QuadPrefixTree robustness: can throw exception while indexing a point at high precision

2019-04-17 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8755:
-
Component/s: (was: core/index)
 modules/spatial-extras
Summary: QuadPrefixTree robustness: can throw exception while indexing 
a point at high precision  (was: java.lang.IndexOutOfBoundsException: Index: 0, 
Size: 0 on OS Grid coordinates)

> QuadPrefixTree robustness: can throw exception while indexing a point at high 
> precision
> ---
>
> Key: LUCENE-8755
> URL: https://issues.apache.org/jira/browse/LUCENE-8755
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial-extras
>Reporter: senthil nathan
>Priority: Critical
> Attachments: LUCENE-8755.patch
>
>
> When trying to index the document below with Apache Solr 7.5.0 I get a 
> java.lang.IndexOutOfBoundsException; this data causes the whole full 
> import to fail. I have also included my schema below for reference. 
>  
> Data:
> [
> { "street_description":"SAMPLE_TEXT", "pao_start_number":6, 
> "x_coordinate":244502.06, "sao_text":"FIRST FLOOR", "logical_status":"1", 
> "street_record_type":1, "id":"AA60L12-ENG", 
> "street_description_str":"SAMPLE_TEXT", "lpi_logical_status":"1", 
> "administrative_area":"SAMPLE_TEXT & HOVE", "uprn":"8899889", 
> "town_name":"TEST TOWN", "street_description_full":"60 DEMO ", 
> "y_coordinate":639062.07, "postcode_locator":"AB1 1BB", "location":"244502.06 
> 639062.07" }
> ]
>  
> Configuration in managed-schema.xml
>  
>  geo="false" maxDistErr="0.09" worldBounds="ENVELOPE(0,70,130,0)" 
> distErrPct="0.15"/>
>  stored="false"/>
>   stored="false"/>
>  
>   indexed="true" stored="true"/>
>   stored="true"/>
>   required="true" stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   indexed="false" stored="true"/>
>   indexed="false" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   stored="true"/>
>   indexed="true" stored="true"/>
>   multiValued="false" indexed="true" stored="true"/>
>   multiValued="false" indexed="true" stored="true"/> 
>   indexed="false" stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   stored="true"/>
>   stored="true"/>






[jira] [Comment Edited] (SOLR-12993) Split the state.json into 2. a small frequently modified data + a large unmodified data

2019-04-17 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819976#comment-16819976
 ] 

mosh edited comment on SOLR-12993 at 4/17/19 11:05 AM:
---

{quote}or alternately we can just add this data (status, leader) to the LIR 
term files . That way , we don't need to create any new files
{quote}
ZkShardTerms (the class that generates LIR files) resides in solr-core, while 
ZkStateReader is in SolrJ.
 Since this proposal is to split state.json, there would be no way to find out 
which replica is the leader,
 since this information will reside inside the LIR term files.

I propose two possible courses of action:
 # Move ZkShardTerms to solrJ, combining LIR terms, shard state status and 
leader.
 # Create new files as proposed by [~noble.paul], which will contain a small 
subset of the split information.

[~noble.paul], [~gus_heck],
 WDYT?


was (Author: moshebla):
{quote}or alternately we can just add this data (status, leader) to the LIR 
term files . That way , we don't need to create any new files
{quote}
ZkShardTerms (the class that generates LIR files) resides in solr-core, while 
ZkStateReader is in SolrJ.
 Since this proposal is to split state.json, there would be no way to find out 
which replica is the leader,
 since this information will reside inside the LIR term files.

I propose two possible courses of action:
 # Move ZkShardTerms to solrJ, and combine LIR terms
 # Create new files as proposed by [~noble.paul], which will contain a small 
subset of the split information.

[~noble.paul], [~gus_heck],
 WDYT?

> Split the state.json into 2. a small frequently modified data + a large 
> unmodified data
> ---
>
> Key: SOLR-12993
> URL: https://issues.apache.org/jira/browse/SOLR-12993
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> This is just a proposal to minimize the ZK load and improve scalability of 
> very large clusters.
> Every time a small state change occurs for a collection/replica, the following 
> file needs to be updated, and read n times (where n = the number of replicas 
> for this collection). The proposal is to split the main file into two.
> {code}
> {"gettingstarted":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"-1",
> "autoAddReplicas":"false",
> "nrtReplicas":"2",
> "tlogReplicas":"0",
> "shards":{
>   "shard1":{
> "range":"8000-",
>   
> "replicas":{
>   "core_node3":{
> "core":"gettingstarted_shard1_replica_n1",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
> "state":"active",
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"},
>   "core_node5":{
> "core":"gettingstarted_shard1_replica_n2",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>  
> "type":"NRT",
> "force_set_state":"false"}}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{
>   "core_node7":{
> "core":"gettingstarted_shard2_replica_n4",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>
> "type":"NRT",
> "force_set_state":"false"},
>   "core_node8":{
> "core":"gettingstarted_shard2_replica_n6",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
>  
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"}}
> {code}
> The second file, {{status.json}}, is small and frequently updated:
> {code}
> {
> "shard1": {
>   "state": "ACTIVE",
>   "core_node3": {"state": "active", "leader" : true},
>   "core_node5": {"state": "active"}
> },
> "shard2": {
>   "state": "active",
>   "core_node7": {"state": "active"},
>   "core_node8": {"state": "active", "leader" : true}}
>   }
> {code}
> Here the size of the file is roughly one tenth of the other file. This leads 
> to a dramatic reduction in the amount of data written/read to/from ZK.
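A minimal sketch of the split (the key names follow the example above; the class and helper names are hypothetical): move the frequently-changing per-replica keys into the small status map and leave everything else in the large, rarely-written one.

```java
import java.util.*;

// Illustrative split of one replica's properties into a static part
// (state.json-style) and a volatile part (status.json-style).
class StateSplitSketch {
    static final Set<String> VOLATILE = Set.of("state", "leader");

    // Returns [staticPart, statusPart] for one replica's property map.
    static List<Map<String, Object>> split(Map<String, Object> replica) {
        Map<String, Object> staticPart = new LinkedHashMap<>();
        Map<String, Object> statusPart = new LinkedHashMap<>();
        replica.forEach((k, v) ->
            (VOLATILE.contains(k) ? statusPart : staticPart).put(k, v));
        return List.of(staticPart, statusPart);
    }

    public static void main(String[] args) {
        Map<String, Object> replica = new LinkedHashMap<>();
        replica.put("core", "gettingstarted_shard1_replica_n1");
        replica.put("base_url", "http://10.0.0.80:8983/solr");
        replica.put("state", "active");
        replica.put("leader", "true");

        List<Map<String, Object>> parts = split(replica);
        System.out.println(parts.get(0).keySet()); // [core, base_url]
        System.out.println(parts.get(1).keySet()); // [state, leader]
    }
}
```

Only the second map needs to be rewritten (and watched) on a state change, which is what yields the roughly tenfold reduction in ZK traffic described above.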






[jira] [Commented] (SOLR-12993) Split the state.json into 2. a small frequently modified data + a large unmodified data

2019-04-17 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819976#comment-16819976
 ] 

mosh commented on SOLR-12993:
-

{quote}or alternately we can just add this data (status, leader) to the LIR 
term files . That way , we don't need to create any new files
{quote}
ZkShardTerms (the class that generates LIR files) resides in solr-core, while 
ZkStateReader is in SolrJ.
 Since this proposal is to split state.json, there would be no way to find out 
which replica is the leader,
 since this information will reside inside the LIR term files.

I propose two possible courses of action:
 # Move ZkShardTerms to solrJ, and combine LIR terms
 # Create new files as proposed by [~noble.paul], which will contain a small 
subset of the split information.

[~noble.paul], [~gus_heck],
 WDYT?

> Split the state.json into 2. a small frequently modified data + a large 
> unmodified data
> ---
>
> Key: SOLR-12993
> URL: https://issues.apache.org/jira/browse/SOLR-12993
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> This is just a proposal to minimize the ZK load and improve scalability of 
> very large clusters.
> Every time a small state change occurs for a collection/replica, the following 
> file needs to be updated, and read n times (where n = the number of replicas 
> for this collection). The proposal is to split the main file into two.
> {code}
> {"gettingstarted":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> "router":{"name":"compositeId"},
> "maxShardsPerNode":"-1",
> "autoAddReplicas":"false",
> "nrtReplicas":"2",
> "tlogReplicas":"0",
> "shards":{
>   "shard1":{
> "range":"8000-",
>   
> "replicas":{
>   "core_node3":{
> "core":"gettingstarted_shard1_replica_n1",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
> "state":"active",
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"},
>   "core_node5":{
> "core":"gettingstarted_shard1_replica_n2",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>  
> "type":"NRT",
> "force_set_state":"false"}}},
>   "shard2":{
> "range":"0-7fff",
> "state":"active",
> "replicas":{
>   "core_node7":{
> "core":"gettingstarted_shard2_replica_n4",
> "base_url":"http://10.0.0.80:7574/solr",
> "node_name":"10.0.0.80:7574_solr",
>
> "type":"NRT",
> "force_set_state":"false"},
>   "core_node8":{
> "core":"gettingstarted_shard2_replica_n6",
> "base_url":"http://10.0.0.80:8983/solr",
> "node_name":"10.0.0.80:8983_solr",
>  
> "type":"NRT",
> "force_set_state":"false",
> "leader":"true"}}
> {code}
> The second file, {{status.json}}, is small and frequently updated:
> {code}
> {
> "shard1": {
>   "state": "ACTIVE",
>   "core_node3": {"state": "active", "leader" : true},
>   "core_node5": {"state": "active"}
> },
> "shard2": {
>   "state": "active",
>   "core_node7": {"state": "active"},
>   "core_node8": {"state": "active", "leader" : true}}
>   }
> {code}
> Here the size of the file is roughly one tenth of the other file. This leads 
> to a dramatic reduction in the amount of data written/read to/from ZK.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-17 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819918#comment-16819918
 ] 

Uwe Schindler commented on LUCENE-2562:
---

It's ok.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/luke
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past work or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2562) Make Luke a Lucene/Solr Module

2019-04-17 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819909#comment-16819909
 ] 

Tomoko Uchida commented on LUCENE-2562:
---

Hi [~thetaphi],
{quote} The correct place to put the warning is into the 
SYSTEM_REQUIREMENTS.md/txt file in the Lucene root folder. It is then also part 
of official documentation ("ant documentation").
{quote}
I put a short note in SYSTEM_REQUIREMENTS.txt. Can you please double-check it? 
If that's OK, I will cherry-pick this to branch_8x. (I won't change the master 
branch.)

diff: 
[https://github.com/apache/lucene-solr/commit/e448173d363f906fa243d8440086d1f0689307b6]

 

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/luke
>Reporter: Mark Miller
>Assignee: Tomoko Uchida
>Priority: Major
>  Labels: gsoc2014
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-2562-Ivy.patch, LUCENE-2562-Ivy.patch, 
> LUCENE-2562-Ivy.patch, LUCENE-2562-ivy.patch, LUCENE-2562.patch, 
> LUCENE-2562.patch, LUCENE-2562.patch, Luke-ALE-1.png, Luke-ALE-2.png, 
> Luke-ALE-3.png, Luke-ALE-4.png, Luke-ALE-5.png, luke-javafx1.png, 
> luke-javafx2.png, luke-javafx3.png, luke1.jpg, luke2.jpg, luke3.jpg, 
> lukeALE-documents.png, screenshot-1.png, スクリーンショット 2018-11-05 9.19.47.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> see
> "RE: Luke - in need of maintainer": 
> http://markmail.org/message/m4gsto7giltvrpuf
> "Web-based Luke": http://markmail.org/message/4xwps7p7ifltme5q
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of thinlet to Apache Pivot. I 
> haven't/don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past work or two. There is still a *lot* to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-8.x-Linux (64bit/jdk-13-ea+shipilev-fastdebug) - Build # 415 - Unstable!

2019-04-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/415/
Java: 64bit/jdk-13-ea+shipilev-fastdebug -XX:+UseCompressedOops 
-XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
Time allowed to handle this request exceeded

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Time allowed to handle this 
request exceeded
at 
__randomizedtesting.SeedInfo.seed([75B7EB8A4B95E89B:FDE3D450E5698563]:0)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:343)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1055)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:830)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:763)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:987)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:252)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:248)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:113)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-8766) Add Luwak as a lucene module

2019-04-17 Thread Kyriakos Karenos (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819885#comment-16819885
 ] 

Kyriakos Karenos commented on LUCENE-8766:
--

+1. There is a strong affinity between the two; in particular, it is rare that 
query-matching (i.e. alerting) functionality is not desired once search 
functionality is provided. 

> Add Luwak as a lucene module
> 
>
> Key: LUCENE-8766
> URL: https://issues.apache.org/jira/browse/LUCENE-8766
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Luwak [https://github.com/flaxsearch/luwak] is a stored query engine, 
> allowing users to efficiently match a stream of documents against a large set 
> of queries.  Its only dependency is Lucene, and most recent updates have 
> just been upgrading the version of Lucene against which it can run.
> It is a generally useful piece of software, and already licensed as Apache 2.  
> The maintainers would like to donate it to the ASF and merge it into the 
> lucene-solr project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8738) Bump minimum Java version requirement to 11

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819884#comment-16819884
 ] 

ASF subversion and git services commented on LUCENE-8738:
-

Commit fb28958bc8cdec0ac20968222b03de7aed384032 in lucene-solr's branch 
refs/heads/master from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fb28958 ]

LUCENE-8738: Add Java 11 under "Getting Started" in CHANGES.txt


> Bump minimum Java version requirement to 11
> ---
>
> Key: LUCENE-8738
> URL: https://issues.apache.org/jira/browse/LUCENE-8738
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/build
>Reporter: Adrien Grand
>Assignee: Uwe Schindler
>Priority: Minor
>  Labels: Java11
> Fix For: master (9.0)
>
> Attachments: LUCENE-8738-solr-CoreCloseListener.patch, 
> LUCENE-8738.patch
>
>
> See vote thread for reference: https://markmail.org/message/q6ubdycqscpl43aq.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8767) DisjunctionMaxQuery do not work well when multiple search term+mm+query fields with different fieldType.

2019-04-17 Thread ZhongHua Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819882#comment-16819882
 ] 

ZhongHua Wu commented on LUCENE-8767:
-

BTW, I did test adding q.op=OR, like:

q=(Versatil%20test)%20sundress&df=name&defType=edismax&mm=2&qf=name^10%20partNumber_ntk&debugQuery=true&wt=xml&rows=1&q.op=OR

so this issue is not the same issue as 
https://issues.apache.org/jira/browse/SOLR-3589

Even so, to achieve the same effect, we want name:Versatil | name:test

> DisjunctionMaxQuery do not work well when multiple search term+mm+query 
> fields with different fieldType.
> 
>
> Key: LUCENE-8767
> URL: https://issues.apache.org/jira/browse/LUCENE-8767
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 7.3
> Environment: Solr: 7.3.1
> Backup:
> FieldType for name field:
> <fieldType class="solr.TextField" omitNorms="true">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" words="stopwords.txt" enablePositionIncrements="true"/>
>     <filter class="solr.WordDelimiterFilterFactory" generateNumberParts="0" 
> catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0" 
> preserveOriginal="1" splitOnNumerics="0"/>
>     <filter protected="protwords.txt"/>
>   </analyzer>
> </fieldType>
> FieldType for partNumber field:
> <fieldType class="solr.TextField" omitNorms="true">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>   </analyzer>
> </fieldType>
>Reporter: ZhongHua Wu
>Priority: Critical
>  Labels: patch
>
> When the fields in qf come from different fieldTypes, especially one using 
> KeywordTokenizerFactory and another using 
> WhitespaceTokenizerFactory, the generated parsed query does not honor 
> synonyms and mm, and incorrect documents are hit. The following are my details:
>  # We use Solr 7.3.1
>  # Our qf=name^10 partNumber_ntk; the fieldType of name uses 
> solr.WhitespaceTokenizerFactory and solr.WordDelimiterFilterFactory, while 
> partNumber_ntk is not tokenized and uses solr.KeywordTokenizerFactory
>  # mm=2<3 4<5 6<-80%25
>  # The search term is versatil sundress, while 'versatile' and 'testing' are 
> synonyms; we have documents named "Versatil Empire Waist Sundress" which 
> should be hit, but are not.
>  # We tested the same query on Solr 5.5.4 and it works fine; it does not work 
> on Solr 7.3.1.
> q=(Versatil%20testing)%20sundress&df=name&defType=edismax&mm=2<3 4<5 
> 6<-80%25&qf=name^10%20partNumber_ntk&debugQuery=true&wt=xml&rows=100
> parsedQuery:
> +((DisjunctionMaxQuery((((name:versatil name:test)~2)^10.0 | 
> partNumber_ntk:versatil testing)) DisjunctionMaxQuery(((name:sundress)^10.0 | 
> partNumber_ntk:sundress)))~2
> It seems name is incorrectly parsed to: name:versatil name:test
> If I change the query fields to the same fieldType (for example, 
> shortDescription has the same fieldType as name):
> q=(Versatil%20testing)%20sundress&df=name&defType=edismax&mm=2<3 4<5 
> 6<-80%25&qf=name^10%20shortDescription&debugQuery=true&wt=xml&rows=100
> ParsedQuery:
> +((DisjunctionMaxQuery(((name:versatil)^10.0 | shortDescription:versatil)) 
> DisjunctionMaxQuery(((name:test)^10.0 | shortDescription:test))) 
> DisjunctionMaxQuery(((name:sundress)^10.0 | shortDescription:sundress)))~2
> which hits correctly.
> Could someone check this or suggest a quick workaround? It currently has a big 
> impact on our customer.
> Thanks in advance! The following is backup information:
>  
>  
>  
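As a side note for readers unfamiliar with the mm spec quoted above (mm=2<3 4<5 6<-80%), the following is a rough, unofficial sketch of its semantics as described in the Solr Reference Guide; it is not Solr's actual implementation, and the function names are made up for illustration.

```python
def _apply(value, num_clauses):
    """Evaluate one mm value: percentages are taken of the clause count
    (truncating toward zero, like a Java int cast); negative results
    are subtracted from the total."""
    if value.endswith("%"):
        calc = int(num_clauses * int(value[:-1]) / 100.0)
    else:
        calc = int(value)
    return num_clauses + calc if calc < 0 else calc

def min_should_match(spec, num_clauses):
    """Pick the conditional rule "n<v" with the largest n below num_clauses;
    below every threshold, all clauses are required."""
    required = num_clauses
    best_n = -1
    for part in spec.split():
        if "<" not in part:
            # bare value, e.g. "75%" or "-2"
            return max(min(_apply(part, num_clauses), num_clauses), 0)
        n, value = part.split("<", 1)
        n = int(n)
        if num_clauses > n and n > best_n:
            best_n = n
            required = _apply(value, num_clauses)
    return max(min(required, num_clauses), 0)
```

Under these semantics, with mm=2<3 4<5 6<-80% a two-clause query requires both clauses, a three- or four-clause query requires three, and a ten-clause query requires only two (80% of ten may be missing), which is the context in which the synonym expansion above changes the clause count.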



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13387) Specify intervals as json arrays

2019-04-17 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-13387.
-
Resolution: Won't Do

> Specify intervals as json arrays
> 
>
> Key: SOLR-13387
> URL: https://issues.apache.org/jira/browse/SOLR-13387
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Mikhail Khludnev
>Priority: Major
>
> in addition to classic range mini-syntax add one piggybacking on json arrays. 
> See comments on enclosing issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13409) Remove directory listings in Jetty config

2019-04-17 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-13409.
--
Resolution: Fixed

> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration the directory listings are enabled, 
> although not used in the admin interface. For security reasons this should be 
> disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13409) Remove directory listings in Jetty config

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819872#comment-16819872
 ] 

ASF subversion and git services commented on SOLR-13409:


Commit e1901aaabb6dbb477eeb6c0b7b38731c52748635 in lucene-solr's branch 
refs/heads/branch_8x from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e1901aa ]

SOLR-13409: Disable HTML directory listings in admin interface to prevent 
possible security issues


> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration the directory listings are enabled, 
> although not used in the admin interface. For security reasons this should be 
> disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13409) Remove directory listings in Jetty config

2019-04-17 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819871#comment-16819871
 ] 

ASF subversion and git services commented on SOLR-13409:


Commit df27ccf01d9b89149fbba00e81c3eed078e28a95 in lucene-solr's branch 
refs/heads/master from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=df27ccf ]

SOLR-13409: Disable HTML directory listings in admin interface to prevent 
possible security issues


> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration the directory listings are enabled, 
> although not used in the admin interface. For security reasons this should be 
> disabled.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8767) DisjunctionMaxQuery do not work well when multiple search term+mm+query fields with different fieldType.

2019-04-17 Thread ZhongHua Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhongHua Wu updated LUCENE-8767:

Summary: DisjunctionMaxQuery do not work well when multiple search 
term+mm+query fields with different fieldType.  (was: DisjunctionMaxQuery do 
not work well when multiple search term+synonyms+mm+query fields with different 
fieldType.)

> DisjunctionMaxQuery do not work well when multiple search term+mm+query 
> fields with different fieldType.
> 
>
> Key: LUCENE-8767
> URL: https://issues.apache.org/jira/browse/LUCENE-8767
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 7.3
> Environment: Solr: 7.3.1
> Backup:
> FieldType for name field:
> <fieldType class="solr.TextField" omitNorms="true">
>   <analyzer>
>     <tokenizer class="solr.WhitespaceTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" words="stopwords.txt" enablePositionIncrements="true"/>
>     <filter class="solr.WordDelimiterFilterFactory" generateNumberParts="0" 
> catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="0" 
> preserveOriginal="1" splitOnNumerics="0"/>
>     <filter protected="protwords.txt"/>
>   </analyzer>
> </fieldType>
> FieldType for partNumber field:
> <fieldType class="solr.TextField" omitNorms="true">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>   </analyzer>
> </fieldType>
>Reporter: ZhongHua Wu
>Priority: Critical
>  Labels: patch
>
> When the fields in qf come from different fieldTypes, especially one using 
> KeywordTokenizerFactory and another using 
> WhitespaceTokenizerFactory, the generated parsed query does not honor 
> synonyms and mm, and incorrect documents are hit. The following are my details:
>  # We use Solr 7.3.1
>  # Our qf=name^10 partNumber_ntk; the fieldType of name uses 
> solr.WhitespaceTokenizerFactory and solr.WordDelimiterFilterFactory, while 
> partNumber_ntk is not tokenized and uses solr.KeywordTokenizerFactory
>  # mm=2<3 4<5 6<-80%25
>  # The search term is versatil sundress, while 'versatile' and 'testing' are 
> synonyms; we have documents named "Versatil Empire Waist Sundress" which 
> should be hit, but are not.
>  # We tested the same query on Solr 5.5.4 and it works fine; it does not work 
> on Solr 7.3.1.
> q=(Versatil%20testing)%20sundress&df=name&defType=edismax&mm=2<3 4<5 
> 6<-80%25&qf=name^10%20partNumber_ntk&debugQuery=true&wt=xml&rows=100
> parsedQuery:
> +((DisjunctionMaxQuery((((name:versatil name:test)~2)^10.0 | 
> partNumber_ntk:versatil testing)) DisjunctionMaxQuery(((name:sundress)^10.0 | 
> partNumber_ntk:sundress)))~2
> It seems name is incorrectly parsed to: name:versatil name:test
> If I change the query fields to the same fieldType (for example, 
> shortDescription has the same fieldType as name):
> q=(Versatil%20testing)%20sundress&df=name&defType=edismax&mm=2<3 4<5 
> 6<-80%25&qf=name^10%20shortDescription&debugQuery=true&wt=xml&rows=100
> ParsedQuery:
> +((DisjunctionMaxQuery(((name:versatil)^10.0 | shortDescription:versatil)) 
> DisjunctionMaxQuery(((name:test)^10.0 | shortDescription:test))) 
> DisjunctionMaxQuery(((name:sundress)^10.0 | shortDescription:sundress)))~2
> which hits correctly.
> Could someone check this or suggest a quick workaround? It currently has a big 
> impact on our customer.
> Thanks in advance! The following is backup information:
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 73 - Still unstable

2019-04-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/73/

4 tests failed.
FAILED:  org.apache.solr.cloud.ReindexCollectionTest.testReshapeReindexing

Error Message:
num docs expected:<200> but was:<198>

Stack Trace:
java.lang.AssertionError: num docs expected:<200> but was:<198>
at 
__randomizedtesting.SeedInfo.seed([8F04F081A78B20B3:645983B80890EC46]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.ReindexCollectionTest.indexDocs(ReindexCollectionTest.java:376)
at 
org.apache.solr.cloud.ReindexCollectionTest.testReshapeReindexing(ReindexCollectionTest.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.HdfsAutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple2 Timeout waiting to see 

[jira] [Commented] (SOLR-13410) Designated overseer not able to become overseer quickly

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819835#comment-16819835
 ] 

Ishan Chattopadhyaya commented on SOLR-13410:
-

We just discovered this and I'm working with [~keshareenv] towards an automated 
test and a fix.

> Designated overseer not able to become overseer quickly 
> 
>
> Key: SOLR-13410
> URL: https://issues.apache.org/jira/browse/SOLR-13410
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Kesharee Nandan Vishwakarma
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: overseerElection.log
>
>
> Whenever a designated overseer node is restarted and the overseer role is added 
> back, if the designated node is not the overseer leader, the following takes place:
>  1. One by one, nodes from the electionNodes list become leader and ask the 
> designated node `to come join election at head`.
>  2. The current overseer node fires a Quit command and exits the Overseer loop.
>  3. The next node from the `Overseer Loop` becomes leader; steps 1-2 repeat 
> until the designated overseer node becomes the leader.
>  Problem with the above flow: OverseerNodePrioritizer is not able to add the 
> `designated node` at the head of the electionNodes list
> Steps to reproduce:
>  # Setup solrcloud with 5 nodes, including one designated overseer
>  # Restart overseer container
> Attached relevant logs
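The repeated handoff in steps 1-3 above can be illustrated with a toy simulation (plain Python, not Solr code): if the prioritizer fails to move the designated node to the head of the election list, leadership must pass through every node ahead of it, one Quit at a time. The function name and node labels are made up for illustration.

```python
from collections import deque

def handoffs_until_designated(election_nodes, designated):
    """Toy model of the election list: the head of the queue is the leader.
    Each iteration models one Quit: the leader steps down and rejoins at
    the tail, so the next node in line becomes leader.  The designated
    node must be present in election_nodes."""
    queue = deque(election_nodes)
    handoffs = 0
    while queue[0] != designated:
        queue.rotate(-1)  # leader quits and rejoins at the tail
        handoffs += 1
    return handoffs
```

With five nodes and the designated overseer last in the election list, four full leader handoffs happen before it takes over; had it been inserted at the head, none would be needed.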



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13410) Designated overseer not able to become overseer quickly

2019-04-17 Thread Kesharee Nandan Vishwakarma (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kesharee Nandan Vishwakarma updated SOLR-13410:
---
Attachment: overseerElection.log

> Designated overseer not able to become overseer quickly 
> 
>
> Key: SOLR-13410
> URL: https://issues.apache.org/jira/browse/SOLR-13410
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Kesharee Nandan Vishwakarma
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: overseerElection.log
>
>
> Whenever a designated overseer node is restarted and the overseer role is added 
> back, if the designated node is not the overseer leader, the following takes place:
>  1. One by one, nodes from the electionNodes list become leader and ask the 
> designated node `to come join election at head`.
>  2. The current overseer node fires a Quit command and exits the Overseer loop.
>  3. The next node from the `Overseer Loop` becomes leader; steps 1-2 repeat 
> until the designated overseer node becomes the leader.
>  Problem with the above flow: OverseerNodePrioritizer is not able to add the 
> `designated node` at the head of the electionNodes list
> Steps to reproduce:
>  # Setup solrcloud with 5 nodes, including one designated overseer
>  # Restart overseer container
> Attached relevant logs






[jira] [Updated] (SOLR-13410) Designated overseer not able to become overseer quickly

2019-04-17 Thread Kesharee Nandan Vishwakarma (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kesharee Nandan Vishwakarma updated SOLR-13410:
---
Description: 
Whenever a designated overseer node is restarted and the overseer role is added 
back, if the designated node is not the overseer leader the following steps 
take place:
 1. One by one, nodes from the electionNodes list become leader and ask the 
designated node `to come join election at head`.
 2. The current overseer node fires a Quit command and exits the Overseer loop.
 3. The next node from the `Overseer Loop` becomes leader; steps 1 and 2 
repeat until the designated overseer node becomes the leader.
 The problem with the above flow: OverseerNodePrioritizer is not able to add 
the `designated node` at the head of the electionNodes list.

Steps to reproduce:
 # Set up SolrCloud with 5 nodes, including one designated overseer
 # Restart the overseer container

Relevant logs are attached

  was:
Whenever a designated overseer node is restarted and overseer role is added 
back if a designated node is not overseer leader following tasks take place:
1. one by one nodes from electionNodes list become leader and ask designated 
node `to come join election at head`
2. current overseer node Fires Quit command and exits from Overseer Loop
3. Next node from `Overseer Loop` becomes leader repeats steps 1,2 until 
designated overseer node becomes the leader
Problem with above flow: OverseerNodePrioritizer is not able to add `designated 
node` at the head of electionNodes list


> Designated overseer not able to become overseer quickly 
> 
>
> Key: SOLR-13410
> URL: https://issues.apache.org/jira/browse/SOLR-13410
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Kesharee Nandan Vishwakarma
>Priority: Major
>
> Whenever a designated overseer node is restarted and the overseer role is added 
> back, if the designated node is not the overseer leader the following steps 
> take place:
>  1. One by one, nodes from the electionNodes list become leader and ask the 
> designated node `to come join election at head`.
>  2. The current overseer node fires a Quit command and exits the Overseer loop.
>  3. The next node from the `Overseer Loop` becomes leader; steps 1 and 2 
> repeat until the designated overseer node becomes the leader.
>  The problem with the above flow: OverseerNodePrioritizer is not able to add 
> the `designated node` at the head of the electionNodes list.
> Steps to reproduce:
>  # Set up SolrCloud with 5 nodes, including one designated overseer
>  # Restart the overseer container
> Relevant logs are attached






[jira] [Assigned] (SOLR-13410) Designated overseer not able to become overseer quickly

2019-04-17 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-13410:
---

Assignee: Ishan Chattopadhyaya

> Designated overseer not able to become overseer quickly 
> 
>
> Key: SOLR-13410
> URL: https://issues.apache.org/jira/browse/SOLR-13410
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Kesharee Nandan Vishwakarma
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>
> Whenever a designated overseer node is restarted and the overseer role is added 
> back, if the designated node is not the overseer leader the following steps 
> take place:
>  1. One by one, nodes from the electionNodes list become leader and ask the 
> designated node `to come join election at head`.
>  2. The current overseer node fires a Quit command and exits the Overseer loop.
>  3. The next node from the `Overseer Loop` becomes leader; steps 1 and 2 
> repeat until the designated overseer node becomes the leader.
>  The problem with the above flow: OverseerNodePrioritizer is not able to add 
> the `designated node` at the head of the electionNodes list.
> Steps to reproduce:
>  # Set up SolrCloud with 5 nodes, including one designated overseer
>  # Restart the overseer container
> Relevant logs are attached






[jira] [Created] (SOLR-13410) Designated overseer not able to become overseer quickly

2019-04-17 Thread Kesharee Nandan Vishwakarma (JIRA)
Kesharee Nandan Vishwakarma created SOLR-13410:
--

 Summary: Designated overseer not able to become overseer quickly 
 Key: SOLR-13410
 URL: https://issues.apache.org/jira/browse/SOLR-13410
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: master (9.0)
Reporter: Kesharee Nandan Vishwakarma


Whenever a designated overseer node is restarted and the overseer role is added 
back, if the designated node is not the overseer leader the following steps 
take place:
1. One by one, nodes from the electionNodes list become leader and ask the 
designated node `to come join election at head`.
2. The current overseer node fires a Quit command and exits the Overseer loop.
3. The next node from the `Overseer Loop` becomes leader; steps 1 and 2 
repeat until the designated overseer node becomes the leader.
The problem with the above flow: OverseerNodePrioritizer is not able to add 
the `designated node` at the head of the electionNodes list.






Re: Lucene/Solr 8.1

2019-04-17 Thread Ishan Chattopadhyaya
+1 for an 8.1 soon. I can volunteer as RM.
Does 30 April (about two weeks from now) sound reasonable for cutting the branch?

On Wed, Apr 17, 2019 at 1:14 PM Ignacio Vera  wrote:
>
> Hi all,
>
> Feature freeze for 8.0 was a long time ago (January 29th) and there is 
> interesting stuff that has not been released yet. In Lucene in particular 
> there is the new BKD tree strategy for segment merging, which provides a 
> significant performance boost for high dimensions, the new Luke module, and 
> the new query visitor API, to name a few. I see that in Solr there are also 
> quite a few unreleased changes.
>
> I might not be able to be the release manager this time, as I will be on 
> holiday for the next few weeks, but I would like to gauge the community's 
> interest in a new release soonish.
>
> Cheers,
>
> Ignacio
>




[jira] [Created] (LUCENE-8767) DisjunctionMaxQuery do not work well when multiple search term+synonyms+mm+query fields with different fieldType.

2019-04-17 Thread ZhongHua Wu (JIRA)
ZhongHua Wu created LUCENE-8767:
---

 Summary: DisjunctionMaxQuery do not work well when multiple search 
term+synonyms+mm+query fields with different fieldType.
 Key: LUCENE-8767
 URL: https://issues.apache.org/jira/browse/LUCENE-8767
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/queryparser
Affects Versions: 7.3
 Environment: Solr: 7.3.1

Backup:

(The fieldType definitions for the name and partNumber fields were included 
here as XML, but the markup was stripped by the mail archive.)
Reporter: ZhongHua Wu


When the fields listed in the query fields (qf) come from different fieldTypes, 
especially one using KeywordTokenizerFactory and another using 
WhitespaceTokenizerFactory, the generated parsed query does not honor synonyms 
and mm, which hits incorrect documents. The details:
 # We use Solr 7.3.1
 # Our qf=name^10 partNumber_ntk, where the fieldType of name uses 
solr.WhitespaceTokenizerFactory and solr.WordDelimiterFilterFactory, while 
partNumber_ntk is not tokenized and uses solr.KeywordTokenizerFactory
 # mm=2<3 4<5 6<-80%25
 # The search term is versatil sundress, and 'versatile' and 'testing' are 
synonyms; we have documents named "Versatil Empire Waist Sundress" which 
should be hit, but are not.
 # The same query works fine on Solr 5.5.4; it does not work on Solr 7.3.1.

q=(Versatil%20testing)%20sundress&df=name&defType=edismax&mm=2<3 4<5 
6<-80%25&qf=name^10%20partNumber_ntk&debugQuery=true&wt=xml&rows=100

parsedQuery:

+(DisjunctionMaxQuery((((name:versatil name:test)~2)^10.0 | 
partNumber_ntk:versatil testing)) DisjunctionMaxQuery(((name:sundress)^10.0 | 
partNumber_ntk:sundress)))~2

which seems to incorrectly parse name into: name:versatil name:test

If I change the query fields to the same fieldType, for example 
shortDescription, which has the same fieldType as name:

q=(Versatil%20testing)%20sundress=name=edismax=2<3 4<5 
6<-80%25=name^10%20shortDescription=true=xml=100

ParsedQuery:

+((DisjunctionMaxQuery(((name:versatil)^10.0 | shortDescription:versatil)) 
DisjunctionMaxQuery(((name:test)^10.0 | shortDescription:test))) 
DisjunctionMaxQuery(((name:sundress)^10.0 | shortDescription:sundress)))~2

which hits correctly.
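The mismatch above can be illustrated with a dependency-free sketch (the helper names below are hypothetical, not Lucene's actual analyzers): a whitespace-style field produces one clause per term, while a keyword-style field produces a single clause for the whole input, so the per-term clause structure edismax builds for one field cannot line up with the other when applying mm.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class TokenizerMismatchSketch {

    // Whitespace-style analysis (like the name field): one token per word.
    static List<String> whitespaceTokenize(String text) {
        return Arrays.asList(text.trim().split("\\s+"));
    }

    // Keyword-style analysis (like partNumber_ntk): the whole input is one token.
    static List<String> keywordTokenize(String text) {
        return Collections.singletonList(text);
    }

    public static void main(String[] args) {
        String query = "versatil sundress";
        // name-like field: two clauses, so mm=2 can require both terms
        System.out.println(whitespaceTokenize(query)); // [versatil, sundress]
        // partNumber_ntk-like field: a single clause containing the whole string
        System.out.println(keywordTokenize(query));    // [versatil sundress]
    }
}
```

Since the two fields disagree on how many clauses the query has, the parser cannot build one DisjunctionMaxQuery per term across both fields, which is consistent with the broken parsedQuery shown above.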

Could someone check this or suggest a quick workaround? It currently has a big 
impact on our customers.

Thanks in advance! (The backup information that followed was stripped by the 
mail archive.)





[jira] [Commented] (LUCENE-8766) Add Luwak as a lucene module

2019-04-17 Thread Daniel Collins (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819816#comment-16819816
 ] 

Daniel Collins commented on LUCENE-8766:


+1 from me; we use Luwak in several places within my organisation, and keeping 
it in sync with Lucene is a challenge.

> Add Luwak as a lucene module
> 
>
> Key: LUCENE-8766
> URL: https://issues.apache.org/jira/browse/LUCENE-8766
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Luwak [https://github.com/flaxsearch/luwak] is a stored query engine, 
> allowing users to efficiently match a stream of documents against a large set 
> of queries.  Its only dependency is Lucene, and most recent updates have 
> just been upgrading the version of Lucene against which it can run.
> It's a generally useful piece of software, and is already licensed as Apache 2.  
> The maintainers would like to donate it to the ASF and merge it into the 
> lucene-solr project.
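The stored-query ("reverse search") idea can be sketched as follows (toy class and method names, not Luwak's API; real Luwak additionally pre-selects candidate queries per document instead of executing all of them):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class StoredQuerySketch {
    // Registered queries: query id -> set of terms that must all appear.
    private final Map<String, Set<String>> queries = new HashMap<>();

    public void register(String queryId, String... requiredTerms) {
        queries.put(queryId, new HashSet<>(Arrays.asList(requiredTerms)));
    }

    // Match one incoming document against every stored query.
    public List<String> match(String document) {
        Set<String> docTerms = new HashSet<>(Arrays.asList(document.toLowerCase().split("\\s+")));
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : queries.entrySet()) {
            if (docTerms.containsAll(e.getValue())) {
                hits.add(e.getKey());
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        StoredQuerySketch monitor = new StoredQuerySketch();
        monitor.register("q-lucene", "lucene");
        monitor.register("q-solr-cloud", "solr", "cloud");
        System.out.println(monitor.match("Solr cloud release notes")); // [q-solr-cloud]
    }
}
```

This is the inverse of a normal search index: queries are indexed up front and documents arrive one at a time, which is what makes the engine useful for alerting and document-routing workloads.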






Lucene/Solr 8.1

2019-04-17 Thread Ignacio Vera
Hi all,

Feature freeze for 8.0 was a long time ago (January 29th) and there is
interesting stuff that has not been released yet. In Lucene in particular
there is the new BKD tree strategy for segment merging, which provides a
significant performance boost for high dimensions, the new Luke module, and
the new query visitor API, to name a few. I see that in Solr there are also
quite a few unreleased changes.

I might not be able to be the release manager this time, as I will be on
holiday for the next few weeks, but I would like to gauge the community's
interest in a new release soonish.

Cheers,

Ignacio


[jira] [Commented] (SOLR-13392) Unable to start prometheus-exporter in 7x branch

2019-04-17 Thread Karl Stoney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819787#comment-16819787
 ] 

Karl Stoney commented on SOLR-13392:


Patch works in 7_7

> Unable to start prometheus-exporter in 7x branch
> 
>
> Key: SOLR-13392
> URL: https://issues.apache.org/jira/browse/SOLR-13392
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.7.2
>Reporter: Karl Stoney
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-13392.patch
>
>
> Hi, 
> The prometheus-exporter doesn't start on branch 7x at commit 
> 7dfe1c093b65f77407c2df4c2a1120a213aef166; it does work at 
> 26b498d0a9d25626a15e25b0cf97c8339114263a, so something changed between 
> those two commits to cause this.
> I am presuming it is 
> https://github.com/apache/lucene-solr/commit/e1eeafb5dc077976646b06f4cba4d77534963fa9#diff-3f7b27f0f087632739effa2aa508d77eR34
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/lucene/util/IOUtils
> at 
> org.apache.solr.core.SolrResourceLoader.close(SolrResourceLoader.java:881)
> at 
> org.apache.solr.prometheus.exporter.SolrExporter.loadMetricsConfiguration(SolrExporter.java:221)
> at 
> org.apache.solr.prometheus.exporter.SolrExporter.main(SolrExporter.java:205)
> Caused by: java.lang.ClassNotFoundException: org.apache.lucene.util.IOUtils
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 3 more
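The ClassNotFoundException for org.apache.lucene.util.IOUtils suggests that lucene-core is absent from the exporter's runtime classpath. That failure mode can be reproduced in a minimal, self-contained way (this is purely illustrative, not the exporter's own class loading): ask a class loader whose classpath does not contain the lucene-core jar to resolve the class.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class MissingJarSketch {
    public static void main(String[] args) throws Exception {
        // Empty URL list and a null parent (bootstrap only): application
        // classes such as Lucene's cannot be resolved, just as when a jar
        // is missing from the exporter's classpath.
        try (URLClassLoader isolated = new URLClassLoader(new URL[0], null)) {
            try {
                isolated.loadClass("org.apache.lucene.util.IOUtils");
                System.out.println("loaded");
            } catch (ClassNotFoundException expected) {
                System.out.println("ClassNotFoundException: " + expected.getMessage());
            }
        }
    }
}
```

In the real exporter the fix is to ensure the lucene-core jar is back on the start script's classpath (which is what the attached patch addresses), rather than anything in the class-loading code itself.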






[jira] [Commented] (SOLR-13392) Unable to start prometheus-exporter in 7x branch

2019-04-17 Thread Karl Stoney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16819772#comment-16819772
 ] 

Karl Stoney commented on SOLR-13392:


Sounds nasty.  I'll monkeypatch my build with your patch for now to work around 
the issue.
Ta

> Unable to start prometheus-exporter in 7x branch
> 
>
> Key: SOLR-13392
> URL: https://issues.apache.org/jira/browse/SOLR-13392
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.7.2
>Reporter: Karl Stoney
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-13392.patch
>
>
> Hi, 
> The prometheus-exporter doesn't start on branch 7x at commit 
> 7dfe1c093b65f77407c2df4c2a1120a213aef166; it does work at 
> 26b498d0a9d25626a15e25b0cf97c8339114263a, so something changed between 
> those two commits to cause this.
> I am presuming it is 
> https://github.com/apache/lucene-solr/commit/e1eeafb5dc077976646b06f4cba4d77534963fa9#diff-3f7b27f0f087632739effa2aa508d77eR34
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/lucene/util/IOUtils
> at 
> org.apache.solr.core.SolrResourceLoader.close(SolrResourceLoader.java:881)
> at 
> org.apache.solr.prometheus.exporter.SolrExporter.loadMetricsConfiguration(SolrExporter.java:221)
> at 
> org.apache.solr.prometheus.exporter.SolrExporter.main(SolrExporter.java:205)
> Caused by: java.lang.ClassNotFoundException: org.apache.lucene.util.IOUtils
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 3 more


