Re: Lucene/Solr 7.2

2017-12-08 Thread Varun Thacker
This fails for me every single time :
https://issues.apache.org/jira/browse/SOLR-11740

Can someone with more knowledge of the bin/solr script confirm whether this
affects only the "-e cloud" command or whether it is more widespread?
That might help determine if we want to fix this before the release.

On Fri, Dec 8, 2017 at 10:41 AM, Adrien Grand  wrote:

> FYI we are backporting SOLR-11423 to 7.2 so I'll build a
> RC on Monday (assuming it will have been backported by then).
>
> On Thu, Dec 7, 2017 at 20:17, Adrien Grand wrote:
>
>> OK, it looks like all the changes we wanted to include are now in?
>> Please let me know if there is still something left to include in 7.2
>> before building a RC.
>>
>> I noticed SOLR-11423 is in a weird state: it is included in the 7.1 changelog
>> but has only been committed to master. Did we forget to backport it?
>>
>> On Wed, Dec 6, 2017 at 21:13, Andrzej Białecki <andrzej.biale...@lucidworks.com> wrote:
>>
>>> On 6 Dec 2017, at 18:45, Andrzej Białecki <andrzej.biale...@lucidworks.com> wrote:
>>>
>>> I attached the patch to SOLR-11714, which disables the ‘searchRate’
>>> trigger - if there are no objections, I'll commit it shortly to branch_7_2.
>>>
>>>
>>>
>>> This has been committed now to branch_7_2 and I don’t have any other
>>> open issues for 7.2. Thanks!
>>>
>>>
>>>
>>> On 6 Dec 2017, at 15:51, Andrzej Białecki <andrzej.biale...@lucidworks.com> wrote:
>>>
>>>
>>> On 6 Dec 2017, at 15:35, Andrzej Białecki <andrzej.biale...@lucidworks.com> wrote:
>>>
>>> SOLR-11458 is committed and resolved - thanks for your patience.
>>>
>>>
>>>
>>> Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new
>>> feature (searchRate autoscaling trigger). It’s probably best to disable
>>> this feature in 7.2 rather than releasing a broken version, so I’d like to
>>> commit a patch that disables it (plus a note in CHANGES.txt).
>>>
>>>
>>>
>>>
>>> On 6 Dec 2017, at 14:02, Adrien Grand  wrote:
>>>
>>> Thanks for the heads up, Anshum.
>>>
>>> This leaves us with only SOLR-11458 to wait for before building a RC
>>> (which might be ready but just not marked as resolved).
>>>
>>>
>>>
>>> On Wed, Dec 6, 2017 at 13:47, Ishan Chattopadhyaya <ichattopadhy...@gmail.com> wrote:
>>>
 Hi Adrien,
 I'm planning to skip SOLR-11624 for this release (as per my last
 comment https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121). If
 someone has an objection, please let me know; otherwise, please feel free
 to proceed with the release.
 I'll continue working on it anyway, and shall try to have it ready for
 the next release.
 Thanks,
 Ishan

 On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  wrote:

> FYI I created the new branch for 7.2, so you will have to backport to
> this branch. No hurry though, I mostly created the branch so that it's 
> fine
> to cherry-pick changes that may wait for 7.3 to be released.
>
> On Wed, Dec 6, 2017 at 08:53, Adrien Grand wrote:
>
>> Sorry to hear that, Ishan; I hope you are doing better now. +1 to get
>> SOLR-11624 in.
>>
>> On Wed, Dec 6, 2017 at 07:57, Ishan Chattopadhyaya <ichattopadhy...@gmail.com> wrote:
>>
>>> I was a bit unwell over the weekend and yesterday; I'm working on a
>>> very targeted fix for SOLR-11624 right now; I expect it to take another 
>>> 5-6
>>> hours.
>>> Is that fine with you, Adrien? If not, please go ahead with the
>>> release, and I'll volunteer later for a bugfix release for this after 
>>> 7.2
>>> is out.
>>>
>>> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand 
>>> wrote:
>>>
 Fine with me.

 On Tue, Dec 5, 2017 at 22:34, Varun Thacker wrote:

> Hi Adrien,
>
> I'd like to commit SOLR-11590. The issue had a patch a couple of
> weeks ago and has been reviewed but never got committed. I've run all the
> tests twice as well to verify.
>
> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki <
> andrzej.biale...@lucidworks.com> wrote:
>
>>
>> On 5 Dec 2017, at 18:05, Adrien Grand  wrote:
>>
>> Andrzej, ok to merge since it is a bug fix. Since we're close to
>> the RC build, maybe try to get someone familiar with the code to 
>> review it
>> to make sure it doesn't have unexpected side-effects?
>>
>>
>> Sure I’ll do this - thanks!
>>
>>
>> On Tue, Dec 5, 2017 at 17:57, Andrzej Białecki <
>> 

[jira] [Commented] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-08 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284655#comment-16284655
 ] 

Varun Thacker commented on SOLR-11740:
--

This works:

{code}
[branch_7_2] ~/apache-work/lucene-solr/solr$ ./bin/solr  start -c
Waiting up to 180 seconds to see Solr running on port 8983 [\]  
Started Solr server on port 8983 (pid=43788). Happy searching!

[branch_7_2] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
Sending stop command to Solr running on port 8983 ... waiting up to 180 seconds 
to allow Jetty process 43788 to stop gracefully.

{code}

> bin/solr stop command always throws Connection refused
> --
>
> Key: SOLR-11740
> URL: https://issues.apache.org/jira/browse/SOLR-11740
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Start solr using {{./bin/solr start -e cloud -noprompt}} and then try 
> stopping it. I ran into this problem every time I stopped Solr on master. 
> I'm using Java 9, and it works fine on Solr 7.1 (I haven't checked on the 7_2 
> branch yet).
> [master] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
> Sending stop command to Solr running on port 7574 ... waiting up to 180 
> seconds to allow Jetty process 40360 to stop gracefully.
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 40263 to stop gracefully.
> java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at java.net.Socket.connect(Socket.java:538)
>   at java.net.Socket.(Socket.java:434)
>   at java.net.Socket.(Socket.java:244)
>   at org.eclipse.jetty.start.Main.stop(Main.java:535)
>   at org.eclipse.jetty.start.Main.stop(Main.java:511)
>   at org.eclipse.jetty.start.Main.doStop(Main.java:499)
>   at org.eclipse.jetty.start.Main.start(Main.java:404)
>   at org.eclipse.jetty.start.Main.main(Main.java:76)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-08 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284654#comment-16284654
 ] 

Varun Thacker commented on SOLR-11740:
--

This reproduces on branch_7_2 as well, unfortunately!

{code}
[branch_7_2] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
Sending stop command to Solr running on port 7574 ... waiting up to 180 seconds 
to allow Jetty process 42384 to stop gracefully.
Sending stop command to Solr running on port 8983 ... waiting up to 180 seconds 
to allow Jetty process 42285 to stop gracefully.
java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:400)
at 
java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:243)
at 
java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:225)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:402)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/java.net.Socket.connect(Socket.java:540)
at java.base/java.net.Socket.(Socket.java:436)
at java.base/java.net.Socket.(Socket.java:246)
at org.eclipse.jetty.start.Main.stop(Main.java:535)
at org.eclipse.jetty.start.Main.stop(Main.java:511)
at org.eclipse.jetty.start.Main.doStop(Main.java:499)
at org.eclipse.jetty.start.Main.start(Main.java:404)
at org.eclipse.jetty.start.Main.main(Main.java:76)

Usage: java -jar start.jar [options] [properties] [configs]
   java -jar start.jar --help  # for more information
 [\]  

{code}

The spinner keeps moving for minutes and nothing happens. 

I can reproduce it 100% of the time with both Java 8 and Java 9.

> bin/solr stop command always throws Connection refused
> --
>
> Key: SOLR-11740
> URL: https://issues.apache.org/jira/browse/SOLR-11740
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Start solr using {{./bin/solr start -e cloud -noprompt}} and then try 
> stopping it. I ran into this problem every time I stopped Solr on master. 
> I'm using Java 9, and it works fine on Solr 7.1 (I haven't checked on the 7_2 
> branch yet).
> [master] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
> Sending stop command to Solr running on port 7574 ... waiting up to 180 
> seconds to allow Jetty process 40360 to stop gracefully.
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 40263 to stop gracefully.
> java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at java.net.Socket.connect(Socket.java:538)
>   at java.net.Socket.(Socket.java:434)
>   at java.net.Socket.(Socket.java:244)
>   at org.eclipse.jetty.start.Main.stop(Main.java:535)
>   at org.eclipse.jetty.start.Main.stop(Main.java:511)
>   at org.eclipse.jetty.start.Main.doStop(Main.java:499)
>   at org.eclipse.jetty.start.Main.start(Main.java:404)
>   at org.eclipse.jetty.start.Main.main(Main.java:76)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-08 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-11740:


 Summary: bin/solr stop command always throws Connection refused
 Key: SOLR-11740
 URL: https://issues.apache.org/jira/browse/SOLR-11740
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Start solr using {{./bin/solr start -e cloud -noprompt}} and then try stopping 
it. I ran into this problem every time I stopped Solr on master. I'm using 
Java 9, and it works fine on Solr 7.1 (I haven't checked on the 7_2 branch yet).

[master] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
Sending stop command to Solr running on port 7574 ... waiting up to 180 seconds 
to allow Jetty process 40360 to stop gracefully.
Sending stop command to Solr running on port 8983 ... waiting up to 180 seconds 
to allow Jetty process 40263 to stop gracefully.
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at java.net.Socket.connect(Socket.java:538)
at java.net.Socket.(Socket.java:434)
at java.net.Socket.(Socket.java:244)
at org.eclipse.jetty.start.Main.stop(Main.java:535)
at org.eclipse.jetty.start.Main.stop(Main.java:511)
at org.eclipse.jetty.start.Main.doStop(Main.java:499)
at org.eclipse.jetty.start.Main.start(Main.java:404)
at org.eclipse.jetty.start.Main.main(Main.java:76)




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11661) Race condition between core creation thread and recovery request from leader causes inconsistent view of documents

2017-12-08 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284644#comment-16284644
 ] 

Cao Manh Dat commented on SOLR-11661:
-

This bug relates to HDFS lease recovery: the tlog files of a replica 
(core_node7 in this case) get deleted and then recovered when a new collection 
of the same name is created.

[~markrmil...@gmail.com]: for a newly created core, should we skip lease 
recovery?


> Race condition between core creation thread and recovery request from leader 
> causes inconsistent view of documents
> --
>
> Key: SOLR-11661
> URL: https://issues.apache.org/jira/browse/SOLR-11661
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 7.2, master (8.0)
>
> Attachments: 11458-2-MoveReplicaHDFSTest-log.txt
>
>
> While testing SOLR-11458, [~ab] ran into an interesting failure which 
> resulted in different document counts between leader and replica. The test is 
> MoveReplicaHDFSTest on the jira/solr-11458-2 branch.
> The failure is rare but reproducible on beasting:
> {code}
> reproduce with: ant test  -Dtestcase=MoveReplicaHDFSTest 
> -Dtests.method=testNormalFailedMove -Dtests.seed=161856CB543CD71C 
> -Dtests.slow=true -Dtests.locale=ar-SA -Dtests.timezone=US/Michigan 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 14.2s | MoveReplicaHDFSTest.testNormalFailedMove <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<100> but 
> was:<56>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([161856CB543CD71C:31134983787E4905]:0)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:305)
>[junit4]>  at 
> org.apache.solr.cloud.MoveReplicaHDFSTest.testNormalFailedMove(MoveReplicaHDFSTest.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.2-Linux (64bit/jdk-10-ea+32) - Build # 34 - Still Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/34/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamExpressionTest

Error Message:
Error from server at https://127.0.0.1:41575/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41575/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([43C9663BFC7AC7D9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.setupCluster(StreamExpressionTest.java:102)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:36425/b/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36425/b/collection1
at 
__randomizedtesting.SeedInfo.seed([944BCF759B9135B1:1C1FF0AF356D5849]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at 

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 962 - Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/962/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=10773, name=searcherExecutor-3425-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=10773, name=searcherExecutor-3425-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([124DE0019178186D]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=10773, name=searcherExecutor-3425-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=10773, name=searcherExecutor-3425-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([124DE0019178186D]:0)


FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([124DE0019178186D:CD2D41D05A5F7BC8]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:901)
at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847)
at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21056 - Still Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21056/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:
Error from server at http://127.0.0.1:36171/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36171/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([1F36C10EB599A010]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:132)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:45383/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:40533/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:45383/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:40533/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([1F36C10EB599A010:B5FB12FC024A75C0]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2214 - Still unstable

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2214/

2 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5E0054FCF1DA23FD:D6546B265F264E05]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:112)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk-9.0.1) - Build # 33 - Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/33/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index 0 out-of-bounds for length 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index 0 out-of-bounds for length 0
at 
__randomizedtesting.SeedInfo.seed([D752D1547CDF625B:C31A8A015FD8DF45]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:440)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 277 - Still unstable

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/277/

11 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddTooManyReplicas

Error Message:
Could not load collection from ZK: TooManyReplicasInSeveralFlavors

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
TooManyReplicasInSeveralFlavors
at 
__randomizedtesting.SeedInfo.seed([EDC45B980F445A0F:60962F4041A265BE]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1123)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:648)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:110)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:247)
at 
org.apache.solr.cloud.CollectionTooManyReplicasTest.testAddTooManyReplicas(CollectionTooManyReplicasTest.java:91)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11711) Fix minCount bug in distributed pivot & field facets

2017-12-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284475#comment-16284475
 ] 

Hoss Man commented on SOLR-11711:
-

bq. What are your thoughts to backporting this fix to 6x and 5x?

At this point the only thing that _might_ get backported/released on 5x would 
be a heinous security issue -- even for 6x I can't imagine any sort of 
backporting/releasing for non-security-related bugs. (The bar gets much higher 
as the release branch gets older, because the type of user still 
using those older versions tends to be very concerned about the risk of 
unnecessary changes for bugs they may not have even encountered.)

I'm not actually clear on why you classified this as a "Bug" and updated the 
summary to say "Fix minCount bug"?

AFAICT, from an end-user standpoint, this only improves efficiency ... I 
don't see any way that the "refinement candidate selection logic had a bug" you 
mentioned would have resulted in incorrect results being returned to clients -- 
it simply meant that Solr was doing more work than needed to refine counts that 
it should have recognized in advance were definitely not viable candidates for 
the final results.

This fix essentially seems tantamount to "removing unnecessary computation" -- 
which would be classified as an optimization, not a bug fix. (In which case I 
*definitely* don't think it makes sense to backport this to 6x.)

Am I misunderstanding your changes? Is there some situation in which the 
current code can produce incorrect results? If so, we should *definitely* be 
adding a test case for that to guard against regression.
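
For anyone following along, here is a minimal SolrJ sketch of the kind of distributed pivot-facet request this mincount/limit discussion is about. The collection URL and the field names (cat, state, city) are made up purely for illustration:

{code}
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.PivotField;
import org.apache.solr.client.solrj.response.QueryResponse;

public class PivotFacetExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical collection URL, for illustration only.
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(0);
      q.setFacet(true);
      // A three-level pivot, as in the limit^(# of pivots) scenario in the issue below.
      q.addFacetPivotField("cat,state,city");
      q.setFacetLimit(1000);
      // facet.pivot.mincount is the user-facing parameter (default 1); the issue is
      // about the per-shard mincount Solr chooses internally when it fans this
      // request out to the shards.
      q.set("facet.pivot.mincount", 1);

      QueryResponse rsp = client.query(q);
      // Pivot results are keyed by the pivot field spec string.
      List<PivotField> pivots = rsp.getFacetPivot().get("cat,state,city");
      for (PivotField pf : pivots) {
        System.out.println(pf.getField() + "=" + pf.getValue() + " (" + pf.getCount() + ")");
      }
    }
  }
}
{code}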

> Fix minCount bug in distributed pivot & field facets
> 
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> changed the code to always use the feature. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and a limit > 0 specified. When a shard replied with fewer than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where 

[jira] [Resolved] (SOLR-11293) HttpPartitionTest fails often

2017-12-08 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-11293.
-
Resolution: Fixed

> HttpPartitionTest fails often
> -
>
> Key: SOLR-11293
> URL: https://issues.apache.org/jira/browse/SOLR-11293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11293.patch, SOLR-11293.patch, SOLR-11293.patch, 
> SOLR-11293.patch
>
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4140/testReport/org.apache.solr.cloud/HttpPartitionTest/test/
> {code}
> Error Message
> Doc with id=1 not found in http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: 
> Path not found: /id; rsp={doc=null}
> Stacktrace
> java.lang.AssertionError: Doc with id=1 not found in 
> http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: Path not found: /id; 
> rsp={doc=null}
>   at 
> __randomizedtesting.SeedInfo.seed([ACF841744A332569:24AC7EAEE4CF4891]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11293) HttpPartitionTest fails often

2017-12-08 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284424#comment-16284424
 ] 

Cassandra Targett commented on SOLR-11293:
--

This was reopened but the original fix is supposed to be in 7.2. [~caomanhdat], 
have the issues that caused that action been resolved? If so, can this issue be 
resolved? If not, what is remaining here? 

> HttpPartitionTest fails often
> -
>
> Key: SOLR-11293
> URL: https://issues.apache.org/jira/browse/SOLR-11293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Cao Manh Dat
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11293.patch, SOLR-11293.patch, SOLR-11293.patch, 
> SOLR-11293.patch
>
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4140/testReport/org.apache.solr.cloud/HttpPartitionTest/test/
> {code}
> Error Message
> Doc with id=1 not found in http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: 
> Path not found: /id; rsp={doc=null}
> Stacktrace
> java.lang.AssertionError: Doc with id=1 not found in 
> http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: Path not found: /id; 
> rsp={doc=null}
>   at 
> __randomizedtesting.SeedInfo.seed([ACF841744A332569:24AC7EAEE4CF4891]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21055 - Still Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21055/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x3 come up within 9 ms! 
ClusterState: {   "collMinRf_1x3":{ "pullReplicas":"0", 
"replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node4":{ "core":"collMinRf_1x3_shard1_replica_n2",  
   "base_url":"http://127.0.0.1:42407/bwdy/u;, 
"node_name":"127.0.0.1:42407_bwdy%2Fu", "state":"active",   
  "type":"NRT"},   "core_node5":{ 
"core":"collMinRf_1x3_shard1_replica_n1", 
"base_url":"http://127.0.0.1:38587/bwdy/u;, 
"node_name":"127.0.0.1:38587_bwdy%2Fu", "state":"active",   
  "type":"NRT"},   "core_node6":{ 
"core":"collMinRf_1x3_shard1_replica_n3", 
"base_url":"http://127.0.0.1:46317/bwdy/u;, 
"node_name":"127.0.0.1:46317_bwdy%2Fu", "state":"active",   
  "type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"3", "tlogReplicas":"0"},   
"collection1":{ "pullReplicas":"0", "replicationFactor":"1", 
"shards":{   "shard1":{ "range":"8000-", 
"state":"active", "replicas":{"core_node44":{ 
"core":"collection1_shard1_replica_n43", 
"base_url":"http://127.0.0.1:35777/bwdy/u;, 
"node_name":"127.0.0.1:35777_bwdy%2Fu", "state":"active",   
  "type":"NRT", "leader":"true"}}},   "shard2":{ 
"range":"0-7fff", "state":"active", "replicas":{   
"core_node42":{ "core":"collection1_shard2_replica_n41",
 "base_url":"http://127.0.0.1:38587/bwdy/u;, 
"node_name":"127.0.0.1:38587_bwdy%2Fu", "state":"active",   
  "type":"NRT", "leader":"true"},   "core_node46":{ 
"core":"collection1_shard2_replica_n45", 
"base_url":"http://127.0.0.1:46317/bwdy/u;, 
"node_name":"127.0.0.1:46317_bwdy%2Fu", "state":"active",   
  "type":"NRT", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"1",   
  "tlogReplicas":"0"},   "c8n_1x3":{ "pullReplicas":"0", 
"replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node4":{ "core":"c8n_1x3_shard1_replica_n2",
 "base_url":"http://127.0.0.1:42407/bwdy/u;, 
"node_name":"127.0.0.1:42407_bwdy%2Fu", "state":"recovering",   
  "type":"NRT"},   "core_node5":{ 
"core":"c8n_1x3_shard1_replica_n1", 
"base_url":"http://127.0.0.1:46317/bwdy/u;, 
"node_name":"127.0.0.1:46317_bwdy%2Fu", "state":"active",   
  "type":"NRT", "leader":"true"},   "core_node6":{  
   "core":"c8n_1x3_shard1_replica_n3", 
"base_url":"http://127.0.0.1:35777/bwdy/u;, 
"node_name":"127.0.0.1:35777_bwdy%2Fu", "state":"recovering",   
  "type":"NRT", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"3",   
  "tlogReplicas":"0"},   "control_collection":{ "pullReplicas":"0", 
"replicationFactor":"1", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", 
"replicas":{"core_node2":{ 
"core":"control_collection_shard1_replica_n1", 
"base_url":"http://127.0.0.1:42407/bwdy/u;, 
"node_name":"127.0.0.1:42407_bwdy%2Fu", "state":"active",   
  "type":"NRT", "leader":"true", 
"router":{"name":"compositeId"}, "maxShardsPerNode":"1", 
"autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x3 
come up within 9 ms! ClusterState: {
  "collMinRf_1x3":{
"pullReplicas":"0",
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{
  "core_node4":{
"core":"collMinRf_1x3_shard1_replica_n2",
"base_url":"http://127.0.0.1:42407/bwdy/u;,
"node_name":"127.0.0.1:42407_bwdy%2Fu",
"state":"active",
"type":"NRT"},
  "core_node5":{
"core":"collMinRf_1x3_shard1_replica_n1",
   

Re: Heads up: Lucene 8 to require positive scores

2017-12-08 Thread Chris Hostetter

Adrien: shouldn't we have a note about this in the "Changes in Runtime 
Behavior" section of lucene/CHANGES.txt and the "Upgrade Notes" section of 
solr/CHANGES.txt for 8.0?


: Date: Wed, 06 Dec 2017 13:25:10 +
: From: Adrien Grand 
: Reply-To: dev@lucene.apache.org
: To: Lucene Dev 
: Subject: Heads up: Lucene 8 to require positive scores
: 
: Hello,
: 
: I just merged a change to the Scorer contract: Scorer.score() must now
: return a positive float, see
: https://issues.apache.org/jira/browse/LUCENE-7996.
: 
: As a side effect, negative boosts are now disallowed.
: 
: Since FunctionScoreQuery and FunctionQuery can't ensure that the value
: source only produces positive values, they return 0 when a negative value
: is produced.
: 
: This might look like an annoying constraint, but this new requirement is
: going to help build new features and optimizations, in particular
: LUCENE-4100, which helps get great speedups for top-k queries sorted by
: score.
: 

-Hoss
http://www.lucidworks.com/
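
A minimal sketch of what the new contract looks like from the caller's side
(assuming the existing FunctionScoreQuery / DoubleValuesSource API; the field
name is made up, and this is an illustration rather than code from the change):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.queries.function.FunctionScoreQuery;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class PositiveScoresSketch {
  // Lucene 8 requires Scorer.score() to return a value >= 0, so negative
  // boosts are disallowed. FunctionScoreQuery cannot prove that an arbitrary
  // value source stays positive, so (per the change above) it returns 0 when
  // the source produces a negative value instead of emitting a negative score.
  static Query boostedByPopularity() {
    Query base = new TermQuery(new Term("body", "lucene"));
    // "popularity" is a hypothetical numeric docvalues field.
    DoubleValuesSource popularity = DoubleValuesSource.fromLongField("popularity");
    return new FunctionScoreQuery(base, popularity);
  }
}
{code}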

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11711) Fix minCount bug in distributed pivot & field facets

2017-12-08 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-11711:
--
Summary: Fix minCount bug in distributed pivot & field facets  (was: 
Improve mincount & limit usage in pivot & field facets)

> Fix minCount bug in distributed pivot & field facets
> 
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. Therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.
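
A rough illustration of the kind of request affected (a hypothetical SolrJ
sketch; collection and field names are made up):

{code}
import org.apache.solr.client.solrj.SolrQuery;

public class PivotFacetRequestSketch {
  // A 3-level pivot facet sorted by count with facet.limit=1000: before this
  // fix, the per-shard requests were rewritten with facet.pivot.mincount=0,
  // so up to 1000^3 pivot buckets could be buffered in memory during
  // refinement, most of them with a count of 0.
  static SolrQuery threeLevelPivot() {
    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    q.setFacet(true);
    q.addFacetPivotField("country,state,city");
    q.setFacetLimit(1000);
    q.set("facet.pivot.mincount", 1);
    return q;
  }
}
{code}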



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11673) ReplicationHandler race-condition between deleting slave index and commit in master

2017-12-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284290#comment-16284290
 ] 

Mikhail Khludnev commented on SOLR-11673:
-

I don't know, but I see that {{skipCommitOnMasterVersionZero}} bypasses it 
successfully. Perhaps just by _updating commit point_, whatever it means.

> ReplicationHandler race-condition between deleting slave index and commit in 
> master
> ---
>
> Key: SOLR-11673
> URL: https://issues.apache.org/jira/browse/SOLR-11673
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-11673-reproducer.patch, 
> SOLR-11673-skipCommitOnMasterVersionZero.patch, SOLR-11673-test-fix.patch, 
> doTestIndexAndConfigReplication-consoleText.txt
>
>
> failure in master [described in 
> SOLR-6228|https://issues.apache.org/jira/browse/SOLR-6228?focusedCommentId=16266007=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16266007].
> {code}
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=C541E9C9CC845BA5 
> -Dtests.slow=true -Dtests.locale=es-BO -Dtests.timezone=Africa/Addis_Ababa 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> [10:13:23.442] ERROR   36.6s | 
> TestReplicationHandler.doTestIndexAndConfigReplication <<<
>> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>>at 
> __randomizedtesting.SeedInfo.seed([C541E9C9CC845BA5:D109B29CEF83E6BB]:0)
>>at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>at java.util.ArrayList.get(ArrayList.java:429)
>>at 
> org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
> {code}
> Easily reproducible in master by beast.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11673) ReplicationHandler race-condition between deleting slave index and commit in master

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284271#comment-16284271
 ] 

Tomás Fernández Löbbe commented on SOLR-11673:
--

bq. I don't see a reason behind that slave commit at all. Could it happen that 
it was necessary some time ago and now it just breaks the test(s) rarely?
Maybe I'm missing something, but doesn't the deleteAll require the commit?

> ReplicationHandler race-condition between deleting slave index and commit in 
> master
> ---
>
> Key: SOLR-11673
> URL: https://issues.apache.org/jira/browse/SOLR-11673
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-11673-reproducer.patch, 
> SOLR-11673-skipCommitOnMasterVersionZero.patch, SOLR-11673-test-fix.patch, 
> doTestIndexAndConfigReplication-consoleText.txt
>
>
> failure in master [described in 
> SOLR-6228|https://issues.apache.org/jira/browse/SOLR-6228?focusedCommentId=16266007=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16266007].
> {code}
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=C541E9C9CC845BA5 
> -Dtests.slow=true -Dtests.locale=es-BO -Dtests.timezone=Africa/Addis_Ababa 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> [10:13:23.442] ERROR   36.6s | 
> TestReplicationHandler.doTestIndexAndConfigReplication <<<
>> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>>at 
> __randomizedtesting.SeedInfo.seed([C541E9C9CC845BA5:D109B29CEF83E6BB]:0)
>>at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>at java.util.ArrayList.get(ArrayList.java:429)
>>at 
> org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
> {code}
> Easily reproducible in master by beast.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11673) ReplicationHandler race-condition between deleting slave index and commit in master

2017-12-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284256#comment-16284256
 ] 

Mikhail Khludnev commented on SOLR-11673:
-

quoting 
https://issues.apache.org/jira/browse/SOLR-11293?focusedCommentId=16182379=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16182379
bq. In case of masterVersion = zero, tlog replica won't do commit, just open 
new searcher.
Here is what the parameter {{skipCommitOnMasterVersionZero}} was introduced 
for. 
But the thing is that it always makes sense (all tests pass). The points 
are:
 - there is a race condition between the initial slave commit and the first 
meaningful master commit; the latter might just be ignored.
 - I don't see a reason behind [that slave 
commit|https://github.com/apache/lucene-solr/blame/master/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java#L458]
 at all. Could it happen that it was necessary some time ago and now it just 
breaks the test(s) rarely?  

Thanks for responding, [~tomasflobbe]
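
A minimal sketch of the behavior {{skipCommitOnMasterVersionZero}} is described
as guarding (illustrative only, not the actual IndexFetcher code; the method
name is made up):

{code}
public class SlaveCommitGuardSketch {
  // When the master reports index version 0 (nothing committed yet), committing
  // on the slave is exactly what can race with the master's first real commit;
  // with the flag enabled the slave skips that commit and only reopens its searcher.
  static boolean shouldCommitOnSlave(long masterVersion, boolean skipCommitOnMasterVersionZero) {
    return !(masterVersion == 0L && skipCommitOnMasterVersionZero);
  }
}
{code}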


> ReplicationHandler race-condition between deleting slave index and commit in 
> master
> ---
>
> Key: SOLR-11673
> URL: https://issues.apache.org/jira/browse/SOLR-11673
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-11673-reproducer.patch, 
> SOLR-11673-skipCommitOnMasterVersionZero.patch, SOLR-11673-test-fix.patch, 
> doTestIndexAndConfigReplication-consoleText.txt
>
>
> failure in master [described in 
> SOLR-6228|https://issues.apache.org/jira/browse/SOLR-6228?focusedCommentId=16266007=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16266007].
> {code}
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=C541E9C9CC845BA5 
> -Dtests.slow=true -Dtests.locale=es-BO -Dtests.timezone=Africa/Addis_Ababa 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> [10:13:23.442] ERROR   36.6s | 
> TestReplicationHandler.doTestIndexAndConfigReplication <<<
>> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>>at 
> __randomizedtesting.SeedInfo.seed([C541E9C9CC845BA5:D109B29CEF83E6BB]:0)
>>at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>at java.util.ArrayList.get(ArrayList.java:429)
>>at 
> org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
> {code}
> Easily reproducible in master by beast.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.2 - Build # 5 - Still unstable

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/5/

4 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([19C524887EEBEE6B:D8D7FDD5DEC5375]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([1CC18AF23ABE7FE:230C995400C1CDEE]:0)
at org.junit.Assert.fail(Assert.java:92)
at 

[jira] [Assigned] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-11739:


Assignee: Tomás Fernández Löbbe

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284147#comment-16284147
 ] 

Tomás Fernández Löbbe commented on SOLR-11739:
--

I thought about three options:
1. Fix the actual race condition, don't let duplicate async IDs through at all.
2. Fix the Overseer so that it checks before running each task if one with the 
same ID was completed before.
3. Let the Overseer re-run the tasks (leave it as it is now). Maybe just add 
logging, or a way to show the error (failed tasks)

#3 can be dangerous, since the task could be something like a DELETEREPLICA. If 
the duplicate ID was caused by some broken retry logic on the client side, Solr 
could be deleting many replicas with what the client thought was a single 
command. 

#2 may be OK; the problem I see with it is that it gives inconsistent 
behavior to the user (sometimes the duplicate IDs are rejected, and sometimes 
not). Also, this would make the Overseer silently drop tasks (yes, we can add 
some sort of failure in the logs but we can’t assume anyone is going to 
notice). 

#1 is the correct fix from the functional standpoint; however, I can’t think of 
a way to really fix the race condition without adding an extra write to 
ZooKeeper, which we’d have to do for every collection request with an asyncID. 
And this is to cover from a client misuse edge case. 

I think (and I discussed this offline with [~anshumg], he thinks this too) #1 
is the way to go. I’ll put up a patch.
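
A sketch of how option #1 could make the duplicate check atomic with the single
extra ZooKeeper write mentioned above (an assumption about the approach, not the
committed patch; the znode path is made up):

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class AsyncIdClaimSketch {
  // Creating a znode per async ID fails atomically if another request already
  // claimed the same ID, so the racy "check the maps, then enqueue" window disappears.
  static void claimAsyncId(ZooKeeper zk, String asyncId) throws KeeperException, InterruptedException {
    try {
      zk.create("/overseer/async-ids/" + asyncId, new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException.NodeExistsException e) {
      // Reject the duplicate instead of letting a second copy of the task run.
      throw new IllegalArgumentException("Duplicate async ID: " + asyncId, e);
    }
  }
}
{code}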

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 960 - Still Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/960/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes

Error Message:
Error from server at http://127.0.0.1:33431/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33431/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([14BB2B640D0472EB:8A8E4F9C2B273E63]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes(TestDeleteCollectionOnDownNodes.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11711) Improve mincount & limit usage in pivot & field facets

2017-12-08 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284136#comment-16284136
 ] 

Houston Putman commented on SOLR-11711:
---

The deprecation fix is in now.

> Improve mincount & limit usage in pivot & field facets
> --
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. Therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11711) Improve mincount & limit usage in pivot & field facets

2017-12-08 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284126#comment-16284126
 ] 

Houston Putman commented on SOLR-11711:
---

Thanks for taking a look and running those tests!

I'll add back in the {{FACET_DISTRIB_MCO}} option and deprecate it for 7x.

What are your thoughts on backporting this fix to 6x and 5x?

> Improve mincount & limit usage in pivot & field facets
> --
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. Therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284120#comment-16284120
 ] 

ASF subversion and git services commented on SOLR-11423:


Commit 3a7f1071644ffe11ee74c96cfd4946204b6544b5 in lucene-solr's branch 
refs/heads/master from [~dragonsinth]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3a7f107 ]

SOLR-11423: fix solr/CHANGES.txt, fixed in 7.2 not 7.1


> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?
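
A rough sketch of the proposed cap (illustrative only; the path, the limit, and
the check-then-create shape are assumptions, not Solr's actual DistributedQueue
code):

{code}
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BoundedOverseerQueueSketch {
  static final String QUEUE_PATH = "/overseer/queue";
  static final int MAX_QUEUE_SIZE = 10000;

  // Refuse to enqueue once the queue znode already has too many children, so a
  // misbehaving client gets an exception instead of growing the queue without bound.
  static String offer(ZooKeeper zk, byte[] data) throws KeeperException, InterruptedException {
    List<String> children = zk.getChildren(QUEUE_PATH, false);
    if (children.size() >= MAX_QUEUE_SIZE) {
      throw new IllegalStateException("Overseer queue is full: " + children.size() + " items");
    }
    // PERSISTENT_SEQUENTIAL preserves queue ordering, as in the usual ZK queue recipe.
    return zk.create(QUEUE_PATH + "/qn-", data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
        CreateMode.PERSISTENT_SEQUENTIAL);
  }
}
{code}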



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284118#comment-16284118
 ] 

ASF subversion and git services commented on SOLR-11423:


Commit 576b0b5d659527a61ebdd80347c6fa14dc0a086f in lucene-solr's branch 
refs/heads/branch_7_2 from [~dragonsinth]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=576b0b5 ]

SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect


> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-11739:
-
Attachment: SOLR-11739.patch

Patch with failing test

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is run and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8085) Extend Ant / Ivy configuration to retrieve sources and javadoc for dependencies in order to be accessible during development

2017-12-08 Thread Emerson Castaneda (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284085#comment-16284085
 ] 

Emerson Castaneda commented on LUCENE-8085:
---

Ivy: Fetching Javadocs and Sources
https://stackoverflow.com/q/12304976/86
https://stackoverflow.com/a/12305424/86

> Extend Ant / Ivy configuration to retrieve sources and javadoc for 
> dependencies in order to be accessible during development
> 
>
> Key: LUCENE-8085
> URL: https://issues.apache.org/jira/browse/LUCENE-8085
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other, general/build
>Affects Versions: 7.1
>Reporter: Emerson Castaneda
>Priority: Minor
> Attachments: lucene_dependencies.PNG
>
>
> It would be useful to set up the required configuration for Ant / Ivy so 
> that javadocs and sources for dependencies are retrieved automatically and you 
> don't have to attach them manually, avoiding this situation:
> !lucene_dependencies.PNG!
> *Start point:*
> Ref: Ivy: How to Retrieve Source Codes of Dependencies 
> https://dzone.com/articles/ivy-how-retrieve-source-codes



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284072#comment-16284072
 ] 

ASF subversion and git services commented on SOLR-11423:


Commit 311105f1b0dad7f20ff6cd55be1d2eb9cd4246d6 in lucene-solr's branch 
refs/heads/branch_7x from [~dragonsinth]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=311105f ]

SOLR-11423: Overseer queue needs a hard cap (maximum size) that clients respect


> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-08 Thread JIRA
Tomás Fernández Löbbe created SOLR-11739:


 Summary: Solr can accept duplicated async IDs
 Key: SOLR-11739
 URL: https://issues.apache.org/jira/browse/SOLR-11739
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Priority: Minor


Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
are sent fast enough, a race condition in Solr will let the repeated IDs 
through. The duplicated task is run and then silently fails to report as 
completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3666) DataImportHandler status command in SolrCloud does not work properly

2017-12-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284048#comment-16284048
 ] 

Shawn Heisey commented on SOLR-3666:


While looking into how to do a few things, I have discovered that the bin/solr 
script itself seems to have bashisms, so I'm not going to worry about Bourne 
shell compatibility.

> DataImportHandler status command in SolrCloud does not work properly 
> -
>
> Key: SOLR-3666
> URL: https://issues.apache.org/jira/browse/SOLR-3666
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Sauvik Sarkar
>
> The dataimport?command=status command does not work correctly when invoked on 
> the node not running the DIH in a SolrCloud configuration.
> The expectation is that no matter which node is importing, any other node 
> should be able to get the import status information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11711) Improve mincount & limit usage in pivot & field facets

2017-12-08 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-11711:
---

Assignee: Hoss Man

> Improve mincount & limit usage in pivot & field facets
> --
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. Therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11711) Improve mincount & limit usage in pivot & field facets

2017-12-08 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284045#comment-16284045
 ] 

Hoss Man commented on SOLR-11711:
-

I think your assessment makes sense (and thank you for all the due diligence 
and back-linking to the relevant comments/jiras!) ... I'm hammering on the 
randomized tests now just to sanity check that we're not missing something 
obvious, but overall I'm +1 to the patch.

My one objection is to the immediate removal of the {{FACET_DISTRIB_MCO}} 
constant from FacetParams.java.  The patch we commit & backport to 7x should 
only deprecate that param and remove its _usage_ in existing code; that way 
users who upgrade will get a deprecation warning when compiling their solrj 
code, but not a compilation failure.  Once the backport is done we can do a 
separate commit to remove it from master.

If you feel inclined to revise your patch/PR to deal with the deprecation, I'll 
aim for committing/backporting Monday barring test failures -- but if you don't 
have time, no worries: it's a trivial thing for me to make myself locally before 
committing
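
A sketch of what the deprecation-first approach could look like (not the actual
FacetParams.java change):

{code}
public interface FacetParams {
  /**
   * @deprecated This parameter is no longer read; the mincount optimization is
   * always applied. Kept so existing SolrJ code still compiles (with a
   * deprecation warning) until the constant is removed in master.
   */
  @Deprecated
  String FACET_DISTRIB_MCO = "facet.distrib.mco";
}
{code}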

> Improve mincount & limit usage in pivot & field facets
> --
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. Therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3666) DataImportHandler status command in SolrCloud does not work properly

2017-12-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284043#comment-16284043
 ] 

Shawn Heisey commented on SOLR-3666:


One of the first things that may require bikeshedding is where exactly to store 
the sticky information for DIH in zookeeper, what exactly needs to be recorded, 
and how it should be written.  If it weren't for the fact that handler names 
usually have forward slashes, I'd prefer znodes, but since I don't think znodes 
can have that character, I'm betting that a JSON state file will be the right 
way to go.  Would URL encoding the handler name make sense, so we can use a 
znode structure?

Looking at the Cloud->Tree info in Solr's admin UI, I notice that znodes have 
timestamps, so I wonder if the "mtime" data could be used for expiration 
purposes if we use pure znodes and something like URL encoding for the handler 
name.

I will need to defer to others about the overseer, watches, and other such 
details.
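
A sketch of the URL-encoding idea (the path layout is an assumption, not an
agreed design):

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class DihStatusZnodeSketch {
  // Encode the handler name so the "/" it usually contains does not create an
  // extra level in the znode tree.
  static String statusZnodePath(String collection, String handlerName)
      throws UnsupportedEncodingException {
    String encoded = URLEncoder.encode(handlerName, "UTF-8"); // "/dataimport" -> "%2Fdataimport"
    return "/dih/status/" + collection + "/" + encoded;
  }
}
{code}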


> DataImportHandler status command in SolrCloud does not work properly 
> -
>
> Key: SOLR-3666
> URL: https://issues.apache.org/jira/browse/SOLR-3666
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - DataImportHandler, SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Sauvik Sarkar
>
> The dataimport?command=status command does not work correctly when invoked on 
> the node not running the DIH in a SolrCloud configuration.
> The expectation is that no matter which node is importing, any other node 
> should be able to get the import status information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284021#comment-16284021
 ] 

Scott Blum edited comment on SOLR-11423 at 12/8/17 6:52 PM:


I'm fixing solr/CHANGES.txt while cherry-picking into 7x and 7_2.  I'll push a 
change to master to move it when that's done.

Note: the solr/CHANGES.txt that shipped with 7.1 does not erroneously report 
this bugfix as being included; it's just master/7x/7_2 that have it wrong.



was (Author: dragonsinth):
I'm fixing solr/CHANGES.txt while cherry-picking into 7x and 7_2.  I'll push a 
change to master to move it when that's done.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284021#comment-16284021
 ] 

Scott Blum commented on SOLR-11423:
---

I'm fixing solr/CHANGES.txt while cherry-picking into 7x and 7_2.  I'll push a 
change to master to move it when that's done.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21054 - Still Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21054/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateChecksCurrentState

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([F833DC9C74B6B277:2EDFB34424EFCF22]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1276)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:448)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testWaitForStateChecksCurrentState(TestCollectionStateWatchers.java:182)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([977FE358436706B1:303B5BFC2EDC1508]:0)

[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284008#comment-16284008
 ] 

Ignacio Vera commented on LUCENE-8086:
--

Thanks for the quick review. I don't think I will manage to make the changes 
over the weekend, so it will be early next week.



> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.2

2017-12-08 Thread Adrien Grand
FYI we are backporting SOLR-11423
 to 7.2 so I'll build a
RC on Monday (assuming it will have been backported by then).

Le jeu. 7 déc. 2017 à 20:17, Adrien Grand  a écrit :

> OK, it looks like all changes that we wanted to be included are now in?
> Please let me know if there is still something left to include in 7.2
> before building a RC.
>
> I noticed SOLR-11423 is in a weird state, it is included in the changelog
> in 7.1 but has only been committed to master. Did we forget to backport it?
>
> Le mer. 6 déc. 2017 à 21:13, Andrzej Białecki <
> andrzej.biale...@lucidworks.com> a écrit :
>
>> On 6 Dec 2017, at 18:45, Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>> I attached the patch to SOLR-11714, which disables the ‘searchRate’
>> trigger - if there are no objections I’ll commit it shortly to branch_7.2.
>>
>>
>>
>> This has been committed now to branch_7_2 and I don’t have any other open
>> issues for 7.2. Thanks!
>>
>>
>>
>> On 6 Dec 2017, at 15:51, Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>>
>> On 6 Dec 2017, at 15:35, Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>> SOLR-11458 is committed and resolved - thanks for the patience.
>>
>>
>>
>> Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new
>> feature (searchRate autoscaling trigger). It’s probably best to disable
>> this feature in 7.2 rather than releasing a broken version, so I’d like to
>> commit a patch that disables it (plus a note in CHANGES.txt).
>>
>>
>>
>>
>> On 6 Dec 2017, at 14:02, Adrien Grand  wrote:
>>
>> Thanks for the heads up, Anshum.
>>
>> This leaves us with only SOLR-11458 to wait for before building a RC
>> (which might be ready but just not marked as resolved).
>>
>>
>>
>> Le mer. 6 déc. 2017 à 13:47, Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> a écrit :
>>
>>> Hi Adrien,
>>> I'm planning to skip SOLR-11624 for this release (as per my last comment
>>> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121).
>>> If someone has an objection, please let me know; otherwise, please feel
>>> free to proceed with the release.
>>> I'll continue working on it anyway, and shall try to have it ready for
>>> the next release.
>>> Thanks,
>>> Ishan
>>>
>>> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  wrote:
>>>
 FYI I created the new branch for 7.2, so you will have to backport to
 this branch. No hurry though, I mostly created the branch so that it's fine
 to cherry-pick changes that may wait for 7.3 to be released.

 Le mer. 6 déc. 2017 à 08:53, Adrien Grand  a écrit :

> Sorry to hear that Ishan, I hope you are doing better now. +1 to get
> SOLR-11624 in.
>
> Le mer. 6 déc. 2017 à 07:57, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> a écrit :
>
>> I was a bit unwell over the weekend and yesterday; I'm working on a
>> very targeted fix for SOLR-11624 right now; I expect it to take another 
>> 5-6
>> hours.
>> Is that fine with you, Adrien? If not, please go ahead with the
>> release, and I'll volunteer later for a bugfix release for this after 7.2
>> is out.
>>
>> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand 
>> wrote:
>>
>>> Fine with me.
>>>
>>> Le mar. 5 déc. 2017 à 22:34, Varun Thacker  a
>>> écrit :
>>>
 Hi Adrien,

 I'd like to commit SOLR-11590 . The issue had a patch couple of
 weeks ago and has been reviewed but never got committed. I've run all 
 the
 tests twice as well to verify.

 On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:

>
> On 5 Dec 2017, at 18:05, Adrien Grand  wrote:
>
> Andrzej, ok to merge since it is a bug fix. Since we're close to
> the RC build, maybe try to get someone familiar with the code to 
> review it
> to make sure it doesn't have unexpected side-effects?
>
>
> Sure I’ll do this - thanks!
>
>
> Le mar. 5 déc. 2017 à 17:57, Andrzej Białecki <
> andrzej.biale...@lucidworks.com> a écrit :
>
>> Adrien,
>>
>> If it’s ok I would also like to merge SOLR-11458, this
>> significantly reduces the chance of accidental data loss when using
>> MoveReplicaCmd.
>>
>> On 5 Dec 2017, at 14:44, Adrien Grand  wrote:
>>
>> Quick update:
>>
>> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been
>> merged, they will be in 7.2.
>>
>> LUCENE-8048 and 

[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16284004#comment-16284004
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844958
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -16,29 +16,15 @@
  */
 package org.apache.lucene.spatial.spatial4j;
 
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.lucene.spatial3d.geom.GeoPath;
-import org.apache.lucene.spatial3d.geom.GeoPolygon;
+import org.junit.Rule;
+import org.junit.Test;
 import org.locationtech.spatial4j.TestLog;
 import org.locationtech.spatial4j.context.SpatialContext;
-import org.locationtech.spatial4j.distance.DistanceUtils;
 import org.locationtech.spatial4j.shape.Circle;
 import org.locationtech.spatial4j.shape.Point;
 import org.locationtech.spatial4j.shape.RectIntersectionTestHelper;
-import org.apache.lucene.spatial3d.geom.LatLonBounds;
--- End diff --

I didn't dare to change it, but that was the idea of the effort; I will 
remove Geo3d. 


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844958
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -16,29 +16,15 @@
  */
 package org.apache.lucene.spatial.spatial4j;
 
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.lucene.spatial3d.geom.GeoPath;
-import org.apache.lucene.spatial3d.geom.GeoPolygon;
+import org.junit.Rule;
+import org.junit.Test;
 import org.locationtech.spatial4j.TestLog;
 import org.locationtech.spatial4j.context.SpatialContext;
-import org.locationtech.spatial4j.distance.DistanceUtils;
 import org.locationtech.spatial4j.shape.Circle;
 import org.locationtech.spatial4j.shape.Point;
 import org.locationtech.spatial4j.shape.RectIntersectionTestHelper;
-import org.apache.lucene.spatial3d.geom.LatLonBounds;
--- End diff --

I didn't dare to change it, but that was the idea of the effort; I will 
remove Geo3d. 


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283981#comment-16283981
 ] 

Tomás Fernández Löbbe edited comment on SOLR-11423 at 12/8/17 6:37 PM:
---

bq. Sounds good to me. So backport to branch_7_2 and branch_7x?
+1. And let's fix CHANGES.txt.
Is this OK [~jpountz]? This is not a bugfix, but as you said, it's in an odd 
state right now, and since it was included in the 7.1 CHANGES I feel we should 
correct it ASAP.


was (Author: tomasflobbe):
bq. Sounds good to me. So backport to branch_7_2 and branch_7x?
+1. And let's fix CHANGES.txt.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283996#comment-16283996
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844229
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -155,16 +107,12 @@ protected Geo3dShape generateRandomShape(Point nearP) 
{
   ulhcPoint = lrhcPoint;
   lrhcPoint = temp;
 }
-final GeoBBox shape = GeoBBoxFactory.makeGeoBBox(planetModel, 
ulhcPoint.getY() * DEGREES_TO_RADIANS,
-lrhcPoint.getY() * DEGREES_TO_RADIANS,
-ulhcPoint.getX() * DEGREES_TO_RADIANS,
-lrhcPoint.getX() * DEGREES_TO_RADIANS);
-return new Geo3dShape(shape, ctx);
+return (Geo3dShape) ctx.getShapeFactory().rect(lrhcPoint, 
ulhcPoint);
--- End diff --

Indeed!


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283991#comment-16283991
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155843938
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -150,10 +176,21 @@ public Rectangle rect(double minX, double maxX, 
double minY, double maxY) {
 
   @Override
   public Circle circle(double x, double y, double distance) {
-GeoCircle circle = GeoCircleFactory.makeGeoCircle(planetModel,
-y * DistanceUtils.DEGREES_TO_RADIANS,
-x * DistanceUtils.DEGREES_TO_RADIANS,
-distance * DistanceUtils.DEGREES_TO_RADIANS);
+GeoCircle circle;
+if (planetModel.ab == planetModel.c) {
--- End diff --

I think spatial3d is a low-level library in that respect, so it shouldn't 
have such a method. Karl Wright has the last word; a comment would be fine.


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283993#comment-16283993
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844074
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -55,6 +55,13 @@
   private SpatialContext context;
   private PlanetModel planetModel;
 
+  /**
+   * Default accuracy for circles when not using the unit sphere.
+   * It is equivalent to 10m on the surface of the earth.
+   */
+  private static double DEFAULT_CIRCLE_ACCURACY = 1.6e-6;
--- End diff --

Indeed!


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844229
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -155,16 +107,12 @@ protected Geo3dShape generateRandomShape(Point nearP) 
{
   ulhcPoint = lrhcPoint;
   lrhcPoint = temp;
 }
-final GeoBBox shape = GeoBBoxFactory.makeGeoBBox(planetModel, 
ulhcPoint.getY() * DEGREES_TO_RADIANS,
-lrhcPoint.getY() * DEGREES_TO_RADIANS,
-ulhcPoint.getX() * DEGREES_TO_RADIANS,
-lrhcPoint.getX() * DEGREES_TO_RADIANS);
-return new Geo3dShape(shape, ctx);
+return (Geo3dShape) ctx.getShapeFactory().rect(lrhcPoint, 
ulhcPoint);
--- End diff --

Indeed!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155844074
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -55,6 +55,13 @@
   private SpatialContext context;
   private PlanetModel planetModel;
 
+  /**
+   * Default accuracy for circles when not using the unit sphere.
+   * It is equivalent to 10m on the surface of the earth.
+   */
+  private static double DEFAULT_CIRCLE_ACCURACY = 1.6e-6;
--- End diff --

Indeed!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155843938
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -150,10 +176,21 @@ public Rectangle rect(double minX, double maxX, 
double minY, double maxY) {
 
   @Override
   public Circle circle(double x, double y, double distance) {
-GeoCircle circle = GeoCircleFactory.makeGeoCircle(planetModel,
-y * DistanceUtils.DEGREES_TO_RADIANS,
-x * DistanceUtils.DEGREES_TO_RADIANS,
-distance * DistanceUtils.DEGREES_TO_RADIANS);
+GeoCircle circle;
+if (planetModel.ab == planetModel.c) {
--- End diff --

I think spatial3d is a low-level library in that respect, so it shouldn't 
have such a method. Karl Wright has the last word; a comment would be fine.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283985#comment-16283985
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155843390
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -67,6 +74,25 @@ public SpatialContext getSpatialContext() {
 return context;
   }
 
+  /**
+   * Set the accuracy for circles.
+   *
+   * "Accuracy" is defined as the maximum linear distance between any 
point on the
+   * surface circle and planes that describe the circle. Therefore on 
WGS84, since the
+   * radius of earth is 6,371,000 meters, an accuracy of 1e-6 corresponds 
to 6.3 meters.
+   * For an accuracy of 1.0 meter, use a value of 1.6e-7.
+   *
+   * The default value is set to 10m (1.6e-6).
+   *
+   * Note that accuracy has no effect when the planet model is a sphere. 
In that case circles
+   * are always fully precise.
+   *
+   * @param circleAccuracy the provided accuracy as a linear distance.
--- End diff --

I need to ask Karl Wright if that is what it means, but I guess so. I will 
update accordingly.
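
(Back-of-the-envelope check of the numbers above, not part of the patch: it assumes, as 
the javadoc's arithmetic suggests, that the accuracy is a linear distance expressed as a 
fraction of the planet radius.)

{code}
// Convert a desired surface accuracy in meters into the unit used by the javadoc above.
static double metersToCircleAccuracy(double meters) {
  final double EARTH_MEAN_RADIUS_METERS = 6_371_000.0;
  return meters / EARTH_MEAN_RADIUS_METERS;
}
// metersToCircleAccuracy(10.0) ~= 1.57e-6, i.e. the documented default of 1.6e-6
// metersToCircleAccuracy(1.0)  ~= 1.6e-7,  i.e. the documented value for 1 meter
{code}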


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
Github user iverase commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155843390
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -67,6 +74,25 @@ public SpatialContext getSpatialContext() {
 return context;
   }
 
+  /**
+   * Set the accuracy for circles.
+   *
+   * "Accuracy" is defined as the maximum linear distance between any 
point on the
+   * surface circle and planes that describe the circle. Therefore on 
WGS84, since the
+   * radius of earth is 6,371,000 meters, an accuracy of 1e-6 corresponds 
to 6.3 meters.
+   * For an accuracy of 1.0 meter, use a value of 1.6e-7.
+   *
+   * The default value is set to 10m (1.6e-6).
+   *
+   * Note that accuracy has no effect when the planet model is a sphere. 
In that case circles
+   * are always fully precise.
+   *
+   * @param circleAccuracy the provided accuracy as a linear distance.
--- End diff --

I need to ask Karl Wright if that is what it means, but I guess so. I will 
update accordingly.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283981#comment-16283981
 ] 

Tomás Fernández Löbbe commented on SOLR-11423:
--

bq. Sounds good to me. So backport to branch_7_2 and branch_7x?
+1. And let's fix CHANGES.txt.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2213 - Still Failing

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2213/

All tests passed

Build Log:
[...truncated 453 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J2-20171208_173442_0565286478377924707753.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 12288 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/J2/hs_err_pid9399.log
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J2-20171208_173442_056893493451660914932.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f2deb3b6000, 12288, 0) failed; error='Cannot allocate 
memory' (errno=12)
   [junit4] <<< JVM J2: EOF 

[...truncated 1199 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk1.8.0_144/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=495E9549A693F856 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp
 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/J2
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[jira] [Commented] (SOLR-11673) ReplicationHandler race-condition between deleting slave index and commit in master

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283979#comment-16283979
 ] 

Tomás Fernández Löbbe commented on SOLR-11673:
--

I think this is the same issue described in SOLR-10751, although I thought this 
was fixed by SOLR-11293. Looks like it's not?

> ReplicationHandler race-condition between deleting slave index and commit in 
> master
> ---
>
> Key: SOLR-11673
> URL: https://issues.apache.org/jira/browse/SOLR-11673
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-11673-reproducer.patch, 
> SOLR-11673-skipCommitOnMasterVersionZero.patch, SOLR-11673-test-fix.patch, 
> doTestIndexAndConfigReplication-consoleText.txt
>
>
> failure in master [described in 
> SOLR-6228|https://issues.apache.org/jira/browse/SOLR-6228?focusedCommentId=16266007=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16266007].
> {code}
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=C541E9C9CC845BA5 
> -Dtests.slow=true -Dtests.locale=es-BO -Dtests.timezone=Africa/Addis_Ababa 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> [10:13:23.442] ERROR   36.6s | 
> TestReplicationHandler.doTestIndexAndConfigReplication <<<
>> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>>at 
> __randomizedtesting.SeedInfo.seed([C541E9C9CC845BA5:D109B29CEF83E6BB]:0)
>>at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>at java.util.ArrayList.get(ArrayList.java:429)
>>at 
> org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
> {code}
> Easily reproducible in master by beast.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283972#comment-16283972
 ] 

Scott Blum edited comment on SOLR-11423 at 12/8/17 6:21 PM:


Sounds good to me.  So backport to branch_7_2 and branch_7x?


was (Author: dragonsinth):
Sounds good to me.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283972#comment-16283972
 ] 

Scott Blum commented on SOLR-11423:
---

Sounds good to me.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8197) Make zero counts in heatmap PNG transparent

2017-12-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-8197:
---
Component/s: spatial

> Make zero counts in heatmap PNG transparent
> ---
>
> Key: SOLR-8197
> URL: https://issues.apache.org/jira/browse/SOLR-8197
> Project: Solr
>  Issue Type: Improvement
>  Components: spatial
>Reporter: Neil Ireson
>Priority: Minor
> Attachments: transparency.patch
>
>
> It would be useful to have transparent zero values so that I can overlay the 
> image as a layer on a map.
> The change just requires altering two methods in SpatialHeatmapFacets.java as 
> follows:
> {code}
> static void writeCountAtColumnRow(BufferedImage image, int rows, int c, int 
> r, int val)
> {
>   image.setRGB(c, rows - 1 - r, val == 0 ? 0 : val ^ 0xFF_00_00_00);
> }
> static int getCountAtColumnRow(BufferedImage image, int rows, int c, int r)
> {
>   int val = image.getRGB(c, rows - 1 - r);
>   return val == 0 ? 0 : val ^ 0xFF_00_00_00;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8197) Make zero counts in heatmap PNG transparent

2017-12-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283968#comment-16283968
 ] 

David Smiley commented on SOLR-8197:


I completely overlooked this issue; sorry about that.

I'm not so sure we should do this.

Firstly, the PNG format here was purely intended for its compression, not for 
final rendering.  One example of why it's poor to render directly is that 
values in excess of 16M will start to fade the image due to the abuse of the 
alpha channel in order to shove an integer into an RGBA value.  Also note that 
Solr doesn't offer any way to make this image pretty (e.g. apply a readable 
color scale) whatsoever; the assumption is that the client will do that.  I am 
curious about your real-world experience rendering it; can you share your 
experience?  Do you consume this data from a web client?

Secondly, this is a back-compat break.  It could be addressed but it's awkward.

Thirdly, your patch maps 0 to 0 but I'm guessing this means the integer count 
value that currently maps to 0 (16M?) is lost.  We could fix that though.

At some point I'd love to add a new format -- 
[UTFGrid|https://github.com/mapbox/utfgrid-spec]
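
(Editorial sketch, not something Solr provides: since rendering is expected to happen 
client-side, a client that wants transparent zero-count cells can post-process the 
returned PNG itself, given the count-to-ARGB encoding shown in the {code} block quoted 
below; the file names are placeholders.)

{code}
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class HeatmapTransparency {
  public static void main(String[] args) throws Exception {
    BufferedImage src = ImageIO.read(new File("heatmap.png"));
    BufferedImage out = new BufferedImage(src.getWidth(), src.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
    for (int y = 0; y < src.getHeight(); y++) {
      for (int x = 0; x < src.getWidth(); x++) {
        int argb = src.getRGB(x, y);
        // A count of 0 is currently encoded as opaque black (0 ^ 0xFF_00_00_00);
        // map it to a fully transparent pixel, leave everything else untouched.
        out.setRGB(x, y, argb == 0xFF_00_00_00 ? 0x00_00_00_00 : argb);
      }
    }
    ImageIO.write(out, "png", new File("heatmap-transparent.png"));
  }
}
{code}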

> Make zero counts in heatmap PNG transparent
> ---
>
> Key: SOLR-8197
> URL: https://issues.apache.org/jira/browse/SOLR-8197
> Project: Solr
>  Issue Type: Improvement
>  Components: spatial
>Reporter: Neil Ireson
>Priority: Minor
> Attachments: transparency.patch
>
>
> It would be useful to have transparent zero values so that I can overlay the 
> image as a layer on a map.
> The change just requires altering two methods in SpatialHeatmapFacets.java as 
> follows:
> {code}
> static void writeCountAtColumnRow(BufferedImage image, int rows, int c, int 
> r, int val)
> {
>   image.setRGB(c, rows - 1 - r, val == 0 ? 0 : val ^ 0xFF_00_00_00);
> }
> static int getCountAtColumnRow(BufferedImage image, int rows, int c, int r)
> {
>   int val = image.getRGB(c, rows - 1 - r);
>   return val == 0 ? 0 : val ^ 0xFF_00_00_00;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread Alexandre Rafalovitch
Congratulations Ishan.

Regards,
   Alex.

http://www.solr-start.com/ - Resources for Solr users, new and experienced


On 8 December 2017 at 08:47, Adrien Grand  wrote:
> I am pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
> invitation to join.
>
> Welcome Ishan!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread Tomás Fernández Löbbe
Welcome Ishan!

On Fri, Dec 8, 2017 at 8:11 AM, Erick Erickson 
wrote:

> Welcome Ishan!
>
> On Fri, Dec 8, 2017 at 7:39 AM, David Smiley 
> wrote:
> > Welcome Ishan!  Well deserved.
> >
> > On Fri, Dec 8, 2017 at 8:47 AM Adrien Grand  wrote:
> >>
> >> I am pleased to announce that Ishan Chattopadhyaya has accepted the
> PMC's
> >> invitation to join.
> >>
> >> Welcome Ishan!
> >
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> > http://www.solrenterprisesearchserver.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283926#comment-16283926
 ] 

Varun Thacker commented on SOLR-11423:
--

+1 to what Tomas suggested

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

2017-12-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283908#comment-16283908
 ] 

Tomás Fernández Löbbe commented on SOLR-11423:
--

In my experience, 20k state updates in the queue is beyond the point of no 
return too. I think we should backport as it is now, and if needed in the 
future we can use a cluster property to address Scott's point. Let's get this into 
7.2.

> Overseer queue needs a hard cap (maximum size) that clients respect
> ---
>
> Key: SOLR-11423
> URL: https://issues.apache.org/jira/browse/SOLR-11423
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the 
> overseer queue with literally thousands and thousands of queued state 
> changes.  Many of these end up being duplicated up/down state updates.  Our 
> production cluster has gotten to the 100k queued items level many times, and 
> there's nothing useful you can do at this point except manually purge the 
> queue in ZK.  Recently, it hit 3 million queued items, at which point our 
> entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is 
> full would throw an exception.  I was thinking maybe 10,000 items would be a 
> reasonable limit.  Thoughts?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 276 - Still Failing

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/276/

All tests passed

Build Log:
[...truncated 466 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/temp/junit4-J0-20171208_165844_8742006765338077402132.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 114294784 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/J0/hs_err_pid24683.log
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/temp/junit4-J0-20171208_165844_8749039779944601907095.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0xeca0, 114294784, 0) failed; error='Cannot 
allocate memory' (errno=12)
   [junit4] <<< JVM J0: EOF 

[...truncated 824 lines...]
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 142606336 bytes for committing 
reserved memory.
# An error report file with more information is saved as:
# /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/hs_err_pid24332.log
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[Fast Archiver] No artifacts from Lucene-Solr-Tests-7.x #274 to compare, so 
performing full copy of artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.2-Linux (32bit/jdk1.8.0_144) - Build # 31 - Unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/31/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=7118, name=searcherExecutor-2756-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=7118, name=searcherExecutor-2756-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([1834B8F65DD67582]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=7118, name=searcherExecutor-2756-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=7118, name=searcherExecutor-2756-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([1834B8F65DD67582]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=366, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=366, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([B8D326E83C980B90]:0)


FAILED:  org.apache.solr.client.solrj.request.TestV2Request.testCloudSolrClient

Error 

[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283854#comment-16283854
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155822251
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -67,6 +74,25 @@ public SpatialContext getSpatialContext() {
 return context;
   }
 
+  /**
+   * Set the accuracy for circles.
+   *
+   * "Accuracy" is defined as the maximum linear distance between any 
point on the
+   * surface circle and planes that describe the circle. Therefore on 
WGS84, since the
+   * radius of earth is 6,371,000 meters, an accuracy of 1e-6 corresponds 
to 6.3 meters.
+   * For an accuracy of 1.0 meter, use a value of 1.6e-7.
+   *
+   * The default value is set to 10m (1.6e-6).
+   *
+   * Note that accuracy has no effect when the planet model is a sphere. 
In that case circles
+   * are always fully precise.
+   *
+   * @param circleAccuracy the provided accuracy as a linear distance.
--- End diff --

by "linear distance" do you mean decimal degrees as is used in other parts 
of the Spatial4j API? If so please say "decimal degrees".  If not, perhaps it 
should be in that unit?


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283853#comment-16283853
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155825386
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -16,29 +16,15 @@
  */
 package org.apache.lucene.spatial.spatial4j;
 
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.lucene.spatial3d.geom.GeoPath;
-import org.apache.lucene.spatial3d.geom.GeoPolygon;
+import org.junit.Rule;
+import org.junit.Test;
 import org.locationtech.spatial4j.TestLog;
 import org.locationtech.spatial4j.context.SpatialContext;
-import org.locationtech.spatial4j.distance.DistanceUtils;
 import org.locationtech.spatial4j.shape.Circle;
 import org.locationtech.spatial4j.shape.Point;
 import org.locationtech.spatial4j.shape.RectIntersectionTestHelper;
-import org.apache.lucene.spatial3d.geom.LatLonBounds;
--- End diff --

If I get this right, you've removed the Geo3D dependencies of this test. 
Yet it's still named Geo3d?


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283852#comment-16283852
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155826444
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dDistanceCalculator.java
 ---
@@ -73,62 +74,20 @@ public boolean within(Point from, double toX, double 
toY, double distance) {
 
   @Override
   public Point pointOnBearing(Point from, double distDEG, double 
bearingDEG, SpatialContext ctx, Point reuse) {
-// Algorithm using Vincenty's formulae 
(https://en.wikipedia.org/wiki/Vincenty%27s_formulae)
-// which takes into account that planets may not be spherical.
-//Code adaptation from 
http://www.movable-type.co.uk/scripts/latlong-vincenty.html
 Geo3dPointShape geoFrom = (Geo3dPointShape) from;
 GeoPoint point = (GeoPoint) geoFrom.shape;
-double lat = point.getLatitude();
-double lon = point.getLongitude();
 double dist = DistanceUtils.DEGREES_TO_RADIANS * distDEG;
 double bearing = DistanceUtils.DEGREES_TO_RADIANS * bearingDEG;
-
--- End diff --

Yay


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283848#comment-16283848
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155821440
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -55,6 +55,13 @@
   private SpatialContext context;
   private PlanetModel planetModel;
 
+  /**
+   * Default accuracy for circles when not using the unit sphere.
+   * It is equivalent to 10m on the surface of the earth.
+   */
+  private static double DEFAULT_CIRCLE_ACCURACY = 1.6e-6;
--- End diff --

should be final
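
For clarity, the declaration with the suggested modifier would read:

{code}
private static final double DEFAULT_CIRCLE_ACCURACY = 1.6e-6; // ~10m on the surface of the earth
{code}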


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283851#comment-16283851
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155824604
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -155,16 +107,12 @@ protected Geo3dShape generateRandomShape(Point nearP) 
{
   ulhcPoint = lrhcPoint;
   lrhcPoint = temp;
 }
-final GeoBBox shape = GeoBBoxFactory.makeGeoBBox(planetModel, 
ulhcPoint.getY() * DEGREES_TO_RADIANS,
-lrhcPoint.getY() * DEGREES_TO_RADIANS,
-ulhcPoint.getX() * DEGREES_TO_RADIANS,
-lrhcPoint.getX() * DEGREES_TO_RADIANS);
-return new Geo3dShape(shape, ctx);
+return (Geo3dShape) ctx.getShapeFactory().rect(lrhcPoint, 
ulhcPoint);
--- End diff --

change is good but the variable names are wrong.  `rect(lowerLeft, 
upperRight)`


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283850#comment-16283850
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155822689
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -150,10 +176,21 @@ public Rectangle rect(double minX, double maxX, 
double minY, double maxY) {
 
   @Override
   public Circle circle(double x, double y, double distance) {
-GeoCircle circle = GeoCircleFactory.makeGeoCircle(planetModel,
-y * DistanceUtils.DEGREES_TO_RADIANS,
-x * DistanceUtils.DEGREES_TO_RADIANS,
-distance * DistanceUtils.DEGREES_TO_RADIANS);
+GeoCircle circle;
+if (planetModel.ab == planetModel.c) {
--- End diff --

Should there be a method on planetModel that more descriptively 
characterizes the condition?  (e.g. isSpherical?)  Just a suggestion; perhaps 
not if it's too hard to give an appropriate name.  If not then maybe add a 
comment here so we know what "ab" being equal to "c" means.


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283849#comment-16283849
 ] 

ASF GitHub Bot commented on LUCENE-8086:


Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155826399
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dCircleShape.java
 ---
@@ -67,16 +64,4 @@ public Point getCenter() {
 }
 return center;
   }
-
--- End diff --

Yay


> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155822251
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -67,6 +74,25 @@ public SpatialContext getSpatialContext() {
 return context;
   }
 
+  /**
+   * Set the accuracy for circles.
+   *
+   * "Accuracy" is defined as the maximum linear distance between any 
point on the
+   * surface circle and planes that describe the circle. Therefore on 
WSG84, since the
+   * radius of earth is 6,371,000 meters, an accuracy of 1e-6 corresponds 
to 6.3 meters.
+   * For an accuracy of 1.0 meters, the value of 1.6e-7.
+   *
+   * The default value is set to 10m (1.6e-6).
+   *
+   * Note that accuracy has no effect when the planet model is a sphere. 
In that case circles
+   * are always fully precise.
+   *
+   * @param circleAccuracy the provided accuracy as a linear distance.
--- End diff --

by "linear distance" do you mean decimal degrees as is used in other parts 
of the Spatial4j API? If so please say "decimal degrees".  If not, perhaps it 
should be in that unit?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155824604
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -155,16 +107,12 @@ protected Geo3dShape generateRandomShape(Point nearP) 
{
   ulhcPoint = lrhcPoint;
   lrhcPoint = temp;
 }
-final GeoBBox shape = GeoBBoxFactory.makeGeoBBox(planetModel, 
ulhcPoint.getY() * DEGREES_TO_RADIANS,
-lrhcPoint.getY() * DEGREES_TO_RADIANS,
-ulhcPoint.getX() * DEGREES_TO_RADIANS,
-lrhcPoint.getX() * DEGREES_TO_RADIANS);
-return new Geo3dShape(shape, ctx);
+return (Geo3dShape) ctx.getShapeFactory().rect(lrhcPoint, 
ulhcPoint);
--- End diff --

change is good but the variable names are wrong.  `rect(lowerLeft, 
upperRight)`


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155821440
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -55,6 +55,13 @@
   private SpatialContext context;
   private PlanetModel planetModel;
 
+  /**
+   * Default accuracy for circles when not using the unit sphere.
+   * It is equivalent to 10m on the surface of the earth.
+   */
+  private static double DEFAULT_CIRCLE_ACCURACY = 1.6e-6;
--- End diff --

should be final


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155826444
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dDistanceCalculator.java
 ---
@@ -73,62 +74,20 @@ public boolean within(Point from, double toX, double 
toY, double distance) {
 
   @Override
   public Point pointOnBearing(Point from, double distDEG, double 
bearingDEG, SpatialContext ctx, Point reuse) {
-// Algorithm using Vincenty's formulae 
(https://en.wikipedia.org/wiki/Vincenty%27s_formulae)
-// which takes into account that planets may not be spherical.
-//Code adaptation from 
http://www.movable-type.co.uk/scripts/latlong-vincenty.html
 Geo3dPointShape geoFrom = (Geo3dPointShape) from;
 GeoPoint point = (GeoPoint) geoFrom.shape;
-double lat = point.getLatitude();
-double lon = point.getLongitude();
 double dist = DistanceUtils.DEGREES_TO_RADIANS * distDEG;
 double bearing = DistanceUtils.DEGREES_TO_RADIANS * bearingDEG;
-
--- End diff --

Yay


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155822689
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dShapeFactory.java
 ---
@@ -150,10 +176,21 @@ public Rectangle rect(double minX, double maxX, 
double minY, double maxY) {
 
   @Override
   public Circle circle(double x, double y, double distance) {
-GeoCircle circle = GeoCircleFactory.makeGeoCircle(planetModel,
-y * DistanceUtils.DEGREES_TO_RADIANS,
-x * DistanceUtils.DEGREES_TO_RADIANS,
-distance * DistanceUtils.DEGREES_TO_RADIANS);
+GeoCircle circle;
+if (planetModel.ab == planetModel.c) {
--- End diff --

Should there be a method on planetModel that more descriptively 
characterizes the condition?  (e.g. isSpherical?)  Just a suggestion; perhaps 
not if it's too hard to give an appropriate name.  If not then maybe add a 
comment here so we know what "ab" being equal to "c" means.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155825386
  
--- Diff: 
lucene/spatial-extras/src/test/org/apache/lucene/spatial/spatial4j/Geo3dShapeRectRelationTestCase.java
 ---
@@ -16,29 +16,15 @@
  */
 package org.apache.lucene.spatial.spatial4j;
 
-import java.util.ArrayList;
-import java.util.List;
-
-import org.apache.lucene.spatial3d.geom.GeoPath;
-import org.apache.lucene.spatial3d.geom.GeoPolygon;
+import org.junit.Rule;
+import org.junit.Test;
 import org.locationtech.spatial4j.TestLog;
 import org.locationtech.spatial4j.context.SpatialContext;
-import org.locationtech.spatial4j.distance.DistanceUtils;
 import org.locationtech.spatial4j.shape.Circle;
 import org.locationtech.spatial4j.shape.Point;
 import org.locationtech.spatial4j.shape.RectIntersectionTestHelper;
-import org.apache.lucene.spatial3d.geom.LatLonBounds;
--- End diff --

If I get this right, you've removed the Geo3D dependencies of this test. 
Yet it's still named Geo3d?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/288#discussion_r155826399
  
--- Diff: 
lucene/spatial-extras/src/java/org/apache/lucene/spatial/spatial4j/Geo3dCircleShape.java
 ---
@@ -67,16 +64,4 @@ public Point getCenter() {
 }
 return center;
   }
-
--- End diff --

Yay


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2212 - Still Failing

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2212/

1 tests failed.
FAILED:  org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock

Error Message:
Process died abnormally expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Process died abnormally expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([CB2D2476AF31C7F4:C646C562A96B6A22]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock(TestCodecLoadingDeadlock.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:404)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:705)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:139)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:626)




Build Log:
[...truncated 369 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J2-20171208_161900_168819074993223925921.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 2097152 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/J2/hs_err_pid3747.log
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J2-20171208_161900_1684779331348179613145.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0xffe0, 2097152, 0) failed; error='Cannot 
allocate memory' (errno=12)
   [junit4] <<< JVM J2: EOF 

[...truncated 338 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J1-20171208_161900_1693555026490529423456.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 7864320 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/J1/hs_err_pid3746.log
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/core/test/temp/junit4-J1-20171208_161900_1693682050038139109647.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
 

[jira] [Commented] (LUCENE-8087) Record per-term max term frequencies

2017-12-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283823#comment-16283823
 ] 

Robert Muir commented on LUCENE-8087:
-

Also, for the omit-norms and omit-frequencies cases some of this would be implicit: if 
you set both options, nothing needs to be written at all.

> Record per-term max term frequencies
> 
>
> Key: LUCENE-8087
> URL: https://issues.apache.org/jira/browse/LUCENE-8087
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8087.patch
>
>
> I was mostly interested in doing that in order to get better score upper 
> bounds for LUCENE-4100. However this doesn't help, at least with the tasks 
> that we have for wikimedium10m. I dug this a bit, and this is due to the fact 
> that the upper bound is not much better if we can't make assumptions about 
> the value of the length. Ideally we'd need something like the maximum term 
> frequency for each norm value. I'll post the patch in case someone has 
> another use-case for per-term max term frequencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8087) Record per-term max term frequencies

2017-12-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283818#comment-16283818
 ] 

Robert Muir commented on LUCENE-8087:
-

{quote}
Ideally we'd need something like the maximum term frequency for each norm value.
{quote}

I agree this would be ideal: it would let the similarity defer computing the 
maximum impact to query time. Maybe it's the right tradeoff to look into, if we 
get good performance without bloating the index? For big terms it'd be at worst 
256 integers. For terms appearing only once or twice, the overhead could be kept 
smaller if we don't encode zeros, etc.
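
To make the idea concrete, a very rough sketch (not a proposed implementation; 
the names are made up) of a per-norm-value maximum term frequency table:

{code}
// Hypothetical, illustrative only: one max term frequency per single-byte
// norm value, i.e. at worst 256 integers per term.
class PerNormMaxFreq {
  final int[] maxFreqPerNorm = new int[256];

  void record(byte norm, int freq) {
    int slot = norm & 0xFF;
    maxFreqPerNorm[slot] = Math.max(maxFreqPerNorm[slot], freq);
  }
}
// Slots for norm values a term never occurs with stay at zero and could be
// left unencoded, keeping the overhead small for rare terms.
{code}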

> Record per-term max term frequencies
> 
>
> Key: LUCENE-8087
> URL: https://issues.apache.org/jira/browse/LUCENE-8087
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8087.patch
>
>
> I was mostly interested in doing that in order to get better score upper 
> bounds for LUCENE-4100. However this doesn't help, at least with the tasks 
> that we have for wikimedium10m. I dug this a bit, and this is due to the fact 
> that the upper bound is not much better if we can't make assumptions about 
> the value of the length. Ideally we'd need something like the maximum term 
> frequency for each norm value. I'll post the patch in case someone has 
> another use-case for per-term max term frequencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11738) Add singular value decomposition Stream Evaluator

2017-12-08 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11738:
-

 Summary: Add singular value decomposition Stream Evaluator
 Key: SOLR-11738
 URL: https://issues.apache.org/jira/browse/SOLR-11738
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds support for singular value decomposition (SVD) to the 
Streaming Expression machine learning library. Implementation provided by Apache 
Commons Math.
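
As an illustration, a minimal sketch of the Commons Math call such an evaluator 
would presumably wrap (the eventual Stream Expression syntax is not defined here):

{code}
import org.apache.commons.math3.linear.Array2DRowRealMatrix;
import org.apache.commons.math3.linear.RealMatrix;
import org.apache.commons.math3.linear.SingularValueDecomposition;

public class SvdSketch {
  public static void main(String[] args) {
    RealMatrix m = new Array2DRowRealMatrix(new double[][] {
        {1, 0, 0}, {0, 2, 0}, {0, 0, 3}});
    SingularValueDecomposition svd = new SingularValueDecomposition(m);
    double[] singularValues = svd.getSingularValues(); // {3.0, 2.0, 1.0}
    RealMatrix u = svd.getU();    // left singular vectors
    RealMatrix vt = svd.getVT();  // transpose of the right singular vectors
    System.out.println(java.util.Arrays.toString(singularValues));
  }
}
{code}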



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11737) Add kmeans Stream Evaluator to support kmeans clustering

2017-12-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11737:
--
Fix Version/s: 7.3

> Add kmeans Stream Evaluator to support kmeans clustering
> 
>
> Key: SOLR-11737
> URL: https://issues.apache.org/jira/browse/SOLR-11737
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> This ticket adds kmeans clustering support to the Streaming Expression 
> machine learning library. Implementation provided by Apache Commons Math.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11737) Add kmeans Stream Evaluator to support kmeans clustering

2017-12-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11737:
-

Assignee: Joel Bernstein

> Add kmeans Stream Evaluator to support kmeans clustering
> 
>
> Key: SOLR-11737
> URL: https://issues.apache.org/jira/browse/SOLR-11737
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> This ticket adds kmeans clustering support to the Streaming Expression 
> machine learning library. Implementation provided by Apache Commons Math.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11737) Add kmeans Stream Evaluator to support kmeans clustering

2017-12-08 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11737:
-

 Summary: Add kmeans Stream Evaluator to support kmeans clustering
 Key: SOLR-11737
 URL: https://issues.apache.org/jira/browse/SOLR-11737
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds kmeans clustering support to the Streaming Expression machine 
learning library. Implementation provided by Apache Commons Math.
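
As an illustration, a minimal sketch of the Commons Math clustering call such an 
evaluator would presumably wrap (the eventual Stream Expression syntax is not 
defined here):

{code}
import java.util.Arrays;
import java.util.List;
import org.apache.commons.math3.ml.clustering.CentroidCluster;
import org.apache.commons.math3.ml.clustering.DoublePoint;
import org.apache.commons.math3.ml.clustering.KMeansPlusPlusClusterer;

public class KmeansSketch {
  public static void main(String[] args) {
    List<DoublePoint> points = Arrays.asList(
        new DoublePoint(new double[] {1.0, 1.0}),
        new DoublePoint(new double[] {1.1, 0.9}),
        new DoublePoint(new double[] {8.0, 8.0}),
        new DoublePoint(new double[] {8.2, 7.9}));
    KMeansPlusPlusClusterer<DoublePoint> clusterer =
        new KMeansPlusPlusClusterer<>(2, 100);   // k = 2, max 100 iterations
    List<CentroidCluster<DoublePoint>> clusters = clusterer.cluster(points);
    for (CentroidCluster<DoublePoint> c : clusters) {
      System.out.println(Arrays.toString(c.getCenter().getPoint())
          + " -> " + c.getPoints().size() + " points");
    }
  }
}
{code}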



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11736) Rename knn Streaming Expression to knnSearch and add new knn Stream Evaluator

2017-12-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11736:
--
Fix Version/s: 7.3

> Rename knn Streaming Expression to knnSearch and add new knn Stream Evaluator
> -
>
> Key: SOLR-11736
> URL: https://issues.apache.org/jira/browse/SOLR-11736
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> The current knn Streaming Expression performs a *more like this* search to 
> find nearest neighbors of a specific document. The ticket will rename the knn 
> Streaming Expression to knnSearch.
> This ticket will also add a new knn Stream Evaluator that performs the 
> k-nearest neighbor algorithm on vectors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11736) Rename knn Streaming Expression to knnSearch and add new knn Stream Evaluator

2017-12-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11736:
-

Assignee: Joel Bernstein

> Rename knn Streaming Expression to knnSearch and add new knn Stream Evaluator
> -
>
> Key: SOLR-11736
> URL: https://issues.apache.org/jira/browse/SOLR-11736
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> The current knn Streaming Expression performs a *more like this* search to 
> find nearest neighbors of a specific document. The ticket will rename the knn 
> Streaming Expression to knnSearch.
> This ticket will also add a new knn Stream Evaluator that performs the 
> k-nearest neighbor algorithm on vectors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11736) Rename knn Streaming Expression to knnSearch and add new knn Stream Evaluator

2017-12-08 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11736:
-

 Summary: Rename knn Streaming Expression to knnSearch and add new 
knn Stream Evaluator
 Key: SOLR-11736
 URL: https://issues.apache.org/jira/browse/SOLR-11736
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The current knn Streaming Expression performs a *more like this* search to find 
nearest neighbors of a specific document. The ticket will rename the knn 
Streaming Expression to knnSearch.

This ticket will also add a new knn Stream Evaluator that performs the 
k-nearest neighbor algorithm on vectors.
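
As an illustration, a minimal sketch of k-nearest neighbors over dense vectors 
using Euclidean distance (the actual evaluator's syntax and supported distance 
measures are not specified here):

{code}
import java.util.Arrays;
import java.util.Comparator;

public class KnnSketch {
  // Euclidean distance between two vectors of equal length.
  static double distance(double[] a, double[] b) {
    double sum = 0;
    for (int i = 0; i < a.length; i++) {
      double d = a[i] - b[i];
      sum += d * d;
    }
    return Math.sqrt(sum);
  }

  // Return the k vectors closest to the query.
  static double[][] knn(double[][] vectors, double[] query, int k) {
    double[][] sorted = vectors.clone();
    Arrays.sort(sorted, Comparator.comparingDouble(v -> distance(v, query)));
    return Arrays.copyOfRange(sorted, 0, Math.min(k, sorted.length));
  }

  public static void main(String[] args) {
    double[][] vectors = {{0, 0}, {1, 1}, {5, 5}, {6, 5}};
    double[] query = {5.5, 5.0};
    for (double[] v : knn(vectors, query, 2)) {
      System.out.println(Arrays.toString(v));
    }
  }
}
{code}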



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 959 - Still unstable!

2017-12-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/959/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.graph.GraphTest

Error Message:
Error from server at https://127.0.0.1:36255/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:36255/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([8F89C5528A40B32]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.io.graph.GraphTest.setupCluster(GraphTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.TestPullReplicaErrorHandling.testCantConnectToPullReplica

Error Message:
Error from server at http://127.0.0.1:35227/solr: Could not fully create 
collection: pull_replica_error_handling_test_cant_connect_to_pull_replica

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35227/solr: Could not fully create collection: 
pull_replica_error_handling_test_cant_connect_to_pull_replica
at 
__randomizedtesting.SeedInfo.seed([FFE982745EB68207:F7DE64BC8518F689]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 

[jira] [Commented] (SOLR-8197) Make zero counts in heatmap PNG transparent

2017-12-08 Thread Neil Ireson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283793#comment-16283793
 ] 

Neil Ireson commented on SOLR-8197:
---

Is there any chance this could be considered for inclusion in the codebase? At 
the moment, every time I update my Solr install I have to recompile the 
distribution with this patch.

It's a small change and, as far as I can see, it doesn't really have any negative 
impact. If you want to maintain the current visual result, you can display the 
PNG on a black background.

N

> Make zero counts in heatmap PNG transparent
> ---
>
> Key: SOLR-8197
> URL: https://issues.apache.org/jira/browse/SOLR-8197
> Project: Solr
>  Issue Type: Improvement
>Reporter: Neil Ireson
>Priority: Minor
> Attachments: transparency.patch
>
>
> It would be useful to have transparent zero values so that I can overlay the 
> image as a layer on a map.
> The change just requires altering two methods in SpatialHeatmapFacets.java as 
> follows:
> {code}
> static void writeCountAtColumnRow(BufferedImage image, int rows, int c, int 
> r, int val)
> {
>   image.setRGB(c, rows - 1 - r, val == 0 ? 0 : val ^ 0xFF_00_00_00);
> }
> static int getCountAtColumnRow(BufferedImage image, int rows, int c, int r)
> {
>   int val = image.getRGB(c, rows - 1 - r);
>   return val == 0 ? 0 : val ^ 0xFF_00_00_00;
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread Erick Erickson
Welcome Ishan!

On Fri, Dec 8, 2017 at 7:39 AM, David Smiley  wrote:
> Welcome Ishan!  Well deserved.
>
> On Fri, Dec 8, 2017 at 8:47 AM Adrien Grand  wrote:
>>
>> I am pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
>> invitation to join.
>>
>> Welcome Ishan!
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8087) Record per-term max term frequencies

2017-12-08 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8087:
-
Attachment: LUCENE-8087.patch

Here is the patch (which I don't plan to push as explained in the description).

> Record per-term max term frequencies
> 
>
> Key: LUCENE-8087
> URL: https://issues.apache.org/jira/browse/LUCENE-8087
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8087.patch
>
>
> I was mostly interested in doing that in order to get better score upper 
> bounds for LUCENE-4100. However this doesn't help, at least with the tasks 
> that we have for wikimedium10m. I dug this a bit, and this is due to the fact 
> that the upper bound is not much better if we can't make assumptions about 
> the value of the length. Ideally we'd need something like the maximum term 
> frequency for each norm value. I'll post the patch in case someone has 
> another use-case for per-term max term frequencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8087) Record per-term max term frequencies

2017-12-08 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8087:


 Summary: Record per-term max term frequencies
 Key: LUCENE-8087
 URL: https://issues.apache.org/jira/browse/LUCENE-8087
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor


I was mostly interested in doing that in order to get better score upper bounds 
for LUCENE-4100. However this doesn't help, at least with the tasks that we 
have for wikimedium10m. I dug this a bit, and this is due to the fact that the 
upper bound is not much better if we can't make assumptions about the value of 
the length. Ideally we'd need something like the maximum term frequency for 
each norm value. I'll post the patch in case someone has another use-case for 
per-term max term frequencies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16283735#comment-16283735
 ] 

ASF GitHub Bot commented on LUCENE-8086:


GitHub user iverase opened a pull request:

https://github.com/apache/lucene-solr/pull/288

LUCENE-8086

Here are the changes, in particular:

- Geo3dFactory: Use GeoExactCircle for non-spherical planets.
- Geo3dCircleShape: Remove method relate.
- Geo3DShape: Use new factory method for building GeoBbox from bounds 
object.
- Geo3dDistanceCalculator: use pointonbearing from planet model.
- Test refactoring



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iverase/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/288.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #288


commit 73d9ce324b217c633ae72b24233099dd2afc43d5
Author: iverase 
Date:   2017-12-08T15:33:22Z

LUCENE-8086




> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread David Smiley
Welcome Ishan!  Well deserved.

On Fri, Dec 8, 2017 at 8:47 AM Adrien Grand  wrote:

> I am pleased to announce that Ishan Chattopadhyaya has accepted the PMC's
> invitation to join.
>
> Welcome Ishan!
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[GitHub] lucene-solr pull request #288: LUCENE-8086

2017-12-08 Thread iverase
GitHub user iverase opened a pull request:

https://github.com/apache/lucene-solr/pull/288

LUCENE-8086

Here are the changes, in particular:

- Geo3dFactory: Use GeoExactCircle for non-spherical planets.
- Geo3dCircleShape: Remove method relate.
- Geo3DShape: Use new factory method for building GeoBbox from bounds 
object.
- Geo3dDistanceCalculator: use pointonbearing from planet model.
- Test refactoring



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iverase/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/288.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #288


commit 73d9ce324b217c633ae72b24233099dd2afc43d5
Author: iverase 
Date:   2017-12-08T15:33:22Z

LUCENE-8086




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread Dawid Weiss
Welcome Ishan!

Dawid

On Fri, Dec 8, 2017 at 4:19 PM, Mike Drob  wrote:
> Congratulations and well deserved!
>
> On Fri, Dec 8, 2017 at 8:16 AM, Joel Bernstein  wrote:
>>
>> Welcome Ishan!
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>> On Fri, Dec 8, 2017 at 9:11 AM, Yonik Seeley  wrote:
>>>
>>> Welcome Ishan!
>>> -Yonik
>>>
>>>
>>> On Fri, Dec 8, 2017 at 8:47 AM, Adrien Grand  wrote:
>>> > I am pleased to announce that Ishan Chattopadhyaya has accepted the
>>> > PMC's
>>> > invitation to join.
>>> >
>>> > Welcome Ishan!
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-08 Thread Mike Drob
Congratulations and well deserved!

On Fri, Dec 8, 2017 at 8:16 AM, Joel Bernstein  wrote:

> Welcome Ishan!
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, Dec 8, 2017 at 9:11 AM, Yonik Seeley  wrote:
>
>> Welcome Ishan!
>> -Yonik
>>
>>
>> On Fri, Dec 8, 2017 at 8:47 AM, Adrien Grand  wrote:
>> > I am pleased to announce that Ishan Chattopadhyaya has accepted the
>> PMC's
>> > invitation to join.
>> >
>> > Welcome Ishan!
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 101 - Still Failing

2017-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/101/

6 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([A25508AACED788C8]:0)


FAILED:  org.apache.solr.cloud.CleanupOldIndexTest.test

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([F9118E648F671286:7145B1BE219B7F7E]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1276)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:448)
at 
org.apache.solr.cloud.CleanupOldIndexTest.test(CleanupOldIndexTest.java:114)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
