[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1564 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1564/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=28879, name=jetty-launcher-6047-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=28889, name=jetty-launcher-6047-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=28879, name=jetty-launcher-6047-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-11673) ReplicationHandler race-condition between deleting slave index and commit in master

2017-12-11 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287189#comment-16287189
 ] 

Mikhail Khludnev commented on SOLR-11673:
-

Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/977/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index 0 out-of-bounds for length 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index 0 out-of-bounds for length 0
at 
__randomizedtesting.SeedInfo.seed([5212D7D20AA5CB57:465A8C8729A27649]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:440)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)

> ReplicationHandler race-condition between deleting slave index and commit in 
> master
> ---
>
> Key: SOLR-11673
> URL: https://issues.apache.org/jira/browse/SOLR-11673
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-11673-reproducer.patch, 
> SOLR-11673-skipCommitOnMasterVersionZero.patch, SOLR-11673-test-fix.patch, 
> doTestIndexAndConfigReplication-consoleText.txt
>
>
> Failure in master [described in 
> SOLR-6228|https://issues.apache.org/jira/browse/SOLR-6228?focusedCommentId=16266007&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16266007].
> {code}
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestReplicationHandler 
> -Dtests.method=doTestIndexAndConfigReplication -Dtests.seed=C541E9C9CC845BA5 
> -Dtests.slow=true -Dtests.locale=es-BO -Dtests.timezone=Africa/Addis_Ababa 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> [10:13:23.442] ERROR   36.6s | 
> TestReplicationHandler.doTestIndexAndConfigReplication <<<
>> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>>at 
> __randomizedtesting.SeedInfo.seed([C541E9C9CC845BA5:D109B29CEF83E6BB]:0)
>>at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>>at java.util.ArrayList.get(ArrayList.java:429)
>>at 
> org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
> {code}
> Easily reproducible in master by beast.  
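
(For anyone trying to reproduce this: "beast" refers to the build's repeated-run target. A sketch of such an invocation, assuming the standard ant beast target and an arbitrary iteration count, run from the solr/core module:)
{code}
cd solr/core
ant beast -Dbeast.iters=20 -Dtestcase=TestReplicationHandler -Dtests.method=doTestIndexAndConfigReplication
{code}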



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 977 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/977/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index 0 out-of-bounds for length 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index 0 out-of-bounds for length 0
at 
__randomizedtesting.SeedInfo.seed([5212D7D20AA5CB57:465A8C8729A27649]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:440)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 338 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/338/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index 0 out-of-bounds for length 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index 0 out-of-bounds for length 0
at 
__randomizedtesting.SeedInfo.seed([30AF31F6286F6D2D:24E76AA30B68D033]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:439)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-10733) Rule-based Replica Placement not working correctly

2017-12-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287118#comment-16287118
 ] 

Erick Erickson commented on SOLR-10733:
---

In the same ballpark, even though SOLR-11005 is about the new policy.

> Rule-based Replica Placement not working correctly
> 
>
> Key: SOLR-10733
> URL: https://issues.apache.org/jira/browse/SOLR-10733
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Rules, SolrCloud
>Affects Versions: 6.5.1
>Reporter: Bernd Fehling
>Assignee: Noble Paul
> Attachments: SOLR-10733.patch, SOLR-10733.patch, SOLR-10733.patch
>
>
> A setup of a SolrCloud with 6 nodes on 3 servers, e.g.:
> {code}
> server1:8983 , server1:7574
> server2:8983 , server2:7574
> server3:8983 , server3:7574
> {code}
> and a command for creating a new collection with rule:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=boss&
> collection.configName=boss_configs&numShards=3&replicationFactor=2&
> maxShardsPerNode=1&rule=shard:shard1,replica:<2,port:8983
> {code}
> should create a collection with 3 shards and at least shard1 with two 
> different nodes running on port 8983.
> {code}
> shard1 --> server_x:8983 ,  server_y:8983
> {code}
> An even more restrictive rule like
> {code}
> rule=shard:shard1,replica:<2,port:8983&rule=shard:shard3,replica:<2,port:7574
> {code}
> should also resolve to a solution, because if it really checks all 
> permutations across shards/replicas/ports and available nodes it should be 
> able to solve this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11005) inconsistency when maxShardsPerNode used along with policies

2017-12-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287117#comment-16287117
 ] 

Erick Erickson commented on SOLR-11005:
---

If this is not required, can we close it?

> inconsistency when maxShardsPerNode used along with policies
> 
>
> Key: SOLR-11005
> URL: https://issues.apache.org/jira/browse/SOLR-11005
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2, master (8.0)
>
>
> The attribute maxShardsPerNode conflicts with the conditions in the new 
> Policy framework.
> For example, I can say maxShardsPerNode=5 and I can have a policy 
> {code}
> { replica:"<3" , shard: "#ANY", node:"#ANY"}
> {code}
> So, it makes no sense to persist this attribute in collection state.json. 
> Ideally, we would like to keep this as a part of the policy, and the policy only.
> h3. Proposed new behavior
> If the new policy framework is being used, {{maxShardsPerNode}} should result in 
> creating a new collection-specific policy with the correct condition. For 
> example, if a collection "x" is created with the parameter 
> {{maxShardsPerNode=2}}, we will create a new policy in autoscaling.json
> {code}
> {
> "policies":{
> "x_COLL_POLICY" : [{replica:"<3", shard:"#ANY" , node:"ANY"}]
> }
> }
> {code}
> This policy will be referred to in the state.json. There will be no attribute 
> called {{maxShardsPerNode}} persisted to the state.json.
> If there is already a policy specified for the collection, Solr should 
> throw an error asking the user to edit the policy directly.
> h3. The name is bad
> We must rename the attribute {{maxShardsPerNode}} to {{maxReplicasPerNode}}. 
> This should be a backward-compatible change. The old name will continue to 
> work and the API will give a friendly warning if the old name is used.
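
(As a rough sketch of the proposal quoted above, using the collection and policy names from the example and with everything else illustrative, the collection's entry in state.json would then carry a policy reference instead of a persisted maxShardsPerNode attribute:)
{code}
{
  "x" : {
    "policy" : "x_COLL_POLICY",
    "replicationFactor" : "2",
    "router" : { "name" : "compositeId" },
    "shards" : { ... }
  }
}
{code}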



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2017-12-11 Thread Alex Watson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287111#comment-16287111
 ] 

Alex Watson commented on LUCENE-2899:
-

I am currently on Annual Leave, returning on the 14th of December.
If your matter is urgent please contact the office on 01414401234.
Regards,
Alex



> Add OpenNLP Analysis capabilities as a module
> -
>
> Key: LUCENE-2899
> URL: https://issues.apache.org/jira/browse/LUCENE-2899
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Grant Ingersoll
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2899-6.1.0.patch, LUCENE-2899-RJN.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, 
> OpenNLPTokenizer.java
>
>
> Now that OpenNLP is an ASF project and has a nice license, it would be nice 
> to have a submodule (under analysis) that exposed capabilities for it. Drew 
> Farris, Tom Morton and I have code that does:
> * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
> would have to change slightly to buffer tokens)
> * NamedEntity recognition as a TokenFilter
> We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
> either payloads (PartOfSpeechAttribute?) on a token or at the same position.
> I'd propose it go under:
> modules/analysis/opennlp



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2017-12-11 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-2899:
---
Comment: was deleted

(was: I am currently on Annual Leave, returning on the 14th of December.
If your matter is urgent please contact the office on 01414401234.
Regards,
Alex

)

> Add OpenNLP Analysis capabilities as a module
> -
>
> Key: LUCENE-2899
> URL: https://issues.apache.org/jira/browse/LUCENE-2899
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Grant Ingersoll
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2899-6.1.0.patch, LUCENE-2899-RJN.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, 
> OpenNLPTokenizer.java
>
>
> Now that OpenNLP is an ASF project and has a nice license, it would be nice 
> to have a submodule (under analysis) that exposed capabilities for it. Drew 
> Farris, Tom Morton and I have code that does:
> * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
> would have to change slightly to buffer tokens)
> * NamedEntity recognition as a TokenFilter
> We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
> either payloads (PartOfSpeechAttribute?) on a token or at the same position.
> I'd propose it go under:
> modules/analysis/opennlp



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10733) Rule-based Replica Placement not working correctly

2017-12-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287110#comment-16287110
 ] 

Erick Erickson commented on SOLR-10733:
---

They all seem to be about replica placement rules being incorrect.

> Rule-based Replica Placement not working correctly
> 
>
> Key: SOLR-10733
> URL: https://issues.apache.org/jira/browse/SOLR-10733
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Rules, SolrCloud
>Affects Versions: 6.5.1
>Reporter: Bernd Fehling
>Assignee: Noble Paul
> Attachments: SOLR-10733.patch, SOLR-10733.patch, SOLR-10733.patch
>
>
> A setup of a SolrCloud with 6 nodes on 3 servers, e.g.:
> {code}
> server1:8983 , server1:7574
> server2:8983 , server2:7574
> server3:8983 , server3:7574
> {code}
> and a command for creating a new collection with rule:
> {code}
> http://localhost:8983/solr/admin/collections?action=CREATE&name=boss&
> collection.configName=boss_configs&numShards=3&replicationFactor=2&
> maxShardsPerNode=1&rule=shard:shard1,replica:<2,port:8983
> {code}
> should create a collection with 3 shards and at least shard1 with two 
> different nodes running on port 8983.
> {code}
> shard1 --> server_x:8983 ,  server_y:8983
> {code}
> An even more restrictive rule like
> {code}
> rule=shard:shard1,replica:<2,port:8983&rule=shard:shard3,replica:<2,port:7574
> {code}
> should also resolve to a solution, because if it really checks all 
> permutations across shards/replicas/ports and available nodes it should be 
> able to solve this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10954) Refactor code to standardize replica assignment

2017-12-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287109#comment-16287109
 ] 

Erick Erickson commented on SOLR-10954:
---

[~noble.paul] What's the status of this JIRA? There's no mention of it in 
CHANGES.txt (master/8.0) yet it's been pushed.

> Refactor code to standardize replica assignment
> ---
>
> Key: SOLR-10954
> URL: https://issues.apache.org/jira/browse/SOLR-10954
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> Today, we have different mechanisms to assign replicas to nodes:
> # the default
> # rules-based replica placement
> # policy-based replica placement
> Different commands such as add-replica, add shard, create collection, etc. use 
> different code paths to assign replicas to nodes. It should be refactored to 
> unify all this into a single method, if possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2017-12-11 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned LUCENE-2899:
--

Assignee: Steve Rowe  (was: Grant Ingersoll)

> Add OpenNLP Analysis capabilities as a module
> -
>
> Key: LUCENE-2899
> URL: https://issues.apache.org/jira/browse/LUCENE-2899
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Grant Ingersoll
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2899-6.1.0.patch, LUCENE-2899-RJN.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, 
> OpenNLPTokenizer.java
>
>
> Now that OpenNLP is an ASF project and has a nice license, it would be nice 
> to have a submodule (under analysis) that exposed capabilities for it. Drew 
> Farris, Tom Morton and I have code that does:
> * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
> would have to change slightly to buffer tokens)
> * NamedEntity recognition as a TokenFilter
> We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
> either payloads (PartOfSpeechAttribute?) on a token or at the same position.
> I'd propose it go under:
> modules/analysis/opennlp



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-2899) Add OpenNLP Analysis capabilities as a module

2017-12-11 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-2899:
---
Attachment: LUCENE-2899.patch

Patch with lots of changes (see below); I think it's ready to go (precommit and all 
Lucene/Solr tests pass). My plan is to wait a couple of days for review, then 
commit if there are no objections.

Changes since the last patch:

* Corrected some test model training data.
* Switched OpenNLPTokenizer to require both the sentence and tokenizer models - 
I couldn't think of a reason to support specification of only one of them.
* OpenNLPTokenizer now extends SegmentingTokenizerBase, and uses an OpenNLP 
sentence segmenter via shim class OpenNLPSentenceBreakIterator.  
End-of-sentence tokens are marked by setting a bit on the FlagsAttribute.  All 
OpenNLP-based filters operate on sentences.
* OpenNLPLemmatizerFilter now supports dictionary-based and/or model-based 
lemmatization; if both are configured, dictionary-based tokenization is 
performed first, and then out-of-vocabulary tokens are lemmatized using the 
model.
* Each analysis operation is now in its own component (previously OpenNLPFilter 
did multiple things).
* Removed the end-of-sentence hack in OpenNLPPOSFilter (described in an earlier 
comment on this issue) - periods are no longer appended to sentences prior to 
POS tagging.
* Named entity recognition is now handled in an update request processor in the 
analysis-extras Solr contrib (though the NER model loading machinery remains in 
OpenNLPOpsFactory in the lucene/analysis/opennlp module).
* Added ref guide docs.
* Added CHANGES.txt entries.
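
To give a feel for how these components fit together, here is a rough schema sketch (assuming the factory and parameter names used in the patch; the model and dictionary file names are placeholders):
{code}
<fieldType name="text_opennlp" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- sentence detection + tokenization; both models are required -->
    <tokenizer class="solr.OpenNLPTokenizerFactory"
               sentenceModel="en-sent.bin" tokenizerModel="en-token.bin"/>
    <!-- part-of-speech tagging over each detected sentence -->
    <filter class="solr.OpenNLPPOSFilterFactory" posTaggerModel="en-pos-maxent.bin"/>
    <!-- dictionary-based lemmatization first, then the model for out-of-vocabulary tokens -->
    <filter class="solr.OpenNLPLemmatizerFilterFactory"
            dictionary="lemmas.txt" lemmatizerModel="en-lemmatizer.bin"/>
  </analyzer>
</fieldType>
{code}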

> Add OpenNLP Analysis capabilities as a module
> -
>
> Key: LUCENE-2899
> URL: https://issues.apache.org/jira/browse/LUCENE-2899
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: LUCENE-2899-6.1.0.patch, LUCENE-2899-RJN.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, LUCENE-2899.patch, 
> LUCENE-2899.patch, LUCENE-2899.patch, OpenNLPFilter.java, 
> OpenNLPTokenizer.java
>
>
> Now that OpenNLP is an ASF project and has a nice license, it would be nice 
> to have a submodule (under analysis) that exposed capabilities for it. Drew 
> Farris, Tom Morton and I have code that does:
> * Sentence Detection as a Tokenizer (could also be a TokenFilter, although it 
> would have to change slightly to buffer tokens)
> * NamedEntity recognition as a TokenFilter
> We are also planning a Tokenizer/TokenFilter that can put parts of speech as 
> either payloads (PartOfSpeechAttribute?) on a token or at the same position.
> I'd propose it go under:
> modules/analysis/opennlp



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21071 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21071/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamExpressionTest

Error Message:
There are still nodes recoverying - waited for 90 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 90 
seconds
at __randomizedtesting.SeedInfo.seed([8F8652DBBDE6AF13]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.setupCluster(StreamExpressionTest.java:103)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index ০ out-of-bounds for length ০

Stack Trace:
java.lang.IndexOutOfBoundsException: Index ০ out-of-bounds for length ০
at 
__randomizedtesting.SeedInfo.seed([E8B0FF3000140AB5:FCF8A4652313B7AB]:0)
at 
java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:64)
at 
java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:70)
at 
java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:248)
at java.base/java.util.Objects.checkIndex(Objects.java:372)
at java.base/java.util.ArrayList.get(ArrayList.java:440)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Comment Edited] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286385#comment-16286385
 ] 

Abhishek Kumar Singh edited comment on SOLR-11624 at 12/11/17 6:46 PM:
---

Please find the updated patch here -> [^SOLR-11624.3.patch] 

Changes made:-
1. Modified the test case {{TimeRoutedAliasUpdateProcessorTest#test}} to 
create/update and use the correct {{modifiedConfigSet}} name.
2. Refactored the {{configName}} , added suffix to the name of _auto-generated 
configSet_. 

[~ichattopadhyaya] , [~dsmiley] Please review the patch.  


was (Author: asingh2411):
Please find the updated patch here -> [^SOLR-11624.3.patch] 

Changes made:-
1. Modified the test case {{TimeRoutedAliasUpdateProcessorTest#test}} to 
create/update and use the correct {{modifiedConfigSet}} name.
2. Refactored the {{configName}} , added suffix to the name of _auto-generated 
configSet_. 

[~ichattopadhyaya] [~dsmiley] Please review the patch.  

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11711) distributed pivot & field facets can process excessive docs unnecessarily due to internal mincount=0

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286315#comment-16286315
 ] 

ASF subversion and git services commented on SOLR-11711:


Commit 41113ecbb62694e7e07bd236ecd0b5169bd62547 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=41113ec ]

SOLR-11711: Fixed distributed processing of facet.field/facet.pivot sub 
requests to prevent requesting unneccessary and excessive '0' count terms from 
each shard

(cherry picked from commit efc2f32ea05029edff9144f163d0619d091d1ba3)


> distributed pivot & field facets can process excessive docs unnecessarily 
> due to internal mincount=0
> ---
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.
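
(As a concrete illustration of the scenario quoted above, with made-up field names: a three-level pivot request like the following, sorted by count with facet.limit=1000, previously caused each shard to be asked for 0-count terms as well, for a worst case of 1000^3 = 1,000,000,000 candidate pivot buckets even though almost all of them end up with a count of 0.)
{code}
q=*:*&rows=0&facet=true&facet.pivot=cat,manu,store&facet.limit=1000&facet.sort=count
{code}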



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11569) Add support for distance matrices to the distance Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11569.
---
Resolution: Resolved

> Add support for distance matrices to the distance Stream Evaluator 
> ---
>
> Key: SOLR-11569
> URL: https://issues.apache.org/jira/browse/SOLR-11569
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11569.patch
>
>
> This ticket will expand the functionality of the *distance* Stream Evaluator 
> to support distance matrices.
> https://en.wikipedia.org/wiki/Distance_matrix



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-11742:


Assignee: Joel Bernstein  (was: Cassandra Targett)

Oops, accidentally assigned this to myself while trying to type somewhere else. 
Re-assigning back to Joel.

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11744) ConfigSetAdminRequest.CREATE should allow null in baseConfigSetName

2017-12-11 Thread Abhishek Kumar Singh (JIRA)
Abhishek Kumar Singh created SOLR-11744:
---

 Summary: ConfigSetAdminRequest.CREATE should allow null in  
baseConfigSetName
 Key: SOLR-11744
 URL: https://issues.apache.org/jira/browse/SOLR-11744
 Project: Solr
  Issue Type: Wish
  Security Level: Public (Default Security Level. Issues are Public)
  Components: config-api, Tests
Reporter: Abhishek Kumar Singh
Priority: Minor


Currently, 
[ConfigSetAdminRequest.Create|http://lucene.apache.org/solr/6_5_0/solr-solrj/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.Create.html]
 gives an exception *_{color:red}no Base ConfigSet specified!{color}_* if 
[baseConfigSetName|http://lucene.apache.org/solr/6_5_0/solr-solrj/org/apache/solr/client/solrj/request/ConfigSetAdminRequest.Create.html#baseConfigSetName]
  is null.


However, a configSet can be created by passing *__default_* as the 
{{baseConfigSetName}}, which is a hack. 

IMO *_baseConfigSetName_* should be optional, so that, instead of throwing an 
exception, *_ConfigSetAdminRequest.Create_* lets the user create a 
*_configSet_* from the default config set, i.e. *__default_*, if 
*_baseConfigSetName_* is not provided.
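
For context, a minimal SolrJ sketch of the call in question (the config set name and Solr URL are placeholders). Today the setBaseConfigSetName call is effectively mandatory, and passing "_default" explicitly is the workaround described above; the request here is to allow leaving it out:
{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ConfigSetAdminRequest;
import org.apache.solr.client.solrj.response.ConfigSetAdminResponse;

public class CreateConfigSetSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ConfigSetAdminRequest.Create create = new ConfigSetAdminRequest.Create();
      // name of the new config set to create
      create.setConfigSetName("myConfigSet");
      // currently required; the proposal is to default to _default when this is omitted
      create.setBaseConfigSetName("_default");
      ConfigSetAdminResponse response = create.process(client);
      System.out.println("create configset status: " + response.getStatus());
    }
  }
}
{code}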



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Kumar Singh updated SOLR-11624:

Attachment: (was: solr-11624.3.patch)

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16286385#comment-16286385
 ] 

Abhishek Kumar Singh edited comment on SOLR-11624 at 12/11/17 6:45 PM:
---

Please find the updated patch here -> [^SOLR-11624.3.patch] 

Changes made:-
1. Modified the test case {{TimeRoutedAliasUpdateProcessorTest#test}} to 
create/update and use the correct {{modifiedConfigSet}} name.
2. Refactored the {{configName}} , added suffix to the name of _auto-generated 
configSet_. 

[~ichattopadhyaya] [~dsmiley] Please review the patch.  


was (Author: asingh2411):
Please find the updated patch here -> [^SOLR-11624.3.patch] 

Changes made:-
1. Modified the test case {{TimeRoutedAliasUpdateProcessorTest#test}} to 
create/update and use the correct {{modifiedConfigSet}} name.
2. Refactored the {{configName}} , added suffix to the name of _auto-generated 
configSet_. 


> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk1.8.0_144) - Build # 48 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/48/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ExecutePlanActionTest.testIntegration

Error Message:
Timed out waiting for replicas of collection to be 2 again null Live Nodes: 
[127.0.0.1:41737_solr] Last available state: 
DocCollection(testIntegration//collections/testIntegration/state.json/7)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"testIntegration_shard1_replica_n1",   
"base_url":"http://127.0.0.1:40263/solr;,   
"node_name":"127.0.0.1:40263_solr",   "state":"down",   
"type":"NRT"}, "core_node4":{   
"core":"testIntegration_shard1_replica_n2",   
"base_url":"http://127.0.0.1:41737/solr;,   
"node_name":"127.0.0.1:41737_solr",   "state":"active",   
"type":"NRT"}, "core_node6":{   
"core":"testIntegration_shard1_replica_n5",   
"base_url":"http://127.0.0.1:41737/solr;,   "state":"down",   
"node_name":"127.0.0.1:41737_solr",   "type":"NRT",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timed out waiting for replicas of collection to be 2 
again
null
Live Nodes: [127.0.0.1:41737_solr]
Last available state: 
DocCollection(testIntegration//collections/testIntegration/state.json/7)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"testIntegration_shard1_replica_n1",
  "base_url":"http://127.0.0.1:40263/solr;,
  "node_name":"127.0.0.1:40263_solr",
  "state":"down",
  "type":"NRT"},
"core_node4":{
  "core":"testIntegration_shard1_replica_n2",
  "base_url":"http://127.0.0.1:41737/solr;,
  "node_name":"127.0.0.1:41737_solr",
  "state":"active",
  "type":"NRT"},
"core_node6":{
  "core":"testIntegration_shard1_replica_n5",
  "base_url":"http://127.0.0.1:41737/solr;,
  "state":"down",
  "node_name":"127.0.0.1:41737_solr",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([7734EBA911DB484B:C755E58534E4E96E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at 
org.apache.solr.cloud.autoscaling.ExecutePlanActionTest.testIntegration(ExecutePlanActionTest.java:209)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 

[JENKINS] Lucene-Solr-7.2-Windows (32bit/jdk1.8.0_144) - Build # 12 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/12/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.TestDistributedGrouping

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets\cdcr-target:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets\cdcr-target

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets\cdcr-target:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets\cdcr-target
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2\configsets
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001\shard2
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedGrouping_81D4D968453127E1-001\tempDir-001

at __randomizedtesting.SeedInfo.seed([81D4D968453127E1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ShardRoutingTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.ShardRoutingTest_81D4D968453127E1-001\shard-2-001\configsets\upload\dih-script-transformer:
 

Re: Welcome Ishan Chattopadhyaya to the PMC

2017-12-11 Thread Koji Sekiguchi

Welcome Ishan!

Koji

On 2017/12/08 22:47, Adrien Grand wrote:

I am pleased to announce that Ishan Chattopadhyaya has accepted the PMC's 
invitation to join.

Welcome Ishan!


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.2 - Build # 9 - Still Unstable

2017-12-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/9/

17 tests failed.
FAILED:  org.apache.solr.cloud.TestSegmentSorting.testSegmentTerminateEarly

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([5C2671659151D4E9:8C80B6C84D034083]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.TestSegmentSorting.createCollection(TestSegmentSorting.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:47)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:35059: KeeperErrorCode = Session expired 
for /overseer/collection-queue-work/qnr-

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 339 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/339/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost

Error Message:
Trigger was not fired even after 10 seconds

Stack Trace:
java.lang.AssertionError: Trigger was not fired even after 10 seconds
at 
__randomizedtesting.SeedInfo.seed([742AF83E369660BB:CB3F36C0B57C053D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost(ComputePlanActionTest.java:193)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeAdded

Error Message:
Unexpected node in computed operation expected:<127.0.0.1:520[50]_solr> but 
was:<127.0.0.1:520[04]_solr>

Stack Trace:
org.junit.ComparisonFailure: Unexpected node in computed 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21070 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21070/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Error from server at https://127.0.0.1:33161/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:33161/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([3CDE27C98E6A116]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:72)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:35599/uj_w/c

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35599/uj_w/c
at 
__randomizedtesting.SeedInfo.seed([3CDE27C98E6A116:8B99DDA6361ACCEE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
  

[JENKINS] Lucene-Solr-NightlyTests-7.2 - Build # 1 - Unstable

2017-12-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.2/1/

11 tests failed.
FAILED:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, and 
offsets must not go backwards startOffset=0,endOffset=14,lastStartOffset=1 for 
field 'dummy'

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, and offsets must not go backwards 
startOffset=0,endOffset=14,lastStartOffset=1 for field 'dummy'
at 
__randomizedtesting.SeedInfo.seed([C006DAD2E1FC77AF:FDE7F3B3A6EE6A6F]:0)
at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:767)
at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:430)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:392)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:240)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1729)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1464)
at 
org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:171)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:650)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:856)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Resolved] (SOLR-11575) Cleanup Java Snippets in "Using SolrJ" ref-guide page

2017-12-11 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11575.
-
   Resolution: Implemented
 Assignee: Hoss Man
Fix Version/s: 7.3
   master (8.0)

Thanks Jason, these edits were great!

FYI, I made one additional change...

{noformat}
+expectLine("Found "+NUM_LIVE_NODES+" live nodes");
...
-assertEquals(NUM_LIVE_NODES, liveNodes.size());
+print("Found " + liveNodes.size() + " live nodes");
{noformat}

bq. I put the print-assertion utility functions in the test class itself, which 
isn't the most re-usable.  We can put it somewhere more re-usable, ...

If/when we decide we want the same mock {{print(String)}} method in another 
test class we can consider refactoring it -- but the main thing to remember is 
that "{{print(...)}}" is just what seemed to make the most sense for these 
particular asserts.  It could have just as easily been a mock 
{{processResults(SolrDocument)}} method.  Down the road, we might want other 
snippets that use a mocked out {{SolrClient}} that asserts it's given the 
requests it expects, or a mocked out {{SolrParams}} class that asserts it gets 
the values the test expects.

The important thing is that we can focus on the snippets making sense in the 
context of the docs, and mocking whatever parts we need to ensure that the code 
in the snippets does what the docs say they do -- even if those mocks are 
different between different tests.
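
For anyone following along, here is a minimal sketch of how such a 
print-asserting pair of helpers can be wired up inside the snippet-hosting test 
class. The class name and the queue-based bookkeeping below are illustrative 
assumptions, not necessarily the committed implementation:

{noformat}
import java.util.ArrayDeque;
import java.util.Queue;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

public class UsingSolrJSnippetSketch {
  // Lines the currently running snippet is expected to "print".
  private final Queue<String> expectedLines = new ArrayDeque<>();

  /** The test registers expected output before the snippet runs. */
  void expectLine(String expected) {
    expectedLines.add(expected);
  }

  /** Reads as a plain print statement in the ref-guide, but asserts in the test. */
  void print(String actual) {
    String expected = expectedLines.poll();
    assertNotNull("Snippet printed an unexpected extra line: " + actual, expected);
    assertEquals(expected, actual);
  }
}
{noformat}

With helpers like that, a snippet line such as {{print("Found " + 
liveNodes.size() + " live nodes");}} reads naturally in the docs while still 
failing the test if the output ever drifts from the registered expectation.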



> Cleanup Java Snippets in "Using SolrJ" ref-guide page
> -
>
> Key: SOLR-11575
> URL: https://issues.apache.org/jira/browse/SOLR-11575
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Assignee: Hoss Man
>Priority: Trivial
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11575.patch, SOLR-11575.patch
>
>
> Hoss pointed out on SOLR-11032 that some of the Java snippets don't do a 
> great job of looking like
> bq. "real code" a user might do something with in an app.
> Particularly, the snippets show how to obtain certain SolrJ objects, but they 
> don't show readers what they can/should do with those objects.  The snippets 
> might be more useful to readers if they printed information returned in the 
> SolrJ object as a result of each API call.  Hoss specifically suggested 
> setting up a print-asserter, which would appear to be a normal 
> print-statement in the ref-guide snippet, but double as an assertion in the 
> JUnit test where the snippet lives.
> This JIRA involves giving that a shot.  It might make sense to figure this 
> out before pulling more Java snippets into the build (as suggested in 
> SOLR-11574).  On the flip side, extracting more snippets into the build might 
> inform a better, consistent format/pattern for the snippets.  So these 
> stories are related, but maybe not strict dependencies of one another.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11575) Cleanup Java Snippets in "Using SolrJ" ref-guide page

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286866#comment-16286866
 ] 

ASF subversion and git services commented on SOLR-11575:


Commit 76b7bc3dbe825bac004b04f3baf70a0701194f76 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=76b7bc3 ]

SOLR-11575: Improve ref-guide solrj snippets via mock 'print()' method

(cherry picked from commit 5b2e25f301156fbe5b40e2abb670b985f74456de)


> Cleanup Java Snippets in "Using SolrJ" ref-guide page
> -
>
> Key: SOLR-11575
> URL: https://issues.apache.org/jira/browse/SOLR-11575
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Priority: Trivial
> Attachments: SOLR-11575.patch, SOLR-11575.patch
>
>
> Hoss pointed out on SOLR-11032 that some of the Java snippets don't do a 
> great job of looking like
> bq. "real code" a user might do something with in an app.
> Particularly, the snippets show how to obtain certain SolrJ objects, but they 
> don't show readers what they can/should do with those objects.  The snippets 
> might be more useful to readers if they printed information returned in the 
> SolrJ object as a result of each API call.  Hoss specifically suggested 
> setting up a print-asserter, which would appear to be a normal 
> print-statement in the ref-guide snippet, but double as an assertion in the 
> JUnit test where the snippet lives.
> This JIRA involves giving that a shot.  It might make sense to figure this 
> out before pulling more Java snippets into the build (as suggested in 
> SOLR-11574).  On the flip side, extracting more snippets into the build might 
> inform a better, consistent format/pattern for the snippets.  So these 
> stories are related, but maybe not strict dependencies of one another.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11575) Cleanup Java Snippets in "Using SolrJ" ref-guide page

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286867#comment-16286867
 ] 

ASF subversion and git services commented on SOLR-11575:


Commit 5b2e25f301156fbe5b40e2abb670b985f74456de in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5b2e25f ]

SOLR-11575: Improve ref-guide solrj snippets via mock 'print()' method


> Cleanup Java Snippets in "Using SolrJ" ref-guide page
> -
>
> Key: SOLR-11575
> URL: https://issues.apache.org/jira/browse/SOLR-11575
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Jason Gerlowski
>Priority: Trivial
> Attachments: SOLR-11575.patch, SOLR-11575.patch
>
>
> Hoss pointed out on SOLR-11032 that some of the Java snippets don't do a 
> great job of looking like
> bq. "real code" a user might do something with in an app.
> Particularly, the snippets show how to obtain certain SolrJ objects, but they 
> don't show readers what they can/should do with those objects.  The snippets 
> might be more useful to readers if they printed information returned in the 
> SolrJ object as a result of each API call.  Hoss specifically suggested 
> setting up a print-asserter, which would appear to be a normal 
> print-statement in the ref-guide snippet, but double as an assertion in the 
> JUnit test where the snippet lives.
> This JIRA involves giving that a shot.  It might make sense to figure this 
> out before pulling more Java snippets into the build (as suggested in 
> SOLR-11574).  On the flip side, extracting more snippets into the build might 
> inform a better, consistent format/pattern for the snippets.  So these 
> stories are related, but maybe not strict dependencies of one another.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11746) numeric fields need better error handling for prefix/wildcard syntax -- consider uniform support for "foo:* == foo:[* TO *]"

2017-12-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286637#comment-16286637
 ] 

Hoss Man commented on SOLR-11746:
-

bq. ...currently the behavior is largely non-sensical: ...

Here are some examples of how Solr behaves with a few "odd" queries (using the 
techproducts schema where "price" is a FloatPointField)...

{noformat}
price:fo*o - Valid, Matches no docs
price:*    - Valid, Matches no docs
price:foo* - Error: "Can't run prefix queries on numeric field"
{noformat}

The combination of the last two particularly confuses me, since clearly 
PointField is going out of its way to do some error validation when attempting a 
prefix query on a numeric field, but somehow that error handling isn't being 
triggered by the simplest case (an empty prefix) _AND_ neither is it able to 
"match docs with prices" (which is what TrieFloatField would do by accident).

> numeric fields need better error handling for prefix/wildcard syntax -- 
> consider uniform support for "foo:* == foo:[* TO *]"
> 
>
> Key: SOLR-11746
> URL: https://issues.apache.org/jira/browse/SOLR-11746
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> On the solr-user mailing list, Torsten Krah pointed out that with Trie 
> numeric fields, query syntax such as {{foo_d:\*}} has been functionally 
> equivalent to {{foo_d:\[\* TO \*]}} and asked why this was not also supported 
> for Point based numeric fields.
> The fact that this type of syntax works (for {{indexed="true"}} Trie fields) 
> appears to have been an (untested, undocumented) fluke of Trie fields given 
> that they use indexed terms for the (encoded) numeric terms and inherit the 
> default implementation of {{FieldType.getPrefixQuery}} which produces a 
> prefix query against the {{""}} (empty string) term.  
> (Note that this syntax has apparently _*never*_ worked for Trie fields with 
> {{indexed="false" docValues="true"}} )
> In general, we should assess the behavior when users attempt a prefix/wildcard 
> syntax query against numeric fields, as currently the behavior is largely 
> non-sensical:  prefix/wildcard syntax frequently matches no docs w/o any sort 
> of error, and the aforementioned {{numeric_field:*}} behaves inconsistently 
> between points/trie fields and between indexed/docValued trie fields.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7051 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7051/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestFileSwitchDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001\foo-023:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001\foo-023

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001\foo-023:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001\foo-023
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_6460C8168CDF54E-001

at __randomizedtesting.SeedInfo.seed([6460C8168CDF54E]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.lucene.replicator.IndexReplicationClientTest.testRestart

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_4BA0DAB9D26B8CDF-001\replicationClientTest-002\1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_4BA0DAB9D26B8CDF-001\replicationClientTest-002\1
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_4BA0DAB9D26B8CDF-001\replicationClientTest-002\1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_4BA0DAB9D26B8CDF-001\replicationClientTest-002\1

at 
__randomizedtesting.SeedInfo.seed([4BA0DAB9D26B8CDF:DE2D36D8578338CA]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.replicator.PerSessionDirectoryFactory.cleanupSession(PerSessionDirectoryFactory.java:58)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:259)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.IndexReplicationClientTest.testRestart(IndexReplicationClientTest.java:193)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Created] (SOLR-11746) numeric fields need better error handling for prefix/wildcard syntax -- consider uniform support for "foo:* == foo:[* TO *]"

2017-12-11 Thread Hoss Man (JIRA)
Hoss Man created SOLR-11746:
---

 Summary: numeric fields need better error handling for 
prefix/wildcard syntax -- consider uniform support for "foo:* == foo:[* TO *]"
 Key: SOLR-11746
 URL: https://issues.apache.org/jira/browse/SOLR-11746
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


On the solr-user mailing list, Torsten Krah pointed out that with Trie numeric 
fields, query syntax such as {{foo_d:\*}} has been functionally equivalent to 
{{foo_d:\[\* TO \*]}} and asked why this was not also supported for Point based 
numeric fields.

The fact that this type of syntax works (for {{indexed="true"}} Trie fields) 
appears to have been an (untested, undocumented) fluke of Trie fields given 
that they use indexed terms for the (encoded) numeric terms and inherit the 
default implementation of {{FieldType.getPrefixQuery}} which produces a prefix 
query against the {{""}} (empty string) term.  

(Note that this syntax has apparently _*never*_ worked for Trie fields with 
{{indexed="false" docValues="true"}} )

In general, we should assess the behavior when users attempt a prefix/wildcard 
syntax query against numeric fields, as currently the behavior is largely 
non-sensical:  prefix/wildcard syntax frequently matches no docs w/o any sort of 
error, and the aforementioned {{numeric_field:*}} behaves inconsistently between 
points/trie fields and between indexed/docValued trie fields.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11745) SolrCore doesn't log core if too many closes called on it

2017-12-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286633#comment-16286633
 ] 

ASF GitHub Bot commented on SOLR-11745:
---

GitHub user millerjeff0 opened a pull request:

https://github.com/apache/lucene-solr/pull/289

SOLR-11745: change logging references to getName

SOLR-11745

this doesn't resolve to anything meaningful, just the default object identity, 
e.g.: org.apache.solr.core.SolrCore@4812a0d7

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/millerjeff0/lucene-solr SOLR-11745

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #289


commit 6f87979e088b49ccb12d74af9a4352cb920665c9
Author: Jeff 
Date:   2017-12-11T21:54:42Z

SOLR-11745: change logging references to getName




> SolrCore doesn't log core if too many closes called on it
> -
>
> Key: SOLR-11745
> URL: https://issues.apache.org/jira/browse/SOLR-11745
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 7.1
>Reporter: Jeff Miller
>Priority: Trivial
>   Original Estimate: 5m
>  Remaining Estimate: 5m
>
>   log.error("Too many close [count:{}] on {}. Please report this 
> exception to solr-u...@lucene.apache.org", count, this );
> Calling this just prints
> org.apache.solr.core.SolrCore@4812a0d7
> Suggest changing to getName



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8091) Better nearest-neighbor queries

2017-12-11 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8091:


 Summary: Better nearest-neighbor queries
 Key: LUCENE-8091
 URL: https://issues.apache.org/jira/browse/LUCENE-8091
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor


LatLonPoint.nearest is very efficient at identifying the top-k documents sorted 
by distance from a given point, by working directly on the BKD tree. This 
doesn't support filtering though, so if you need to filter by another property, 
you need to switch to querying on the filter and sorting by a 
LatLonPointSortField. Unfortunately this requires visiting all documents that 
match the filter.

We could leverage the new {{setMinCompetitiveScore}} API introduced in 
LUCENE-4100 in order to allow for retrieval of nearest neighbors with arbitrary 
filters, by recomputing a bounding-box when a new minimum competitive score is 
set.

In the future we could also leverage this to speed up queries that are boosted 
by distance. For instance if the final score is a weighted sum of the score on 
a text field and a distance-based score, and the minimum competitive score gets 
higher than the maximum score that may be produced on the text field at some 
point, then we could dynamically prune hits based on the distance.
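
To make the tradeoff concrete, here is an illustrative sketch (the field names, 
filter query, and class name are assumptions for illustration) contrasting the 
two existing approaches described above, assuming a "location" field indexed as 
both LatLonPoint and LatLonDocValuesField:

{noformat}
import java.io.IOException;
import org.apache.lucene.document.LatLonDocValuesField;
import org.apache.lucene.document.LatLonPoint;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopFieldDocs;

public class NearestNeighborSketch {
  /** Unfiltered top-k by distance: works directly on the BKD tree. */
  static TopFieldDocs unfiltered(IndexSearcher searcher, double lat, double lon) throws IOException {
    return LatLonPoint.nearest(searcher, "location", lat, lon, 10);
  }

  /** Filtered: query on the filter and sort by distance, which has to visit
   *  every document that matches the filter. */
  static TopDocs filtered(IndexSearcher searcher, double lat, double lon) throws IOException {
    Query filter = new TermQuery(new Term("category", "restaurant"));
    Sort byDistance = new Sort(LatLonDocValuesField.newDistanceSort("location", lat, lon));
    return searcher.search(filter, 10, byDistance);
  }
}
{noformat}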



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk1.8.0_144) - Build # 47 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/47/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=376, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=376, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([E94778272556A57A]:0)


FAILED:  org.apache.solr.client.solrj.request.TestV2Request.testHttpSolrClient

Error Message:
Error from server at https://127.0.0.1:34903/solr: Could not fully remove 
collection: test

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at https://127.0.0.1:34903/solr: Could not fully remove 
collection: test
at 
__randomizedtesting.SeedInfo.seed([E94778272556A57A:315F600D8462B75D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:829)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.client.solrj.request.TestV2Request.assertSuccess(TestV2Request.java:42)
at 
org.apache.solr.client.solrj.request.TestV2Request.doTest(TestV2Request.java:91)
at 
org.apache.solr.client.solrj.request.TestV2Request.testHttpSolrClient(TestV2Request.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Created] (SOLR-11745) SolrCore doesn't log core if too many closes called on it

2017-12-11 Thread Jeff Miller (JIRA)
Jeff Miller created SOLR-11745:
--

 Summary: SolrCore doesn't log core if too many closes called on it
 Key: SOLR-11745
 URL: https://issues.apache.org/jira/browse/SOLR-11745
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: logging
Affects Versions: 7.1
Reporter: Jeff Miller
Priority: Trivial


  log.error("Too many close [count:{}] on {}. Please report this exception 
to solr-u...@lucene.apache.org", count, this );

Calling this just prints
org.apache.solr.core.SolrCore@4812a0d7

Suggest changing to getName
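
A quick before/after sketch of the suggested change (assuming SolrCore's existing 
getName() accessor; the message text is quoted as it appears in the archive):

{noformat}
// Before: "this" falls back to Object.toString() and logs the object identity,
// e.g. org.apache.solr.core.SolrCore@4812a0d7
log.error("Too many close [count:{}] on {}. Please report this exception to "
    + "solr-u...@lucene.apache.org", count, this);

// After: log the human-readable core name instead
log.error("Too many close [count:{}] on {}. Please report this exception to "
    + "solr-u...@lucene.apache.org", count, getName());
{noformat}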



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #289: SOLR-11745: change logging references to getN...

2017-12-11 Thread millerjeff0
GitHub user millerjeff0 opened a pull request:

https://github.com/apache/lucene-solr/pull/289

SOLR-11745: change logging references to getName

SOLR-11745

this doesn't resolve to anything meaningful, just the default object identity, 
e.g.: org.apache.solr.core.SolrCore@4812a0d7

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/millerjeff0/lucene-solr SOLR-11745

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/289.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #289


commit 6f87979e088b49ccb12d74af9a4352cb920665c9
Author: Jeff 
Date:   2017-12-11T21:54:42Z

SOLR-11745: change logging references to getName




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 331 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/331/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([DA15A919F03C62F:19E901C4BC047B31]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12822 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (LUCENE-8091) Better nearest-neighbor queries

2017-12-11 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286510#comment-16286510
 ] 

Steve Rowe commented on LUCENE-8091:


Adrien, just skimmed the patch, but it doesn't appear to refer to the 
N-dimensional float KNN impl introduced in LUCENE-7974 - would the impl here 
supersede that one?

> Better nearest-neighbor queries
> ---
>
> Key: LUCENE-8091
> URL: https://issues.apache.org/jira/browse/LUCENE-8091
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8091.patch
>
>
> LatLonPoint.nearest is very efficient at identifying the top-k documents 
> sorted by distance from a given point, by working directly on the BKD tree. 
> This doesn't support filtering though, so if you need to filter by another 
> property, you need to switch to querying on the filter and sorting by a 
> LatLonPointSortField. Unfortunately this requires visiting all documents that 
> match the filter.
> We could leverage the new {{setMinCompetitiveScore}} API introduced in 
> LUCENE-4100 in order to allow for retrieval of nearest neighbors with 
> arbitrary filters, by recomputing a bounding-box when a new minimum 
> competitive score is set.
> In the future we could also leverage this to speed up queries that are 
> boosted by distance. For instance if the final score is a weighted sum of the 
> score on a text field and a distance-based score, and the minimum competitive 
> score gets higher than the maximum score that may be produced on the text 
> field at some point, then we could dynamically prune hits based on the 
> distance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 975 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/975/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:33737

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:33737
at 
__randomizedtesting.SeedInfo.seed([EEFDB79B8870C8E7:66A98841268CA51F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Commented] (SOLR-11711) distributed pivot & field facets can processes excessive docs unneccessarily due to internal mincount=0

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286316#comment-16286316
 ] 

ASF subversion and git services commented on SOLR-11711:


Commit efc2f32ea05029edff9144f163d0619d091d1ba3 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=efc2f32 ]

SOLR-11711: Fixed distributed processing of facet.field/facet.pivot sub 
requests to prevent requesting unneccessary and excessive '0' count terms from 
each shard


> distributed pivot & field facets can processes excessive docs unneccessarily 
> due to internal mincount=0
> ---
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution, is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnessesary. Since this flag can only increase 
> performance, and doesn't break any queries I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occured with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.
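
To make the combinatorics above concrete, here is a tiny back-of-the-envelope 
sketch (illustrative arithmetic only, not Solr code) of how many pivot values a 
shard-level mincount of 0 can materialize:

{code:java}
// Worst case for count-sorted pivot facets with a shard-level mincount of 0:
// up to limit^(number of pivot levels) values, most of them with a count of 0.
public class PivotValueCount {
  public static void main(String[] args) {
    long limit = 1000;  // facet.limit
    int levels = 3;     // number of pivot fields
    long worstCase = (long) Math.pow(limit, levels);
    System.out.println(worstCase); // 1000000000 -- the "billion pivot values" above
  }
}
{code}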



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 280 - Still Unstable

2017-12-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/280/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([9313CF6DE8EEFE5E:1B47F0B7461293A6]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1123)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:648)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:110)
at 
org.apache.solr.cloud.ChaosMonkey.logCollectionStateSummary(ChaosMonkey.java:710)
at org.apache.solr.cloud.ChaosMonkey.wait(ChaosMonkey.java:704)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Resolved] (SOLR-11743) Solr ssl issue while creating collection

2017-12-11 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11743.
---
Resolution: Not A Problem

This is not an appropriate use of Solr's JIRA; we try to reserve the JIRA 
system for code issues rather than usage questions.

Please ask the question here: solr-u...@lucene.apache.org, see: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc


If the consensus there is that there are code issues, we can reopen this JIRA 
or create a new one.


> Solr ssl issue while creating collection
> 
>
> Key: SOLR-11743
> URL: https://issues.apache.org/jira/browse/SOLR-11743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
> Environment: stage
>Reporter: Dinesh Sundaram
>   Original Estimate: 504h
>  Remaining Estimate: 504h
>
> How do I change the protocol to https everywhere, including replicas?
> NOTE: I have just one node on port 8983; I started Solr using this command:
> bin/solr start -cloud -p 8983 -noprompt
> 1. Configure SSL using 
> https://lucene.apache.org/solr/guide/7_1/enabling-ssl.html
> 2. Restart Solr
> 3. Validate Solr with the https URL https://localhost:8983/solr - works fine
> 4. Create a collection at https://localhost:8983/solr/#/~collections
> 5. Here is the response:
>Connection to Solr lost
>Please check the Solr instance.
> 6. Server solr.log: notice here that the replica call goes to the http port 
> instead of https
>2017-12-11 11:52:27.929 ERROR 
> (OverseerThreadFactory-8-thread-1-processing-n:localhost:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Error from shard: 
> http://localhost:8983/solr
> org.apache.solr.client.solrj.SolrServerException: IOException occured when 
> talking to server at: http://localhost:8983/solr
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:640)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
> at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.http.client.ClientProtocolException
> at 
> org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
> at 
> org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:525)
> ... 12 more
> Caused by: org.apache.http.ProtocolException: The server failed to respond 
> with a valid HTTP response
> at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:149)
> at 
> org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
> at 
> org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
> at 
> org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
> at 
> org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
> at 
> org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
> at 
> org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
> at 
> org.apache.solr.util.stats.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:118)
> at 
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
> at 
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
> at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21069 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21069/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.legacy.facet.LegacyRangeFacetCloudTest

Error Message:
There are still nodes recoverying - waited for 90 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 90 
seconds
at __randomizedtesting.SeedInfo.seed([378BC7D158801A48]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.analytics.legacy.LegacyAbstractAnalyticsCloudTest.setupCollection(LegacyAbstractAnalyticsCloudTest.java:51)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.client.solrj.io.sql.JdbcTest

Error Message:
Error from server at https://127.0.0.1:33707/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:33707/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([64C755427E26973F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.io.sql.JdbcTest.setupCluster(JdbcTest.java:77)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Commented] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286410#comment-16286410
 ] 

Tomás Fernández Löbbe commented on SOLR-11739:
--

bq. Solr should assign the asyncIds and guarantee that they are unique
We now do something like this when async IDs are not provided, but *on the 
client side*. See 
https://github.com/apache/lucene-solr/blob/41644bdcdcc0734115ce08ec24d6b408e1f8cf28/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java#L150-L152
bq. It sounds like Tomás is saying that there is existing code in Solr that 
tries to reject duplicate asyncIds
Yes, here: 
https://github.com/apache/lucene-solr/blob/41644bdcdcc0734115ce08ec24d6b408e1f8cf28/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java#L282-L286
bq. Solr should happily let you specify the same asyncId multiple times
Hmm, I'm not sure. I think the fact that Solr won't re-execute the same request 
twice makes it much easier to write workflows that do collection operations. 
bq. isn't the amount of work / zk writes needed to generate a universally 
unique asyncId on the server side essentially the same as the amount needed to 
tell the client that the asyncId they specified isn't unique?
We now do a bunch of reads to check whether the async IDs are in any of the 
current queues/maps (see the code I linked above). I was considering either 
writing the async ID to some ZooKeeper path as part of a lock, before checking 
the existing queues and then deleting the lock, or moving the async IDs 
completely to their own path and starting to use that as the source of truth 
for async IDs. The latter would make it easier to check whether an async ID is 
currently in use; however, it's a much bigger change and we'll need to consider 
backward compatibility.
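
Purely as a sketch of the lock/claim idea (the path name and error handling 
here are assumptions, not the current Solr code), an atomic znode create makes 
the first request to claim an ID win and lets a duplicate be rejected before 
any work starts:

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class AsyncIdClaim {
  // Hypothetical path, assumed to already exist; not the path Solr uses today.
  private static final String ASYNC_IDS_PATH = "/overseer/async-ids";

  /** Returns true if this asyncId was claimed, false if it is already in use. */
  static boolean tryClaim(ZooKeeper zk, String asyncId) throws Exception {
    try {
      zk.create(ASYNC_IDS_PATH + "/" + asyncId, new byte[0],
          ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
      return true;   // first writer wins
    } catch (KeeperException.NodeExistsException e) {
      return false;  // duplicate async ID, reject the request up front
    }
  }
}
{code}

Cleanup/expiry of claimed IDs (and the back-compat questions above) would be 
the parts needing real design.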

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is ran and and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11624) _default configset overwrites a a configset if collection.configName isn't specified even if a confiset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286385#comment-16286385
 ] 

Abhishek Kumar Singh edited comment on SOLR-11624 at 12/11/17 6:43 PM:
---

Please find the updated patch here -> [^SOLR-11624.3.patch]

Changes made:
1. Modified the test case {{TimeRoutedAliasUpdateProcessorTest#test}} to 
create/update and use the correct {{modifiedConfigSet}} name.
2. Refactored the {{configName}}: added a suffix to the name of the 
_auto-generated configSet_.



was (Author: asingh2411):
Please find the updated patch here. 

> _default configset overwrites a a configset if collection.configName isn't 
> specified even if a confiset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE=wiki&.
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.
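
Until the fix lands, the overwrite described above can be avoided by naming the 
configset explicitly at create time. A minimal SolrJ sketch (the base URL, 
shard/replica counts and the "wiki" names are illustrative, taken from the 
reproduction above):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateWithExplicitConfig {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Equivalent to passing collection.configName=wiki on the Collections API,
      // so _default is never copied over the uploaded "wiki" configset.
      CollectionAdminRequest.createCollection("wiki", "wiki", 1, 1)
          .process(client);
    }
  }
}
{code}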



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11624) _default configset overwrites a a configset if collection.configName isn't specified even if a confiset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Kumar Singh updated SOLR-11624:

Attachment: solr-11624.3.patch

Please find the updated patch here. 

> _default configset overwrites a a configset if collection.configName isn't 
> specified even if a confiset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.patch, solr-11624.3.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE=wiki&.
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11624) _default configset overwrites a a configset if collection.configName isn't specified even if a confiset of the same name already exists.

2017-12-11 Thread Abhishek Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Kumar Singh updated SOLR-11624:

Attachment: SOLR-11624.3.patch

> _default configset overwrites a a configset if collection.configName isn't 
> specified even if a confiset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11624-2.patch, SOLR-11624.3.patch, SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE=wiki&.
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11743) Solr ssl issue while creating collection

2017-12-11 Thread Dinesh Sundaram (JIRA)
Dinesh Sundaram created SOLR-11743:
--

 Summary: Solr ssl issue while creating collection
 Key: SOLR-11743
 URL: https://issues.apache.org/jira/browse/SOLR-11743
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: 7.1
 Environment: stage
Reporter: Dinesh Sundaram


How do I change the protocol to https everywhere, including replicas?
NOTE: I have just one node on port 8983; I started Solr using this command:
bin/solr start -cloud -p 8983 -noprompt

1. Configure SSL using 
https://lucene.apache.org/solr/guide/7_1/enabling-ssl.html
2. Restart Solr
3. Validate Solr with the https URL https://localhost:8983/solr - works fine
4. Create a collection at https://localhost:8983/solr/#/~collections
5. Here is the response:
   Connection to Solr lost
   Please check the Solr instance.
6. Server solr.log: notice here that the replica call goes to the http port 
instead of https

   2017-12-11 11:52:27.929 ERROR 
(OverseerThreadFactory-8-thread-1-processing-n:localhost:8983_solr) [   ] 
o.a.s.c.OverseerCollectionMessageHandler Error from shard: 
http://localhost:8983/solr
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://localhost:8983/solr
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:640)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:172)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.http.client.ClientProtocolException
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:187)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:525)
... 12 more
Caused by: org.apache.http.ProtocolException: The server failed to respond with 
a valid HTTP response
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:149)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.solr.util.stats.InstrumentedHttpRequestExecutor.execute(InstrumentedHttpRequestExecutor.java:118)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
... 15 more
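
For what it's worth, the enabling-ssl guide linked in step 1 also covers setting 
the {{urlScheme}} cluster property to {{https}} for SolrCloud (e.g. 
{{server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd clusterprop -name urlScheme -val https}} 
for the embedded ZooKeeper started above); if that step is skipped, nodes keep 
registering http URLs, which would match the replica call going to http in the 
log above.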





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8091) Better nearest-neighbor queries

2017-12-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8091:
-
Attachment: LUCENE-8091.patch

Here is a prototype that demonstrates the idea in the 1D case. For instance, 
when running a nearest-neighbor search over a dataset of 1M points uniformly 
distributed between 0 and 1M, a regular sorted search needs to visit all 1M 
documents (as expected), while this new special query only requires ~11k calls 
to DocIdSetIterator.nextDoc / TopDocsCollector.collect, ~32k calls to 
IntersectVisitor.visit and ~3k calls to IntersectVisitor.compare, and runs 
about 7x faster.

This patch needs a lot of cleaning/testing before being ready to commit.
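
For readers skimming, here is a toy sketch of the mechanism behind those numbers 
(the scoring formula and names are assumptions for illustration only; the patch 
itself works directly on the BKD tree):

{code:java}
// Toy 1D illustration: when the collector raises the minimum competitive score,
// translate it back into a maximum competitive distance and shrink the range of
// values that still needs to be visited.
public class CompetitiveRange1D {

  // Hypothetical monotonic score that decreases with distance.
  static double score(double distance) {
    return 1.0 / (1.0 + distance);
  }

  // Inverse of the score: the largest distance that can still be competitive.
  static double maxCompetitiveDistance(double minCompetitiveScore) {
    return 1.0 / minCompetitiveScore - 1.0;
  }

  public static void main(String[] args) {
    double origin = 500_000;             // query point in the 1D example
    double minCompetitiveScore = 0.001;  // raised over time as the top-k fills up
    double maxDist = maxCompetitiveDistance(minCompetitiveScore); // 999.0
    System.out.println(score(maxDist));  // 0.001 == the minimum competitive score
    double lower = origin - maxDist, upper = origin + maxDist;
    // Only points inside [lower, upper] can still make the top-k; everything
    // outside this shrinking "bounding box" can be skipped.
    System.out.println("[" + lower + ", " + upper + "]");
  }
}
{code}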

> Better nearest-neighbor queries
> ---
>
> Key: LUCENE-8091
> URL: https://issues.apache.org/jira/browse/LUCENE-8091
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8091.patch
>
>
> LatLonPoint.nearest is very efficient at identifying the top-k documents 
> sorted by distance from a given point, by working directly on the BKD tree. 
> This doesn't support filtering though, so if you need to filter by another 
> property, you need to switch to querying on the filter and sorting by a 
> LatLonPointSortField. Unfortunately this requires visiting all documents that 
> match the filter.
> We could leverage the new {{setMinCompetitiveScore}} API introduced in 
> LUCENE-4100 in order to allow for retrieval of nearest neighbors with 
> arbitrary filters, by recomputing a bounding-box when a new minimum 
> competitive score is set.
> In the future we could also leverage this to speed up queries that are 
> boosted by distance. For instance if the final score is a weighted sum of the 
> score on a text field and a distance-based score, and the minimum competitive 
> score gets higher than the maximum score that may be produced on the text 
> field at some point, then we could dynamically prune hits based on the 
> distance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-11742:


Assignee: Cassandra Targett  (was: Joel Bernstein)

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Cassandra Targett
> Fix For: 7.2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11739) Solr can accept duplicated async IDs

2017-12-11 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286346#comment-16286346
 ] 

Hoss Man commented on SOLR-11739:
-

My off-the-cuff, uneducated impression, without knowing much about the internals 
or the history of how the existing code got to the state it's currently in, is 
that...

Either:
# Solr should assign the asyncIds and guarantee that they are unique
# The user should assign the asyncIds, and Solr should make no assumptions 
about them, nor use them for *anything* other than reporting status.

#1 seems a lot harder since it's essentially a distributed unique-key 
generation problem, which IIUC is why asyncId wasn't implemented that way in 
the first place.

For #2, from my perspective, it sounds like Tomás is saying that there is 
existing code in Solr that tries to reject duplicate asyncIds -- and I would 
argue (as a straw man) that Solr making any attempt at doing that is where the 
real bug lies ... Solr should happily let you specify the same asyncId multiple 
times, and that should have no effect at all on the reliability of the commands 
being executed in the order received. The only thing it should affect is that 
requesting status info on the commands may give unexpected results (depending 
on what the client is expecting) ... I would expect that requesting status for 
the id would return the status of the "1st" instance until the "2nd" instance 
finishes, at which point the status info is overridden.

That way, if a user wants to re-use the exact same asyncId for every request, 
they are welcome to put that bullet in their foot as many times as they want -- 
it keeps things simpler for us internally, and we're not trying to coddle them 
for doing something (very advanced) in a silly way.


If we're going to coddle them, then we should coddle them all the way -- isn't 
the amount of work / zk writes needed to generate a universally unique asyncId 
on the server side essentially the same as the amount needed to tell the client 
that the asyncId they specified isn't unique?

> Solr can accept duplicated async IDs
> 
>
> Key: SOLR-11739
> URL: https://issues.apache.org/jira/browse/SOLR-11739
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11739.patch
>
>
> Solr is supposed to reject duplicated async IDs, however, if the repeated IDs 
> are sent fast enough, a race condition in Solr will let the repeated IDs 
> through. The duplicated task is ran and and then silently fails to report as 
> completed because the same async ID is already in the completed map. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11711) distributed pivot & field facets can processes excessive docs unneccessarily due to internal mincount=0

2017-12-11 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11711.
-
   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

Thanks [~houstonputman]!

(and thanks [~k317h] for the initial work in SOLR-8988 ... sorry I didn't fully 
grasp it at the time)

> distributed pivot & field facets can processes excessive docs unneccessarily 
> due to internal mincount=0
> ---
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: master (8.0), 7.3
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned. therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution, is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnessesary. Since this flag can only increase 
> performance, and doesn't break any queries I have removed it as an option and 
> replaced the code to use the feature always. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occured with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11568) Add matrix Stream Evaluator to support efficient matrix operations

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11568.
---
Resolution: Resolved

> Add matrix Stream Evaluator to support efficient matrix operations
> --
>
> Key: SOLR-11568
> URL: https://issues.apache.org/jira/browse/SOLR-11568
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> This ticket is to add specific support for *matrices* to Solr's machine 
> learning and statistical libraries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 337 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/337/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([BC729226D782504A:A83AC973F485ED54]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12278 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286290#comment-16286290
 ] 

Joel Bernstein commented on SOLR-11742:
---

This ticket was mainly for the online documentation, but I will work on adding 
code-level docs as part of this as well.

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11566) Add transpose Stream Evaluator to support transposing of matrices

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11566.
---
Resolution: Resolved

> Add transpose Stream Evaluator to support transposing of matrices
> -
>
> Key: SOLR-11566
> URL: https://issues.apache.org/jira/browse/SOLR-11566
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11566.patch
>
>
> This ticket adds the transpose Stream Evaluator to support transposing of 
> matrices.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11711) distributed pivot & field facets can processes excessive docs unneccessarily due to internal mincount=0

2017-12-11 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-11711:

Fix Version/s: (was: 7.2)
   (was: 6.7)
   (was: 5.6)
   Issue Type: Improvement  (was: Bug)
  Summary: distributed pivot & field facets can processes excessive 
docs unneccessarily due to internal mincount=0  (was: Fix minCount bug in 
distributed pivot & field facets)

No worries, it's definitely an eye-of-the-beholder type situation -- personally 
I try to make sure I put myself in the shoes of an end user skimming 
CHANGES/JIRA and wondering "how badly does this hurt me?"

> distributed pivot & field facets can processes excessive docs unneccessarily 
> due to internal mincount=0
> ---
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned; therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> changed the code to always use the feature. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, whereas it would actually be {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11485) Add olsRegress, spline and derivative Stream Evaluators

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11485.
---
Resolution: Resolved

> Add olsRegress, spline and derivative Stream Evaluators
> ---
>
> Key: SOLR-11485
> URL: https://issues.apache.org/jira/browse/SOLR-11485
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11485.patch
>
>
> This ticket adds support for OLS (ordinary least squares)  multiple 
> regression, spline interpolation and derivative functions.
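> For illustration, a minimal sketch of how these functions might compose 
> (assumed syntax, with hypothetical data; the exact signatures are not spelled 
> out in this ticket):
> {code}
> let(x=array(0, 1, 2, 3, 4),
> y=array(0, 1, 4, 9, 16),
> fit=spline(x, y),
> dydx=derivative(fit))
> {code}
> The idea assumed here is that derivative operates on the interpolation 
> function returned by spline (or loess), yielding the slope of the fitted curve.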



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11565) Add unit Stream Evaluator to support unitizing of vectors and matrices

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11565.
---
Resolution: Resolved

> Add unit Stream Evaluator to support unitizing of vectors and matrices
> --
>
> Key: SOLR-11565
> URL: https://issues.apache.org/jira/browse/SOLR-11565
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11565.patch
>
>
> This ticket will add the unit Stream Evaluator which returns the unit vector 
> for a numeric vector.
> https://en.wikipedia.org/wiki/Unit_vector
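> A minimal sketch of the assumed usage (the values are illustrative):
> {code}
> u = unit(array(3, 4))
> {code}
> For the vector (3, 4) this would return (0.6, 0.8), which has magnitude 1.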



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11674) Support ranges in the probability Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11674.
---
Resolution: Resolved

> Support ranges in the probability Stream Evaluator
> --
>
> Key: SOLR-11674
> URL: https://issues.apache.org/jira/browse/SOLR-11674
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11674.patch
>
>
> Currently the *probability* Stream Evaluator only accepts a single parameter 
> to return the probability of a specific value. This only works with discrete 
> probability distributions. This ticket will add support for specifying the 
> range for computing probability. This will allow the probability function to 
> work with continuous probability distributions.
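> A minimal sketch of the assumed range syntax (the distribution and bounds are 
> illustrative, and the exact parameter order is not confirmed by this ticket):
> {code}
> p = probability(normalDistribution(0, 1), -1, 1)
> {code}
> This would return the probability of a standard normal falling between -1 and 
> 1 (roughly 0.68).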



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11571) Add diff Stream Evaluator to support time series differencing

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11571.
---
Resolution: Resolved

> Add diff Stream Evaluator to support time series differencing
> -
>
> Key: SOLR-11571
> URL: https://issues.apache.org/jira/browse/SOLR-11571
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11571
>
>
> This ticket adds support for time series differencing to Solr's statistical 
> expression library.
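> A minimal sketch of the assumed syntax (the values are illustrative):
> {code}
> d = diff(array(10, 12, 15, 19))
> {code}
> which would return the successive differences (2, 3, 4).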



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11567) Add triangularDistribution Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11567.
---
Resolution: Resolved

> Add triangularDistribution Stream Evaluator
> ---
>
> Key: SOLR-11567
> URL: https://issues.apache.org/jira/browse/SOLR-11567
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11567.patch
>
>
> This ticket adds support for the triangular probability distribution to 
> Solr's statistical function library.
> https://en.wikipedia.org/wiki/Triangular_distribution
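> A minimal sketch of the assumed syntax (the lower bound, mode, and upper bound 
> values are illustrative):
> {code}
> t = triangularDistribution(10, 15, 20)
> s = sample(t, 500)
> {code}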



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11570) Add support for correlation matrices to the corr Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11570.
---
Resolution: Resolved

> Add support for correlation matrices to the corr Stream Evaluator
> -
>
> Key: SOLR-11570
> URL: https://issues.apache.org/jira/browse/SOLR-11570
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11570.patch, SOLR-11570.patch
>
>
> This ticket will add support for correlation matrices to the corr Stream 
> Evaluator.
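> A minimal sketch of the assumed usage (the column variables are hypothetical):
> {code}
> c = corr(matrix(colA, colB, colC))
> {code}
> With a matrix argument, corr would return the pairwise correlation matrix 
> rather than a single coefficient.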



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11742:
-

Assignee: Joel Bernstein

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11429) Add loess Stream Evaluator to support Local Regression interpolation

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11429.
---
Resolution: Resolved

> Add loess Stream Evaluator to support Local Regression interpolation
> 
>
> Key: SOLR-11429
> URL: https://issues.apache.org/jira/browse/SOLR-11429
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11429.patch, SOLR-11429.patch
>
>
> The loess function will fit a curved line through a set of points using the 
> Local Regression Algorithm.
> Syntax:
> {code}
> yvalues = loess(xvec, yvec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11594) Add precision Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11594.
---
Resolution: Resolved

> Add precision Stream Evaluator
> --
>
> Key: SOLR-11594
> URL: https://issues.apache.org/jira/browse/SOLR-11594
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Trivial
> Fix For: 7.2
>
> Attachments: SOLR-11594.patch
>
>
> This ticket adds the precision Stream Evaluator which rounds decimals to a 
> specific decimal place.
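> A minimal sketch of the assumed syntax (values are illustrative):
> {code}
> p = precision(3.14159, 2)
> {code}
> which would round to 3.14.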



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11741) Offline training mode for schema guessing

2017-12-11 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286265#comment-16286265
 ] 

David Smiley commented on SOLR-11741:
-

This'll be great!

> Offline training mode for schema guessing
> -
>
> Key: SOLR-11741
> URL: https://issues.apache.org/jira/browse/SOLR-11741
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> Our data driven schema guessing doesn't work under many situations. For 
> example, if the first document has a field with value "0", it is guessed as 
> Long and subsequent fields with "0.0" are rejected. Similarly, if the same 
> field had alphanumeric contents for a later document, those documents are 
> rejected. Also, single vs. multi-valued field guessing is not ideal.
> Proposing an offline training mode where Solr accepts a bunch of documents and 
> returns a guessed schema (without indexing). This schema can then be used for 
> actual indexing. I think the original idea is from Hoss.
> I think initial implementation can be based on an UpdateRequestProcessor. We 
> can hash out the API soon, as we go along.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10680) Add minMaxScale Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-10680.
---
Resolution: Resolved

> Add minMaxScale Stream Evaluator
> 
>
> Key: SOLR-10680
> URL: https://issues.apache.org/jira/browse/SOLR-10680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-10680.patch, SOLR-10680.patch
>
>
> The minMaxScale Stream Evaluator scales an array of numbers within the 
> specified min/max range. Defaults to min=0, max=1.
> Syntax:
> {code}
> a = minMaxScale(colA, 0, 1)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11593) Add support for covariance matrices to the cov Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11593.
---
Resolution: Resolved

> Add support for covariance matrices to the cov Stream Evaluator
> ---
>
> Key: SOLR-11593
> URL: https://issues.apache.org/jira/browse/SOLR-11593
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Trivial
> Fix For: 7.2
>
> Attachments: SOLR-11593.patch
>
>
> This ticket adds support for covariances matrices to the *cov* Stream 
> Evaluator. 
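> A minimal sketch of the assumed usage (the column variables are hypothetical):
> {code}
> c = cov(matrix(colA, colB, colC))
> {code}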



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286275#comment-16286275
 ] 

Alexandre Rafalovitch commented on SOLR-11742:
--

Would this include @version tags in the Javadoc comments? :-)

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11697) Add geometricDistribution Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11697.
---
Resolution: Resolved

> Add geometricDistribution Stream Evaluator
> --
>
> Key: SOLR-11697
> URL: https://issues.apache.org/jira/browse/SOLR-11697
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11697.patch
>
>
> This ticket adds the geometric probability distribution to the Streaming 
> Expression statistical function library.
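> A minimal sketch of the assumed syntax (the success probability is 
> illustrative):
> {code}
> g = geometricDistribution(0.25)
> s = sample(g, 500)
> {code}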



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11680) Add normalizeSum Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11680.
---
Resolution: Resolved

> Add normalizeSum Stream Evaluator
> -
>
> Key: SOLR-11680
> URL: https://issues.apache.org/jira/browse/SOLR-11680
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-11680.patch, SOLR-11680.patch
>
>
> The normalizeSum Stream Evaluator normalizes vectors so that they sum to a 
> specific value. The function will work on both vectors and matrices.
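> A minimal sketch of the assumed usage (values are illustrative):
> {code}
> n = normalizeSum(array(2, 4, 4))
> {code}
> which would return (0.2, 0.4, 0.4), summing to 1 by default.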



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11428) Add spline Stream Evaluator to support spline interpolation

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11428.
---
Resolution: Duplicate

> Add spline Stream Evaluator to support spline interpolation
> ---
>
> Key: SOLR-11428
> URL: https://issues.apache.org/jira/browse/SOLR-11428
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2, master (8.0)
>
>
> The *spline* Stream Evaluator will fit a smooth curved line through a set of 
> points.
> Syntax:
> {code}
> yvalues = spline(xvec, yvec)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11602) Add Markov Chain Stream Evaluator

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11602.
---
Resolution: Resolved

> Add Markov Chain Stream Evaluator
> -
>
> Key: SOLR-11602
> URL: https://issues.apache.org/jira/browse/SOLR-11602
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Trivial
> Fix For: 7.2
>
> Attachments: SOLR-11602.patch, SOLR-11602.patch, SOLR-11602.patch
>
>
> Now that Streaming Expressions supports Monte Carlo simulations it would be 
> useful to add Markov Chain support 
> (https://en.wikipedia.org/wiki/Markov_chain). This ticket will add support 
> for Markov Chain simulations.
> Here is the syntax:
> {code}
> let(state0=array(.3, .4, .3),
> state1=array(.2, .1, .7),
> state2=array(.6, .2, .2),
> states=matrix(state0, state1, state2),
> m=markovChain(states, 0),
> s=sample(m, 500))
> {code}
> The Markov chain is initialized with a matrix whose rows represent the 
> different *states* of the system. The columns represent the probabilities of 
> changing from one state to another state.
> For example, if we are in state 1, represented by the array(.2, .1, .7), there 
> is a 0.7 (70 percent) probability that it will transition to state 2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11742:
--
Fix Version/s: 7.2

> Add documentation for 7.2 release statistical functions
> ---
>
> Key: SOLR-11742
> URL: https://issues.apache.org/jira/browse/SOLR-11742
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11742) Add documentation for 7.2 release statistical functions

2017-12-11 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11742:
-

 Summary: Add documentation for 7.2 release statistical functions
 Key: SOLR-11742
 URL: https://issues.apache.org/jira/browse/SOLR-11742
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-11 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286088#comment-16286088
 ] 

Cassandra Targett commented on SOLR-11740:
--

It's also possible to duplicate this with {{bin/solr start -c}} if you then 
start another node on another port (7574) and create a collection with 
{{bin/solr create}} (I did the create with {{-s 2 -rf 2}} options). It's always 
the port 8983 instance that fails to stop & that's the one that launches ZK & 
is the leader.

My suspicion is that it's somehow related to the autoAddReplicas and autoscaling 
features - during shutdown, stopping the port 7574 node registers as a 
"nodeLost" event, which you can see in the logs. I wonder if there is something 
blocking the shutdown of port 8983? I can't really tell from the logs if it's 
just registering the event or if it's actually trying to do something:

{code}
2017-12-11 15:53:24.737 INFO  
(zkCallback-3-thread-7-processing-n:192.168.0.28:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/test/state.json] for 
collection [test] has occurred - updating... (live nodes size: [1])
2017-12-11 15:53:24.745 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [c:test s:shard2 
r:core_node8 x:test_shard2_replica_n6] o.a.s.c.ShardLeaderElectionContext I may 
be the new leader - try and sync
2017-12-11 15:53:24.850 INFO  
(zkCallback-3-thread-8-processing-n:192.168.0.28:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/test/state.json] for 
collection [test] has occurred - updating... (live nodes size: [1])
2017-12-11 15:53:24.850 INFO  
(zkCallback-3-thread-6-processing-n:192.168.0.28:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/test/state.json] for 
collection [test] has occurred - updating... (live nodes size: [1])
2017-12-11 15:53:27.248 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [c:test s:shard2 
r:core_node8 x:test_shard2_replica_n6] o.a.s.c.SyncStrategy Sync replicas to 
http://192.168.0.28:8983/solr/test_shard2_replica_n6/
2017-12-11 15:53:27.249 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [c:test s:shard2 
r:core_node8 x:test_shard2_replica_n6] o.a.s.c.SyncStrategy Sync Success - now 
sync replicas to me
2017-12-11 15:53:27.249 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [c:test s:shard2 
r:core_node8 x:test_shard2_replica_n6] o.a.s.c.SyncStrategy 
http://192.168.0.28:8983/solr/test_shard2_replica_n6/ has no replicas
2017-12-11 15:53:27.254 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [c:test s:shard2 
r:core_node8 x:test_shard2_replica_n6] o.a.s.c.ShardLeaderElectionContext I am 
the new leader: http://192.168.0.28:8983/solr/test_shard2_replica_n6/ shard2
2017-12-11 15:53:27.255 INFO  
(zkCallback-3-thread-5-processing-n:192.168.0.28:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/test/state.json] for 
collection [test] has occurred - updating... (live nodes size: [1])
2017-12-11 15:53:27.255 INFO  
(zkCallback-3-thread-6-processing-n:192.168.0.28:8983_solr) [   ] 
o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
state:SyncConnected type:NodeDataChanged path:/collections/test/state.json] for 
collection [test] has occurred - updating... (live nodes size: [1])
2017-12-11 15:53:55.582 INFO  (ScheduledTrigger-6-thread-2) [   ] 
o.a.s.c.a.SystemLogListener Collection .system does not exist, disabling 
logging.
2017-12-11 15:53:55.601 INFO  (qtp575335780-14) [   ] o.a.s.s.HttpSolrCall 
[admin] webapp=null path=/admin/metrics 
params={prefix=CORE.coreName=javabin=2=solr.core} status=0 
QTime=6
2017-12-11 15:53:55.606 INFO  
(AutoscalingActionExecutor-7-thread-1-processing-n:192.168.0.28:8983_solr) [   
] o.a.s.c.a.ExecutePlanAction No operations to execute for event: {
  "id":"14ff48669eb06380Tbkths0f4h9k570n1nrsqrq81v",
  "source":".auto_add_replicas",
  "eventTime":151300760540600,
  "eventType":"NODELOST",
  "properties":{
"eventTimes":[151300760540600],
"_enqueue_time_":151300763549300,
"nodeNames":["192.168.0.28:7574_solr"]}}
{code}

I think it's just registering the event, but doesn't or can't actually do 
anything since it's a single node (IOW, there isn't anywhere for it to do 
anything in this scenario). I saw it eventually time out and forcefully kill 
the process, but it seems Varun didn't see that (it was ~5 minutes before it 
did that, I think).

Probably need [~shalinmangar] or [~caomanhdat] to take a look to see if my 
hunch is correct.

If that's not it, SOLR-9137 made some change to the stop behavior and 

[jira] [Commented] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-11 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286087#comment-16286087
 ] 

Erick Erickson commented on SOLR-11740:
---

FWIW, the -noprompt is unnecessary; I see the same thing with:

./bin/solr start -e cloud

And the server stack trace looks like this (7x, OS X, Java 1.8.0_121):

INFO  - 2017-12-11 15:58:13.281; [   ] 
org.apache.solr.common.cloud.ZkStateReader$StateWatcher; A cluster state 
change: [WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/collections/gettingstarted/state.json] for collection [gettingstarted] 
has occurred - updating... (live nodes size: [1])
INFO  - 2017-12-11 15:58:13.282; [c:gettingstarted s:shard1 r:core_node3 
x:gettingstarted_shard1_replica_n1] org.apache.solr.core.SolrCore; 
[gettingstarted_shard1_replica_n1]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@34804f5b
INFO  - 2017-12-11 15:58:13.282; [c:gettingstarted s:shard2 r:core_node7 
x:gettingstarted_shard2_replica_n4] org.apache.solr.core.SolrCore; 
[gettingstarted_shard2_replica_n4]  CLOSING SolrCore 
org.apache.solr.core.SolrCore@21c3203d
INFO  - 2017-12-11 15:58:13.282; [c:gettingstarted s:shard1 r:core_node3 
x:gettingstarted_shard1_replica_n1] org.apache.solr.metrics.SolrMetricManager; 
Closing metric reporters for 
registry=solr.core.gettingstarted.shard1.replica_n1, tag=880824155
INFO  - 2017-12-11 15:58:13.283; [c:gettingstarted s:shard1 r:core_node3 
x:gettingstarted_shard1_replica_n1] org.apache.solr.metrics.SolrMetricManager; 
Closing metric reporters for 
registry=solr.collection.gettingstarted.shard1.leader, tag=880824155
INFO  - 2017-12-11 15:58:13.283; [c:gettingstarted s:shard2 r:core_node7 
x:gettingstarted_shard2_replica_n4] org.apache.solr.metrics.SolrMetricManager; 
Closing metric reporters for 
registry=solr.core.gettingstarted.shard2.replica_n4, tag=566435901
INFO  - 2017-12-11 15:58:13.283; [c:gettingstarted s:shard2 r:core_node7 
x:gettingstarted_shard2_replica_n4] org.apache.solr.metrics.SolrMetricManager; 
Closing metric reporters for 
registry=solr.collection.gettingstarted.shard2.leader, tag=566435901
INFO  - 2017-12-11 15:58:13.291; [   ] org.apache.solr.cloud.Overseer; Overseer 
(id=99156481766129664-192.168.1.222:8983_solr-n_00) closing
INFO  - 2017-12-11 15:58:13.292; [   ] 
org.apache.solr.cloud.Overseer$ClusterStateUpdater; Overseer Loop exiting : 
192.168.1.222:8983_solr
WARN  - 2017-12-11 15:58:13.292; [   ] 
org.apache.solr.cloud.autoscaling.OverseerTriggerThread; OverseerTriggerThread 
woken up but we are closed, exiting.
WARN  - 2017-12-11 15:58:13.295; [   ] 
org.apache.zookeeper.server.ZooKeeperServerMain; Server interrupted
java.lang.InterruptedException
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at 
org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:122)
at org.apache.solr.cloud.SolrZkServer$1.run(SolrZkServer.java:116)
INFO  - 2017-12-11 15:58:13.298; [   ] org.apache.solr.cloud.SolrZkServer$1; 
ZooKeeper Server exited.
ERROR - 2017-12-11 15:58:13.298; [   ] 
org.apache.zookeeper.server.ZooKeeperCriticalThread; Severe unrecoverable 
error, from thread : SyncThread:0
java.nio.channels.ClosedChannelException
at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:110)
at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:253)
at 
org.apache.zookeeper.server.persistence.Util.padLogFile(Util.java:215)
at 
org.apache.zookeeper.server.persistence.FileTxnLog.padFile(FileTxnLog.java:241)
at 
org.apache.zookeeper.server.persistence.FileTxnLog.append(FileTxnLog.java:219)
at 
org.apache.zookeeper.server.persistence.FileTxnSnapLog.append(FileTxnSnapLog.java:324)
at org.apache.zookeeper.server.ZKDatabase.append(ZKDatabase.java:470)
at 
org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:140)
INFO  - 2017-12-11 15:58:13.300; [   ] 
org.eclipse.jetty.server.handler.ContextHandler; Stopped 
o.e.j.w.WebAppContext@7a1ebcd8{/solr,null,UNAVAILABLE}{/Users/Erick/apache/solrJiras/branch_7x/solr/server/solr-webapp/webapp}

I can't pursue it ATM I'm afraid.

> bin/solr stop command always throws Connection refused
> --
>
> Key: SOLR-11740
> URL: https://issues.apache.org/jira/browse/SOLR-11740
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 7.2, master (8.0)
>
>
> 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 974 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/974/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:37613/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34287/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:37613/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34287/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([146B0BB715E6D10:AB8B6349C68DB8C0]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:990)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:306)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)

[jira] [Commented] (SOLR-11711) Fix minCount bug in distributed pivot & field facets

2017-12-11 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16286011#comment-16286011
 ] 

Houston Putman commented on SOLR-11711:
---

Oh sorry, I guess I misunderstood the implications of the word "Bug". It is 
just an inefficiency. I can change it back if you think it's misleading.

And I understand your issues with backporting further than 7x. 

> Fix minCount bug in distributed pivot & field facets
> 
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Assignee: Hoss Man
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned; therefore this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> changed the code to always use the feature. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, whereas it would actually be {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11634) Create collection doesn't respect `maxShardsPerNode`

2017-12-11 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285989#comment-16285989
 ] 

Nikolay Martynov commented on SOLR-11634:
-

To clarify: we have 1 JVM per box.

> Create collection doesn't respect `maxShardsPerNode`
> 
>
> Key: SOLR-11634
> URL: https://issues.apache.org/jira/browse/SOLR-11634
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>Assignee: Erick Erickson
>
> Command
> {noformat}
> curl 
> 'http://host:8983/solr/admin/collections?action=CREATE=xxx=16=3=config=2=shard:*,replica:<2,node:*=shard:*,replica:<2,sysprop.aws.az:*'
> {noformat}
> creates a collection with 1, 2, and 3 shards per node - looks like 
> {{maxShardsPerNode}} is being ignored.
> Adding {{rule=replica:<{},node:*}} seems to help, but I'm not sure if this is 
> correct and it doesn't seem to match documented behaviour.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk-9.0.1) - Build # 11 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/11/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMmapDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_F2ABB6C2EC3A63A7-001\testZLong-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_F2ABB6C2EC3A63A7-001\testZLong-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_F2ABB6C2EC3A63A7-001\testZLong-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\core\test\J1\temp\lucene.store.TestMmapDirectory_F2ABB6C2EC3A63A7-001\testZLong-001

at __randomizedtesting.SeedInfo.seed([F2ABB6C2EC3A63A7]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testRestart

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexAndTaxonomyReplicationClientTest_49E9EC7413289D7C-001\replicationClientTest-003\3:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexAndTaxonomyReplicationClientTest_49E9EC7413289D7C-001\replicationClientTest-003\3
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexAndTaxonomyReplicationClientTest_49E9EC7413289D7C-001\replicationClientTest-003\3:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J0\temp\lucene.replicator.IndexAndTaxonomyReplicationClientTest_49E9EC7413289D7C-001\replicationClientTest-003\3

at 
__randomizedtesting.SeedInfo.seed([49E9EC7413289D7C:DC64001596C02969]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.replicator.PerSessionDirectoryFactory.cleanupSession(PerSessionDirectoryFactory.java:58)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:259)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.testRestart(IndexAndTaxonomyReplicationClientTest.java:256)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21068 - Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21068/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testDistributions

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3740B856BF4CB922:88BFF9FC61B659BE]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testDistributions(StreamExpressionTest.java:6551)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:37569/lj

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 

[jira] [Commented] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285924#comment-16285924
 ] 

ASF subversion and git services commented on LUCENE-8090:
-

Commit 9ad84fea80a459be4e85b6ff6ef0a1976bcffe38 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9ad84fe ]

LUCENE-8090: Prevent stale threadstate reads in DocumentsWriterFlushControl


> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8090.
-
Resolution: Fixed

fixed

> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}
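
For readers hitting this for the first time: the trace above is a classic check-then-act race. A minimal, self-contained Java sketch of the same failure mode (purely illustrative, not the actual Lucene code) looks like this:

{code}
import java.util.concurrent.atomic.AtomicReference;

public class CheckThenActRace {
  // Stands in for the "largest non-pending writer" slot; null means another thread has taken it.
  static final AtomicReference<StringBuilder> slot = new AtomicReference<>(new StringBuilder());

  public static void main(String[] args) throws InterruptedException {
    Thread flusher = new Thread(() -> {
      for (int i = 0; i < 5_000_000; i++) {
        slot.set(null);                  // another thread checks the writer out...
        slot.set(new StringBuilder());   // ...and later installs a fresh one
      }
    });
    Thread picker = new Thread(() -> {
      for (int i = 0; i < 5_000_000; i++) {
        if (slot.get() != null) {
          // check-then-act without a shared lock: the slot can be nulled between
          // the check above and the dereference below
          slot.get().length();           // may throw NullPointerException
        }
      }
    });
    flusher.start();
    picker.start();
    flusher.join();
    picker.join();
  }
}
{code}

Run long enough, the picker thread dies with the same kind of NullPointerException, because the slot is cleared between its null check and its dereference.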



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285922#comment-16285922
 ] 

ASF subversion and git services commented on LUCENE-8090:
-

Commit 5059391923f18daf27a038d1b8b3c72d2375c919 in lucene-solr's branch 
refs/heads/branch_7_2 from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5059391 ]

LUCENE-8090: Prevent stale threadstate reads in DocumentsWriterFlushControl


> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285923#comment-16285923
 ] 

ASF subversion and git services commented on LUCENE-8090:
-

Commit 42fadfc0387f0a5b739e73d073a14eb1311b4a80 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=42fadfc ]

LUCENE-8090: Prevent stale threadstate reads in DocumentsWriterFlushControl


> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-11 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285910#comment-16285910
 ] 

Ignacio Vera commented on LUCENE-8086:
--

I guess we do not want to expose this complexity in this interface, and it would 
be nice if accuracy could be expressed in decimal degrees to follow the spatial4j 
convention.

We are dealing with unit planet models at the moment, so we'd better not worry 
about that for now.

What comes to my mind is that we can approximate linear distance by surface 
distance (in radians) at this level, since the distances are very tiny. Then the 
problem is easy: we just need to transform between degrees and radians to set the 
accuracy. I think we are overestimating the accuracy, so everything should be ok.

Does it make sense?
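
A tiny, hypothetical Java sketch of the conversion described above (the names and 
the 0.001-degree value are illustrative, not Geo3d or spatial4j API), assuming a 
unit planet model where, for such tiny angles, the angle in radians, the arc 
length and the straight-line chord are effectively interchangeable:

{code}
public class AccuracyConversion {
  public static void main(String[] args) {
    double accuracyDegrees = 0.001;   // accuracy given in decimal degrees (spatial4j convention)
    double accuracyRadians = Math.toRadians(accuracyDegrees);

    // On a unit sphere the arc length equals the angle in radians; for tiny angles
    // the straight-line chord 2*sin(theta/2) is indistinguishable from it.
    double chord = 2.0 * Math.sin(accuracyRadians / 2.0);

    System.out.printf("degrees=%g  radians=%g  chord=%g  relative difference=%g%n",
        accuracyDegrees, accuracyRadians, chord,
        Math.abs(chord - accuracyRadians) / accuracyRadians);
  }
}
{code}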



> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285904#comment-16285904
 ] 

Michael McCandless commented on LUCENE-8090:


+1, phew.

> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 338 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/338/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=22, 
name=ReplicationThread-index, state=RUNNABLE, 
group=TGRP-IndexReplicationClientTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=22, name=ReplicationThread-index, 
state=RUNNABLE, group=TGRP-IndexReplicationClientTest]
at 
__randomizedtesting.SeedInfo.seed([84874611C0695B74:B09A1B1D205A88B]:0)
Caused by: java.lang.AssertionError: handler failed too many times: -1
at __randomizedtesting.SeedInfo.seed([84874611C0695B74]:0)
at 
org.apache.lucene.replicator.IndexReplicationClientTest$4.handleUpdateException(IndexReplicationClientTest.java:304)
at 
org.apache.lucene.replicator.ReplicationClient$ReplicationThread.run(ReplicationClient.java:77)


FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:65254/_d/z

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:65254/_d/z
at 
__randomizedtesting.SeedInfo.seed([C8029DA6D0A82C53:4056A27C7E5441AB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1435 - Failure

2017-12-11 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1435/

9 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([33451B4D9585DE5F:D5D22F8DAC07273E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.ForceLeaderTest.bringBackOldLeaderAndSendDoc(ForceLeaderTest.java:381)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS-EA] Lucene-Solr-7.2-Linux (64bit/jdk-10-ea+32) - Build # 45 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/45/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.PivotFacetTest

Error Message:
Error from server at https://127.0.0.1:34525/solr/collection1: Async exception 
during distributed update: 127.0.0.1:43377 failed to respond

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34525/solr/collection1: Async exception during 
distributed update: 127.0.0.1:43377 failed to respond
at __randomizedtesting.SeedInfo.seed([61E61E2EB46BB251]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.analytics.SolrAnalyticsTestCase.setupCollection(SolrAnalyticsTestCase.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 17020 lines...]
   [junit4] Suite: org.apache.solr.analytics.facet.PivotFacetTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.2-Linux/solr/build/contrib/solr-analytics/test/J0/temp/solr.analytics.facet.PivotFacetTest_61E61E2EB46BB251-001/init-core-data-001
   [junit4]   2> Dec 11, 2017 11:30:59 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 1 leaked 
thread(s).
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
docValues:{}, maxPointsInLeafNode=1947, maxMBSortInHeap=5.239306382390618, 
sim=RandomSimilarity(queryNorm=true): {}, locale=fr, 
timezone=America/Thunder_Bay
   [junit4]   2> NOTE: Linux 4.10.0-40-generic 

Re: Lucene/Solr 7.2

2017-12-11 Thread Adrien Grand
We have another blocker: https://issues.apache.org/jira/browse/LUCENE-8090.

Could someone help Varun investigate
https://issues.apache.org/jira/browse/SOLR-11740? I just made it a blocker
as well until we know more about it.

On Sat, Dec 9, 2017 at 08:57, Varun Thacker wrote:

> This fails for me every single time:
> https://issues.apache.org/jira/browse/SOLR-11740
>
> Can someone with more knowledge of the bin/solr script confirm whether this
> affects only the "-e cloud" command or whether it is more widespread.
> That might help determine if we want to fix this before the release.
>
> On Fri, Dec 8, 2017 at 10:41 AM, Adrien Grand  wrote:
>
>> FYI we are backporting SOLR-11423
>>  to 7.2 so I'll build
>> a RC on Monday (assuming it will have been backported by then).
>>
>> On Thu, Dec 7, 2017 at 20:17, Adrien Grand wrote:
>>
>>> OK, it looks like all changes that we wanted to be included are now in?
>>> Please let me know if there is still something left to include in 7.2
>>> before building a RC.
>>>
>>> I noticed SOLR-11423 is in a weird state, it is included in the
>>> changelog in 7.1 but has only been committed to master. Did we forget to
>>> backport it?
>>>
>>> On Wed, Dec 6, 2017 at 21:13, Andrzej Białecki <
>>> andrzej.biale...@lucidworks.com> wrote:
>>>
 On 6 Dec 2017, at 18:45, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:

 I attached the patch to SOLR-11714, which disables the ‘searchRate’
 trigger - if there are no objections I’ll commit it shortly to branch_7.2.



 This has been committed now to branch_7_2 and I don’t have any other
 open issues for 7.2. Thanks!



 On 6 Dec 2017, at 15:51, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:


 On 6 Dec 2017, at 15:35, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:

 SOLR-11458 is committed and resolved - thanks for the patience.



 Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new
 feature (searchRate autoscaling trigger). It’s probably best to disable
 this feature in 7.2 rather than releasing a broken version, so I’d like to
 commit a patch that disables it (plus a note in CHANGES.txt).




 On 6 Dec 2017, at 14:02, Adrien Grand  wrote:

 Thanks for the heads up, Anshum.

 This leaves us with only SOLR-11458 to wait for before building a RC
 (which might be ready but just not marked as resolved).



 On Wed, Dec 6, 2017 at 13:47, Ishan Chattopadhyaya <
 ichattopadhy...@gmail.com> wrote:

> Hi Adrien,
> I'm planning to skip SOLR-11624 for this release (as per my last
> comment
> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121).
> If someone has an objection, please let me know; otherwise, please feel
> free to proceed with the release.
> I'll continue working on it anyway, and shall try to have it ready for
> the next release.
> Thanks,
> Ishan
>
> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand 
> wrote:
>
>> FYI I created the new branch for 7.2, so you will have to backport to
>> this branch. No hurry though, I mostly created the branch so that it's 
>> fine
>> to cherry-pick changes that may wait for 7.3 to be released.
>>
>> On Wed, Dec 6, 2017 at 08:53, Adrien Grand wrote:
>>
>>> Sorry to hear that Ishan, I hope you are doing better now. +1 to get
>>> SOLR-11624 in.
>>>
>>> On Wed, Dec 6, 2017 at 07:57, Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>>
 I was a bit unwell over the weekend and yesterday; I'm working on a
 very targeted fix for SOLR-11624 right now; I expect it to take 
 another 5-6
 hours.
 Is that fine with you, Adrien? If not, please go ahead with the
 release, and I'll volunteer later for a bugfix release for this after 
 7.2
 is out.

 On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand 
 wrote:

> Fine with me.
>
> On Tue, Dec 5, 2017 at 22:34, Varun Thacker wrote:
>
>> Hi Adrien,
>>
>> I'd like to commit SOLR-11590. The issue had a patch a couple of
>> weeks ago and has been reviewed but never got committed. I've run 
>> all the
>> tests twice as well to verify.
>>
>> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>>>
>>> On 5 Dec 2017, at 18:05, Adrien Grand 

[jira] [Updated] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-11740:

Fix Version/s: master (8.0)
   7.2

> bin/solr stop command always throws Connection refused
> --
>
> Key: SOLR-11740
> URL: https://issues.apache.org/jira/browse/SOLR-11740
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 7.2, master (8.0)
>
>
> Start Solr using {{./bin/solr start -e cloud -noprompt}} and then try 
> stopping it. I ran into this problem every time I stopped Solr on master. 
> I'm using Java 9, and it works fine on Solr 7.1 (I haven't checked the 7_2 
> branch yet).
> [master] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
> Sending stop command to Solr running on port 7574 ... waiting up to 180 
> seconds to allow Jetty process 40360 to stop gracefully.
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 40263 to stop gracefully.
> java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at java.net.Socket.connect(Socket.java:538)
>   at java.net.Socket.<init>(Socket.java:434)
>   at java.net.Socket.<init>(Socket.java:244)
>   at org.eclipse.jetty.start.Main.stop(Main.java:535)
>   at org.eclipse.jetty.start.Main.stop(Main.java:511)
>   at org.eclipse.jetty.start.Main.doStop(Main.java:499)
>   at org.eclipse.jetty.start.Main.start(Main.java:404)
>   at org.eclipse.jetty.start.Main.main(Main.java:76)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11740) bin/solr stop command always throws Connection refused

2017-12-11 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated SOLR-11740:

Priority: Blocker  (was: Major)

> bin/solr stop command always throws Connection refused
> --
>
> Key: SOLR-11740
> URL: https://issues.apache.org/jira/browse/SOLR-11740
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 7.2, master (8.0)
>
>
> Start Solr using {{./bin/solr start -e cloud -noprompt}} and then try 
> stopping it. I ran into this problem every time I stopped Solr on master. 
> I'm using Java 9, and it works fine on Solr 7.1 (I haven't checked the 7_2 
> branch yet).
> [master] ~/apache-work/lucene-solr/solr$ ./bin/solr  stop -all
> Sending stop command to Solr running on port 7574 ... waiting up to 180 
> seconds to allow Jetty process 40360 to stop gracefully.
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 40263 to stop gracefully.
> java.net.ConnectException: Connection refused (Connection refused)
>   at java.net.PlainSocketImpl.socketConnect(Native Method)
>   at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
>   at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
>   at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
>   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
>   at java.net.Socket.connect(Socket.java:589)
>   at java.net.Socket.connect(Socket.java:538)
>   at java.net.Socket.<init>(Socket.java:434)
>   at java.net.Socket.<init>(Socket.java:244)
>   at org.eclipse.jetty.start.Main.stop(Main.java:535)
>   at org.eclipse.jetty.start.Main.stop(Main.java:511)
>   at org.eclipse.jetty.start.Main.doStop(Main.java:499)
>   at org.eclipse.jetty.start.Main.start(Main.java:404)
>   at org.eclipse.jetty.start.Main.main(Main.java:76)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8086) G3d wrapper: Improve circles for non spherical planets

2017-12-11 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16285727#comment-16285727
 ] 

Karl Wright commented on LUCENE-8086:
-

[~ivera], the error distance is the linear (perpendicular) distance from a 
plane to a point that is supposedly on that plane but which is not quite.

The units we're using here are not radians -- those are an angular unit.  
Instead we're talking about the unit ellipsoid, where x^2/ab^2 + y^2/ab^2 + 
z^2/c^2 = 1.  (It is possible, as you noted before, to construct a non-unit 
planetmodel, but I don't know what effects that might have, and maybe we should 
put in a check to prevent it.)  The error is therefore relative to 1.0, so it's 
best described as a fraction of the circle or ellipsoid.
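
As a plain-Java illustration of that quantity (hypothetical code, not Geo3d 
itself): the perpendicular distance from a plane a*x + b*y + c*z + D = 0 to a 
point that is supposed to lie on it, which on the unit model reads directly as a 
fraction of 1.0.

{code}
public class PlanePointError {
  // Perpendicular (linear) distance from the plane a*x + b*y + c*z + d = 0 to (x, y, z).
  static double distanceToPlane(double a, double b, double c, double d,
                                double x, double y, double z) {
    return Math.abs(a * x + b * y + c * z + d) / Math.sqrt(a * a + b * b + c * c);
  }

  public static void main(String[] args) {
    // The equatorial plane z = 0 of a unit model and a point that is almost, but
    // not quite, on it; the result is the "error distance" relative to 1.0.
    double error = distanceToPlane(0, 0, 1, 0, 0.7071, 0.7071, 1e-7);
    System.out.println("linear error as a fraction of the unit model: " + error);
  }
}
{code}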



> G3d wrapper: Improve circles for non spherical planets
> --
>
> Key: LUCENE-8086
> URL: https://issues.apache.org/jira/browse/LUCENE-8086
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>
> Hi [~dsmiley],
> The purpose of this ticket is to add a new circle shape (GeoExactCircle) for 
> non-spherical planets and therefore remove the method relate from 
> Geo3dCircleShape. The patch will include some simplifications on the wrapper 
> and some refactoring of the tests.
> I will open shortly a pull request.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8090) IndexWriter#flushNextBuffer can cause NPE

2017-12-11 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8090:

Attachment: LUCENE-8090.patch

patch synchronizing on the DWFlushControl
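
A rough sketch of the general shape such a fix takes (hypothetical code, not the 
attached patch): both the thread that resets the shared slot and the thread that 
picks from it synchronize on the same monitor, so the null check and the 
dereference can no longer be interleaved with a concurrent reset.

{code}
import java.util.concurrent.atomic.AtomicReference;

public class SynchronizedCheckThenAct {
  static final Object lock = new Object();
  static final AtomicReference<StringBuilder> slot = new AtomicReference<>(new StringBuilder());

  // The picking side holds the same monitor as the resetting side, making the
  // null check and the dereference atomic with respect to each other.
  static int pick() {
    synchronized (lock) {
      StringBuilder current = slot.get();
      return current == null ? -1 : current.length();
    }
  }

  static void reset() {
    synchronized (lock) {
      slot.set(null);
      slot.set(new StringBuilder());
    }
  }

  public static void main(String[] args) throws InterruptedException {
    Thread flusher = new Thread(() -> { for (int i = 0; i < 5_000_000; i++) reset(); });
    Thread picker  = new Thread(() -> { for (int i = 0; i < 5_000_000; i++) pick(); });
    flusher.start();
    picker.start();
    flusher.join();
    picker.join();
    System.out.println("done without NPE: check and dereference happen under one lock");
  }
}
{code}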

> IndexWriter#flushNextBuffer can cause NPE
> -
>
> Key: LUCENE-8090
> URL: https://issues.apache.org/jira/browse/LUCENE-8090
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: master (8.0), 7.2, 7.3
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Blocker
> Fix For: master (8.0), 7.2, 7.3
>
> Attachments: LUCENE-8090.patch
>
>
> There is a missing synchronized statement in DocumentsWriterFlushControl 
> causing failures like this:
> {code}
> 04:07:06[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestIndexWriterDelete -Dtests.method=testDeleteAllNoDeadLock 
> -Dtests.seed=D43A2A18EB61840A -Dtests.slow=true -Dtests.locale=sv-SE 
> -Dtests.timezone=Pacific/Kosrae -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> 04:07:06[junit4] ERROR   0.21s J1 | 
> TestIndexWriterDelete.testDeleteAllNoDeadLock <<<
> 04:07:06[junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=516, name=Thread-413, state=RUNNABLE, 
> group=TGRP-TestIndexWriterDelete]
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A:E723C5066BA23E5C]:0)
> 04:07:06[junit4]> Caused by: java.lang.RuntimeException: 
> java.lang.NullPointerException
> 04:07:06[junit4]> at 
> __randomizedtesting.SeedInfo.seed([D43A2A18EB61840A]:0)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:332)
> 04:07:06[junit4]> Caused by: java.lang.NullPointerException
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.findLargestNonPendingWriter(DocumentsWriterFlushControl.java:730)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriterFlushControl.checkoutLargestNonPendingWriter(DocumentsWriterFlushControl.java:750)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.DocumentsWriter.flushOneDWPT(DocumentsWriter.java:256)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.IndexWriter.flushNextBuffer(IndexWriter.java:3203)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.maybeFlushOrCommit(RandomIndexWriter.java:189)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:174)
> 04:07:06[junit4]> at 
> org.apache.lucene.index.TestIndexWriterDelete$1.run(TestIndexWriterDelete.java:326)
> 04:07:06[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=false): {field=DFR I(ne)1, contents=DFR 
> G3(800.0), city=DFR G1, id=LM Jelinek-Mercer(0.10), content=DFR I(ne)B1}, 
> locale=sv-SE, timezone=Pacific/Kosrae
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 973 - Still Unstable!

2017-12-11 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/973/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Error from server at http://127.0.0.1:33985: Could not fully create collection: 
collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33985: Could not fully create collection: 
collection1
at 
__randomizedtesting.SeedInfo.seed([324DC3B7E1C84CF3:BA19FC6D4F34210B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:384)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:333)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
