[jira] [Commented] (LUCENE-7962) GeoPaths need ability to compute distance along route WITHOUT perpendicular leg

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158165#comment-16158165
 ] 

ASF subversion and git services commented on LUCENE-7962:
-

Commit ba29cce46e0cc17fa134b062a385bcd7d48ce801 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba29cce ]

LUCENE-7962: Add path support for computing distance along the path only.


> GeoPaths need ability to compute distance along route WITHOUT perpendicular 
> leg
> ---
>
> Key: LUCENE-7962
> URL: https://issues.apache.org/jira/browse/LUCENE-7962
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: 6.6
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Distance computation for GeoPaths properly computes distance as distance 
> along the route PLUS the perpendicular distance from the route to the point 
> in question.  That is fine but there is another use case for GeoPaths, which 
> is to compute distance along the route without the perpendicular leg.
> The proposal is to add a method for GeoPath implementations only that 
> computes this distance.
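
To make the two notions concrete, here is a minimal plain-Java sketch (not the
spatial3d API touched by the commits above) that computes both quantities for a
single great-circle segment on a unit sphere, using the standard cross-track /
along-track formulas; the segment A -> B, the query point P and all names are
illustrative only.

{code:java}
// Illustrative geometry only -- not the Lucene spatial3d GeoPath API.
// All coordinates are latitude/longitude in radians on a unit sphere.
public final class AlongPathDistanceSketch {

  // Central angle between two points (haversine formula).
  static double angularDistance(double lat1, double lon1, double lat2, double lon2) {
    double dLat = lat2 - lat1, dLon = lon2 - lon1;
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
        + Math.cos(lat1) * Math.cos(lat2) * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * Math.asin(Math.sqrt(a));
  }

  // Initial bearing from point 1 to point 2.
  static double bearing(double lat1, double lon1, double lat2, double lon2) {
    double y = Math.sin(lon2 - lon1) * Math.cos(lat2);
    double x = Math.cos(lat1) * Math.sin(lat2)
        - Math.sin(lat1) * Math.cos(lat2) * Math.cos(lon2 - lon1);
    return Math.atan2(y, x);
  }

  public static void main(String[] args) {
    // Path segment A -> B along the equator, query point P slightly north of it.
    double aLat = 0.0, aLon = 0.0, bLat = 0.0, bLon = 0.5, pLat = 0.1, pLon = 0.25;

    double dAP = angularDistance(aLat, aLon, pLat, pLon);
    // Cross-track distance: the perpendicular leg from the route to P.
    double crossTrack = Math.asin(Math.sin(dAP)
        * Math.sin(bearing(aLat, aLon, pLat, pLon) - bearing(aLat, aLon, bLat, bLon)));
    // Along-track distance: distance along the route to the foot of that leg.
    double alongTrack = Math.acos(Math.cos(dAP) / Math.cos(crossTrack));

    // Existing GeoPath semantics: distance along the route PLUS the perpendicular leg.
    System.out.println("along + perpendicular = " + (alongTrack + Math.abs(crossTrack)));
    // What this issue adds: the distance along the route only.
    System.out.println("along only            = " + alongTrack);
  }
}
{code}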



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7962) GeoPaths need ability to compute distance along route WITHOUT perpendicular leg

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158164#comment-16158164
 ] 

ASF subversion and git services commented on LUCENE-7962:
-

Commit fda254b5267b430bbacf7831a32683fc4f374d61 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fda254b ]

LUCENE-7962: Add path support for computing distance along the path only.


> GeoPaths need ability to compute distance along route WITHOUT perpendicular 
> leg
> ---
>
> Key: LUCENE-7962
> URL: https://issues.apache.org/jira/browse/LUCENE-7962
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: 6.6
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Distance computation for GeoPaths properly computes distance as distance 
> along the route PLUS the perpendicular distance from the route to the point 
> in question.  That is fine but there is another use case for GeoPaths, which 
> is to compute distance along the route without the perpendicular leg.
> The proposal is to add a method for GeoPath implementations only that 
> computes this distance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7962) GeoPaths need ability to compute distance along route WITHOUT perpendicular leg

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158161#comment-16158161
 ] 

ASF subversion and git services commented on LUCENE-7962:
-

Commit 99ae6f87c8a81129c61e53520ae236fb82069b53 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99ae6f8 ]

LUCENE-7962: Add path support for computing distance along the path only.


> GeoPaths need ability to compute distance along route WITHOUT perpendicular 
> leg
> ---
>
> Key: LUCENE-7962
> URL: https://issues.apache.org/jira/browse/LUCENE-7962
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Affects Versions: 6.6
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> Distance computation for GeoPaths properly computes distance as distance 
> along the route PLUS the perpendicular distance from the route to the point 
> in question.  That is fine but there is another use case for GeoPaths, which 
> is to compute distance along the route without the perpendicular leg.
> The proposal is to add a method for GeoPath implementations only that 
> computes this distance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7951) New wrapper classes for Geo3d

2017-09-07 Thread Ignacio Vera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158156#comment-16158156
 ] 

Ignacio Vera commented on LUCENE-7951:
--

Hi [~dsmiley],

Updating the configuration files makes precommit happy indeed.

I updated Geo3dSpatialContextFactory accordingly but note that:

* initPlanetModel(args) must be called first so that generated objects get the 
right planet model.
* The calculator must always be constructed when initCalculator() is called, 
since the planet model can be different.

If you agree, I can replace the current Geo3dShape with the new one and migrate 
the test.

BTW, is the pull request I created correct?
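
A rough sketch of the ordering constraint described in the two points above;
apart from the initPlanetModel/initCalculator names taken from this comment,
everything here is illustrative and not the actual Geo3dSpatialContextFactory
code.

{code:java}
import java.util.Map;

class Geo3dFactorySketch {
  private String planetModel = "sphere";   // stands in for the real PlanetModel
  private Object distanceCalculator;       // stands in for the real calculator

  void init(Map<String, String> args) {
    // 1) Resolve the planet model first so every object created afterwards
    //    (shapes, calculator) is bound to the right model.
    initPlanetModel(args);
    // 2) Rebuild the calculator unconditionally, because the planet model
    //    may differ from a previously configured one.
    initCalculator();
  }

  private void initPlanetModel(Map<String, String> args) {
    planetModel = args.getOrDefault("planetModel", "sphere");
  }

  private void initCalculator() {
    // Always construct a fresh calculator bound to the current planet model.
    distanceCalculator = "Geo3dDistanceCalculator(" + planetModel + ")";
  }
}
{code}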

> New wrapper classes for Geo3d
> -
>
> Key: LUCENE-7951
> URL: https://issues.apache.org/jira/browse/LUCENE-7951
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7951_build.patch, LUCENE_7951_build.patch, 
> LUCENE-7951.patch, LUCENE-7951.patch
>
>
> Hi,
> After the latest developments in the Geo3d library, in particular:
> [https://issues.apache.org/jira/browse/LUCENE-7906] : Spatial relationships 
> between GeoShapes
> [https://issues.apache.org/jira/browse/LUCENE-7936]: Serialization of 
> GeoShapes.
> I propose a new set of wrapper classes which can, for example, be linked to 
> Solr as they implement their own SpatialContextFactory. It provides the 
> capability of indexing shapes with spherical geometry.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.0 - Build # 128 - Unstable

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/128/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestHdfsCloudBackupRestore.test

Error Message:
Error from server at https://127.0.0.1:44216/solr: Could not restore core

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:44216/solr: Could not restore core
at 
__randomizedtesting.SeedInfo.seed([259AE5AA5F7E8D83:ADCEDA70F182E07B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testBackupAndRestore(AbstractCloudBackupRestoreTestCase.java:275)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:136)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20433 - Unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20433/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseParallelGC --illegal-access=deny

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 1) 
Thread[id=1347, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest] at 
java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@9/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=1349, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
 at java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)3) 
Thread[id=1348, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-SendThread(127.0.0.1:35463),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]  
   at java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest: 
   1) Thread[id=1347, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@9/java.lang.Thread.run(Thread.java:844)
   2) Thread[id=1349, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
   3) Thread[id=1348, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-SendThread(127.0.0.1:35463),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1051)
at __randomizedtesting.SeedInfo.seed([D9C4593465BA3C34]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1348, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-SendThread(127.0.0.1:35463),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]  
   at java.base@9/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1348, 
name=TEST-ChaosMonkeyNothingIsSafeWithPullReplicasTest.test-seed#[D9C4593465BA3C34]-SendThread(127.0.0.1:35463),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeWithPullReplicasTest]
at java.base@9/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:997)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
at __randomizedtesting.SeedInfo.seed([D9C4593465BA3C34]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 

[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158102#comment-16158102
 ] 

Cao Manh Dat commented on SOLR-8344:


[~tomasflobbe] Yeah, in my benchmark for Lucene they are almost the same, so I 
would like to read from docValues so that the docValues fields can be cached.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158098#comment-16158098
 ] 

Tomás Fernández Löbbe commented on SOLR-8344:
-

bq. Because we already have to pay for seek cost for reading field1 therefore 
reading other fields from stored will be faster than reading from DV
That would be my guess. But then, wouldn't we prefer to use stored fields for 
all fields (if possible) if at least one of the fields needs to come from 
stored fields? In the patch, it looks like, if rows >= 100, then we try to get 
everything from DVs when possible, right?


> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158097#comment-16158097
 ] 

Cao Manh Dat edited comment on SOLR-8344 at 9/8/17 4:14 AM:


BTW I think that for this ticket the current optimization is good enough (we 
can remove the numRows check before doing the commit), and we can always change 
the optimize method later.


was (Author: caomanhdat):
BTW I think that for this ticket, the current optimization is good enough ( we 
can more numRows check before doing commit ). And we can always change the 
optimize method latter.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158097#comment-16158097
 ] 

Cao Manh Dat commented on SOLR-8344:


BTW I think that for this ticket the current optimization is good enough (we 
can remove the numRows check before doing the commit), and we can always change 
the optimize method later.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 849 - Still Failing

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/849/

No tests ran.

Build Log:
[...truncated 27437 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (9.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.0 MB in 0.06 sec (464.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 69.1 MB in 0.06 sec (1124.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 79.4 MB in 0.07 sec (1120.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6166 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6166 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.6.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1484, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1428, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1466, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 622, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 774, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1404, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:622:
 exec returned: 1

Total time: 68 minutes 10 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158083#comment-16158083
 ] 

Cao Manh Dat commented on SOLR-8344:


Ah yeah, you're right.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158082#comment-16158082
 ] 

David Smiley commented on SOLR-8344:


I mean just call {{doc.remove(fieldName)}}; no need to guard this with an "if".  
If it's not there, doc.remove is harmless.
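
For illustration, a tiny self-contained example of why the guard is redundant;
a plain HashMap stands in for the document here.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class RemoveFieldSketch {
  public static void main(String[] args) {
    Map<String, Object> doc = new HashMap<>();
    doc.put("id", "1");

    String fieldName = "price";  // not present in the document

    // Guarded form from the patch:
    if (doc.containsKey(fieldName)) doc.remove(fieldName);

    // Equivalent, simpler form: Map.remove simply returns null when the key
    // is absent, so the containsKey guard adds nothing.
    doc.remove(fieldName);

    System.out.println(doc);  // {id=1} either way
  }
}
{code}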

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158073#comment-16158073
 ] 

Cao Manh Dat edited comment on SOLR-8344 at 9/8/17 3:53 AM:


bq. Which scenario is this in your benchmark above? Or some other benchmark you 
allude to?
Yeah, it belongs to another simple benchmark.
bq. I understand that but don't see what that has to do with the number of 
documents.
Hmm, I re-ran the benchmark and you're right. They are almost the same.
bq. In the event every field in "fl" has docValues, stored, single valued: will 
your patch use docValues data always
Yeah, it always uses docValues.
bq. If in some request we realize we can avoid accessing the stored document and 
instead do only with docValues, we should avoid polluting the document cache as 
well
With this patch, if we notice that we do not need the stored fields, we skip 
reading the stored document as well as caching it.
bq. In your patch, this: if (doc.containsKey(fieldName)) doc.remove(fieldName); 
can be simplified to remove the needless condition
I don't know what you mean, but if we remove this line some tests fail 
(TestPseudoReturnFields).



was (Author: caomanhdat):
bq. Which scenario is this in your benchmark above? Or some other benchmark you 
allude to?
Yeah, it belongs to another simple benchmark.
bq. I understand that but don't see what that has to do with the number of 
documents.
Hmm, I re-ran the benchmark and you're right. They are almost the same.
bq. In the event every field in "fl" has docValues, stored, single valued: will 
your patch use docValues data always
Yeah, it always uses docValues.
bq. If in some request we realize we can avoid accessing the stored document and 
instead do only with docValues, we should avoid polluting the document cache as 
well
With this patch, if we notice that we do not need the stored fields, we skip 
reading the stored document as well as caching it.



> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158073#comment-16158073
 ] 

Cao Manh Dat commented on SOLR-8344:


bq. Which scenario is this in your benchmark above? Or some other benchmark you 
allude to?
Yeah, it belongs to another simple benchmark.
bq. I understand that but don't see what that has to do with the number of 
documents.
Hmm, I re-ran the benchmark and you're right. They are almost the same.
bq. In the event every field in "fl" has docValues, stored, single valued: will 
your patch use docValues data always
Yeah, it always uses docValues.
bq. If in some request we realize we can avoid accessing the stored document and 
instead do only with docValues, we should avoid polluting the document cache as 
well
With this patch, if we notice that we do not need the stored fields, we skip 
reading the stored document as well as caching it.



> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd

2017-09-07 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158067#comment-16158067
 ] 

Jason Gerlowski commented on SOLR-7871:
---

I like the "two-step rocket" approach, and will move forward with that.

I'm tempted to vote the other way...for the approach that your existing patch 
takes. (maintain backcompat by emulating as much current shell/cmd logic as 
possible in Java-land).  It does a great job of trimming logic from the 
bash/cmd scripts, which is a change I really want to see (see SOLR-11206).  But 
it's hard to make the argument that that is required for this JIRA.  The best 
thing is probably the simplest thing, as you pointed out above.

So I'll go forward with the approach you laid out above in your previous 
comment.

> Platform independent config file instead of solr.in.sh and solr.in.cmd
> --
>
> Key: SOLR-7871
> URL: https://issues.apache.org/jira/browse/SOLR-7871
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin/solr
> Attachments: SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch
>
>
> Spinoff from SOLR-7043
> The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable 
> batch files, but all they do is to set environment variables for the start 
> scripts on the format {{key=value}}
> Suggest to instead have one central platform independent config file e.g. 
> {{bin/solr.yml}} or {{bin/solrstart.properties}} which is parsed by 
> {{SolrCLI.java}}.
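
As a rough illustration of the proposal (not an actual patch), a key=value file
like the one described could be read from Java with java.util.Properties; the
file name is taken from the suggestion above, and the two keys below are only
examples.

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class SolrStartConfigSketch {
  public static void main(String[] args) throws IOException {
    Properties cfg = new Properties();
    // Platform-independent key=value settings, parsed once instead of being
    // sourced by solr.in.sh / solr.in.cmd.
    try (FileInputStream in = new FileInputStream("bin/solrstart.properties")) {
      cfg.load(in);
    }
    String heap = cfg.getProperty("SOLR_HEAP", "512m");
    String zkHost = cfg.getProperty("ZK_HOST", "");
    System.out.println("heap=" + heap + " zkHost=" + zkHost);
  }
}
{code}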



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+181) - Build # 375 - Still Unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/375/
Java: 32bit/jdk-9-ea+181 -server -XX:+UseConcMarkSweepGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.handler.TestSQLHandler.doTest

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([CC62725054540FBE:6B26CAF439EF1C07]:0)
at 
org.apache.solr.handler.TestSQLHandler.testBasicSelect(TestSQLHandler.java:181)
at org.apache.solr.handler.TestSQLHandler.doTest(TestSQLHandler.java:82)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1709 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16158059#comment-16158059
 ] 

David Smiley commented on SOLR-8344:


bq. No, It matter in case of "fl" contains a field which is stored only (let's 
call it field1) and "rows" is small. 

Which scenario is this in your benchmark above?  Or some other benchmark you 
allude to?

bq. Because we already have to pay for seek cost for reading field1 therefore 
reading other fields from stored will be faster than reading from DV

I understand that but don't see what that has to do with the number of 
documents.

In the event _every_ field in "fl" has docValues, stored, single valued: will 
your patch use docValues data always?  If not, why not?  Maybe I should study 
your patch further but I was a bit confused.

I would like to consider the relationship of this optimization to the document 
cache, if not in this patch then perhaps in a follow-up.  If in some request we 
realize we can avoid accessing the stored document and instead work only with 
docValues, we should avoid polluting the document cache as well, I think.  
Maybe we will.  Consider the first phase of distributed search that only wants 
the uniqueKey field.

In your patch, this: {{if (doc.containsKey(fieldName)) doc.remove(fieldName);}} 
can be simplified to remove the needless condition.
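
For illustration only (simplified booleans and sets rather than the real Solr
schema and document-fetching APIs), the decision being discussed boils down to
something like the following sketch.

{code:java}
import java.util.Set;

class FieldSourceSketch {
  // True when every requested "fl" field can be served from docValues
  // (stored + docValues + single-valued), so the stored document -- and,
  // per the discussion above, the document cache -- can be skipped.
  static boolean useDocValuesOnly(Set<String> flFields,
                                  Set<String> dvSingleValuedStored) {
    return dvSingleValuedStored.containsAll(flFields);
  }

  public static void main(String[] args) {
    Set<String> fl = Set.of("id", "price");
    Set<String> dvFields = Set.of("id", "price", "popularity");
    System.out.println(useDocValuesOnly(fl, dvFields));  // true -> docValues path
  }
}
{code}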

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[ANNOUNCE] Apache Solr 6.6.1 released

2017-09-07 Thread Varun Thacker
7 September 2017, Apache Solr™ 6.6.1 available

The Lucene PMC is pleased to announce the release of Apache Solr 6.6.1

Solr is the popular, blazing fast, open source NoSQL search platform from the
Apache Lucene project. Its major features include powerful full-text search,
hit highlighting, faceted search and analytics, rich document parsing,
geospatial search, extensive REST APIs as well as parallel SQL. Solr is
enterprise grade, secure and highly scalable, providing fault tolerant
distributed search and indexing, and powers the search and navigation features
of many of the world's largest internet sites.

This release includes 15 bug fixes since the 6.6.0 release. Some of the
major fixes are:

* Standalone Solr loads UNLOADed core on request

* ParallelStream should set the StreamContext when constructing SolrStreams

* CloudSolrStream.toExpression incorrectly handles fq clauses

* CoreContainer.load needs to send lazily loaded core descriptors to the
proper list rather than send them all to the transient lists

* Creating a core should write a core.properties file first and clean up on
failure

* Clean up a few details left over from pluggable transient core and
untangling

* Provide a way to know when Core Discovery is finished and when all async
cores are done loading

* CDCR bootstrapping can get into an infinite loop when a core is reloaded

* SolrJmxReporter is broken on core reload. This resulted in some or most
metrics not being reported via JMX after core reloads, depending on timing

* Creating a core.properties fails if the parent of core.properties is a
symlinked directory

* StreamHandler should allow connections to be closed early

* Certain admin UI pages would not load up correctly with kerberos enabled

* Fix DOWNNODE -> queue-work znode explosion in ZooKeeper

* Upgrade to Hadoop 2.7.4 to fix incompatibility with Java 9

* Fix bin/solr.cmd so it can run properly on Java 9

Furthermore, this release includes Apache Lucene 6.6.1 which includes 2 bug
fixes since the 6.6.0 release.

The release is available for immediate download at:

  http://www.apache.org/dyn/closer.lua/lucene/solr/6.6.1

Please read CHANGES.txt for a detailed list of changes:

  https://lucene.apache.org/solr/6_6_1/changes/Changes.html

Please report any feedback to the mailing lists
(http://lucene.apache.org/solr/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring
network for distributing releases. It is possible that the mirror you
are using may not have replicated the release yet. If that is the
case, please try another mirror. This also goes for Maven access.


[ANNOUNCE] Apache Lucene 6.6.1 released

2017-09-07 Thread Varun Thacker
7 September 2017, Apache Lucene™ 6.6.1 available

The Lucene PMC is pleased to announce the release of Apache Lucene 6.6.1

Apache Lucene is a high-performance, full-featured text search engine
library written entirely in Java. It is a technology suitable for nearly
any application that requires full-text search, especially cross-platform.

This release contains 2 bug fixes since the 6.6.0 release:

  * Documents with multiple points that should match might not match on a
memory index
  * A query which has only one synonym with AND as the default operator
would wrongly translate as an AND between the query term and the synonym

The release is available for immediate download at:

  http://www.apache.org/dyn/closer.lua/lucene/java/6.6.1

Please read CHANGES.txt for a full list of new features and changes:

  https://lucene.apache.org/core/6_6_1/changes/Changes.html

Please report any feedback to the mailing lists
(http://lucene.apache.org/core/discussion.html)

Note: The Apache Software Foundation uses an extensive mirroring network
for distributing releases.  It is possible that the mirror you are using
may not have replicated the release yet.  If that is the case, please
try another mirror.  This also goes for Maven access.


[JENKINS-EA] Lucene-Solr-6.6-Linux (32bit/jdk-9-ea+181) - Build # 154 - Still Unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/154/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseG1GC --illegal-access=deny

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Something is broken in the assert for no shards using the same indexDir - 
probably something was changed in the attributes published in the MBean of 
SolrCore : {}

Stack Trace:
java.lang.AssertionError: Something is broken in the assert for no shards using 
the same indexDir - probably something was changed in the attributes published 
in the MBean of SolrCore : {}
at 
__randomizedtesting.SeedInfo.seed([9341DF159E363B6F:DB34ABA1980514FA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.checkNoTwoShardsUseTheSameIndexDir(CollectionsAPIDistributedZkTest.java:646)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:524)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-11340) Add sample size calculator Stream Evaluator

2017-09-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11340:
-

 Summary: Add sample size calculator Stream Evaluator
 Key: SOLR-11340
 URL: https://issues.apache.org/jira/browse/SOLR-11340
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add a function that calculates sample sizes using Slovin's 
formula.

http://www.statisticshowto.com/how-to-use-slovins-formula/
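
For reference, Slovin's formula is n = N / (1 + N * e^2), where N is the
population size and e the desired margin of error. A tiny illustrative sketch
(not the eventual Stream Evaluator code):

{code:java}
public class SlovinSketch {
  // Slovin's formula: n = N / (1 + N * e^2)
  static double sampleSize(double populationSize, double errorMargin) {
    return populationSize / (1.0 + populationSize * errorMargin * errorMargin);
  }

  public static void main(String[] args) {
    // Population of 10,000 with a 5% margin of error -> about 385 samples.
    System.out.println(Math.ceil(sampleSize(10_000, 0.05)));
  }
}
{code}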



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11340) Add sample size calculator Stream Evaluator

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11340:
-

Assignee: Joel Bernstein

> Add sample size calculator Stream Evaluator
> ---
>
> Key: SOLR-11340
> URL: https://issues.apache.org/jira/browse/SOLR-11340
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> This ticket will add a function that calculates sample sizes using Slovin's 
> formula.
> http://www.statisticshowto.com/how-to-use-slovins-formula/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11340) Add sample size calculator Stream Evaluator

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11340:
--
Fix Version/s: 7.1
   master (8.0)

> Add sample size calculator Stream Evaluator
> ---
>
> Key: SOLR-11340
> URL: https://issues.apache.org/jira/browse/SOLR-11340
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> This ticket will add a function that calculates sample sizes using Slovin's 
> formula.
> http://www.statisticshowto.com/how-to-use-slovins-formula/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157999#comment-16157999
 ] 

Mano Kovacs commented on SOLR-10783:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m 
00s{color} | {color:green} The patch does not contain any @author tags. {color} 
|
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
00s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 60m 
54s{color} | {color:green} solr in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
08s{color} | {color:red} solr in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-10783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12885968/SOLR-10783.patch |
| Optional Tests |  asflicense  javac  unit  |
| uname | Darwin MunawAir.local 15.6.0 Darwin Kernel Version 15.6.0: Fri Feb 17 
10:21:18 PST 2017; root:xnu-3248.60.11.4.1~1/RELEASE_X86_64 x86_64 |
| Build tool | ant |
| Personality | 
/Users/munaw/repos/lucene-solr/dev-tools/test-patch/solr-yetus-personality.sh |
| git revision | master / ce29124 |
| Default Java | 1.8.0_45 |
| asflicense | artifact/patchprocess/patch-asflicense-._solr.txt |
| modules | C: solr solr/core solr/server U: solr |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.
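
As a rough sketch of the idea (not the attached patch): Hadoop's
Configuration.getPassword(...) resolves a secret from a configured credential
provider instead of a plain-text property. The provider path and the key name
below are only examples.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class CredentialProviderSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Point Hadoop at a credential store (example path).
    conf.set("hadoop.security.credential.provider.path",
        "jceks://file/var/solr/creds.jceks");
    // Resolve an SSL store password by key (example key name).
    char[] password = conf.getPassword("solr.jetty.keystore.password");
    System.out.println(password == null ? "not found" : "password resolved");
  }
}
{code}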



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11339) Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11339:
-

Assignee: Joel Bernstein

> Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators
> --
>
> Key: SOLR-11339
> URL: https://issues.apache.org/jira/browse/SOLR-11339
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Euclidean distance. This ticket will 
> add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators
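For reference, a minimal sketch of what these distances compute, using the Apache Commons Math DistanceMeasure implementations the evaluators would presumably delegate to (ManhattanDistance, ChebyshevDistance, CanberraDistance, EarthMoversDistance); the wiring into Streaming Expressions is not shown:

import org.apache.commons.math3.ml.distance.CanberraDistance;
import org.apache.commons.math3.ml.distance.ChebyshevDistance;
import org.apache.commons.math3.ml.distance.EarthMoversDistance;
import org.apache.commons.math3.ml.distance.ManhattanDistance;

public class DistanceSketch {
  public static void main(String[] args) {
    double[] a = {1.0, 2.0, 3.0};
    double[] b = {4.0, 6.0, 8.0};
    // Each class implements DistanceMeasure.compute(double[], double[]).
    System.out.println("manhattan  = " + new ManhattanDistance().compute(a, b)); // 12.0
    System.out.println("chebyshev  = " + new ChebyshevDistance().compute(a, b)); // 5.0
    System.out.println("canberra   = " + new CanberraDistance().compute(a, b));
    System.out.println("earthMover = " + new EarthMoversDistance().compute(a, b));
  }
}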



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11339) Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11339:
--
Fix Version/s: 7.1
   master (8.0)

> Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators
> --
>
> Key: SOLR-11339
> URL: https://issues.apache.org/jira/browse/SOLR-11339
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Euclidean distance. This ticket will 
> add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11339) Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11339:
--
Description: Streaming Expressions already supports Euclidean distance. 
This ticket will add Canberra, Chebyshev, Earth Movers and Manhattan Distance 
Stream Evaluators

> Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators
> --
>
> Key: SOLR-11339
> URL: https://issues.apache.org/jira/browse/SOLR-11339
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Euclidean distance. This ticket will 
> add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11339) Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11339:
--
Summary: Add Canberra, Chebyshev, Earth Movers and Manhattan Distance 
Stream Evaluators  (was: Add Canberra, Chebyshev, Earth Movers and Manhattan 
Distance)

> Add Canberra, Chebyshev, Earth Movers and Manhattan Distance Stream Evaluators
> --
>
> Key: SOLR-11339
> URL: https://issues.apache.org/jira/browse/SOLR-11339
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11339) Add Canberra, Chebyshev, Earth Movers and Manhattan Distance

2017-09-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11339:
-

 Summary: Add Canberra, Chebyshev, Earth Movers and Manhattan 
Distance
 Key: SOLR-11339
 URL: https://issues.apache.org/jira/browse/SOLR-11339
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11338) Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11338:
--
Description: 
Streaming Expressions already supports Pearson's product moment correlation.

This ticket adds *Kendall's Tau-b rank correlation* and *Spearmans rank 
correlation* Stream Evaluators.

Functions are backed by Apache Commons Math


  was:

Streaming Expressions already supports Pearson's product moment correlation.

This ticket adds *Kendall's Kendall's Tau-b rank correlation* and *Spearmans 
rank correlation* Stream Evaluators.

Functions are backed by Apache Commons Math



> Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators
> -
>
> Key: SOLR-11338
> URL: https://issues.apache.org/jira/browse/SOLR-11338
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Pearson's product moment correlation.
> This ticket adds *Kendall's Tau-b rank correlation* and *Spearmans rank 
> correlation* Stream Evaluators.
> Functions are backed by Apache Commons Math
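For reference, a minimal sketch of the two statistics using the Apache Commons Math classes the evaluators would presumably delegate to (KendallsCorrelation and SpearmansCorrelation); the wiring into Streaming Expressions is not shown:

import org.apache.commons.math3.stat.correlation.KendallsCorrelation;
import org.apache.commons.math3.stat.correlation.SpearmansCorrelation;

public class RankCorrelationSketch {
  public static void main(String[] args) {
    double[] x = {1, 2, 3, 4, 5};
    double[] y = {2, 1, 4, 3, 5};
    // Kendall's tau-b rank correlation coefficient.
    double tau = new KendallsCorrelation().correlation(x, y);
    // Spearman's rank correlation coefficient.
    double rho = new SpearmansCorrelation().correlation(x, y);
    System.out.println("tau = " + tau + ", rho = " + rho);
  }
}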



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11338) Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11338:
--
Fix Version/s: 7.1
   master (8.0)

> Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators
> -
>
> Key: SOLR-11338
> URL: https://issues.apache.org/jira/browse/SOLR-11338
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Pearson's product moment correlation.
> This ticket adds *Kendall's Kendall's Tau-b rank correlation* and *Spearmans 
> rank correlation* Stream Evaluators.
> Functions are backed by Apache Commons Math



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11338) Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11338:
--
Summary: Add Kendall's Tau-b rank and Spearmans rank correlation Stream 
Evaluators  (was: Add Kendall's Kendall's Tau-b rank and Spearmans rank 
correlation Stream Evaluators)

> Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators
> -
>
> Key: SOLR-11338
> URL: https://issues.apache.org/jira/browse/SOLR-11338
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Pearson's product moment correlation.
> This ticket adds *Kendall's Kendall's Tau-b rank correlation* and *Spearmans 
> rank correlation* Stream Evaluators.
> Functions are backed by Apache Commons Math



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11338) Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11338:
-

Assignee: Joel Bernstein

> Add Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators
> -
>
> Key: SOLR-11338
> URL: https://issues.apache.org/jira/browse/SOLR-11338
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> Streaming Expressions already supports Pearson's product moment correlation.
> This ticket adds *Kendall's Kendall's Tau-b rank correlation* and *Spearmans 
> rank correlation* Stream Evaluators.
> Functions are backed by Apache Commons Math



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11338) Add Kendall's Kendall's Tau-b rank and Spearmans rank correlation Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11338:
-

 Summary: Add Kendall's Kendall's Tau-b rank and Spearmans rank 
correlation Stream Evaluators
 Key: SOLR-11338
 URL: https://issues.apache.org/jira/browse/SOLR-11338
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein



Streaming Expressions already supports Pearson's product moment correlation.

This ticket adds *Kendall's Kendall's Tau-b rank correlation* and *Spearmans 
rank correlation* Stream Evaluators.

Functions are backed by Apache Commons Math




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11337) Add binomial confidence interval Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11337:
-

Assignee: Joel Bernstein

> Add binomial confidence interval Stream Evaluators
> --
>
> Key: SOLR-11337
> URL: https://issues.apache.org/jira/browse/SOLR-11337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> This ticket will add the four binomial confidence interval functions 
> supported by Apache Commons Math:
> Agresti-Coull interval
> Clopper-Pearson method (exact method)
> Normal approximation (based on central limit theorem)
> Wilson score interval
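For reference, a minimal sketch of the underlying Apache Commons Math calls (IntervalUtils in commons-math3); the trial counts below are made up, and the wiring into Streaming Expressions is not shown:

import org.apache.commons.math3.stat.interval.ConfidenceInterval;
import org.apache.commons.math3.stat.interval.IntervalUtils;

public class BinomialIntervalSketch {
  public static void main(String[] args) {
    int trials = 500;
    int successes = 230;
    double confidence = 0.95;
    // Wilson score interval; the other three follow the same pattern:
    // getAgrestiCoullInterval, getClopperPearsonInterval,
    // getNormalApproximationInterval.
    ConfidenceInterval wilson =
        IntervalUtils.getWilsonScoreInterval(trials, successes, confidence);
    System.out.println("Wilson 95% interval: [" + wilson.getLowerBound()
        + ", " + wilson.getUpperBound() + "]");
  }
}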



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11337) Add binomial confidence interval Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11337:
--
Fix Version/s: 7.1
   master (8.0)

> Add binomial confidence interval Stream Evaluators
> --
>
> Key: SOLR-11337
> URL: https://issues.apache.org/jira/browse/SOLR-11337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: master (8.0), 7.1
>
>
> This ticket will add the four binomial confidence interval functions 
> supported by Apache Commons Math:
> Agresti-Coull interval
> Clopper-Pearson method (exact method)
> Normal approximation (based on central limit theorem)
> Wilson score interval



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11337) Add binomial confidence interval Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11337:
--
Description: 
This ticket will add the four binomial confidence interval functions supported 
by Apache Commons Math:

Agresti-Coull interval
Clopper-Pearson method (exact method)
Normal approximation (based on central limit theorem)
Wilson score interval



  was:
This ticket will add the most common binomial confidence interval functions:

Agresti-Coull interval
Clopper-Pearson method (exact method)
Normal approximation (based on central limit theorem)
Wilson score interval




> Add binomial confidence interval Stream Evaluators
> --
>
> Key: SOLR-11337
> URL: https://issues.apache.org/jira/browse/SOLR-11337
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket will add the four binomial confidence interval functions 
> supported by Apache Commons Math:
> Agresti-Coull interval
> Clopper-Pearson method (exact method)
> Normal approximation (based on central limit theorem)
> Wilson score interval



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11337) Add binomial confidence interval Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11337:
-

 Summary: Add binomial confidence interval Stream Evaluators
 Key: SOLR-11337
 URL: https://issues.apache.org/jira/browse/SOLR-11337
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add the most common binomial confidence interval functions:

Agresti-Coull interval
Clopper-Pearson method (exact method)
Normal approximation (based on central limit theorem)
Wilson score interval





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Null pointer when starting Solr in master and branch_7x

2017-09-07 Thread Joel Bernstein
Yep, things are working now. Thanks!

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Sep 7, 2017 at 6:00 PM, Yonik Seeley  wrote:

> Try pulling again... Mark just reverted
> https://issues.apache.org/jira/browse/SOLR-10783
>
> -Yonik
>
>
> On Thu, Sep 7, 2017 at 3:01 PM, Joel Bernstein  wrote:
> > I just did a pull and attempted to start solr with the following command:
> >
> > bin/solr start -c
> >
> >
> > The following error is printed to the console on startup? Anyone else
> seeing
> > this?
> >
> >
> > INFO  - 2017-09-07 14:57:41.803;
> > org.apache.solr.util.configuration.SSLCredentialProviderFactory;
> Processing
> > SSL Credential Provider chain: env;sysprop
> >
> > Exception in thread "main" java.lang.NullPointerException
> >
> > at
> > org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.
> getCredential(EnvSSLCredentialProvider.java:57)
> >
> > at
> > org.apache.solr.util.configuration.providers.
> AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.
> java:53)
> >
> > at
> > org.apache.solr.util.configuration.SSLConfigurations.getPassword(
> SSLConfigurations.java:123)
> >
> > at
> > org.apache.solr.util.configuration.SSLConfigurations.
> getClientKeyStorePassword(SSLConfigurations.java:109)
> >
> > at
> > org.apache.solr.util.configuration.SSLConfigurations.init(
> SSLConfigurations.java:62)
> >
> > at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)
> >
> >
> >
> >
> > Joel Bernstein
> > http://joelsolr.blogspot.com/
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Resolved] (SOLR-11314) FastCharStream should avoid creating new IOExceptions as a signaling mechanism

2017-09-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-11314.
-
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

Thanks Michael!

> FastCharStream should avoid creating new IOExceptions as a signaling mechanism
> --
>
> Key: SOLR-11314
> URL: https://issues.apache.org/jira/browse/SOLR-11314
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 6.6.1, master (8.0)
>Reporter: Michael Braun
>Assignee: David Smiley
> Fix For: master (8.0), 7.1
>
> Attachments: Screen Shot 2017-09-06 at 8.21.18 PM.png, 
> SOLR-11314.patch, TestQueryPerfSpeedup.java
>
>
> FastCharStream is used internally by solr query parser classes. It throws a 
> new IOException to signal the end. However, this is quite expensive relative 
> to most operations and it shows up very high on samples for extremely high 
> query cases.  The IOException should be a single static instance that can be 
> shared to avoid the overhead in creation and populating the stack trace, a 
> stack trace which is never used.
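For illustration, a minimal sketch of the pattern described above (a single shared exception instance reused as the end-of-input signal); this is a simplified stand-in, not the actual FastCharStream code:

import java.io.IOException;

public final class SharedEofSignalSketch {
  // Created once at class-initialization time and reused for every end-of-input
  // signal, so no per-call allocation or stack-trace fill-in happens. The stack
  // trace it carries is meaningless, but callers never look at it anyway.
  private static final IOException READ_PAST_EOF = new IOException("read past eof");

  private final char[] chars;
  private int upto;

  public SharedEofSignalSketch(char[] chars) {
    this.chars = chars;
  }

  public char readChar() throws IOException {
    if (upto >= chars.length) {
      throw READ_PAST_EOF; // cheap signal: no new exception object per call
    }
    return chars[upto++];
  }
}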



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11314) FastCharStream should avoid creating new IOExceptions as a signaling mechanism

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157973#comment-16157973
 ] 

ASF subversion and git services commented on SOLR-11314:


Commit 21b48b0538e676a7864c39f0d05fdfb45ffef9a3 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21b48b0 ]

SOLR-11314: FastCharStream: re-use the READ_PAST_EOF exception

(cherry picked from commit 89feb15)


> FastCharStream should avoid creating new IOExceptions as a signaling mechanism
> --
>
> Key: SOLR-11314
> URL: https://issues.apache.org/jira/browse/SOLR-11314
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 6.6.1, master (8.0)
>Reporter: Michael Braun
>Assignee: David Smiley
> Attachments: Screen Shot 2017-09-06 at 8.21.18 PM.png, 
> SOLR-11314.patch, TestQueryPerfSpeedup.java
>
>
> FastCharStream is used internally by solr query parser classes. It throws a 
> new IOException to signal the end. However, this is quite expensive relative 
> to most operations and it shows up very high on samples for extremely high 
> query cases.  The IOException should be a single static instance that can be 
> shared to avoid the overhead in creation and populating the stack trace, a 
> stack trace which is never used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11314) FastCharStream should avoid creating new IOExceptions as a signaling mechanism

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157962#comment-16157962
 ] 

ASF subversion and git services commented on SOLR-11314:


Commit 89feb1500848d8d566d63be21d351d27a1bdcf6f in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=89feb15 ]

SOLR-11314: FastCharStream: re-use the READ_PAST_EOF exception


> FastCharStream should avoid creating new IOExceptions as a signaling mechanism
> --
>
> Key: SOLR-11314
> URL: https://issues.apache.org/jira/browse/SOLR-11314
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 6.6.1, master (8.0)
>Reporter: Michael Braun
>Assignee: David Smiley
> Attachments: Screen Shot 2017-09-06 at 8.21.18 PM.png, 
> SOLR-11314.patch, TestQueryPerfSpeedup.java
>
>
> FastCharStream is used internally by solr query parser classes. It throws a 
> new IOException to signal the end. However, this is quite expensive relative 
> to most operations and it shows up very high on samples for extremely high 
> query cases.  The IOException should be a single static instance that can be 
> shared to avoid the overhead in creation and populating the stack trace, a 
> stack trace which is never used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 40 - Still unstable

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/40/

107 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.analytics.NoFacetCloudTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([5ECF0772F96CC9BF]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:507)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:251)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.analytics.AbstractAnalyticsStatsCloudTest.setupCluster(AbstractAnalyticsStatsCloudTest.java:73)
at 
org.apache.solr.analytics.NoFacetCloudTest.populate(NoFacetCloudTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.RuntimeException: java.lang.NullPointerException
at 
org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1600)
at 
org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1316)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
at 
org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:367)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:384)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:247)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2092 - Still unstable

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2092/

3 tests failed.
FAILED:  
org.apache.solr.cloud.HealthCheckHandlerTest.testHealthCheckHandlerSolrJ

Error Message:
Error from server at http://127.0.0.1:37972/solr: Host Unavailable: Not 
connected to zk

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:37972/solr: Host Unavailable: Not connected to 
zk
at 
__randomizedtesting.SeedInfo.seed([6A41B6D4A9256BBB:C0B269C0D2C323BA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.HealthCheckHandlerTest.testHealthCheckHandlerSolrJ(HealthCheckHandlerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Updated] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Mano Kovacs (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-10783:
---
Attachment: SOLR-10783.patch

Attaching a new patch that merges the three older ones and removes the default value.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: When SOLR is used in a Hadoop environment, support for HCP gives 
> better integration and a unified method for passing sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157864#comment-16157864
 ] 

Cao Manh Dat edited comment on SOLR-8344 at 9/7/17 11:39 PM:
-

[~dsmiley] 
bq.I don't think the number of documents is that pertinent since both 
strategies are O(docs) multiplied by other factors. 
No, it matters when "fl" contains a field that is stored only (let's call it 
field1). Because we already have to pay the seek cost to read field1, reading 
the other fields from stored will be faster than reading them from DV. I tested 
this with Lucene (not Solr), so we don't have to worry about caching affecting 
the result here.


was (Author: caomanhdat):
[~dsmiley] 
bq.I don't think the number of documents is that pertinent since both 
strategies are O(docs) multiplied by other factors. 
No, it matters when "fl" contains a field that is stored only (let's call it 
field1). Because we already have to pay the seek cost to read field1, reading 
both field2 and field3 from stored will be faster than reading field2 and 
field3 from DV. I tested this with Lucene (not Solr), so we don't have to worry 
about caching affecting the result here.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157864#comment-16157864
 ] 

Cao Manh Dat edited comment on SOLR-8344 at 9/7/17 11:39 PM:
-

[~dsmiley] 
bq.I don't think the number of documents is that pertinent since both 
strategies are O(docs) multiplied by other factors. 
No, it matters when "fl" contains a field that is stored only (let's call it 
field1) and "rows" is small. Because we already have to pay the seek cost to 
read field1, reading the other fields from stored will be faster than reading 
them from DV. I tested this with Lucene (not Solr), so we don't have to worry 
about caching affecting the result here.


was (Author: caomanhdat):
[~dsmiley] 
bq.I don't think the number of documents is that pertinent since both 
strategies are O(docs) multiplied by other factors. 
No, it matters when "fl" contains a field that is stored only (let's call it 
field1). Because we already have to pay the seek cost to read field1, reading 
the other fields from stored will be faster than reading them from DV. I tested 
this with Lucene (not Solr), so we don't have to worry about caching affecting 
the result here.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8344) Decide default when requested fields are both column and row stored.

2017-09-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157864#comment-16157864
 ] 

Cao Manh Dat commented on SOLR-8344:


[~dsmiley] 
bq.I don't think the number of documents is that pertinent since both 
strategies are O(docs) multiplied by other factors. 
No, it matters when "fl" contains a field that is stored only (let's call it 
field1). Because we already have to pay the seek cost to read field1, reading 
both field2 and field3 from stored will be faster than reading field2 and 
field3 from DV. I tested this with Lucene (not Solr), so we don't have to worry 
about caching affecting the result here.

> Decide default when requested fields are both column and row stored.
> 
>
> Key: SOLR-8344
> URL: https://issues.apache.org/jira/browse/SOLR-8344
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-8344.patch
>
>
> This issue was discussed in the comments at SOLR-8220. Splitting it out to a 
> separate issue so that we can have a focused discussion on whether/how to do 
> this.
> If a given set of requested fields are all stored and have docValues (column 
> stored), we can retrieve the values from either place.  What should the 
> default be?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Apache Lucene/Solr 6.6.1 RC1

2017-09-07 Thread Varun Thacker
I'll file a bug, but it looks like all the quickstart docs that ship still
reference 6.2 (this file ships in the source of the release as well):

https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.5.0/solr/site/quickstart.mdtext
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/site/quickstart.mdtext

On Wed, Sep 6, 2017 at 3:55 PM, Varun Thacker  wrote:

> Thanks Anshum for the feedback! I'll fix those and also keep it as a DRAFT
> till I actually send out the release mail
>
> On Wed, Sep 6, 2017 at 3:49 PM, Anshum Gupta 
> wrote:
>
>> Varun,
>>
>> The wiki auto generates a link in case of CapitalizedText
>> e.g. CloudSolrStream. There are a lot of such links. You might want to add
>> "{{{" to the start of the notes, and "}}}" to the end to escape those.
>> Also, you're close to releasing 6.6.1 so it's ok, but we generally add a
>> line that says 'DRAFT' at the top of the notes until the release happens.
>>
>> The notes (text) look fine to me.
>>
>> Anshum
>>
>>
>> On Wed, Sep 6, 2017 at 3:02 PM Varun Thacker  wrote:
>>
>>> Here are the release notes :
>>>
>>> https://wiki.apache.org/lucene-java/ReleaseNote661
>>> https://wiki.apache.org/solr/ReleaseNote661
>>>
>>> I'll wait for a few hours for folks to review ( I know it's a short time
>>> but given it's just bug fixes hopefully there is not much to edit ) and
>>> then complete the announcement procedure later in the day
>>>
>>> On Wed, Sep 6, 2017 at 1:56 PM, Steve Rowe  wrote:
>>>
 Hi Varun,

 I’ve added you to the AdminGroup pages on both the Solr and the Lucene
 wikis.

 --
 Steve
 www.lucidworks.com

 > On Sep 6, 2017, at 4:32 PM, Varun Thacker  wrote:
 >
 > @Anshum: I'm almost done with the release process. Hopefully I'll be
 able to send out an email by the end of the day
 >
 > The one thing I should have done a lot earlier was started working on
 the release notes. I'll work on it now.
 >
 > For that can someone please grant 'VarunThacker' edit karma to the
 wiki.
 >
 > On Wed, Sep 6, 2017 at 10:32 AM, Anshum Gupta 
 wrote:
 > Thanks Varun, can you also update the doap files as I need that in
 order to successfully build an RC for 7.0.
 >
 > I can add that, but it requires a release date for 6.6.1, and I
 wanted to wait for you on that.
 >
 > Anshum
 >
 > On Sun, Sep 3, 2017 at 7:48 PM Varun Thacker 
 wrote:
 > The vote has passed. Thanks everyone for voting!
 >
 > I'll begin publishing the artifacts tomorrow.
 >
 > On Sun, Sep 3, 2017 at 1:30 AM, Kevin Risden <
 compuwizard...@gmail.com> wrote:
 > +1
 > SUCCESS! [1:11:03.647449]
 > Kevin Risden
 >
 >
 > On Thu, Aug 31, 2017 at 6:34 AM, Christian Moen 
 wrote:
 > > +1
 > > SUCCESS! [1:11:23.476350]
 > >
 > > On Thu, Aug 31, 2017 at 9:26 AM Tomás Fernández Löbbe
 > >  wrote:
 > >>
 > >> +1
 > >> SUCCESS! [0:43:39.770376]
 > >>
 > >> On Wed, Aug 30, 2017 at 1:12 PM, Yonik Seeley 
 wrote:
 > >>>
 > >>> +1
 > >>>
 > >>> -Yonik
 > >>>
 > >>>
 > >>> On Tue, Aug 29, 2017 at 10:56 PM, Varun Thacker <
 va...@vthacker.in>
 > >>> wrote:
 > >>> > Please vote for release candidate 1 for Lucene/Solr 6.6.1
 > >>> >
 > >>> > The artifacts can be downloaded from:
 > >>> >
 > >>> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.
 6.1-RC1-rev9aa465a89b64ff2dabe7b4d50c472de32c298683
 > >>> >
 > >>> > You can run the smoke tester directly with this command:
 > >>> >
 > >>> > python3 -u dev-tools/scripts/smokeTestRelease.py \
 > >>> >
 > >>> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.
 6.1-RC1-rev9aa465a89b64ff2dabe7b4d50c472de32c298683
 > >>> >
 > >>> > Here's my +1
 > >>> > SUCCESS! [0:47:12.552040]
 > >>>
 > >>> 
 -
 > >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 > >>> For additional commands, e-mail: dev-h...@lucene.apache.org
 > >>>
 > >>
 > >
 >
 > -
 > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 > For additional commands, e-mail: dev-h...@lucene.apache.org
 >
 >
 >


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>>
>


Re: Remove Solr fieldType XML example from Lucene FilterFactories JavaDoc?

2017-09-07 Thread Uwe Schindler
+1

I am fine with both variants.

Uwe

Am 7. September 2017 22:10:25 MESZ schrieb Robert Muir :
>On Thu, Sep 7, 2017 at 3:58 PM, Jan Høydahl 
>wrote:
>> Hi,
>>
>> Most Analysis factories have a snippet in the JavaDoc providing an
>example for how to configure the component in Solr’s schema.xml.
>> See
>http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilterFactory.html
>for an example.
>>
>> I feel it is a bit misplaced, and Solr users can find such XML
>snippets in the RefGuide.
>> The snippet gives information about what options the factory accepts
>in its constructor (Map), so we cannot simply
>delete the XML.
>> But perhaps it could be replaced by a HTML definition list or table
>describing each option?
>>
>
>How about we replace with the equivalent example for CustomAnalyzer,
>which lets you use them in a programmatic way with the lucene API?
>
>http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/custom/CustomAnalyzer.html
>
>-
>To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>For additional commands, e-mail: dev-h...@lucene.apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 374 - Still unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/374/
Java: 64bit/jdk-9-ea+181 -XX:+UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
Could not initialize class org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Stack Trace:
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:375)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:620)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=19778, name=searcherExecutor-6089-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=19778, name=searcherExecutor-6089-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([9C7F253424C2822F]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=19778, name=searcherExecutor-6089-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at 
java.base@9/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)   
  at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
 at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=19778, name=searcherExecutor-6089-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@9/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@9/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
at 
java.base@9/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
at 
java.base@9/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base@9/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([9C7F253424C2822F]:0)


FAILED:  
org.apache.solr.cloud.TestSSLRandomization.testRandomizedSslAndClientAuth

Error Message:

Re: Null pointer when starting Solr in master and branch_7x

2017-09-07 Thread Yonik Seeley
Try pulling again... Mark just reverted
https://issues.apache.org/jira/browse/SOLR-10783

-Yonik


On Thu, Sep 7, 2017 at 3:01 PM, Joel Bernstein  wrote:
> I just did a pull and attempted to start solr with the following command:
>
> bin/solr start -c
>
>
> The following error is printed to the console on startup? Anyone else seeing
> this?
>
>
> INFO  - 2017-09-07 14:57:41.803;
> org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing
> SSL Credential Provider chain: env;sysprop
>
> Exception in thread "main" java.lang.NullPointerException
>
> at
> org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
>
> at
> org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)
>
> at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)
>
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11314) FastCharStream should avoid creating new IOExceptions as a signaling mechanism

2017-09-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11314:
---

Assignee: David Smiley

> FastCharStream should avoid creating new IOExceptions as a signaling mechanism
> --
>
> Key: SOLR-11314
> URL: https://issues.apache.org/jira/browse/SOLR-11314
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0, 6.6.1, master (8.0)
>Reporter: Michael Braun
>Assignee: David Smiley
> Attachments: Screen Shot 2017-09-06 at 8.21.18 PM.png, 
> SOLR-11314.patch, TestQueryPerfSpeedup.java
>
>
> FastCharStream is used internally by solr query parser classes. It throws a 
> new IOException to signal the end. However, this is quite expensive relative 
> to most operations and it shows up very high on samples for extremely high 
> query cases.  The IOException should be a single static instance that can be 
> shared to avoid the overhead in creation and populating the stack trace, a 
> stack trace which is never used.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11336) DocBasedVersionConstraintsProcessor should be more extensible

2017-09-07 Thread Michael Braun (JIRA)
Michael Braun created SOLR-11336:


 Summary: DocBasedVersionConstraintsProcessor should be more 
extensible
 Key: SOLR-11336
 URL: https://issues.apache.org/jira/browse/SOLR-11336
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Michael Braun
Priority: Minor


DocBasedVersionConstraintsProcessor allows document updates only if the new 
version is greater than the old one. However, if any of its behavior needs to 
be extended or changed in minor ways, the entire class currently has to be 
copied and slightly modified, rather than subclassed with only the relevant 
method overridden. 

It would be nice if DocBasedVersionConstraintsProcessor stood on its own as a 
non-private class. In addition, certain logic (such as pieces of 
isVersionNewEnough) should be broken out into separate methods so that a 
subclass can override what it means for a new version to be accepted (allowing 
equal versions through? What if the new version is lower rather than greater?). 
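A rough sketch of the kind of extension point being requested (the class and method names below are illustrative, not the actual Solr processor): the acceptance rule lives in a protected method that a subclass can override.

public class VersionConstraintsSketch {

  // Default rule: only a strictly greater version is accepted.
  protected boolean isVersionNewEnough(long newVersion, long existingVersion) {
    return newVersion > existingVersion;
  }

  public boolean shouldApplyUpdate(long newVersion, long existingVersion) {
    return isVersionNewEnough(newVersion, existingVersion);
  }

  // A subclass can change the policy, e.g. also letting equal versions through.
  public static class AllowEqualVersions extends VersionConstraintsSketch {
    @Override
    protected boolean isVersionNewEnough(long newVersion, long existingVersion) {
      return newVersion >= existingVersion;
    }
  }
}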



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157712#comment-16157712
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit ce291247218d733dff12e19110dd9c8bef9d759f in lucene-solr's branch 
refs/heads/master from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ce29124 ]

SOLR-10783: Revert for now - having run the tests a few times today, one of 
them may be concerning (reverted from commit 
b4d280f369023a179e98154535ed4b06ea096f5f)


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: When SOLR is used in a Hadoop environment, support for HCP gives 
> better integration and a unified method for passing sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157713#comment-16157713
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit 31a56ab8ef88790465910c546769b7f113e5c11c in lucene-solr's branch 
refs/heads/branch_7x from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=31a56ab ]

SOLR-10783: Revert for now - having run the tests a few times today, one of 
them may be concerning (reverted from commit 
b4d280f369023a179e98154535ed4b06ea096f5f)


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: When SOLR is used in a Hadoop environment, support for HCP gives 
> better integration and a unified method for passing sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10912) Adding automatic patch validation

2017-09-07 Thread Mano Kovacs (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-10912:
---
Attachment: SOLR-10912.sample-patch.patch

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
> Attachments: SOLR-10912.sample-patch.patch
>
>
> Proposing introduction of automated patch validation, similar to what Hadoop 
> or other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer manual steps), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the place to discuss the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Null pointer when starting Solr in master and branch_7x

2017-09-07 Thread David Smiley
I tried and didn't see this error.   (Mac OS X)

On Thu, Sep 7, 2017 at 3:01 PM Joel Bernstein  wrote:

> I just did a pull and attempted to start solr with the following command:
>
> bin/solr start -c
>
>
> The following error is printed to the console on startup. Anyone else
> seeing this?
>
>
> INFO  - 2017-09-07 14:57:41.803;
> org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing
> SSL Credential Provider chain: env;sysprop
>
> Exception in thread "main" java.lang.NullPointerException
>
> at
> org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
>
> at
> org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
>
> at
> org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)
>
> at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)
>
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-6.6-Linux (64bit/jdk1.8.0_144) - Build # 153 - Still Unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/153/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) 
Thread[id=26892, name=jetty-launcher-5232-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=26902, name=jetty-launcher-5232-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=26892, name=jetty-launcher-5232-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Updated] (LUCENE-7951) New wrapper classes for Geo3d

2017-09-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7951:
-
Attachment: LUCENE_7951_build.patch

> New wrapper classes for Geo3d
> -
>
> Key: LUCENE-7951
> URL: https://issues.apache.org/jira/browse/LUCENE-7951
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7951_build.patch, LUCENE_7951_build.patch, 
> LUCENE-7951.patch, LUCENE-7951.patch
>
>
> Hi,
> After the latest developments in the Geo3d library, in particular:
> [https://issues.apache.org/jira/browse/LUCENE-7906] : Spatial relationships 
> between GeoShapes
> [https://issues.apache.org/jira/browse/LUCENE-7936]: Serialization of 
> GeoShapes.
> I propose a new set of wrapper classes which can, for example, be linked to 
> Solr as they implement their own SpatialContextFactory. This provides the 
> capability of indexing shapes with spherical geometry.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Remove Solr fieldType XML example from Lucene FilterFactories JavaDoc?

2017-09-07 Thread David Smiley
> CustomAnalyzer ...
+1 nice!

On Thu, Sep 7, 2017 at 4:10 PM Robert Muir  wrote:

> On Thu, Sep 7, 2017 at 3:58 PM, Jan Høydahl  wrote:
> > Hi,
> >
> > Most Analysis factories have a snippet in the JavaDoc providing an
> example for how to configure the component in Solr’s schema.xml.
> > See
> http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilterFactory.html
> for an example.
> >
> > I feel it is a bit misplaced, and Solr users can find such XML snippets
> in the RefGuide.
> > The snippet gives information about what options the factory accepts in
> its constructor (Map), so we cannot simply delete the
> XML.
> > But perhaps it could be replaced by a HTML definition list or table
> describing each option?
> >
>
> How about we replace with the equivalent example for CustomAnalyzer,
> which lets you use them in a programmatic way with the lucene API?
>
>
> http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/custom/CustomAnalyzer.html
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Remove Solr fieldType XML example from Lucene FilterFactories JavaDoc?

2017-09-07 Thread Robert Muir
On Thu, Sep 7, 2017 at 3:58 PM, Jan Høydahl  wrote:
> Hi,
>
> Most Analysis factories have a snippet in the JavaDoc providing an example 
> for how to configure the component in Solr’s schema.xml.
> See 
> http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilterFactory.html
>  for an example.
>
> I feel it is a bit misplaced, and Solr users can find such XML snippets in 
> the RefGuide.
> The snippet gives information about what options the factory accepts in its 
> constructor (Map), so we cannot simply delete the XML.
> But perhaps it could be replaced by a HTML definition list or table 
> describing each option?
>

How about we replace with the equivalent example for CustomAnalyzer,
which lets you use them in a programmatic way with the lucene API?

http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/custom/CustomAnalyzer.html
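
For illustration, here is the kind of snippet such javadocs could carry
instead (just a sketch, using ASCIIFoldingFilterFactory's SPI name and its
preserveOriginal option as the running example; not proposed doc text):

import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class AsciiFoldingExample {
  public static void main(String[] args) throws IOException {
    // Build the same chain the schema.xml snippet used to describe,
    // addressing each factory by its SPI name plus option key/value pairs.
    Analyzer analyzer = CustomAnalyzer.builder()
        .withTokenizer("standard")
        .addTokenFilter("asciifolding", "preserveOriginal", "false")
        .build();
    System.out.println(analyzer);
  }
}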

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Remove Solr fieldType XML example from Lucene FilterFactories JavaDoc?

2017-09-07 Thread Jan Høydahl
Hi,

Most Analysis factories have a snippet in the JavaDoc providing an example for 
how to configure the component in Solr’s schema.xml.
See 
http://lucene.apache.org/core/6_6_0/analyzers-common/org/apache/lucene/analysis/miscellaneous/ASCIIFoldingFilterFactory.html
 for an example.

I feel it is a bit misplaced, and Solr users can find such XML snippets in the 
RefGuide.
The snippet gives information about what options the factory accepts in its 
constructor (Map), so we cannot simply delete the XML.
But perhaps it could be replaced by a HTML definition list or table describing 
each option?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.0 - Build # 38 - Still Failing

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.0/38/

No tests ran.

Build Log:
[...truncated 25710 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (42.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.5 MB in 0.08 sec (366.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 69.0 MB in 0.21 sec (325.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 79.3 MB in 0.22 sec (357.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6165 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6165 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.6.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 1484, in 
   [smoker] main()
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 1428, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 1466, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 622, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 774, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/dev-tools/scripts/smokeTestRelease.py",
 line 1404, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.0/build.xml:606:
 exec returned: 1

Total time: 130 minutes 53 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157526#comment-16157526
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit b8ba331aab9400e1272442f711754ad98a9c38f4 in lucene-solr's branch 
refs/heads/branch_7x from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b8ba331 ]

SOLR-10783: Fix constructor.


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157525#comment-16157525
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit 938820861334c6ad1de5efc520d78ac3fec71981 in lucene-solr's branch 
refs/heads/master from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9388208 ]

SOLR-10783: Fix constructor.


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7951) New wrapper classes for Geo3d

2017-09-07 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157460#comment-16157460
 ] 

David Smiley commented on LUCENE-7951:
--

I propose the contents of the Geo3dSpatialContextFactory class look like this:
{code:java}

  public static final PlanetModel DEFAULT_PLANET_MODEL = PlanetModel.SPHERE;

  public PlanetModel planetModel = DEFAULT_PLANET_MODEL;

  public Geo3dSpatialContextFactory() {
this.binaryCodecClass = Geo3dBinaryCodec.class;
this.shapeFactoryClass = Geo3dShapeFactory.class;
this.distCalc = new Geo3dDistanceCalculator(planetModel);
  }

  @Override
  protected void init(Map args, ClassLoader classLoader) {
super.init(args, classLoader);
initPlanetModel(args);
  }

  protected void initPlanetModel(Map args) {
String planetModel = args.get("planetModel");
if (planetModel != null) {
  if (planetModel.equalsIgnoreCase("sphere")) {
this.planetModel = PlanetModel.SPHERE;
  } else if (planetModel.equalsIgnoreCase("wgs84")) {
this.planetModel = PlanetModel.WGS84;
  } else {
throw new RuntimeException("Unknown planet model: " + planetModel);
  }
}
  }

  @Override
  protected void initCalculator() {
String calcStr = this.args.get("distCalculator");
if (calcStr == null || calcStr.equals("geo3d")) {
  return;// we already have Geo3d
}
super.initCalculator(); // some other distance calculator
  }
{code}

Notice that you don't even have to call init(args).  Some tests (in Spatial4j) 
and some here create the factory, set some fields directly, and then call 
newSpatialContext() without ever calling init(args).  init(args) is optional, 
designed for when you have name-value pair based configuration, e.g. a Solr 
field type.
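
To make the two usage styles concrete, here is a rough sketch of calling code 
(illustrative only; it assumes the public planetModel field shown above and 
Spatial4j's standard SpatialContextFactory.makeSpatialContext entry point):
{code:java}
// Style 1: programmatic use. Construct the factory, set fields directly,
// and never call init(args).
Geo3dSpatialContextFactory factory = new Geo3dSpatialContextFactory();
factory.planetModel = PlanetModel.WGS84;
SpatialContext ctx = factory.newSpatialContext();

// Style 2: name-value pair configuration, e.g. what a Solr field type would
// pass; makeSpatialContext() feeds the args through init(...) and then calls
// newSpatialContext().
Map<String, String> args = new HashMap<>();
args.put("spatialContextFactory", Geo3dSpatialContextFactory.class.getName());
args.put("planetModel", "wgs84");
SpatialContext ctx2 = SpatialContextFactory.makeSpatialContext(
    args, Geo3dSpatialContextFactory.class.getClassLoader());
{code}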

> New wrapper classes for Geo3d
> -
>
> Key: LUCENE-7951
> URL: https://issues.apache.org/jira/browse/LUCENE-7951
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7951_build.patch, LUCENE-7951.patch, 
> LUCENE-7951.patch
>
>
> Hi,
> After the latest developments in the Geo3d library, in particular:
> [https://issues.apache.org/jira/browse/LUCENE-7906] : Spatial relationships 
> between GeoShapes
> [https://issues.apache.org/jira/browse/LUCENE-7936]: Serialization of 
> GeoShapes.
> I propose a new set of wrapper classes which can, for example, be linked to 
> Solr as they implement their own SpatialContextFactory. This provides the 
> capability of indexing shapes with spherical geometry.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157451#comment-16157451
 ] 

Joel Bernstein edited comment on SOLR-10783 at 9/7/17 7:03 PM:
---

I'm getting the following error when starting solr now. It may be related to 
this ticket.

INFO  - 2017-09-07 14:57:41.803; 
org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL 
Credential Provider chain: env;sysprop
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
at 
org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
at 
org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
at 
org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
at 
org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)


was (Author: joel.bernstein):
I'm getting the following error when start solr now. It may be related to this 
ticket.

INFO  - 2017-09-07 14:57:41.803; 
org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL 
Credential Provider chain: env;sysprop
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
at 
org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
at 
org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
at 
org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
at 
org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157451#comment-16157451
 ] 

Joel Bernstein commented on SOLR-10783:
---

I'm getting the following error when start solr now. It may be related to this 
ticket.

INFO  - 2017-09-07 14:57:41.803; 
org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing SSL 
Credential Provider chain: env;sysprop
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
at 
org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
at 
org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
at 
org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
at 
org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)
at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)
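
The NPE at EnvSSLCredentialProvider.getCredential line 57 looks like a field 
that is still null when the provider chain is consulted; the follow-up 
"SOLR-10783: Fix constructor" commits point the same way. A generic sketch of 
that failure pattern (hypothetical code, not the actual Solr class):

import java.util.Map;

class EnvProviderSketch {
  // never assigned, so it stays null until someone initializes it
  private Map<String, String> envVars;

  EnvProviderSketch() {
    // bug pattern: forgetting the initialization here, e.g.
    // this.envVars = System.getenv();
  }

  String getCredential(String key) {
    return envVars.get(key);   // NullPointerException when envVars is null
  }
}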

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7951) New wrapper classes for Geo3d

2017-09-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7951:
-
Attachment: LUCENE_7951_build.patch

The attached build patch I developed modifies spatial-extras's build.xml to 
include spatial3d on the test classpath.  This seems to work; even the 
Maven-based build is working.

> New wrapper classes for Geo3d
> -
>
> Key: LUCENE-7951
> URL: https://issues.apache.org/jira/browse/LUCENE-7951
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ignacio Vera
>Assignee: David Smiley
>Priority: Minor
> Attachments: LUCENE_7951_build.patch, LUCENE-7951.patch, 
> LUCENE-7951.patch
>
>
> Hi,
> After the latest developments in the Geo3d library, in particular:
> [https://issues.apache.org/jira/browse/LUCENE-7906] : Spatial relationships 
> between GeoShapes
> [https://issues.apache.org/jira/browse/LUCENE-7936]: Serialization of 
> GeoShapes.
> I propose a new set of wrapper classes which can, for example, be linked to 
> Solr as they implement their own SpatialContextFactory. This provides the 
> capability of indexing shapes with spherical geometry.
> Thanks!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Null pointer when starting Solr in master and branch_7x

2017-09-07 Thread Joel Bernstein
I just did a pull and attempted to start solr with the following command:

bin/solr start -c


The following error is printed to the console on startup. Anyone else
seeing this?


INFO  - 2017-09-07 14:57:41.803;
org.apache.solr.util.configuration.SSLCredentialProviderFactory; Processing
SSL Credential Provider chain: env;sysprop

Exception in thread "main" java.lang.NullPointerException

at
org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)

at
org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)

at
org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)

at
org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)

at
org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)

at org.apache.solr.util.SolrCLI.main(SolrCLI.java:273)



Joel Bernstein
http://joelsolr.blogspot.com/


[jira] [Created] (LUCENE-7962) GeoPaths need ability to compute distance along route WITHOUT perpendicular leg

2017-09-07 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-7962:
---

 Summary: GeoPaths need ability to compute distance along route 
WITHOUT perpendicular leg
 Key: LUCENE-7962
 URL: https://issues.apache.org/jira/browse/LUCENE-7962
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial3d
Affects Versions: 6.6
Reporter: Karl Wright
Assignee: Karl Wright


Distance computation for GeoPaths properly computes distance as distance along 
the route PLUS the perpendicular distance from the route to the point in 
question.  That is fine but there is another use case for GeoPaths, which is to 
compute distance along the route without the perpendicular leg.

The proposal is to add a method for GeoPath implementations only that computes 
this distance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20431 - Still Failing!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20431/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseSerialGC --illegal-access=deny

134 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.facet.FieldFacetCloudTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at __randomizedtesting.SeedInfo.seed([E526E9C6B4250E4C]:0)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:507)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:251)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.analytics.facet.AbstractAnalyticsFacetCloudTest.setupCluster(AbstractAnalyticsFacetCloudTest.java:56)
at 
org.apache.solr.analytics.facet.FieldFacetCloudTest.beforeClass(FieldFacetCloudTest.java:90)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Suppressed: java.lang.RuntimeException: java.lang.NullPointerException
at 
org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1600)
at 
org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1316)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
at 
org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:367)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:384)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.lambda$new$0(MiniSolrCloudCluster.java:247)
at 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1382 - Failure

2017-09-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1382/

121 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterReader.testDuringAddDelete

Error Message:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterReader_CE9A4DC3E2D41852-001/index-NIOFSDirectory-001/_237_BlockTreeOrds_0.pos:
 Too many open files

Stack Trace:
java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/J2/temp/lucene.index.TestIndexWriterReader_CE9A4DC3E2D41852-001/index-NIOFSDirectory-001/_237_BlockTreeOrds_0.pos:
 Too many open files
at 
__randomizedtesting.SeedInfo.seed([CE9A4DC3E2D41852:500B3A44C8159A52]:0)
at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:197)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:202)
at java.nio.channels.FileChannel.open(FileChannel.java:287)
at java.nio.channels.FileChannel.open(FileChannel.java:335)
at 
org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:81)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2709)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:749)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.(Lucene50PostingsReader.java:88)
at 
org.apache.lucene.codecs.blocktreeords.BlockTreeOrdsPostingsFormat.fieldsProducer(BlockTreeOrdsPostingsFormat.java:90)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.(PerFieldPostingsFormat.java:292)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:372)
at 
org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:112)
at org.apache.lucene.index.SegmentReader.(SegmentReader.java:78)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:208)
at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:258)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:105)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:490)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:293)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:268)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:258)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
at 
org.apache.lucene.index.TestIndexWriterReader.testDuringAddDelete(TestIndexWriterReader.java:871)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 

[jira] [Comment Edited] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157329#comment-16157329
 ] 

Joel Bernstein edited comment on SOLR-11241 at 9/7/17 5:50 PM:
---

Ok, after running the jar-checksums it modified commons-math3-3.6.1.jar.sha1 
and I pushed out the change. Hopefully this resolves the issues.

So it seems that jar-checksums generates a proper checksum which should be used 
rather than checksum that is published to maven central.


was (Author: joel.bernstein):
Ok, after running the jar-checksums it modified commons-math3-3.6.1.jar.sha1 
and pushed out the change.

Hopefully this resolves the issues.

So it seems that jar-checksums generates a proper checksum which should be used 
rather than checksum that is published.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157329#comment-16157329
 ] 

Joel Bernstein commented on SOLR-11241:
---

Ok, after running the jar-checksums it modified commons-math3-3.6.1.jar.sha1 
and pushed out the change.

Hopefully this resolves the issues.

So it seems that jar-checksums generates a proper checksum which should be used 
rather than checksum that is published.
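
For context, the .sha1 files under the licenses directories hold the hex SHA-1 
digest of the corresponding jar's bytes (as I understand it, that is exactly 
what jar-checksums recomputes). A minimal sketch of reproducing one by hand, 
illustrative only and not the ant task itself:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha1OfJar {
  public static void main(String[] args) throws Exception {
    // Stream the jar through a SHA-1 digest.
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) != -1) {
        md.update(buf, 0, n);
      }
    }
    // Hex-encode the digest; compare against the checked-in .sha1 file.
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);
  }
}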

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157329#comment-16157329
 ] 

Joel Bernstein edited comment on SOLR-11241 at 9/7/17 5:50 PM:
---

Ok, after running the jar-checksums it modified commons-math3-3.6.1.jar.sha1 
and I pushed out the change. Hopefully this resolves the issues.

So it seems that jar-checksums generates a proper checksum which should be used 
rather than the checksum that is published to maven central.


was (Author: joel.bernstein):
Ok, after running the jar-checksums it modified commons-math3-3.6.1.jar.sha1 
and I pushed out the change. Hopefully this resolves the issues.

So it seems that jar-checksums generates a proper checksum which should be used 
rather than checksum that is published to maven central.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157325#comment-16157325
 ] 

ASF subversion and git services commented on SOLR-11241:


Commit d63b47e6db87d3f209a6df93835ac140432b4651 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d63b47e ]

SOLR-11241: Fix sha1 file


> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157318#comment-16157318
 ] 

ASF subversion and git services commented on SOLR-11241:


Commit f828edf332000ff83b228bf35f75dd17f9c6ceb9 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f828edf ]

SOLR-11241: Fix sha1 file


> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Mano Kovacs (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157317#comment-16157317
 ] 

Mano Kovacs commented on SOLR-10783:


[~hossman], I agree, it's a small change in size but it affects 
{{jetty-ssl.xml}} and therefore most of the tests. Would it make sense to roll 
it back from 7x and have the fix on master?

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157314#comment-16157314
 ] 

Joel Bernstein commented on SOLR-11241:
---

[~hossman_luc...@fucit.org], I see your last update. I'll run jar-checksums and 
see if I can resolve this.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157314#comment-16157314
 ] 

Joel Bernstein edited comment on SOLR-11241 at 9/7/17 5:43 PM:
---

hoss, I see your last update. I'll run jar-checksums and see if I can resolve 
this.


was (Author: joel.bernstein):
[~hossman_luc...@fucit.org], I see your last update. I'll run jar-checksums and 
see if I can resolve this.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157307#comment-16157307
 ] 

Joel Bernstein edited comment on SOLR-11241 at 9/7/17 5:40 PM:
---

It looks like we're getting an unversioned file error related to git. This 
code snippet in the build.xml seems to be responsible:
{code}
final Status status = new Git(repository).status().call();
  if (!status.isClean()) {
final SortedSet unversioned = new TreeSet(), modified = new 
TreeSet();
status.properties.each{ prop, val ->
  if (val instanceof Set) {
if (prop in ['untracked', 'untrackedFolders', 'missing']) {
  unversioned.addAll(val);
} else if (prop != 'ignoredNotInIndex') {
  modified.addAll(val);
}
  }
};
setProjectPropertyFromSet('wc.unversioned.files', unversioned);
setProjectPropertyFromSet('wc.modified.files', modified);
  }
{code}
I'm not quite sure what this means though...




was (Author: joel.bernstein):
It looks like we're getting an unversioned file error related to git. This 
code snippet in the build.xml seems to be responsible:

final Status status = new Git(repository).status().call();
  if (!status.isClean()) {
final SortedSet unversioned = new TreeSet(), modified = new 
TreeSet();
status.properties.each{ prop, val ->
  if (val instanceof Set) {
if (prop in ['untracked', 'untrackedFolders', 'missing']) {
  unversioned.addAll(val);
} else if (prop != 'ignoredNotInIndex') {
  modified.addAll(val);
}
  }
};
setProjectPropertyFromSet('wc.unversioned.files', unversioned);
setProjectPropertyFromSet('wc.modified.files', modified);
  }

I'm not quite sure what this means though...



> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157308#comment-16157308
 ] 

Hoss Man commented on SOLR-10783:
-

(NOTE: nothing personal to you, or this change in particular, i just feel like 
we -- as a project -- have gotten in the habit of backporting too quickly w/o 
letting changes soak.  This change just happens to be a topical example)

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: when SOLR is used in a Hadoop environment, support of HCP gives 
> better integration and a unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157307#comment-16157307
 ] 

Joel Bernstein commented on SOLR-11241:
---

It looks like we're getting an un-versioned file error related to git. This 
code snippet in build.xml seems to be responsible:

final Status status = new Git(repository).status().call();
if (!status.isClean()) {
  final SortedSet unversioned = new TreeSet(), modified = new TreeSet();
  status.properties.each{ prop, val ->
    if (val instanceof Set) {
      if (prop in ['untracked', 'untrackedFolders', 'missing']) {
        unversioned.addAll(val);
      } else if (prop != 'ignoredNotInIndex') {
        modified.addAll(val);
      }
    }
  };
  setProjectPropertyFromSet('wc.unversioned.files', unversioned);
  setProjectPropertyFromSet('wc.modified.files', modified);
}

I'm not quite sure what this means though...



> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157303#comment-16157303
 ] 

Hoss Man commented on SOLR-10783:
-

matter of perspective i guess ... i think any change (even if it was only 1 
line of code) that can cause ~100 tests to NPE is "big enough" (in terms of 
affected code paths) to deserve to soak on master for a bit.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: when SOLR is used in a Hadoop environment, support of HCP gives 
> better integration and a unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157297#comment-16157297
 ] 

Hoss Man commented on SOLR-11241:
-

joel: running {{ant jar-checksums}} (and comparing with what you expect from 
maven central) should be the correct approach -- then commit that and everything 
should be fine.

see LUCENE-7949 for my notes about how our process doesn't make these types of 
mistakes/inconsistencies obvious to developers who run precommit.
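
To make the failure mode concrete, here is a hedged sketch of what a checksum file boils down to: the SHA-1 of the jar rendered as a bare hex string. The file names below are placeholders, and {{ant jar-checksums}} remains the authoritative way to (re)generate the committed files, since the exact bytes it writes are what jenkins compares against:
{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class WriteJarSha1 {
  public static void main(String[] args) throws Exception {
    // Placeholder paths, for illustration only.
    Path jar = Paths.get("solr/licenses/commons-math3-3.6.1.jar");
    Path sha1File = Paths.get("solr/licenses/commons-math3-3.6.1.jar.sha1");

    // SHA-1 over the jar bytes, rendered as 40 lowercase hex characters.
    byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(jar));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }

    // Nothing extra is appended: a stray trailing newline, or a
    // "digest  filename" line copied from elsewhere, is enough to make the
    // regenerated file differ from the committed one.
    Files.write(sha1File, hex.toString().getBytes(StandardCharsets.UTF_8));
  }
}
{code}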

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157292#comment-16157292
 ] 

Joel Bernstein commented on SOLR-11241:
---

I see the error, but I'm not quite sure how to fix it.

I pulled the SHA1 file from the maven central binary download site I believe. 
It looks like that wasn't the right approach...

I'll see if I can figure it out.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157278#comment-16157278
 ] 

Joel Bernstein commented on SOLR-11241:
---

Ok, I'll take a look. All tests passed along with pre-commit once I got the 
SHA1 file in place. So I didn't realize I was missing a step.

> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7954) add 'replace' option for 'beidermorse' phonetic filter

2017-09-07 Thread Fabien Baligand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157262#comment-16157262
 ] 

Fabien Baligand commented on LUCENE-7954:
-

I use elasticsearch and every other phonetic filter encoder has this option.
It would be great if beidermorse had it too!
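
For context, a rough sketch of the current behaviour being discussed (not a patch for this issue): BeiderMorseFilter emits only the phonetic encodings, while e.g. PhoneticFilter already takes an inject-style flag to keep the original term. The tokenizer choice and input text below are just illustrative:
{code}
import java.io.StringReader;
import org.apache.commons.codec.language.bm.NameType;
import org.apache.commons.codec.language.bm.PhoneticEngine;
import org.apache.commons.codec.language.bm.RuleType;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.phonetic.BeiderMorseFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class BeiderMorseCurrentBehaviour {
  public static void main(String[] args) throws Exception {
    WhitespaceTokenizer tokenizer = new WhitespaceTokenizer();
    tokenizer.setReader(new StringReader("Baligand"));

    // Today the filter replaces the original term with its phonetic encodings;
    // the requested 'replace=false' option would keep the original term as well.
    TokenStream ts = new BeiderMorseFilter(tokenizer,
        new PhoneticEngine(NameType.GENERIC, RuleType.APPROX, true));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);

    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term);   // phonetic codes only; the original term is gone
    }
    ts.end();
    ts.close();
  }
}
{code}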

> add 'replace' option for 'beidermorse' phonetic filter
> --
>
> Key: LUCENE-7954
> URL: https://issues.apache.org/jira/browse/LUCENE-7954
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Fabien Baligand
>Priority: Minor
>
> It would be great to add a 'replace' boolean option for the `beidermorse` phonetic 
> filter.
> This option would say whether the original term is replaced or not.
> It would have `true` as the default value (current behaviour).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7949) precommit should fail the build if any jar sha1 files contain any whitespace or newlines

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157258#comment-16157258
 ] 

Hoss Man commented on LUCENE-7949:
--

looks like the same problem just bit Joel in SOLR-11241

> precommit should fail the build if any jar sha1 files contain any whitespace 
> or newlines
> 
>
> Key: LUCENE-7949
> URL: https://issues.apache.org/jira/browse/LUCENE-7949
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Hoss Man
>  Labels: build
>
> as part of SOLR-11209, miller updated the sha1 files for the jars he was 
> upgrading, and somehow a trailing newline got added to one of them, which 
> caused jenkins to freak out about modified source files as part of the build 
> (IIUC because jenkins was rebuilding the sha files and noticing they were 
> different -- just because of the trailing newline).
> If precommit validated/enforced the expected structure of the sha1 files we 
> could prevent these types of confusing build failures down the road.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11241) Add discrete counting and probability Stream Evaluators

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157257#comment-16157257
 ] 

Hoss Man commented on SOLR-11241:
-

Joel: how did you create the checksum file for the upgraded math3 jar?
{{ant clean clean-jars && ant jar-checksums}} currently results in a dirty 
checkout because the committed version has no newline -- this is causing 
jenkins builds to fail...

{noformat}
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20426/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseG1GC --illegal-access=deny

All tests passed

Build Log:
[...truncated 52629 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:810: The following 
error occurred while executing this
line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:690: The following 
error occurred while executing this
line:
/home/jenkins/workspace/Lucene-Solr-master-Linux/build.xml:678: Source checkout 
is modified!!! Offending files:
* solr/licenses/commons-math3-3.6.1.jar.sha1
{noformat}
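
As a side note, the kind of guard discussed in LUCENE-7949 is easy to sketch: scan the committed checksum files and flag anything that is not a bare 40-character hex digest. The directory below is a placeholder, and the real precommit wiring would live in the ant build rather than a standalone class:
{code}
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class ValidateSha1Files {
  public static void main(String[] args) throws IOException {
    // Placeholder directory -- scan wherever the *.jar.sha1 files are committed.
    try (Stream<Path> files = Files.walk(Paths.get("solr/licenses"))) {
      files.filter(p -> p.toString().endsWith(".jar.sha1")).forEach(p -> {
        try {
          String content = new String(Files.readAllBytes(p), StandardCharsets.UTF_8);
          // Anything beyond a bare 40-char hex digest (whitespace, a trailing
          // newline, or a "digest  filename" line) gets reported.
          if (!content.matches("[0-9a-f]{40}")) {
            System.err.println("suspicious checksum file: " + p);
          }
        } catch (IOException e) {
          throw new UncheckedIOException(e);
        }
      });
    }
  }
}
{code}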


> Add discrete counting and probability Stream Evaluators
> ---
>
> Key: SOLR-11241
> URL: https://issues.apache.org/jira/browse/SOLR-11241
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.1
>
> Attachments: SOLR-11241.path, SOLR-11241.path, SOLR-11241.path
>
>
> This ticket will add a number of statistical functions that deal with 
> discrete counting and probability distributions:
> freqTable
> enumeratedDistribution
> poissonDistribution
> uniformIntegerDistribution
> binomialDistribution
> probability
> All functions backed by *Apache Commons Math*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 373 - Still Failing!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/373/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

97 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.TestLTRWithSort

Error Message:
java.lang.NullPointerException

Stack Trace:
java.lang.RuntimeException: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([DFF1D436E5D2A280]:0)
at 
org.eclipse.jetty.servlet.ServletHandler.updateMappings(ServletHandler.java:1600)
at 
org.eclipse.jetty.servlet.ServletHandler.setFilterMappings(ServletHandler.java:1659)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterMapping(ServletHandler.java:1316)
at 
org.eclipse.jetty.servlet.ServletHandler.addFilterWithMapping(ServletHandler.java:1145)
at 
org.eclipse.jetty.servlet.ServletContextHandler.addFilter(ServletContextHandler.java:448)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$1.lifeCycleStarted(JettySolrRunner.java:306)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.setStarted(AbstractLifeCycle.java:179)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:69)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:394)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:367)
at 
org.apache.solr.SolrJettyTestBase.createJetty(SolrJettyTestBase.java:114)
at 
org.apache.solr.SolrJettyTestBase.createJetty(SolrJettyTestBase.java:80)
at 
org.apache.solr.util.RestTestBase.createJettyAndHarness(RestTestBase.java:52)
at org.apache.solr.ltr.TestRerankBase.setuptest(TestRerankBase.java:202)
at org.apache.solr.ltr.TestRerankBase.setuptest(TestRerankBase.java:122)
at org.apache.solr.ltr.TestLTRWithSort.before(TestLTRWithSort.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
at 
org.apache.solr.util.configuration.providers.EnvSSLCredentialProvider.getCredential(EnvSSLCredentialProvider.java:57)
at 
org.apache.solr.util.configuration.providers.AbstractSSLCredentialProvider.getCredential(AbstractSSLCredentialProvider.java:53)
at 
org.apache.solr.util.configuration.SSLConfigurations.getPassword(SSLConfigurations.java:123)
at 
org.apache.solr.util.configuration.SSLConfigurations.getClientKeyStorePassword(SSLConfigurations.java:109)
at 
org.apache.solr.util.configuration.SSLConfigurations.init(SSLConfigurations.java:62)

[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157221#comment-16157221
 ] 

Mark Miller commented on SOLR-10783:


No rush really, small directory snafu on test run. Don't think it's a very big 
change really.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: when SOLR is used in a Hadoop environment, support of HCP gives 
> better integration and a unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157217#comment-16157217
 ] 

Hoss Man commented on SOLR-10783:
-

[~markrmil...@gmail.com] -- this seems like a big enough change, causing a lot 
of problems, that it should be rolled back until it gets better testing (and 
once it does, it should probably be allowed to soak on master for at least a day 
before backporting into 7x ... what's the rush?)

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: when SOLR is used in a Hadoop environment, support of HCP gives 
> better integration and a unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2017-09-07 Thread Mano Kovacs (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mano Kovacs updated SOLR-10783:
---
Attachment: SOLR-10783-fix.patch

Attaching a fix with the missing constructor. Ran the Solr test suite.

> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of Hadoop credential 
> providers as a source of SSL store passwords. 
> Motivation: when SOLR is used in a Hadoop environment, support of HCP gives 
> better integration and a unified method to pass sensitive credentials to SOLR.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11215) Make a metric accessible through a single param

2017-09-07 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11215.
--
Resolution: Fixed

> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Linux (32bit/jdk1.8.0_144) - Build # 152 - Unstable!

2017-09-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/152/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Something is broken in the assert for no shards using the same indexDir - 
probably something was changed in the attributes published in the MBean of 
SolrCore : {}

Stack Trace:
java.lang.AssertionError: Something is broken in the assert for no shards using 
the same indexDir - probably something was changed in the attributes published 
in the MBean of SolrCore : {}
at 
__randomizedtesting.SeedInfo.seed([2D297F6D227C8661:655C0BD9244FA9F4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.checkNoTwoShardsUseTheSameIndexDir(CollectionsAPIDistributedZkTest.java:646)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:524)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11215) Make a metric accessible through a single param

2017-09-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16157173#comment-16157173
 ] 

ASF subversion and git services commented on SOLR-11215:


Commit 2d7b2db4139929c811dcafe87ab43c5dc8f9eb21 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d7b2db ]

SOLR-11215: Support retrieval of any property of a regular metric when
using 'key' parameter.


> Make a metric accessible through a single param
> ---
>
> Key: SOLR-11215
> URL: https://issues.apache.org/jira/browse/SOLR-11215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Noble Paul
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11215.diff
>
>
> example
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items in their corresponding path



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


