[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7079 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7079/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:55074/solr/awhollynewcollection_0: No
registered leader was found after waiting for 4000ms , collection:
awhollynewcollection_0 slice: shard2 saw
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/9)={
  "pullReplicas":"0", "replicationFactor":"2",
  "shards":{
    "shard1":{"range":"8000-", "state":"active", "replicas":{
      "core_node3":{"core":"awhollynewcollection_0_shard1_replica_n1",
        "base_url":"http://127.0.0.1:55074/solr", "node_name":"127.0.0.1:55074_solr",
        "state":"active", "type":"NRT", "leader":"true"},
      "core_node5":{"core":"awhollynewcollection_0_shard1_replica_n2",
        "base_url":"http://127.0.0.1:55064/solr", "node_name":"127.0.0.1:55064_solr",
        "state":"active", "type":"NRT"}}},
    "shard2":{"range":"0-7fff", "state":"active", "replicas":{
      "core_node7":{"core":"awhollynewcollection_0_shard2_replica_n4",
        "base_url":"http://127.0.0.1:55071/solr", "node_name":"127.0.0.1:55071_solr",
        "state":"down", "type":"NRT"},
      "core_node8":{"core":"awhollynewcollection_0_shard2_replica_n6",
        "base_url":"http://127.0.0.1:55061/solr", "node_name":"127.0.0.1:55061_solr",
        "state":"active", "type":"NRT"}}}},
  "router":{"name":"compositeId"}, "maxShardsPerNode":"2", "autoAddReplicas":"false",
  "nrtReplicas":"2", "tlogReplicas":"0"} with
live_nodes=[127.0.0.1:55071_solr, 127.0.0.1:55074_solr, 127.0.0.1:55064_solr,
127.0.0.1:55061_solr]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55074/solr/awhollynewcollection_0: No 
registered leader was found after waiting for 4000ms , collection: 
awhollynewcollection_0 slice: shard2 saw 
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
    "shard1":{
      "range":"8000-",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"awhollynewcollection_0_shard1_replica_n1",
          "base_url":"http://127.0.0.1:55074/solr",
          "node_name":"127.0.0.1:55074_solr",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node5":{
          "core":"awhollynewcollection_0_shard1_replica_n2",
          "base_url":"http://127.0.0.1:55064/solr",
          "node_name":"127.0.0.1:55064_solr",
          "state":"active",
          "type":"NRT"}}},
    "shard2":{
      "range":"0-7fff",
      "state":"active",
      "replicas":{
        "core_node7":{
          "core":"awhollynewcollection_0_shard2_replica_n4",
          "base_url":"http://127.0.0.1:55071/solr",
          "node_name":"127.0.0.1:55071_solr",
          "state":"down",
          "type":"NRT"},
        "core_node8":{
          "core":"awhollynewcollection_0_shard2_replica_n6",
          "base_url":"http://127.0.0.1:55061/solr",
          "node_name":"127.0.0.1:55061_solr",
          "state":"active",
          "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"} with live_nodes=[127.0.0.1:55071_solr,
127.0.0.1:55074_solr, 127.0.0.1:55064_solr, 127.0.0.1:55061_solr]
at 
__randomizedtesting.SeedInfo.seed([E0FD70C25AEC7AC2:A88804765CDF5557]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21166 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21166/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([10D8D94FE79D1271]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([10D8D94FE79D1271]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288)
at jdk.internal.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11725) json.facet's stddev() function should be changed to use the "Corrected sample stddev" formula

2017-12-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305994#comment-16305994
 ] 

Yonik Seeley commented on SOLR-11725:
-

{quote}
> ...In general we've been moving toward omitting undefined functions. Stats 
> like min() and max() already do this.
Whoa... really? ... that seems like it would make the client parsing really
hard...
{quote}

Trying to remember.  I *think* it may have just worked out that way originally
when null is returned as the value from SlotAcc.getValue().
And I may have also conflated "empty bucket" with "stat over no values".  I'm
not sure if client parsing is really much harder, since a map interface of
bucket.get("mystat") would return null in both cases.
On the other hand, I can see how it could be confusing to request a stat and 
not see it at all in the response.  Overall I guess I'm leaning toward 
returning "mystat":null for a non-empty bucket where mystat has no value / 
undefined value.

bq. For a singleton set, the stddev() should absolutely be "0"

Standard deviation of a population of size 1, yes. But this issue was about 
switching to standard deviation of samples, and that is undefined (or infinite) 
for a single sample.
Python throws an exception: 
https://docs.python.org/3/library/statistics.html#statistics.stdev
Google sheets will return a div-by-0 error: 
https://support.google.com/docs/answer/3094054?hl=en
Excel also gives a div-by-0 error with a single value.  I can't find anything 
using the "N-1" variant that uses 0 for a single sample.



> json.facet's stddev() function should be changed to use the "Corrected sample 
> stddev" formula
> -
>
> Key: SOLR-11725
> URL: https://issues.apache.org/jira/browse/SOLR-11725
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-11725.patch
>
>
> While working on some equivalence tests/demonstrations for 
> {{facet.pivot+stats.field}} vs {{json.facet}} I noticed that the {{stddev}} 
> calculations done between the two code paths can be measurably different, and 
> realized this is due to them using very different code...
> * {{json.facet=foo:stddev(foo)}}
> ** {{StddevAgg.java}}
> ** {{Math.sqrt((sumSq/count)-Math.pow(sum/count, 2))}}
> * {{stats.field=\{!stddev=true\}foo}}
> ** {{StatsValuesFactory.java}}
> ** {{Math.sqrt(((count * sumOfSquares) - (sum * sum)) / (count * (count - 
> 1.0D)))}}
> Since I"m not really a math guy, I consulting with a bunch of smart math/stat 
> nerds I know online to help me sanity check if these equations (some how) 
> reduced to eachother (In which case the discrepancies I was seeing in my 
> results might have just been due to the order of intermediate operation 
> execution & floating point rounding differences).
> They confirmed that the two bits of code are _not_ equivalent to each other, 
> and explained that the code JSON Faceting is using is equivalent to the 
> "Uncorrected sample stddev" formula, while StatsComponent's code is 
> equivalent to the the "Corrected sample stddev" formula...
> https://en.wikipedia.org/wiki/Standard_deviation#Uncorrected_sample_standard_deviation
> When I told them that stuff like this is why no one likes mathematicians and 
> pressed them to explain which one was the "most canonical" (or "most 
> generally applicable" or "best") definition of stddev, I was told that:
> # This is something statisticians frequently disagree on
> # Practically speaking the diff between the calculations doesn't tend to 
> differ significantly when count is "very large"
> # _"Corrected sample stddev" is more appropriate when comparing two 
> distributions_
> Given that:
> * the primary usage of computing the stddev of a field/function against a 
> Solr result set (or against a sub-set of results defined by a facet 
> constraint) is probably to compare that distribution to a different Solr 
> result set (or to compare N sub-sets of results defined by N facet 
> constraints)
> * the size of the sets of documents (values) can be relatively small when 
> computing stats over facet constraint sub-sets
> ...it seems like {{StddevAgg.java}} should be updated to use the "Corrected 
> sample stddev" equation.
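
For reference, the two code snippets above correspond to the standard formulas
for the uncorrected and the corrected (Bessel-corrected) sample standard
deviation:

$$
\sigma_{\text{uncorrected}} = \sqrt{\frac{1}{N}\sum_i (x_i-\bar{x})^2}
  = \sqrt{\frac{\sum_i x_i^2}{N}-\left(\frac{\sum_i x_i}{N}\right)^2},
\qquad
s_{\text{corrected}} = \sqrt{\frac{1}{N-1}\sum_i (x_i-\bar{x})^2}
  = \sqrt{\frac{N\sum_i x_i^2-\left(\sum_i x_i\right)^2}{N(N-1)}}
$$

The N-1 denominator is what makes the corrected form undefined for N=1 and
negligibly different from the uncorrected form when N is very large.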



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11286) First doc Inplace Update, updating whole document.

2017-12-28 Thread Abhishek Umarjikar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305989#comment-16305989
 ] 

Abhishek Umarjikar edited comment on SOLR-11286 at 12/29/17 4:57 AM:
-

[~ichattopadhyaya] thank you for the quick reply. 


was (Author: abhis...@patentinsightpro.com):
[~ichattopadhyaya] thank you for the quick reply on this issue. 

> First doc Inplace Update, updating whole document.
> --
>
> Key: SOLR-11286
> URL: https://issues.apache.org/jira/browse/SOLR-11286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6
> Environment: <field ... stored="false" docValues="true"/>
> trying inplace update for 
> <field ... stored="false" docValues="true"/>
>Reporter: Abhishek Umarjikar
>
> I am trying an in-place update; for the first doc the whole document is
> getting indexed, so the in-place update is not working the first time. After
> that it works for the remaining docs. I am using SolrJ for the in-place update.
> First Doc For in place update
> *2017-08-24 21:59:14,603 DEBUG org.apache.solr.update.DirectUpdateHandler2  ? 
> updateDocument(add{_version_=1576617435037958144,id=US9668251B2})*
> After First In place update
> *2017-08-24 22:01:33,109 DEBUG org.apache.solr.update.DirectUpdateHandler2  ? 
> updateDocValues(add{_version_=1576617580281462784,id=US2014029560A1})*
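
For context, a SolrJ in-place update is issued as an atomic "set" on a field
that is single-valued, non-indexed, non-stored, and docValues-only; a minimal
sketch follows (the collection name "patents" and the field name
"citation_count" are invented for illustration and are not taken from this
report):

{code}
import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InPlaceUpdateSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/patents").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "US9668251B2");
      // Atomic "set" on a numeric, non-indexed, non-stored, docValues-only field;
      // only such updates are eligible for the in-place (updateDocValues) path.
      doc.addField("citation_count", Collections.singletonMap("set", 42));
      client.add(doc);
      client.commit();
    }
  }
}
{code}

If the update (or the target field) does not meet the in-place conditions, Solr
falls back to a regular atomic update and re-indexes the whole document, which
is consistent with the updateDocument vs. updateDocValues log lines quoted
above.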



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11286) First doc Inplace Update, updating whole document.

2017-12-28 Thread Abhishek Umarjikar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305989#comment-16305989
 ] 

Abhishek Umarjikar commented on SOLR-11286:
---

[~ichattopadhyaya] thank you for the quick reply on this issue. 

> First doc Inplace Update, updating whole document.
> --
>
> Key: SOLR-11286
> URL: https://issues.apache.org/jira/browse/SOLR-11286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.6
> Environment: <field ... stored="false" docValues="true"/>
> trying inplace update for 
> <field ... stored="false" docValues="true"/>
>Reporter: Abhishek Umarjikar
>
> I am trying an in-place update; for the first doc the whole document is
> getting indexed, so the in-place update is not working the first time. After
> that it works for the remaining docs. I am using SolrJ for the in-place update.
> First Doc For in place update
> *2017-08-24 21:59:14,603 DEBUG org.apache.solr.update.DirectUpdateHandler2  ? 
> updateDocument(add{_version_=1576617435037958144,id=US9668251B2})*
> After First In place update
> *2017-08-24 22:01:33,109 DEBUG org.apache.solr.update.DirectUpdateHandler2  ? 
> updateDocValues(add{_version_=1576617580281462784,id=US2014029560A1})*



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11801) support customisation of the "highlighting" query response element

2017-12-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305988#comment-16305988
 ] 

David Smiley commented on SOLR-11801:
-

BTW, somewhat related, but I've been kicking around the idea that Solr ought to
have a HighlightDocTransformer as an alternative to the HighlightComponent.
Why set aside highlighting in the Solr response when it's per-document
information -- it ought to go in the document response!

> support customisation of the "highlighting" query response element
> --
>
> Key: SOLR-11801
> URL: https://issues.apache.org/jira/browse/SOLR-11801
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11801.patch
>
>
> The objective and use case behind the proposed changes is to be able to 
> receive not the out-of-the-box highlighting map
> {code}
> {
>   ...
>   "highlighting" : {
> "MA147LL/A" : {
>   "manu" : [
> "Apple Computer Inc."
>   ]
> }
>   }
> }
> {code}
> as illustrated in 
> https://lucene.apache.org/solr/guide/7_2/highlighting.html#highlighting-in-the-query-response
>  but to be able to customise the highlighting element of the query response 
> to (for example) be like this
> {code}
> {
>   ...
>   "highlighting" : [
> {
>   "id" : "MA147LL/A",
>   "snippets" : {
> "manu" : [
>   "Apple Computer Inc."
> ]
>   }
> }
>   ]
> }
> {code}
> where the highlighting element itself is a list and where the keys of each 
> list element are 'knowable' in advance i.e. they are not 'unknowable' 
> document ids.
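
Purely as an illustration of the requested shape change (this is not the
attached patch), a client that has parsed the default "highlighting" section
into nested maps could produce the second form with a reshape along these
lines:

{code}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HighlightingReshape {
  // Turns {"MA147LL/A": {"manu": ["Apple Computer Inc."]}} into
  // [{"id": "MA147LL/A", "snippets": {"manu": ["Apple Computer Inc."]}}]
  static List<Map<String, Object>> toList(Map<String, Map<String, List<String>>> highlighting) {
    List<Map<String, Object>> out = new ArrayList<>();
    for (Map.Entry<String, Map<String, List<String>>> e : highlighting.entrySet()) {
      Map<String, Object> entry = new LinkedHashMap<>();
      entry.put("id", e.getKey());         // document id becomes a value under a known key
      entry.put("snippets", e.getValue()); // per-field snippet lists are kept as-is
      out.add(entry);
    }
    return out;
  }
}
{code}

The point of the issue is to let the server emit that second shape directly, so
that every key in the response is knowable in advance.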



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11801) support customisation of the "highlighting" query response element

2017-12-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305986#comment-16305986
 ] 

David Smiley commented on SOLR-11801:
-

If the point here is to make it easier for Solr plugin authors (a subset of Solr
users) to customize the highlighting response, then wouldn't it be simpler to
add a few protected methods to HighlightComponent and suggest it be subclassed,
instead of adding a new abstraction, "HighlightCollator"?  Or, if the objective
is to also provide an hl.collator param with some options, then I can understand
that you might want this collator thing.  But I'm not sure what value there is
in the suggested hl.collator=mapmap|arrmap params... it seems a matter of taste.

BTW it would be nice if the highlight component's response was restructured to 
optionally allow returning richer information -- see SOLR-1954 (return char 
offsets).  Certainly not to be tackled in this issue but just want to share.

bq. David Smiley, you as part of SOLR-9708 deprecated some code portions and 
mentioned about future restructuring. It seems the changes proposed here would 
not interfere with such plans, do you agree?

Yes.



> support customisation of the "highlighting" query response element
> --
>
> Key: SOLR-11801
> URL: https://issues.apache.org/jira/browse/SOLR-11801
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11801.patch
>
>
> The objective and use case behind the proposed changes is to be able to 
> receive not the out-of-the-box highlighting map
> {code}
> {
>   ...
>   "highlighting" : {
> "MA147LL/A" : {
>   "manu" : [
> "Apple Computer Inc."
>   ]
> }
>   }
> }
> {code}
> as illustrated in 
> https://lucene.apache.org/solr/guide/7_2/highlighting.html#highlighting-in-the-query-response
>  but to be able to customise the highlighting element of the query response 
> to (for example) be like this
> {code}
> {
>   ...
>   "highlighting" : [
> {
>   "id" : "MA147LL/A",
>   "snippets" : {
> "manu" : [
>   "Apple Computer Inc."
> ]
>   }
> }
>   ]
> }
> {code}
> where the highlighting element itself is a list and where the keys of each 
> list element are 'knowable' in advance i.e. they are not 'unknowable' 
> document ids.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7733) remove "optimize" from the UI.

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7733.
--
   Resolution: Fixed
Fix Version/s: 7.3

> remove "optimize" from the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch, 
> SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7733) remove "optimize" from the UI.

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305985#comment-16305985
 ] 

ASF subversion and git services commented on SOLR-7733:
---

Commit e2a26a42e66351d80b170569aa1a712eed0156da in lucene-solr's branch 
refs/heads/branch_7x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2a26a4 ]

SOLR-7733: remove optimize from the UI.


> remove "optimize" from the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch, 
> SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7733) remove "optimize" from the UI.

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305984#comment-16305984
 ] 

ASF subversion and git services commented on SOLR-7733:
---

Commit 8e439a0a5c37a95ae632719b8901b225462b80bf in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8e439a0 ]

SOLR-7733: remove optimize from the UI.


> remove "optimize" from the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch, 
> SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7733) remove "optimize" from the UI.

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7733:
-
Attachment: SOLR-7733.patch

Final patch with CHANGES.txt

> remove "optimize" from the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch, 
> SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 359 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/359/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=37934, name=jetty-launcher-8694-thread-1-EventThread,
      state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
   2) Thread[id=37927, name=jetty-launcher-8694-thread-2-EventThread,
      state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=37934, name=jetty-launcher-8694-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 914 - Still Failing

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/914/

No tests ran.

Build Log:
[...truncated 28244 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.05 sec (4.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.1 MB in 1.03 sec (29.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 72.2 MB in 0.79 sec (91.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 82.7 MB in 0.92 sec (89.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] 
   [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.8" 
PATH="/home/jenkins/tools/java/latest1.8/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.8/bin/java"; ant validate" failed:
   [smoker] Buildfile: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build.xml
   [smoker] 
   [smoker] common.compile-tools:
   [smoker] 
   [smoker] -check-git-state:
   [smoker] 
   [smoker] -git-cleanroot:
   [smoker] 
   [smoker] -copy-git-state:
   [smoker] 
   [smoker] git-autoclean:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] resolve:
   [smoker] 
   [smoker] init:
   [smoker] 
   [smoker] compile-core:
   [smoker] [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker] [javac] Compiling 7 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker]  [copy] Copying 1 file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker] 
   [smoker] compile-tools:
   [smoker] 
   [smoker] compile-tools:
   [smoker] 
   [smoker] common.compile-tools:
   [smoker] 
   [smoker] -check-git-state:
   [smoker] 
   [smoker] -git-cleanroot:
   [smoker] 
   [smoker] -copy-git-state:
   [smoker] 
   [smoker] git-autoclean:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] resolve:
   [smoker] 
   [smoker] init:
   [smoker] 
   [smoker] compile-core:
   [smoker] 
   [smoker] compile-tools:
   [smoker] [mkdir] Created dir: 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21165 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21165/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
https://127.0.0.1:34873/solr/awhollynewcollection_0_shard4_replica_n6: 
ClusterState says we are the leader 
(https://127.0.0.1:34873/solr/awhollynewcollection_0_shard4_replica_n6), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at 
https://127.0.0.1:34873/solr/awhollynewcollection_0_shard4_replica_n6: 
ClusterState says we are the leader 
(https://127.0.0.1:34873/solr/awhollynewcollection_0_shard4_replica_n6), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([9DEFE4E2BBEF99FA:D59A9056BDDCB66F]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2243 - Still Failing

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2243/

All tests passed

Build Log:
[...truncated 6636 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/temp/junit4-J0-20171229_022543_4684363538288798594480.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 65536 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/J0/hs_err_pid30384.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/J0/replay_pid30384.log
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J0: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/temp/junit4-J0-20171229_022543_4681235482176639385064.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f718ee0, 65536, 1) failed; error='Cannot allocate 
memory' (errno=12)
   [junit4] <<< JVM J0: EOF 

[...truncated 56 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk1.8.0_144/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=E8F2F8FB8509E49B -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/temp
 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/lucene/build/codecs/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager -classpath 

[jira] [Updated] (SOLR-7733) remove "optimize" from the UI.

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7733:
-
Summary: remove "optimize" from the UI.  (was: remove/rename "optimize" 
references in the UI.)

> remove "optimize" from the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2017-12-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305918#comment-16305918
 ] 

Erick Erickson commented on SOLR-7733:
--

[~ctargett] Ugh. I started actually looking at all the places where there's an
"optimize" parameter, and it's ugly. See SOLR-11803. Even using forceMerge
rather than optimize on the URL is more than I want to put into this JIRA, which
is just about removing the temptation from the UI.

Actually I don't see any urgency in changing optimize to forceMerge as far as 
any of the parameters and the like are concerned, now that it's not so tempting 
and anyone looking at the docs will see warnings if they go digging.


> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-7733.patch, SOLR-7733.patch, SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1591 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1591/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MoveReplicaTest

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:40394 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:40394 within 3 ms
at __randomizedtesting.SeedInfo.seed([8DA6E14C823ABE17]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:119)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:114)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:101)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:268)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:262)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.MoveReplicaTest.setupCluster(MoveReplicaTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:40394 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:232)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)
... 31 more


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.MoveReplicaTest

Error Message:
51 threads leaked from SUITE scope at org.apache.solr.cloud.MoveReplicaTest:
   1) Thread[id=21636, name=ProcessThread(sid:0 cport:40394):, state=WAITING,
      group=TGRP-MoveReplicaTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:122)
   2) Thread[id=21640, name=jetty-launcher-5650-thread-1, state=TIMED_WAITING,
      group=TGRP-MoveReplicaTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)

Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Koji Sekiguchi

Welcome Karl!

Koji

On 2017/12/28 23:08, Adrien Grand wrote:

I am pleased to announce that Karl Wright has accepted the PMC's invitation to 
join.

Welcome Karl!


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Karl Wright
Thanks, everyone!!
Karl

On Thu, Dec 28, 2017 at 4:28 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Welcome Karl!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Thu, Dec 28, 2017 at 9:08 AM, Adrien Grand  wrote:
>
>> I am pleased to announce that Karl Wright has accepted the PMC's
>> invitation to join.
>>
>> Welcome Karl!
>>
>
>


[jira] [Created] (SOLR-11803) Remove all traces of "optimize" from Solr and replace with "forceMerge"

2017-12-28 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-11803:
-

 Summary: Remove all traces of "optimize" from Solr and replace 
with "forceMerge"
 Key: SOLR-11803
 URL: https://issues.apache.org/jira/browse/SOLR-11803
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Priority: Minor


Umbrella issue for removing optimize from Solr.

This has been kicked around for quite some time. It turns out that there are a
number of places where all this is baked into the code. Here are the places
hinted at just by looking at the reference guide:

suggester has "buildOnOptimize".
DIH has optimize.
postOptimize hook
IgnoreCommitOptimizeUpdateProcessorFactory
ignoreOptimizeOnly
buildOnOptimize
And what about JMX stats? UPDATE.updateHandler.optimizes

Then there are about a zillion places in the code that use optimize; I'm not 
sure how many of those would need to change. Lots of tests, for instance, have: 
"assertU(optimize());"

The first step would be to deprecate public static String OPTIMIZE = "optimize" 
in UpdateParams, add FORCEMERGE, and proceed from there.
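
A minimal sketch of what that first step might look like; the FORCEMERGE 
constant and its value are assumptions for illustration, not a committed API:

// Hypothetical sketch against org.apache.solr.common.params.UpdateParams;
// the FORCEMERGE name and value are assumptions, not a committed API.
public interface UpdateParams {

  /** @deprecated use {@link #FORCEMERGE} instead. */
  @Deprecated
  String OPTIMIZE = "optimize";

  /** Proposed replacement for OPTIMIZE. */
  String FORCEMERGE = "forceMerge";
}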



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8111) IndexOrDocValuesQuery Javadoc references outdated method name

2017-12-28 Thread Kai Chan (JIRA)
Kai Chan created LUCENE-8111:


 Summary: IndexOrDocValuesQuery Javadoc references outdated method 
name
 Key: LUCENE-8111
 URL: https://issues.apache.org/jira/browse/LUCENE-8111
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 7.2, master (8.0)
Reporter: Kai Chan
Priority: Minor
 Attachments: IndexOrDocValuesQuery.patch

The Javadoc on IndexOrDocValuesQuery references the 
SortedNumericDocValuesField.newRangeQuery method, which has been renamed 
SortedNumericDocValuesField.newSlowRangeQuery per LUCENE-7892.
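
For reference, a minimal usage sketch of the renamed method; the field name and 
bounds are illustrative only:

import org.apache.lucene.document.LongPoint;
import org.apache.lucene.document.SortedNumericDocValuesField;
import org.apache.lucene.search.IndexOrDocValuesQuery;
import org.apache.lucene.search.Query;

// Illustrative only; assumes a long field "timestamp" indexed both as a
// LongPoint and as a SortedNumericDocValuesField.
Query indexQuery = LongPoint.newRangeQuery("timestamp", 0L, 1000L);
Query dvQuery = SortedNumericDocValuesField.newSlowRangeQuery("timestamp", 0L, 1000L);
Query query = new IndexOrDocValuesQuery(indexQuery, dvQuery);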



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-master #2149: POMs out of sync

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2149/

No tests ran.

Build Log:
[...truncated 17453 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:860: The 
following error occurred while executing this line:
: Java returned: 137

Total time: 24 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21164 - Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21164/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

14 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:44693

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:44693
at 
__randomizedtesting.SeedInfo.seed([BF2A00D2EAE2EB0E:377E3F08441E86F6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:320)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 302 - Still Failing

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/302/

5 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([43E9D036D2D7BE79:BAA44399EEA2F3F3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardSplitTest.testSplitAfterFailedSplit(ShardSplitTest.java:279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents

2017-12-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305820#comment-16305820
 ] 

Erick Erickson commented on LUCENE-7976:


bq: So I think maxMergedSegmentMB should win over maxSegments passed to 
forceMerge

Works for me. 

bq: one way to shoot yourself (3a) is enough
;)

WDYT about going one step further and deprecating maxSegments? Does having that 
extra knob (maxSegments) really add any value? A value of -1 for 
maxMergedSegmentMB would mean the same thing as the old optimize. That would 
avoid having to reconcile the two.

So here's what we tell users (needs to be prettied up):

1> In general, invoking forceMerge is unnecessary. Especially in a frequently 
updated index, the default settings should suffice and forceMerge can hurt 
(there's a blog about that).

2> If you find you have too many deleted documents in your index, consider 
changing reclaimDeletesWeight in your configuration (and provide some guidance 
on reasonable values).

3> forceMerge now respects maxMergedSegmentMB. This means that forceMerge will 
no longer create an index with one segment by default, although it will still 
purge all deleted documents.

4> If you require forceMerge to produce a single segment, you must provide a 
parameter maxMergedSegmentMB=-1 to the forceMerge command. Setting 
maxMergedSegmentMB=-1 permanently in your config is not recommended, as it will 
lead to excessive I/O during normal indexing. Invoking forceMerge with 
maxMergedSegmentMB=-1 is only recommended when you're willing and able to 
repeat the operation whenever the index changes; otherwise deleted documents 
will come to occupy excessive space.

5> (assuming we deprecate maxSegments) forceMerge no longer supports 
maxSegments. You can approximate this behavior by selecting an appropriate 
value for maxMergedSegmentMB based on the total size of your index.
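
A minimal sketch of the two merge-policy knobs discussed above, using the 
existing TieredMergePolicy setters; the values shown are the current defaults 
and purely illustrative:

import org.apache.lucene.index.TieredMergePolicy;

// Illustrative only; values are the current defaults, not recommendations.
TieredMergePolicy tmp = new TieredMergePolicy();
tmp.setMaxMergedSegmentMB(5 * 1024);   // ~5 GB cap on merged segment size
tmp.setReclaimDeletesWeight(2.0);      // higher values favor merges that purge deletes
// then hand the policy to an existing IndexWriterConfig:
// indexWriterConfig.setMergePolicy(tmp);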




> Add a parameter to TieredMergePolicy to merge segments that have more than X 
> percent deleted documents
> --
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
> Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 1069 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1069/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest

Error Message:
Timeout waiting for all live and active

Stack Trace:
java.lang.AssertionError: Timeout waiting for all live and active
at 
__randomizedtesting.SeedInfo.seed([886F8538F0CBC81C:FC9F647DB4EF8F93]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest(TestCloudRecovery.java:99)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message:
Could not find a healthy node to handle the request.

Stack Trace:
org.apache.solr.common.SolrException: Could not find a healthy node to handle 
the request.
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 366 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/366/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_B02065603529F69D-001\testSeekSliceZero-015:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_B02065603529F69D-001\testSeekSliceZero-015
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_B02065603529F69D-001\testSeekSliceZero-015:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_B02065603529F69D-001\testSeekSliceZero-015

at __randomizedtesting.SeedInfo.seed([B02065603529F69D]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
https://127.0.0.1:51724/solr/awhollynewcollection_0_shard2_replica_n3: 
ClusterState says we are the leader 
(https://127.0.0.1:51724/solr/awhollynewcollection_0_shard2_replica_n3), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at 
https://127.0.0.1:51724/solr/awhollynewcollection_0_shard2_replica_n3: 
ClusterState says we are the leader 
(https://127.0.0.1:51724/solr/awhollynewcollection_0_shard2_replica_n3), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([EDB257ACD6859467:A5C72318D0B6BBF2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:550)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1013)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:946)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 

[jira] [Commented] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents

2017-12-28 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305755#comment-16305755
 ] 

Michael McCandless commented on LUCENE-7976:


+1 to the summary; thanks [~erickerickson], except I think this is dangerous:

bq. specify maxSegments = 1 during forceMerge. This will override any 
maxMergedSegmentMB settings.

because it means you can get a too large segment in your index just by invoking 
{{forceMerge}}.  I don't think we need that behavior?  I.e., one way to shoot 
yourself (3a) is enough?  So I think {{maxMergedSegmentMB}} should win over 
{{maxSegments}} passed to {{forceMerge}}.
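
For context, the maxSegments parameter ultimately maps to 
IndexWriter.forceMerge; a minimal sketch of today's behavior, which the 
proposal above would constrain (illustrative only, assumes an already-open 
writer):

import org.apache.lucene.index.IndexWriter;

// Illustrative only: with today's TieredMergePolicy, forceMerge(1) merges
// down to a single segment regardless of size; under the proposal,
// maxMergedSegmentMB would cap the result even when a segment count is given.
// IndexWriter writer = ...;   // an already-open writer is assumed
writer.forceMerge(1);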

> Add a parameter to TieredMergePolicy to merge segments that have more than X 
> percent deleted documents
> --
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
> Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Dennis Gove to the PMC

2017-12-28 Thread Michael McCandless
Welcome Dennis!

Mike McCandless

http://blog.mikemccandless.com

On Tue, Dec 26, 2017 at 8:12 AM, Joel Bernstein  wrote:

> I am pleased to announce that Dennis Gove has accepted the PMC's
> invitation to join.
>
> Welcome Dennis!
>


Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Michael McCandless
Welcome Karl!

Mike McCandless

http://blog.mikemccandless.com

On Thu, Dec 28, 2017 at 9:08 AM, Adrien Grand  wrote:

> I am pleased to announce that Karl Wright has accepted the PMC's
> invitation to join.
>
> Welcome Karl!
>


[jira] [Comment Edited] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents

2017-12-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305721#comment-16305721
 ] 

Erick Erickson edited comment on LUCENE-7976 at 12/28/17 8:30 PM:
--

OK, let's see if I can summarize where we are on this:

1> make TMP respect maxMergeSegmentSize, even during forcemerge unless 
maxSegments is specified (see <3>).

2> Add some documentation about how reclaimDeletesWeight can be used to tune 
the % deleted documents that will be in the index along with some guidance. 
Exactly how this should be set is rather opaque. It defaults to 2.0. The 
comment in the code is: "but be careful not to go so high that way too much 
merging takes place; a value of 3.0 is probably nearly too high". We need to 
keep people from setting it to 1000. Should we establish an upper bound with 
perhaps a warning if it's exceeded? 

3> If people want the old behavior they have two choices:
3a> set maxMergedSegmentMB very high. This has the consequence of kicking in 
when normal merging happens. I think this is sub-optimal for the pattern where 
once a day I index docs and then want to optimize at the end though.
3b> specify maxSegments = 1 during forceMerge. This will override any 
maxMergedSegmentMB settings.

<3b> is my attempt to reconcile the issue of _wanting_ one huge segment but 
only when doing forceMerge. Yes, they can back themselves into the same 
corner they get into now by doing this, but this is acceptable IMO. We're not 
trying to make it _impossible_ to get into a bad state, just trying to make it 
so users don't do it by accident.

Is this at least good enough for going on with until we see how it behaves?

Meanwhile, I'll check in SOLR-7733


was (Author: erickerickson):
OK, let's see if I can summarize where we are on this:

1> make TMP respect maxMergeSegmentSize, even during forcemerge unless 
maxSegments is specified (see <3>).

2> Add some documentation about how reclaimDeletesWeight can be used to tune 
the % deleted documents that will be in the index along with some guidance. 
Exactly how this should be set is rather opaque. It defaults to 2.0. The 
comment in the code is: "but be careful not to go so high that way too much 
merging takes place; a value of 3.0 is probably nearly too high". We need to 
keep people from setting it to 1000. Should we establish an upper bound with 
perhaps a warning if it's exceeded? 

3> If people want the old behavior they have two choices:
3a> set maxMergedSegmentMB very high. This has the consequence of kicking in 
when normal merging happens. I think this is sub-optimal for the pattern where 
once a day I index docs and then want to optimize at the end though.
3b> specify maxSegments = 1 during forceMerge. This will override any 
maxMergedSegmentMB settings.

<4b> is my attempt to reconcile the issue of _wanting_ one huge segment but 
only when doing forceMerge. Yes, they can back themselves into the same 
corner they get into now by doing this, but this is acceptable IMO. We're not 
trying to make it _impossible_ to get into a bad state, just trying to make it 
so users don't do it by accident.

Is this at least good enough for going on with until we see how it behaves?

Meanwhile, I'll check in SOLR-7733

> Add a parameter to TieredMergePolicy to merge segments that have more than X 
> percent deleted documents
> --
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
> Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the 

[jira] [Commented] (LUCENE-7976) Add a parameter to TieredMergePolicy to merge segments that have more than X percent deleted documents

2017-12-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305721#comment-16305721
 ] 

Erick Erickson commented on LUCENE-7976:


OK, let's see if I can summarize where we are on this:

1> make TMP respect maxMergeSegmentSize, even during forcemerge unless 
maxSegments is specified (see <3>).

2> Add some documentation about how reclaimDeletesWeight can be used to tune 
the % deleted documents that will be in the index along with some guidance. 
Exactly how this should be set is rather opaque. It defaults to 2.0. The 
comment in the code is: "but be careful not to go so high that way too much 
merging takes place; a value of 3.0 is probably nearly too high". We need to 
keep people from setting it to 1000. Should we establish an upper bound with 
perhaps a warning if it's exceeded? 

3> If people want the old behavior they have two choices:
3a> set maxMergedSegmentMB very high. This has the consequence of kicking in 
when normal merging happens. I think this is sub-optimal for the pattern where 
once a day I index docs and then want to optimize at the end though.
3b> specify maxSegments = 1 during forceMerge. This will override any 
maxMergedSegmentMB settings.

<4b> is my attempt to reconcile the issue of _wanting_ one huge segment but 
only when doing forceMerge. Yes, they can back themselves into the same 
corner they get into now by doing this, but this is acceptable IMO. We're not 
trying to make it _impossible_ to get into a bad state, just trying to make it 
so users don't do it by accident.

Is this at least good enough for going on with until we see how it behaves?

Meanwhile, I'll check in SOLR-7733

> Add a parameter to TieredMergePolicy to merge segments that have more than X 
> percent deleted documents
> --
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
> Attachments: LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11794) PULL replicas stop replicating after schema push and RELOAD collection action

2017-12-28 Thread Samuel Tatipamula (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samuel Tatipamula updated SOLR-11794:
-
Attachment: SOLR-11794.patch

Adding a working patch - not sure if it is the optimal way to go about starting 
the replication after the core reload.

> PULL replicas stop replicating after schema push and RELOAD collection action
> -
>
> Key: SOLR-11794
> URL: https://issues.apache.org/jira/browse/SOLR-11794
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java), Schema and Analysis, SolrCloud, 
> update
>Affects Versions: 7.1, 7.2
> Environment: Linux version 2.6.32-642.15.1.el6.x86_64 
> (mockbu...@c1bm.rdu2.centos.org) (gcc version 4.4.7 20120313 (Red Hat 
> 4.4.7-17) (GCC) ) #1 SMP Fri Feb 24 14:31:22 UTC 2017
>Reporter: Samuel Tatipamula
>Priority: Critical
>  Labels: patch
> Attachments: SOLR-11794.patch
>
>
> h3. *UPDATE*
> PULL replica replication stops after calling the RELOAD collection API, even 
> without any config/schema changes!
> It's also happening when schema API is used to add a new field.
> An operating SolrCloud with NRT, TLOG, and PULL replicas.
> Solr - 7.1.0
> ZK - 3.4.10
> Used config set - sample_techproducts_configs
> Shards - 1
> Whenever a schema change (adding of new fields/changing field types) is 
> pushed to ZK and the collection is reloaded using
> /solr/admin/collections?action=RELOAD=sample, the index changes stop 
> replicating to PULL replicas. NRT and TLOG are able to replicate the index.
> Before the schema change, I can see the indexFetcher thread running on PULL 
> replica
> 2017-12-26 10:17:11.802 INFO  (indexFetcher-14-thread-1) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.h.IndexFetcher Master's 
> generation: 2
> 2017-12-26 10:17:11.802 INFO  (indexFetcher-14-thread-1) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.h.IndexFetcher Master's 
> version: 1514283298419
> 2017-12-26 10:17:11.802 INFO  (indexFetcher-14-thread-1) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.h.IndexFetcher Slave's 
> generation: 2
> 2017-12-26 10:17:11.802 INFO  (indexFetcher-14-thread-1) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.h.IndexFetcher Slave's 
> version: 1514283298419
> 2017-12-26 10:17:11.802 INFO  (indexFetcher-14-thread-1) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.h.IndexFetcher Slave in sync 
> with master.
> After that, the following schema change is made to the managed-schema of 
> sample_techproducts_configs, pushed to ZK, and the collection is reloaded.
> 
> 
> 
> I can no longer see IndexFetcher thread running on PULL replica. No logs are 
> printed. The logs end with the collection reload log
> 2017-12-26 10:22:09.256 INFO  (qtp128526626-16) [c:sample s:shard1 
> r:core_node6 x:sample_shard1_replica_p5] o.a.s.s.HttpSolrCall [admin] 
> webapp=null path=/admin/cores 
> params={core=sample_shard1_replica_p5=/admin/cores=RELOAD=javabin=2}
>  status=0 QTime=624
> The index is never modified after this, and leader doesn't get the polls from 
> the PULL replica.
> Observations:
> - Manually forcing an index fetch using /replication?command=fetchindex syncs 
> the index, but doesn't start the IndexFetcher polling.
> - Restarting the replica will sync the index, starts IndexFetcher thread and 
> polling.
> - Removing and adding the replica back as PULL will sync the index, starts 
> IndexFetcher thread and polling.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7078 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7078/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_FBDEF38E39F22023-001\testSeekSliceZero-027:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_FBDEF38E39F22023-001\testSeekSliceZero-027
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_FBDEF38E39F22023-001\testSeekSliceZero-027:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_FBDEF38E39F22023-001\testSeekSliceZero-027

at __randomizedtesting.SeedInfo.seed([FBDEF38E39F22023]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.mockfile.TestExtrasFS

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestExtrasFS_94477F2E239C0C77-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestExtrasFS_94477F2E239C0C77-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestExtrasFS_94477F2E239C0C77-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\test-framework\test\J1\temp\lucene.mockfile.TestExtrasFS_94477F2E239C0C77-001\tempDir-004

at __randomizedtesting.SeedInfo.seed([94477F2E239C0C77]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.TestDistributedMissingSort

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.TestDistributedMissingSort_F888BFE5091C1171-001\tempDir-001\shard2\collection1:
 java.nio.file.DirectoryNotEmptyException: 

[jira] [Updated] (SOLR-11802) Add wilcoxonSignedRank Stream Evaluator to support the Wilcoxon Signed Rank Test

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11802:
--
Fix Version/s: 7.3

> Add wilcoxonSignedRank Stream Evaluator to support the  Wilcoxon Signed Rank 
> Test
> -
>
> Key: SOLR-11802
> URL: https://issues.apache.org/jira/browse/SOLR-11802
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> This ticket will add the  Wilcoxon Signed Rank Test to the Streaming 
> Expression statistical function library. 
> https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test
> Implementation provided by Apache Commons Math.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11802) Add wilcoxonSignedRank Stream Evaluator to support the Wilcoxon Signed Rank Test

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11802:
-

Assignee: Joel Bernstein

> Add wilcoxonSignedRank Stream Evaluator to support the  Wilcoxon Signed Rank 
> Test
> -
>
> Key: SOLR-11802
> URL: https://issues.apache.org/jira/browse/SOLR-11802
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> This ticket will add the  Wilcoxon Signed Rank Test to the Streaming 
> Expression statistical function library. 
> https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test
> Implementation provided by Apache Commons Math.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11802) Add wilcoxonSignedRank Stream Evaluator to support the Wilcoxon Signed Rank Test

2017-12-28 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11802:
-

 Summary: Add wilcoxonSignedRank Stream Evaluator to support the  
Wilcoxon Signed Rank Test
 Key: SOLR-11802
 URL: https://issues.apache.org/jira/browse/SOLR-11802
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add the  Wilcoxon Signed Rank Test to the Streaming Expression 
statistical function library. 

https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test

Implementation provided by Apache Commons Math.
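
A minimal sketch of the underlying Apache Commons Math call that such an 
evaluator could wrap; the sample arrays are made up:

import org.apache.commons.math3.stat.inference.WilcoxonSignedRankTest;

// Sample data is made up, purely illustrative.
double[] x = {1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30};
double[] y = {0.88, 0.65, 0.60, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29};
WilcoxonSignedRankTest test = new WilcoxonSignedRankTest();
double statistic = test.wilcoxonSignedRank(x, y);
double pValue = test.wilcoxonSignedRankTest(x, y, false); // false = normal approximation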




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Dennis Gove
Welcome Karl! Congratulations!

On Thu, Dec 28, 2017 at 2:32 PM, David Smiley 
wrote:

> Welcome Karl!
>
> On Thu, Dec 28, 2017 at 12:36 PM Ahmet Arslan 
> wrote:
>
>> Congratulations  Karl!
>>
>> Ahmet
>>
>>
>> On Thursday, December 28, 2017, 7:32:41 PM GMT+3, Steve Rowe <
>> sar...@gmail.com> wrote:
>>
>>
>> Congrats and welcome Karl!
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>> > On Dec 28, 2017, at 9:08 AM, Adrien Grand  wrote:
>> >
>> > I am pleased to announce that Karl Wright has accepted the PMC's
>> invitation to join.
>> >
>> > Welcome Karl!
>>
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.
> solrenterprisesearchserver.com
>


Re: Welcome Karl Wright to the PMC

2017-12-28 Thread David Smiley
Welcome Karl!

On Thu, Dec 28, 2017 at 12:36 PM Ahmet Arslan 
wrote:

> Congratulations  Karl!
>
> Ahmet
>
>
> On Thursday, December 28, 2017, 7:32:41 PM GMT+3, Steve Rowe <
> sar...@gmail.com> wrote:
>
>
> Congrats and welcome Karl!
>
> --
> Steve
> www.lucidworks.com
>
> > On Dec 28, 2017, at 9:08 AM, Adrien Grand  wrote:
> >
> > I am pleased to announce that Karl Wright has accepted the PMC's
> invitation to join.
> >
> > Welcome Karl!
>
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-11656) TLOG replication doesn't work properly after rebalancing leaders.

2017-12-28 Thread Samuel Tatipamula (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305680#comment-16305680
 ] 

Samuel Tatipamula commented on SOLR-11656:
--

[~erickerickson] I have observed that, once a reload command is issued on a 
collection, all PULL replicas stop replicating. Thought this may be similar to 
the issue being discussed here. Check 
https://issues.apache.org/jira/browse/SOLR-11794 for more details.

> TLOG replication doesn't work properly after rebalancing leaders.
> -
>
> Key: SOLR-11656
> URL: https://issues.apache.org/jira/browse/SOLR-11656
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Affects Versions: 7.1
>Reporter: Yuki Yano
>Assignee: Erick Erickson
> Attachments: SOLR-11656.patch
>
>
> With TLOG replica type, the replication may stop after invoking rebalance 
> leaders API.
> This can be reproduced by following steps:
> # Create SolrCloud with TLOG replica type.
> # Set the preferredLeader flag on some of the non-leader nodes.
> # Invoke rebalance leaders API.
> # The replication stops on nodes which were the "leader" before rebalancing. 
> Because the leader node doesn't have the replication thread, we need to 
> create it when the status changes from "leader" to "replica". However, the 
> rebalance leaders API doesn't account for this, so replication may stop when 
> the status is changed from "leader" to "replica" by rebalance 
> leaders.
> Note that, we can avoid this problem if we reload or restart Solr after 
> rebalancing leaders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11172) Add Mann-Whitney U test Stream Evaluator

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305675#comment-16305675
 ] 

ASF subversion and git services commented on SOLR-11172:


Commit 1f48fc4a9e02f91b0f6e4e429f1adeab490402fc in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1f48fc4 ]

SOLR-11172: Add Mann-Whitney U test Stream Evaluator


> Add Mann-Whitney U test Stream Evaluator
> 
>
> Key: SOLR-11172
> URL: https://issues.apache.org/jira/browse/SOLR-11172
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11172
>
>
> This ticket will add a Stream Evaluator to perform the Mann-Whitney U Test on 
> two arrays of numbers.
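
For reference, the corresponding Apache Commons Math call on two arrays of 
numbers; the sample arrays are made up:

import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Sample data is made up, purely illustrative.
double[] a = {19, 22, 16, 29, 24};
double[] b = {20, 11, 17, 12};
MannWhitneyUTest test = new MannWhitneyUTest();
double u = test.mannWhitneyU(a, b);          // U statistic
double pValue = test.mannWhitneyUTest(a, b); // two-sided, asymptotic p-value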



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11172) Add Mann-Whitney U test Stream Evaluator

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305671#comment-16305671
 ] 

ASF subversion and git services commented on SOLR-11172:


Commit fbea59b0864768356f1057a0a099d8a54887d272 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fbea59b ]

SOLR-11172: Add Mann-Whitney U test Stream Evaluator


> Add Mann-Whitney U test Stream Evaluator
> 
>
> Key: SOLR-11172
> URL: https://issues.apache.org/jira/browse/SOLR-11172
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11172
>
>
> This ticket will add a Stream Evaluator to perform the Mann-Whitney U Test on 
> two arrays of numbers.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1068 - Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1068/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([D82CD0377CEDB37A]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([D82CD0377CEDB37A]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288)
at jdk.internal.reflect.GeneratedMethodAccessor52.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11258) ChaosMonkeySafeLeaderWithPullReplicasTest fails a lot & reproducibly: The Monkey ran for over 45 seconds and no jetties were stopped - this is worth investigating!

2017-12-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305639#comment-16305639
 ] 

Steve Rowe commented on SOLR-11258:
---

This branch_7x nightly failure from my Jenkins reproduced for me 5/5 iterations:

{noformat}
Checking out Revision 89344ea4c5c1c1f7da2797f0e724574751723976 
(refs/remotes/origin/branch_7x)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=ChaosMonkeySafeLeaderWithPullReplicasTest -Dtests.method=test 
-Dtests.seed=DD16DE67F3F5708A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=fr -Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE  104s J6 | ChaosMonkeySafeLeaderWithPullReplicasTest.test 
<<<
   [junit4]> Throwable #1: java.lang.AssertionError: The Monkey ran for 
over 60 seconds and no jetties were stopped - this is worth investigating!
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([DD16DE67F3F5708A:5542E1BD5D091D72]:0)
   [junit4]>at 
org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:589)
   [junit4]>at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:175)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
   [junit4]>at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{rnd_b=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128))),
 a_t=PostingsFormat(name=Memory), 
id=TestBloomFilteredLucenePostings(BloomFilteringPostingsFormat(Lucene50(blocksize=128)))},
 docValues:{_version_=DocValuesFormat(name=Asserting)}, 
maxPointsInLeafNode=590, maxMBSortInHeap=5.448183350014092, 
sim=RandomSimilarity(queryNorm=false): {}, locale=fr, timezone=US/Indiana-Starke
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_151 (64-bit)/cpus=16,threads=6,free=170276048,total=524288000
{noformat}

> ChaosMonkeySafeLeaderWithPullReplicasTest fails a lot & reproducibly:  The 
> Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
> investigating!
> 
>
> Key: SOLR-11258
> URL: https://issues.apache.org/jira/browse/SOLR-11258
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Between June 21 & Aug 18, there have been 18 failures like this...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=ChaosMonkeySafeLeaderWithPullReplicasTest -Dtests.method=test 
> -Dtests.seed=7669B63E9E4D1685 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=pa-Guru -Dtests.timezone=Europe/Podgorica -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 82.4s | ChaosMonkeySafeLeaderWithPullReplicasTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The Monkey ran for 
> over 45 seconds and no jetties were stopped - this is worth investigating!
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([7669B63E9E4D1685:FE3D89E430B17B7D]:0)
>[junit4]>at 
> org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
>[junit4]>at 
> org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:174)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In my own testing, when these failures happen, the seeds reproduce - 
> suggesting the problem is a logic flaw in the test that can happen by 
> chance.
> Perhaps the ChaosMonkey needs to be changed to get more aggressive about 
> stopping nodes based on how long it's been since the last time it stopped a 
> node?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Updated] (SOLR-11801) support customisation of the "highlighting" query response element

2017-12-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11801:
---
Attachment: SOLR-11801.patch

support customisation of the "highlighting" query response element
(Ramsey Haddad, Pranav Murugappan, Christine Poerschke)

The attached patch meets the objective but we would welcome feedback and input, 
generally and specifically on the following points:

* CustomHighlightComponentTest extending AbstractFullDistribZkTestBase seems a 
little heavy. Still to explore: extending SolrCloudTestCase instead, whilst 
still using the Config API, similar to TestReqParamsAPI.

* [~dsmiley], as part of SOLR-9708 you deprecated some code portions and 
[mentioned|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/solr/core/src/java/org/apache/solr/handler/component/HighlightComponent.java#L86]
 future restructuring. It seems the changes proposed here would not 
interfere with such plans; do you agree?

* Instead of refactoring the code as attached to allow customisation, might 
there be scope for and/or value in formally exposing the alternative 
"highlighting" query response element e.g. via a new {{hl.collator}} parameter 
(similar to JSON Response Writer's [json.nl 
parameter|https://lucene.apache.org/solr/guide/7_2/response-writers.html#json-nl])
 e.g.
** {{hl.collator=mapmap}} - the default corresponding to current/existing 
behaviour, and
** {{hl.collator=arrmap}} - the alternative format as in the patch's test 
component, and
** (potentially in future) {{hl.collator=arrarr}} - avoiding use of the "manu" 
field as a key in the snippets map, i.e.
{code}
{
  ...
  "highlighting" : [
{
  "id" : "MA147LL/A",
  "highlights" : [
{
  "field" : "manu",
  "snippets" : [
"Apple Computer Inc."
  ]
}
  ]
}
  ]
}
{code}
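To make the proposal concrete, a request using the suggested parameter might look like the fragment below from SolrJ. Note that hl.collator does not exist in Solr today; it is only the option floated above, and the values shown are the ones proposed in this comment.

{code}
// Hypothetical fragment: hl.collator is a *proposed* parameter, not an existing option.
SolrQuery q = new SolrQuery("apple");
q.setHighlight(true);
q.set("hl.fl", "manu");
q.set("hl.collator", "arrmap");   // proposed values: mapmap (default), arrmap, arrarr
{code}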

> support customisation of the "highlighting" query response element
> --
>
> Key: SOLR-11801
> URL: https://issues.apache.org/jira/browse/SOLR-11801
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11801.patch
>
>
> The objective and use case behind the proposed changes is to be able to 
> receive not the out-of-the-box highlighting map
> {code}
> {
>   ...
>   "highlighting" : {
> "MA147LL/A" : {
>   "manu" : [
> "Apple Computer Inc."
>   ]
> }
>   }
> }
> {code}
> as illustrated in 
> https://lucene.apache.org/solr/guide/7_2/highlighting.html#highlighting-in-the-query-response
>  but to be able to customise the highlighting element of the query response 
> to (for example) be like this
> {code}
> {
>   ...
>   "highlighting" : [
> {
>   "id" : "MA147LL/A",
>   "snippets" : {
> "manu" : [
>   "Apple Computer Inc."
> ]
>   }
> }
>   ]
> }
> {code}
> where the highlighting element itself is a list and where the keys of each 
> list element are 'knowable' in advance i.e. they are not 'unknowable' 
> document ids.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21162 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21162/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([57DCFF2A82A393CF]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([57DCFF2A82A393CF]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288)
at jdk.internal.reflect.GeneratedMethodAccessor116.invoke(Unknown 
Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-11218) Fail and return an error when attempting to delete a collection that's part of an alias

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11218:
--
Summary: Fail and return an error when attempting to delete a collection 
that's part of an alias  (was: Return an error when a collection is deleted 
that's part of an alias)

> Fail and return an error when attempting to delete a collection that's part 
> of an alias
> ---
>
> Key: SOLR-11218
> URL: https://issues.apache.org/jira/browse/SOLR-11218
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11218.patch, SOLR-11218.patch
>
>
> We don't really have good tests that "the right thing" happens when an alias 
> and a collection have the same name. In this case, admin operations should 
> operate on the collection rather than the alias.
> Additionally we should have some tests to ensure that alias resolution takes 
> precedence for adds and searches in this case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11801) support customisation of the "highlighting" query response element

2017-12-28 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-11801:
--

 Summary: support customisation of the "highlighting" query 
response element
 Key: SOLR-11801
 URL: https://issues.apache.org/jira/browse/SOLR-11801
 Project: Solr
  Issue Type: New Feature
  Components: highlighter
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


The objective and use case behind the proposed changes is to be able to receive 
not the out-of-the-box highlighting map
{code}
{
  ...
  "highlighting" : {
"MA147LL/A" : {
  "manu" : [
"Apple Computer Inc."
  ]
}
  }
}
{code}
as illustrated in 
https://lucene.apache.org/solr/guide/7_2/highlighting.html#highlighting-in-the-query-response
 but to be able to customise the highlighting element of the query response to 
(for example) be like this
{code}
{
  ...
  "highlighting" : [
{
  "id" : "MA147LL/A",
  "snippets" : {
"manu" : [
  "Apple Computer Inc."
]
  }
}
  ]
}
{code}
where the highlighting element itself is a list and where the keys of each list 
element are 'knowable' in advance i.e. they are not 'unknowable' document ids.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11218) Return an error when a collection is deleted that's part of an alias

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11218:
--
Attachment: SOLR-11218.patch

Changed the title since "beefing up testing" is not controversial at all.

This patch adds some tests but also changes behavior by refusing to delete a 
collection if it's pointed to by an alias.

Comments?

I'm not going to commit this until at least sometime next week, to give people 
a chance to comment.
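To illustrate the behaviour change, a SolrJ sketch of the scenario is below; the collection and alias names are made up, and the exact error returned would depend on the final patch.

{code}
// Sketch, assuming SolrJ 7.x: deleting a collection an alias still points to
// should be rejected once this patch is applied.
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class DeleteAliasedCollectionSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("localhost:9983").build()) {
      CollectionAdminRequest.createCollection("coll1", "_default", 1, 1).process(client);
      CollectionAdminRequest.createAlias("myalias", "coll1").process(client);
      // Before the patch this succeeds and leaves "myalias" dangling;
      // with the patch it should fail with an error instead.
      CollectionAdminRequest.deleteCollection("coll1").process(client);
    }
  }
}
{code}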

> Return an error when a collection is deleted that's part of an alias
> 
>
> Key: SOLR-11218
> URL: https://issues.apache.org/jira/browse/SOLR-11218
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11218.patch, SOLR-11218.patch
>
>
> We don't really have good tests that "the right thing" happens when an alias 
> and a collection have the same name. In this case, admin operations should 
> operate on the collection rather than the alias.
> Additionally we should have some tests to ensure that alias resolution takes 
> precedence for adds and searches in this case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11218) Return an error when a collection is deleted that's part of an alias

2017-12-28 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11218:
--
Summary: Return an error when a collection is deleted that's part of an 
alias  (was: Beef up alias testing when an alias and collection have the same 
name)

> Return an error when a collection is deleted that's part of an alias
> 
>
> Key: SOLR-11218
> URL: https://issues.apache.org/jira/browse/SOLR-11218
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11218.patch
>
>
> We don't really have good tests that "the right thing" happens when an alias 
> and a collection have the same name. In this case, admin operations should 
> operate on the collection rather than the alias.
> Additionally we should have some tests to ensure that alias resolution takes 
> precedence for adds and searches in this case.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Ahmet Arslan
 Congratulations  Karl!
Ahmet

On Thursday, December 28, 2017, 7:32:41 PM GMT+3, Steve Rowe 
 wrote:  
 
 Congrats and welcome Karl!

--
Steve
www.lucidworks.com

> On Dec 28, 2017, at 9:08 AM, Adrien Grand  wrote:
> 
> I am pleased to announce that Karl Wright has accepted the PMC's invitation 
> to join.
> 
> Welcome Karl!


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
  

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 358 - Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/358/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([15BD1115E22ED0EC:9DE92ECF4CD2BD14]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.waitForState(ZkStateReader.java:1261)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.waitForState(CloudSolrClient.java:449)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([15BD1115E22ED0EC:DC0853BBEB491619]:0)
at 

[jira] [Commented] (SOLR-11800) Improve error message when parsing RankQuery

2017-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305583#comment-16305583
 ] 

ASF GitHub Bot commented on SOLR-11800:
---

Github user diegoceccarelli commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/295#discussion_r158969387
  
--- Diff: solr/core/src/java/org/apache/solr/search/QParser.java ---
@@ -360,6 +360,10 @@ public static QParser getParser(String qstr, String 
parserName, boolean allowLoc
 }
 
 QParserPlugin qplug = req.getCore().getQueryPlugin(parserName);
+if (qplug == null){
+  // error: log ?
--- End diff --

I would log an error here, but `QParser` doesn't have a logger, should we 
add one? 
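For illustration, one way the null check could surface a clearer error instead of (or in addition to) logging is sketched below. This is not the actual patch; SolrException and ErrorCode.BAD_REQUEST are existing Solr classes, but the message text is made up.

{code}
// Sketch only, not the patch: fail fast with a descriptive message when the
// requested parser name does not resolve to a registered QParserPlugin.
QParserPlugin qplug = req.getCore().getQueryPlugin(parserName);
if (qplug == null) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "invalid query parser '" + parserName + "' for query '" + qstr + "'");
}
{code}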


> Improve error message when parsing RankQuery
> 
>
> Key: SOLR-11800
> URL: https://issues.apache.org/jira/browse/SOLR-11800
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Diego Ceccarelli
>Priority: Minor
>
> When a user specifies something wrong for the {{rq}} parameter, it is 
> sometimes hard to understand where the problem is; this patch attempts to 
> improve the error message returned in the response.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #295: SOLR-11800: Improve error message when parsin...

2017-12-28 Thread diegoceccarelli
Github user diegoceccarelli commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/295#discussion_r158969387
  
--- Diff: solr/core/src/java/org/apache/solr/search/QParser.java ---
@@ -360,6 +360,10 @@ public static QParser getParser(String qstr, String 
parserName, boolean allowLoc
 }
 
 QParserPlugin qplug = req.getCore().getQueryPlugin(parserName);
+if (qplug == null){
+  // error: log ?
--- End diff --

I would log an error here, but `QParser` doesn't have a logger, should we 
add one? 


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11800) Improve error message when parsing RankQuery

2017-12-28 Thread Diego Ceccarelli (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Ceccarelli updated SOLR-11800:

Description: When a user specifies something wrong for the parameter {{rq}} 
sometimes it is hard to understand where is the problem, this patch attempts to 
improve the error message returned in the response.  (was: when a user specify 
something wrong for the parameter rq sometimes it is hard to understand where 
is the problem, this patch attempts to improve the error message returned in 
the response.)

> Improve error message when parsing RankQuery
> 
>
> Key: SOLR-11800
> URL: https://issues.apache.org/jira/browse/SOLR-11800
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Diego Ceccarelli
>Priority: Minor
>
> When a user specifies something wrong for the {{rq}} parameter, it is 
> sometimes hard to understand where the problem is; this patch attempts to 
> improve the error message returned in the response.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11800) Improve error message when parsing RankQuery

2017-12-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305580#comment-16305580
 ] 

ASF GitHub Bot commented on SOLR-11800:
---

GitHub user diegoceccarelli opened a pull request:

https://github.com/apache/lucene-solr/pull/295

SOLR-11800: Improve error message when parsing RankQuery

When a user specifies something wrong for the `rq` parameter, it is sometimes 
hard to understand where the problem is; this patch attempts to improve the 
error message returned in the response.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr SOLR-11800

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #295


commit 64928e43710eb86be0f7e23adbc8107da124efb8
Author: Diego Ceccarelli 
Date:   2017-12-28T16:22:55Z

SOLR-11800: Improve error message when parsing RankQuery




> Improve error message when parsing RankQuery
> 
>
> Key: SOLR-11800
> URL: https://issues.apache.org/jira/browse/SOLR-11800
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Diego Ceccarelli
>Priority: Minor
>
> When a user specifies something wrong for the rq parameter, it is sometimes 
> hard to understand where the problem is; this patch attempts to improve the 
> error message returned in the response.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #295: SOLR-11800: Improve error message when parsin...

2017-12-28 Thread diegoceccarelli
GitHub user diegoceccarelli opened a pull request:

https://github.com/apache/lucene-solr/pull/295

SOLR-11800: Improve error message when parsing RankQuery

When a user specifies something wrong for the `rq` parameter, it is sometimes 
hard to understand where the problem is; this patch attempts to improve the 
error message returned in the response.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr SOLR-11800

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/295.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #295


commit 64928e43710eb86be0f7e23adbc8107da124efb8
Author: Diego Ceccarelli 
Date:   2017-12-28T16:22:55Z

SOLR-11800: Improve error message when parsing RankQuery




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11800) Improve error message when parsing RankQuery

2017-12-28 Thread Diego Ceccarelli (JIRA)
Diego Ceccarelli created SOLR-11800:
---

 Summary: Improve error message when parsing RankQuery
 Key: SOLR-11800
 URL: https://issues.apache.org/jira/browse/SOLR-11800
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Diego Ceccarelli
Priority: Minor


When a user specifies something wrong for the rq parameter, it is sometimes 
hard to understand where the problem is; this patch attempts to improve the 
error message returned in the response.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Steve Rowe
Congrats and welcome Karl!

--
Steve
www.lucidworks.com

> On Dec 28, 2017, at 9:08 AM, Adrien Grand  wrote:
> 
> I am pleased to announce that Karl Wright has accepted the PMC's invitation 
> to join.
> 
> Welcome Karl!


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11799) Fix NPE and class cast exceptions in the TimeSeriesStream

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305569#comment-16305569
 ] 

ASF subversion and git services commented on SOLR-11799:


Commit 14206aec05b2db3ac165862b41d70ddd9ee69376 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=14206ae ]

SOLR-11799: Fix NPE and class cast exceptions in the TimeSeriesStream


> Fix NPE and class cast exceptions in the TimeSeriesStream
> -
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11799) Fix NPE and class cast exceptions in the TimeSeriesStream

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305563#comment-16305563
 ] 

ASF subversion and git services commented on SOLR-11799:


Commit 0c4fb31205aa68e4d432ce3a67f5cac6bb5a9681 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c4fb31 ]

SOLR-11799: Fix NPE and class cast exceptions in the TimeSeriesStream


> Fix NPE and class cast exceptions in the TimeSeriesStream
> -
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Dennis Gove to the PMC

2017-12-28 Thread Erick Erickson
Please follow the instructions here:
http://lucene.apache.org/solr/community.html#mailing-lists-irc. You must
use the _exact_ same e-mail as you used to subscribe.

If the initial try doesn't work and following the suggestions at the
"problems" link doesn't work for you, let us know. But note you need to
show us the _entire_ return header to allow anyone to diagnose the problem.

Best,
Erick


On Thu, Dec 28, 2017 at 12:36 AM, anurag choudhary <
anurag300choudh...@hotmail.com> wrote:

> please unsubscribe me
>
>
> --
> *From:* Ahmet Arslan 
> *Sent:* Thursday, December 28, 2017 1:49 AM
> *To:* dev@lucene.apache.org
> *Subject:* Re: Welcome Dennis Gove to the PMC
>
> Congratulations Dennis!
>
> Ahmet
>
>
>
>
> On Wednesday, December 27, 2017, 7:56:58 PM GMT+3, Dawid Weiss <
> dawid.we...@gmail.com> wrote:
>
>
>
>
>
> Congratulations Dennis!
>
> Dawid
>
> On Wed, Dec 27, 2017 at 5:37 PM, Anshum Gupta 
> wrote:
> > Congratulations and welcome Dennis!
> >
> > On Wed, Dec 27, 2017 at 4:59 PM Steve Rowe  wrote:
> >>
> >> Congrats and welcome Dennis!
> >>
> >> --
> >> Steve
> >> www.lucidworks.com
> >>
> >> > On Dec 26, 2017, at 8:12 AM, Joel Bernstein 
> wrote:
> >> >
> >> > I am pleased to announce that Dennis Gove has accepted the PMC's
> >> > invitation to join.
> >> >
> >> > Welcome Dennis!
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Erick Erickson
Welcome Karl!

On Thu, Dec 28, 2017 at 8:05 AM, Joel Bernstein  wrote:
> Welcome Karl!
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Thu, Dec 28, 2017 at 9:52 AM, Martin Gainty  wrote:
>>
>> Willkommen Karl!
>>
>>
>> Martin
>> ___
>>
>> 
>> From: Adrien Grand 
>> Sent: Thursday, December 28, 2017 9:08 AM
>> To: Lucene Dev
>> Subject: Welcome Karl Wright to the PMC
>>
>> I am pleased to announce that Karl Wright has accepted the PMC's
>> invitation to join.
>>
>> Welcome Karl!
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Joel Bernstein
Welcome Karl!

Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Dec 28, 2017 at 9:52 AM, Martin Gainty  wrote:

> Willkommen Karl!
>
>
> Martin
> ___
>
> --
> *From:* Adrien Grand 
> *Sent:* Thursday, December 28, 2017 9:08 AM
> *To:* Lucene Dev
> *Subject:* Welcome Karl Wright to the PMC
>
> I am pleased to announce that Karl Wright has accepted the PMC's
> invitation to join.
>
> Welcome Karl!
>


[jira] [Updated] (SOLR-11799) Fix NPE and class cast exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11799:
--
Summary: Fix NPE and class cast exceptions in the TimeSeriesStream  (was: 
Fix NPE and class exceptions in the TimeSeriesStream)

> Fix NPE and class cast exceptions in the TimeSeriesStream
> -
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11799) Fix NPE and class exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11799:
--
Attachment: SOLR-11799.patch

> Fix NPE and class exceptions in the TimeSeriesStream
> 
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11799) Fix NPE and class exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11799:
--
Description: Currently the timeseries Streaming Expression will throw an 
NPE if there are no results for a bucket and any function other than count(*) 
is used. It can also throw class cast exceptions if the JSON facet API returns 
a long for any function (other than count(*)), as it is always expecting a 
double.  (was: Currently the timeseries Streaming Expression will throw an NPE 
if there are no results for a bucket and any function other than count\(*\) is 
used. It can also throw class cast exceptions if the JSON facet API returns a 
long for any function (other than count\(*\)), as it is always expecting a 
double.)

> Fix NPE and class exceptions in the TimeSeriesStream
> 
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11799) Fix NPE and class exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11799:
-

 Summary: Fix NPE and class exceptions in the TimeSeriesStream
 Key: SOLR-11799
 URL: https://issues.apache.org/jira/browse/SOLR-11799
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


Currently the timeseries Streaming Expression will throw an NPE if there are no 
results for a bucket and any function other than count\(*\) is used. It can 
also throw class cast exceptions if the JSON facet API returns a long for any 
function (other than count\(*\)), as it is always expecting a double.
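For context, this is the kind of expression that trips the bug: any metric other than count(*) over a range that contains empty buckets. The collection and field names below are made up, and the expression would be sent to the /stream handler as usual.

{code}
// Illustrative fragment only (names are invented): a timeseries with avg(),
// i.e. a metric other than count(*), which could NPE on empty buckets before this fix.
String expr = "timeseries(logs,"
            + "  q=\"*:*\","
            + "  field=\"timestamp_dt\","
            + "  start=\"2017-01-01T00:00:00Z\","
            + "  end=\"2017-12-31T00:00:00Z\","
            + "  gap=\"+1MONTH\","
            + "  avg(response_d))";
{code}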



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11799) Fix NPE and class exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11799:
--
Fix Version/s: 7.3

> Fix NPE and class exceptions in the TimeSeriesStream
> 
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count\(*\) is used. It 
> can also throw class cast exceptions if the JSON facet API returns a long for 
> any function (other than count\(*\)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11799) Fix NPE and class exceptions in the TimeSeriesStream

2017-12-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11799:
-

Assignee: Joel Bernstein

> Fix NPE and class exceptions in the TimeSeriesStream
> 
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count\(*\) is used. It 
> can also throw class cast exceptions if the JSON facet API returns a long for 
> any function (other than count\(*\)), as it is always expecting a double.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #96: POMs out of sync

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/96/

No tests ran.

Build Log:
[...truncated 17911 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:860: The 
following error occurred while executing this line:
: Java returned: 137

Total time: 24 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1590 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1590/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestExecutePlanAction.testIntegration

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([FEDABDA36045C926:4EBBB38F457A6803]:0)
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at 
org.apache.solr.cloud.autoscaling.sim.SimSolrCloudTestCase.tearDown(SimSolrCloudTestCase.java:141)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12469 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestExecutePlanAction
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.sim.TestExecutePlanAction_FEDABDA36045C926-001/init-core-data-001
   [junit4]   2> 1242766 WARN  
(SUITE-TestExecutePlanAction-seed#[FEDABDA36045C926]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=33 numCloses=33
   [junit4]   2> 1242766 INFO  

[jira] [Resolved] (SOLR-11793) reduce code duplication w.r.t. RestTestHarness(es)

2017-12-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-11793.

   Resolution: Fixed
Fix Version/s: 7.3
               master (8.0)

> reduce code duplication w.r.t. RestTestHarness(es)
> --
>
> Key: SOLR-11793
> URL: https://issues.apache.org/jira/browse/SOLR-11793
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11793.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned LUCENE-8110:
---

Assignee: (was: Christine Poerschke)

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305520#comment-16305520
 ] 

Christine Poerschke commented on LUCENE-8110:
-

Reverts complete. Another 'just noticed' thing: would it make sense to have 
more 'Component/s' choices on JIRA, e.g. classification and spatial? That is 
perhaps a question for the dev-list rather than for here, though.

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-8110:

Lucene Fields: Patch Available  (was: New)

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305513#comment-16305513
 ] 

ASF subversion and git services commented on LUCENE-8110:
-

Commit 6652d4fb00f2c50ca7f6f640977675557734e367 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6652d4f ]

Revert "LUCENE-8110: Fix potential IndexOutOfBoundsException in 
*Classifier.getClasses(?,int)."

This reverts commit c73edb869a96bd6869da200944deb6d078cba283.


> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305516#comment-16305516
 ] 

ASF subversion and git services commented on LUCENE-8110:
-

Commit 152d223b3235459a30f6a8b1cb5331bec46dfb3d in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=152d223 ]

Revert "LUCENE-8110: Fix potential IndexOutOfBoundsException in 
*Classifier.getClasses(?,int)."

This reverts commit af41d02eae6a58fd450553f9a09c9325ddf6e0ab.


> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305512#comment-16305512
 ] 

Christine Poerschke commented on LUCENE-8110:
-

bq. Should we add a test as well?

Yes, that would be ideal. From what I vaguely recall, one way to encounter the 
exception is to index a document _without_ an assigned class before there are 
any other documents, or before there are any documents with an assigned class. 
Alternatively, asking for multiple predicted classes when only one class has 
been assigned so far can also trigger it. I only stumbled across this and 
don't have the bandwidth at the moment to work on a test or tests, so I will 
revert for now.
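
To make the failure mode concrete, here is a minimal sketch of the kind of 
bound being discussed; it is not the attached LUCENE-8110.patch, and the class, 
method and variable names are illustrative only. The idea is to cap the 
requested number of classes by the number actually assigned before taking the 
sub-list, so the sub-list call cannot go out of bounds.

{noformat}
// Sketch only, not the attached patch; names are illustrative.
import java.util.Collections;
import java.util.List;
import org.apache.lucene.classification.ClassificationResult;
import org.apache.lucene.util.BytesRef;

class TopClassesGuard {
  static List<ClassificationResult<BytesRef>> topClasses(
      List<ClassificationResult<BytesRef>> assignedClasses, int max) {
    Collections.sort(assignedClasses);  // ClassificationResult is Comparable (by score)
    // Cap the requested count by what was actually assigned, so subList cannot
    // go out of bounds when fewer classes exist than were asked for.
    return assignedClasses.subList(0, Math.min(max, assignedClasses.size()));
  }
}
{noformat}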

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8108) Field class should not let you analyze int values?

2017-12-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305498#comment-16305498
 ] 

Adrien Grand commented on LUCENE-8108:
--

+1 to be less lenient. If someone needs to do something like this, I'd rather 
the toString conversion be performed by the user before creating the Field 
instance.
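
For illustration, a minimal sketch of that caller-side conversion (the field 
name and value below are made up, and the FieldType settings are just one way 
to get a tokenized field):

{noformat}
// Sketch only: convert the int to a String explicitly before creating the Field,
// instead of relying on Field to do an implicit toString on an Integer value.
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.FieldType;
import org.apache.lucene.index.IndexOptions;

FieldType ft = new FieldType();
ft.setIndexOptions(IndexOptions.DOCS);
ft.setTokenized(true);   // the analysis chain will run on the value
ft.freeze();

int count = 42;          // hypothetical value
Document doc = new Document();
doc.add(new Field("count", Integer.toString(count), ft));
{noformat}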

> Field class should not let you analyze int values?
> --
>
> Key: LUCENE-8108
> URL: https://issues.apache.org/jira/browse/LUCENE-8108
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
> Attachments: LUCENE-8108.patch
>
>
> I stumbled on this by accident, by creating a {{Field}} instance with a 
> {{Integer}} value for its {{fieldsData}} and then setting {{tokenized = 
> true}} in its {{FieldType}}.
> If you do this then Lucene silently converts the int to a string and then 
> tokenizes it, e.g. applying synonyms, etc., if that's what your analysis 
> chain does.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Martin Gainty
Welcome Karl!


Martin
___


From: Adrien Grand 
Sent: Thursday, December 28, 2017 9:08 AM
To: Lucene Dev
Subject: Welcome Karl Wright to the PMC

I am pleased to announce that Karl Wright has accepted the PMC's invitation to 
join.

Welcome Karl!


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21161 - Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21161/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeLostTriggerRestoreState

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([DFA50ED2B3DDA65:260585B6B145CFB5]:0)
at 
java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:939)
at java.base/java.util.ArrayList$Itr.next(ArrayList.java:893)
at 
org.apache.solr.cloud.autoscaling.sim.SimSolrCloudTestCase.tearDown(SimSolrCloudTestCase.java:141)
at jdk.internal.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13637 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.sim.TestTriggerIntegration_DFA50ED2B3DDA65-001/init-core-data-001
   [junit4]   2> 2435576 INFO  
(SUITE-TestTriggerIntegration-seed#[DFA50ED2B3DDA65]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 2435576 INFO  

[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305497#comment-16305497
 ] 

Adrien Grand commented on LUCENE-8110:
--

Should we add a test as well?

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11793) reduce code duplication w.r.t. RestTestHarness(es)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305493#comment-16305493
 ] 

ASF subversion and git services commented on SOLR-11793:


Commit 06438bab00645309598b976254da234594f73a50 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=06438ba ]

SOLR-11793: Reduce code duplication w.r.t. RestTestHarness(es).


> reduce code duplication w.r.t. RestTestHarness(es)
> --
>
> Key: SOLR-11793
> URL: https://issues.apache.org/jira/browse/SOLR-11793
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11793.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305492#comment-16305492
 ] 

ASF subversion and git services commented on LUCENE-8110:
-

Commit c73edb869a96bd6869da200944deb6d078cba283 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c73edb8 ]

LUCENE-8110: Fix potential IndexOutOfBoundsException in 
*Classifier.getClasses(?,int).


> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11793) reduce code duplication w.r.t. RestTestHarness(es)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305487#comment-16305487
 ] 

ASF subversion and git services commented on SOLR-11793:


Commit 287062df37b6e4a1f158fc53418baa9ae40eeeda in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=287062d ]

SOLR-11793: Reduce code duplication w.r.t. RestTestHarness(es).


> reduce code duplication w.r.t. RestTestHarness(es)
> --
>
> Key: SOLR-11793
> URL: https://issues.apache.org/jira/browse/SOLR-11793
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11793.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305486#comment-16305486
 ] 

ASF subversion and git services commented on LUCENE-8110:
-

Commit af41d02eae6a58fd450553f9a09c9325ddf6e0ab in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af41d02 ]

LUCENE-8110: Fix potential IndexOutOfBoundsException in 
*Classifier.getClasses(?,int).


> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Karl Wright to the PMC

2017-12-28 Thread Yonik Seeley
Congrats Karl!

-Yonik


On Thu, Dec 28, 2017 at 9:08 AM, Adrien Grand  wrote:
> I am pleased to announce that Karl Wright has accepted the PMC's invitation
> to join.
>
> Welcome Karl!

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Welcome Karl Wright to the PMC

2017-12-28 Thread Adrien Grand
I am pleased to announce that Karl Wright has accepted the PMC's invitation
to join.

Welcome Karl!


[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 1066 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1066/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  org.apache.solr.cloud.AddReplicaTest.test

Error Message:
core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"https://127.0.0.1:36321/solr","node_name":"127.0.0.1:36321_solr","state":"active","type":"NRT"}

Stack Trace:
java.lang.AssertionError: 
core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"https://127.0.0.1:36321/solr","node_name":"127.0.0.1:36321_solr","state":"active","type":"NRT"}
at 
__randomizedtesting.SeedInfo.seed([A0CEF9407F5F21:88F4F123EE8332D9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.AddReplicaTest.test(AddReplicaTest.java:84)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.AliasIntegrationTest.test

Error Message:
Error from server 

[JENKINS] Lucene-Solr-Tests-master - Build # 2242 - Still Failing

2017-12-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2242/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.LeaderElectionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.LeaderElectionTest:
   1) Thread[id=1249, name=zkConnectionManagerCallback-325-thread-1, state=WAITING, group=TGRP-LeaderElectionTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.LeaderElectionTest: 
   1) Thread[id=1249, name=zkConnectionManagerCallback-325-thread-1, 
state=WAITING, group=TGRP-LeaderElectionTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([6B76F5E1536ABDFC]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.LeaderElectionTest

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=1249, name=zkConnectionManagerCallback-325-thread-1, state=WAITING, group=TGRP-LeaderElectionTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1249, name=zkConnectionManagerCallback-325-thread-1, 
state=WAITING, group=TGRP-LeaderElectionTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([6B76F5E1536ABDFC]:0)


FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
expected:<8> but was:<9>

Stack Trace:
java.lang.AssertionError: expected:<8> but was:<9>
at 
__randomizedtesting.SeedInfo.seed([6B76F5E1536ABDFC:68A511CE92242FB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:268)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4349 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4349/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:52595 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:52595 within 3 ms
at 
__randomizedtesting.SeedInfo.seed([7DBFAD331ADABDBE:F5EB92E9B426D046]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:119)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:114)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:101)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.printLayout(AbstractDistribZkTestBase.java:323)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.distribTearDown(AbstractFullDistribZkTestBase.java:1553)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.distribTearDown(PeerSyncReplicationTest.java:86)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:970)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:52595 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:232)
at 

[jira] [Updated] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-8110:

Attachment: LUCENE-8110.patch

> fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)
> 
>
> Key: LUCENE-8110
> URL: https://issues.apache.org/jira/browse/LUCENE-8110
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8110.patch
>
>
> KNearestNeighborDocumentClassifier already has the [one-line 
> fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
>  via SOLR-8871 and this ticket here is to add the same to the remaining 
> Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8110) fix potential IndexOutOfBoundsException in *Classifier.getClasses(?,int)

2017-12-28 Thread Christine Poerschke (JIRA)
Christine Poerschke created LUCENE-8110:
---

 Summary: fix potential IndexOutOfBoundsException in 
*Classifier.getClasses(?,int)
 Key: LUCENE-8110
 URL: https://issues.apache.org/jira/browse/LUCENE-8110
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


KNearestNeighborDocumentClassifier already has the [one-line 
fix|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.2.0/lucene/classification/src/java/org/apache/lucene/classification/document/KNearestNeighborDocumentClassifier.java#L102]
 via SOLR-8871 and this ticket here is to add the same to the remaining 
Classifier classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8107) GeoExactCircleTest.RandomPointBearingCardinalTest failures

2017-12-28 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16305330#comment-16305330
 ] 

Karl Wright commented on LUCENE-8107:
-

I'll wait for your resolution, then.  Thanks!!


> GeoExactCircleTest.RandomPointBearingCardinalTest failures
> --
>
> Key: LUCENE-8107
> URL: https://issues.apache.org/jira/browse/LUCENE-8107
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Karl Wright
>
> I hit some reproducing failures over the weekend:
> {noformat}
> ant test  -Dtestcase=GeoExactCircleTest 
> -Dtests.method=RandomPointBearingCardinalTest -Dtests.seed=30B96A8700F32D8F 
> -Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=Turkey 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8
> [junit4] FAILURE 0.01s J0 | GeoExactCircleTest.RandomPointBearingCardinalTest 
> {seed=[30B96A8700F32D8F:475E54A204015A1C]} <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> PlanetModel(ab=1.7929995623606008 c=1.1596251282) 0.022823921875714692 
> 2.6270976802297388
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([30B96A8700F32D8F:475E54A204015A1C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoExactCircleTest.checkBearingPoint(GeoExactCircleTest.java:117)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoExactCircleTest.RandomPointBearingCardinalTest(GeoExactCircleTest.java:109)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=478, maxMBSortInHeap=5.961909961194244, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@179f38fb),
>  locale=ar-SD, timezone=Turkey
>[junit4]   2> NOTE: Linux 4.4.0-1043-aws amd64/Oracle Corporation 
> 1.8.0_151 (64-bit)/cpus=4,threads=1,free=269120312,total=319291392
>[junit4]   2> NOTE: All tests run in this JVM: [XYZSolidTest, 
> TestGeo3DDocValues, GeoExactCircleTest]
> ant test  -Dtestcase=GeoExactCircleTest 
> -Dtests.method=RandomPointBearingCardinalTest -Dtests.seed=30B96A8700F32D8F 
> -Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=Turkey 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8
>[junit4] FAILURE 0.02s J2 | 
> GeoExactCircleTest.RandomPointBearingCardinalTest 
> {seed=[8C1E53DFCE9646F5:8DCCE74ADEC6D907]} <<<
>[junit4]> Throwable #1: java.lang.AssertionError: 
> PlanetModel(ab=1.0366200558773102 c=0.6736249299915238) 0.0011591580078804675 
> 2.649410126114567
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8C1E53DFCE9646F5:8DCCE74ADEC6D907]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoExactCircleTest.checkBearingPoint(GeoExactCircleTest.java:117)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoExactCircleTest.RandomPointBearingCardinalTest(GeoExactCircleTest.java:109)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=1185, maxMBSortInHeap=5.925083864677718, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@6a7f1f9),
>  locale=en-AU, timezone=CNT
>[junit4]   2> NOTE: Linux 2.6.32-696.6.3.el6.x86_64 amd64/Oracle 
> Corporation 1.8.0_151 (64-bit)/cpus=4,threads=1,free=207196520,total=251658240
>[junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DDocValues, 
> GeoCircleTest, GeoExactCircleTest]
>[junit4] Completed [11/16 (1!)] on J2 in 1.60s, 311 tests, 1 failure <<< 
> FAILURES!
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 365 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/365/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.memory.TestFSTPostingsFormat

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001\testPostingsFormat-004:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001\testPostingsFormat-004

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001\testPostingsFormat-004:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001\testPostingsFormat-004
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J1\temp\lucene.codecs.memory.TestFSTPostingsFormat_ED988CBA3269CD79-001

at __randomizedtesting.SeedInfo.seed([ED988CBA3269CD79]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001\extra0:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001\extra0

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C287CE12F685DECB-001\testLongs-001
   

[jira] [Updated] (LUCENE-8109) Propagate minimum competitive scores in BooleanQuery

2017-12-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8109:
-
Attachment: LUCENE-8109.patch

Here is a patch. For now it tries to keep things simple by only propagating 
information about the minimum competitive score to the maximum scoring clause.
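
To sketch the idea (this is not the attached patch, and it assumes the 
Scorer#getMaxScore / setMinCompetitiveScore methods of recent Lucene): find the 
clause with the largest maximum score and forward the minimum competitive score 
to that clause only, leaving the other clauses untouched.

{noformat}
// Rough sketch only, not the attached patch.
import java.io.IOException;
import java.util.List;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Scorer;

static void propagateMinCompetitiveScore(List<Scorer> clauses, float minScore)
    throws IOException {
  Scorer best = null;
  float bestMax = Float.NEGATIVE_INFINITY;
  for (Scorer clause : clauses) {
    float max = clause.getMaxScore(DocIdSetIterator.NO_MORE_DOCS);
    if (max > bestMax) {
      bestMax = max;
      best = clause;
    }
  }
  if (best != null) {
    // Only the maximum scoring clause learns about the minimum competitive score.
    best.setMinCompetitiveScore(minScore);
  }
}
{noformat}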

> Propagate minimum competitive scores in BooleanQuery
> 
>
> Key: LUCENE-8109
> URL: https://issues.apache.org/jira/browse/LUCENE-8109
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8109.patch
>
>
> Propagating information about the minimum competitive score means that we 
> will also see speedups for conjunctions of disjunctions, or disjunctions of 
> phrase queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8109) Propagate minimum competitive scores in BooleanQuery

2017-12-28 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8109:


 Summary: Propagate minimum competitive scores in BooleanQuery
 Key: LUCENE-8109
 URL: https://issues.apache.org/jira/browse/LUCENE-8109
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor


Propagating information about the minimum competitive score means that we will 
also see speedups for conjunctions of disjunctions, or disjunctions of phrase 
queries.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 1065 - Still Unstable!

2017-12-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1065/
Java: 64bit/jdk-10-ea+32 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeWithMultipleReplicasLost

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([26855E7AD51C907A:1645BFF85D6E7126]:0)
at 
java.base/java.util.ArrayList$Itr.checkForComodification(ArrayList.java:937)
at java.base/java.util.ArrayList$Itr.next(ArrayList.java:891)
at 
org.apache.solr.cloud.autoscaling.sim.SimSolrCloudTestCase.tearDown(SimSolrCloudTestCase.java:141)
at jdk.internal.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1723 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/core/test/temp/junit4-J0-20171228_085105_63513482622964970616281.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
