[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 20886 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20886/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
http://127.0.0.1:39573/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:39573/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([89A340612E75BF3B:3D92D889CD9CC917]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
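
For context, the client-side path in this trace (UpdateRequest.commit -> 
CloudSolrClient.requestWithRetryOnStaleState -> LBHttpSolrClient -> HttpSolrClient) 
is the ordinary SolrJ update/commit path. Below is a minimal sketch of that kind of 
call, not the test itself; the ZooKeeper address and collection name are assumptions 
for illustration only:

    // Minimal sketch of the SolrJ call path seen in the trace above (SolrJ 7.x-era API).
    // The zkHost and collection name are assumed, not taken from the report.
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.UpdateRequest;
    import org.apache.solr.common.SolrInputDocument;

    public class CommitPathSketch {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder()
            .withZkHost("localhost:9983")   // assumed ZooKeeper address
            .build()) {
          SolrInputDocument doc = new SolrInputDocument();
          doc.addField("id", "1");

          UpdateRequest req = new UpdateRequest();
          req.add(doc);
          // commit() goes through CloudSolrClient.request and
          // requestWithRetryOnStaleState, the frames shown in the trace above.
          req.commit(client, "stale_state_test_col");
        }
      }
    }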

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 301 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/301/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
http://127.0.0.1:58106/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:58106/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([93F259C397AFA25A:27C3C12B7446D476]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 792 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/792/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

16 tests failed.
FAILED:  
org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete

Error Message:
Error from server at 
http://127.0.0.1:39151/solr/testcollection_shard1_replica_n3: Expected mime 
type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n3/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:39151/solr/testcollection_shard1_replica_n3: 
Expected mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/testcollection_shard1_replica_n3/update. Reason:
Can not find: /solr/testcollection_shard1_replica_n3/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([B6EC569193A74BC6:1516F834144FA163]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:549)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete(TestCollectionsAPIViaSolrCloudCluster.java:167)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11624) _default configset overwrites a configset if collection.configName isn't specified even if a configset of the same name already exists.

2017-11-10 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248351#comment-16248351
 ] 

Ishan Chattopadhyaya commented on SOLR-11624:
-

Here's the reason behind putting in that change:

The idea behind that whole exercise (the default configset) was that the user 
should not need to be aware of the concept of configsets. With that in mind, 
here's a typical situation a user (who isn't aware of configsets) would face 
if the current behavior were not in place: a) the user creates a collection 
without specifying a configset, b) makes some changes to the schema/configs 
using the APIs, c) is unhappy with how things turned out, so deletes that 
collection and re-creates it, d) instead of starting from scratch (the default 
configset), as hoped, the user ends up at the same place where they left off 
before deleting the collection.

Overwriting any existing configset therefore gives a new user the least 
surprise. Those who understand configsets, on the other hand, can always 
specify a configset name while creating their collections. IMHO, we should keep 
the current behavior. But, as I mentioned before, if there's consensus on 
switching back to the previous behavior, I can work on bringing it back.
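
For readers less familiar with configsets, here is a minimal SolrJ sketch of the 
two paths described above: creating a collection without a configset name versus 
with an explicit one. The ZooKeeper address, collection names, shard/replica 
counts, and the pre-existing "wiki" configset are assumptions for illustration, 
not part of this issue:

    // Sketch only (SolrJ 7.x-era API); all names and counts here are illustrative.
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.request.CollectionAdminRequest;

    public class ConfigSetChoiceSketch {
      public static void main(String[] args) throws Exception {
        try (CloudSolrClient client = new CloudSolrClient.Builder()
            .withZkHost("localhost:9983")   // assumed ZooKeeper address
            .build()) {

          // No collection.configName: per the behavior discussed above, _default is
          // copied under the collection's name and can overwrite an existing
          // configset called "wiki".
          CollectionAdminRequest.createCollection("wiki", 1, 1).process(client);

          // Explicit configset name: the existing "wiki" configset is used as-is.
          CollectionAdminRequest.createCollection("wiki2", "wiki", 1, 1).process(client);
        }
      }
    }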

> _default configset overwrites a configset if collection.configName isn't 
> specified even if a configset of the same name already exists.
> 
>
> Key: SOLR-11624
> URL: https://issues.apache.org/jira/browse/SOLR-11624
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11624.patch
>
>
> Looks like a problem that crept in when we changed the _default configset 
> stuff.
> setup:
> upload a configset named "wiki"
> collections?action=CREATE&name=wiki&...
> My custom configset "wiki" gets overwritten by _default and then used by the 
> "wiki" collection.
> Assigning to myself only because it really needs to be fixed IMO and I don't 
> want to lose track of it. Anyone else please feel free to take it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4285 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4285/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
http://127.0.0.1:55245/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:55245/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([A30A4211E7266542:173BDAF904CF136E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 20885 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20885/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at __randomizedtesting.SeedInfo.seed([C7150671B82F8B1C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.setupCluster(CloudSolrClientTest.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.LeaderElectionIntegrationTest.testSimpleSliceLeaderElection

Error Message:
Error from server at https://127.0.0.1:35615/solr: Timed out waiting to see all 
replicas: [collection2_shard1_replica_n9] in cluster state.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35615/solr: Timed out waiting to see all 
replicas: [collection2_shard1_replica_n9] in cluster state.
at 
__randomizedtesting.SeedInfo.seed([DA31CFF668403EA6:840502B82B3FA241]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.LeaderElectionIntegrationTest.createCollection(LeaderElectionIntegrationTest.java:57)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.1) - Build # 301 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/301/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_AE9E8C0D5CF1CC45-001\2.4.0-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_AE9E8C0D5CF1CC45-001\2.4.0-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_AE9E8C0D5CF1CC45-001\2.4.0-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_AE9E8C0D5CF1CC45-001\2.4.0-cfs-001

at __randomizedtesting.SeedInfo.seed([AE9E8C0D5CF1CC45]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.grouping.TestGrouping

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001\segments_1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001\segments_1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001\segments_1:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001\index-SimpleFSDirectory-001\segments_1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\grouping\test\J0\temp\lucene.search.grouping.TestGrouping_76C3D761FD533046-001:
 java.nio.file.DirectoryNotEmptyException: 

[GitHub] lucene-solr pull request #272: Correct inconsistency on plugin support

2017-11-10 Thread jrpool
GitHub user jrpool opened a pull request:

https://github.com/apache/lucene-solr/pull/272

Correct inconsistency on plugin support

The deleted sentence contradicts the sentence near the top of this document 
that says “All authentication and authorization plugins can work with Solr 
whether they are running in SolrCloud mode or standalone mode.”

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jrpool/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/272.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #272


commit 066d889799087765943efe044e47cce7efbe2994
Author: Jonathan Pool 
Date:   2017-11-11T03:40:57Z

Correct inconsistency on plugin support

The deleted sentence contradicts the sentence near the top of this document 
that says “All authentication and authorization plugins can work with Solr 
whether they are running in SolrCloud mode or standalone mode.”




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 233 - Still Unstable

2017-11-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/233/

10 tests failed.
FAILED:  org.apache.solr.cloud.SimpleCollectionCreateDeleteTest.test

Error Message:
Error from server at https://127.0.0.1:41194/os/nd: KeeperErrorCode = Session 
expired for /overseer/collection-queue-work/qnr-

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41194/os/nd: KeeperErrorCode = Session expired 
for /overseer/collection-queue-work/qnr-
at 
__randomizedtesting.SeedInfo.seed([FDD3000FC1707560:75873FD56F8C1898]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+29) - Build # 791 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/791/
Java: 64bit/jdk-10-ea+29 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores:
   1) Thread[id=4524, name=searcherExecutor-1698-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
        at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
        at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
        at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base@10-ea/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=4524, name=searcherExecutor-1698-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
at 
java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base@10-ea/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([234990C65389C752]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=4524, name=searcherExecutor-1698-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
        at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
        at java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
        at java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
        at java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
        at java.base@10-ea/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4524, name=searcherExecutor-1698-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at java.base@10-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@10-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2074)
at 
java.base@10-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
at 
java.base@10-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base@10-ea/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([234990C65389C752]:0)
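
As a side note on reading leak reports like this one: a worker thread parked in 
LinkedBlockingQueue.take, as above, is the usual signature of an ExecutorService 
that was never shut down. A generic, self-contained Java illustration (not Solr 
code) of the remedy such a report points at:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ExecutorShutdownSketch {
      public static void main(String[] args) throws Exception {
        ExecutorService searcherExecutor = Executors.newSingleThreadExecutor();
        searcherExecutor.submit(() -> System.out.println("work"));

        // Without these two lines the pool's worker thread stays parked in
        // LinkedBlockingQueue.take() waiting for more tasks, which is exactly the
        // WAITING state reported for searcherExecutor-1698-thread-1 above.
        searcherExecutor.shutdown();
        searcherExecutor.awaitTermination(10, TimeUnit.SECONDS);
      }
    }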


FAILED:  org.apache.solr.core.TestLazyCores.testNoCommit

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([234990C65389C752:FC29311798AEA4F7]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:901)
at org.apache.solr.core.TestLazyCores.check10(TestLazyCores.java:847)
at 
org.apache.solr.core.TestLazyCores.testNoCommit(TestLazyCores.java:829)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7013 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7013/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMmapDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMmapDirectory_FAAEF99E7F2A692D-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMmapDirectory_FAAEF99E7F2A692D-001\testThreadSafety-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMmapDirectory_FAAEF99E7F2A692D-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMmapDirectory_FAAEF99E7F2A692D-001\testThreadSafety-001

at __randomizedtesting.SeedInfo.seed([FAAEF99E7F2A692D]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.CoreAdminHandlerTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001\instProp:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001\instProp

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001\instProp:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001\instProp
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.handler.admin.CoreAdminHandlerTest_620073B2395B6E90-001\tempDir-001

at __randomizedtesting.SeedInfo.seed([620073B2395B6E90]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20884 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20884/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
http://127.0.0.1:37839/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:37839/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([D90B00789429CD20:6D3A989077C0BB0C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 790 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/790/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.ltr.TestLTROnSolrCloud

Error Message:
77 threads leaked from SUITE scope at org.apache.solr.ltr.TestLTROnSolrCloud:
   1) Thread[id=84, name=org.eclipse.jetty.server.session.HashSessionManager@1271994aTimer, 
state=TIMED_WAITING, group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
        at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104)
        at java.base@9.0.1/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1131)
        at java.base@9.0.1/java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:848)
        at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1092)
        at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
        at java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
        at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
   2) Thread[id=129, name=Thread-38, state=WAITING, group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/java.lang.Object.wait(Native Method)
        at java.base@9.0.1/java.lang.Object.wait(Object.java:516)
        at app//org.apache.solr.core.CloserThread.run(CoreContainer.java:1717)
   3) Thread[id=112, name=jetty-launcher-5-thread-3-EventThread, state=WAITING, 
group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
        at java.base@9.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
        at java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2062)
        at java.base@9.0.1/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
        at app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501)
   4) Thread[id=139, name=TEST-TestLTROnSolrCloud.testSimpleQuery-seed#[6AB9BC2451FAAC83]-SendThread(127.0.0.1:45175), 
state=RUNNABLE, group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:265)
        at java.base@9.0.1/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:92)
        at java.base@9.0.1/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at app//org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:349)
        at app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
   5) Thread[id=122, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/java.lang.Thread.sleep(Native Method)
        at app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
        at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
   6) Thread[id=157, name=qtp939890676-157, state=RUNNABLE, group=TGRP-TestLTROnSolrCloud]
        at java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
        at java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:265)
        at java.base@9.0.1/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:92)
        at java.base@9.0.1/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
        at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
        at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)
        at app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
        at app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
        at app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
        at app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
   7) Thread[id=179, 

[jira] [Created] (SOLR-11636) Add CSRF protection to Admin UI

2017-11-10 Thread Gradle Build (JIRA)
Gradle Build created SOLR-11636:
---

 Summary: Add CSRF protection to Admin UI
 Key: SOLR-11636
 URL: https://issues.apache.org/jira/browse/SOLR-11636
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI, Server
Reporter: Gradle Build
Priority: Minor


Add CSRF protection to the Admin UI as an option for users who are deploying 
Solr in an environment where maintaining information security is a priority. I 
understand that adding CSRF protection to a REST service is not a trivial task, and 
this is a nice-to-have feature for those who work in the aforementioned 
environment. Thanks.
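
Purely as an illustration of the kind of change being asked for (this is not an 
existing Solr API; the class name, header name, and session attribute below are 
made up), a double-submit token check can be implemented as a servlet filter in 
front of the admin endpoints:

{code:java}
// Hypothetical sketch of a CSRF filter; CsrfFilter and X-Solr-CSRF-Token are
// illustrative names, not an existing Solr API.
import java.io.IOException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CsrfFilter implements Filter {
  private static final String TOKEN_ATTR = "csrfToken";
  private static final String TOKEN_HEADER = "X-Solr-CSRF-Token";
  private final SecureRandom random = new SecureRandom();

  @Override public void init(FilterConfig config) { }
  @Override public void destroy() { }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
      throws IOException, ServletException {
    HttpServletRequest request = (HttpServletRequest) req;
    HttpServletResponse response = (HttpServletResponse) res;

    // Issue a per-session token on first contact and expose it to the UI.
    String token = (String) request.getSession(true).getAttribute(TOKEN_ATTR);
    if (token == null) {
      byte[] bytes = new byte[32];
      random.nextBytes(bytes);
      token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
      request.getSession().setAttribute(TOKEN_ATTR, token);
    }
    response.setHeader(TOKEN_HEADER, token);

    // Reject state-changing requests that do not echo the token back.
    boolean mutating = !"GET".equals(request.getMethod()) && !"HEAD".equals(request.getMethod());
    if (mutating && !token.equals(request.getHeader(TOKEN_HEADER))) {
      response.sendError(HttpServletResponse.SC_FORBIDDEN, "Missing or invalid CSRF token");
      return;
    }
    chain.doFilter(req, res);
  }
}
{code}

The UI would read the token from the response header and send it back on every 
state-changing request; anything without it gets a 403.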




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1526 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1526/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([66EDB9C13A4956EE:51764DDF02858B4A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:118)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:302)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:319)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2168 - Failure

2017-11-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2168/

No tests ran.

Build Log:
[...truncated 12088 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.AutoScalingHandlerTest_D8117D99600FAF63-001/init-core-data-001
   [junit4]   2> 2088450 WARN  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=77 numCloses=77
   [junit4]   2> 2088450 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 2088451 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2088451 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 2088452 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.AutoScalingHandlerTest_D8117D99600FAF63-001/tempDir-001
   [junit4]   2> 2088452 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2088462 INFO  (Thread-1365) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2088462 INFO  (Thread-1365) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2088499 ERROR (Thread-1365) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 2088583 INFO  
(SUITE-AutoScalingHandlerTest-seed#[D8117D99600FAF63]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:45466
   [junit4]   2> 2088722 INFO  (jetty-launcher-2377-thread-1) [] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 2088775 INFO  (jetty-launcher-2377-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@25bb820d{/solr,null,AVAILABLE}
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@73616def{HTTP/1.1,[http/1.1]}{127.0.0.1:35110}
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.e.j.s.Server Started @2104118ms
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=35110}
   [junit4]   2> 2088776 ERROR (jetty-launcher-2377-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 8.0.0
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 2088776 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2017-11-10T21:01:00.930Z
   [junit4]   2> 2088812 INFO  (jetty-launcher-2377-thread-2) [] 
o.e.j.s.Server jetty-9.3.20.v20170531
   [junit4]   2> 200 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.s.SolrDispatchFilter solr.xml found in ZooKeeper. Loading...
   [junit4]   2> 2088919 INFO  (jetty-launcher-2377-thread-1) [] 
o.a.s.c.ZkContainer Zookeeper client=127.0.0.1:45466/solr
   [junit4]   2> 2088953 INFO  (jetty-launcher-2377-thread-2) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@69d7a603{/solr,null,AVAILABLE}
   [junit4]   2> 2089027 INFO  (jetty-launcher-2377-thread-2) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@185a2297{HTTP/1.1,[http/1.1]}{127.0.0.1:33769}
   [junit4]   2> 2089027 INFO  (jetty-launcher-2377-thread-2) [] 
o.e.j.s.Server Started @2104369ms
   [junit4]   2> 2089027 INFO  (jetty-launcher-2377-thread-2) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=33769}
   [junit4]   2> 2089028 ERROR (jetty-launcher-2377-thread-2) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2089028 INFO  (jetty-launcher-2377-thread-2) [] 

[jira] [Resolved] (SOLR-4337) typo in solrconfig.xml; EditorialMarkerFactory should be ElevatedMarkerFactory

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-4337.
-
   Resolution: Won't Fix
Fix Version/s: (was: 4.0)

From what I understand, the EditorialMarkerFactory is automatically registered 
when called - it's definitely there in the code, so I believe the docs are 
correct.
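
As a minimal usage sketch (assuming a collection with the QueryElevationComponent 
configured; the collection name, query, and URL below are placeholders, not taken 
from this issue), the marker is requested through the fl parameter rather than 
through any explicit registration:

{code:java}
// Hedged SolrJ sketch: asking for the [elevated] doc-transformer marker via fl.
// "techproducts" and the query string are placeholders.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ElevatedMarkerExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("ipod");
      q.set("enableElevation", "true");
      q.setFields("id", "score", "[elevated]");   // marker field added by the transformer
      QueryResponse rsp = client.query(q);
      rsp.getResults().forEach(doc ->
          System.out.println(doc.getFieldValue("id") + " elevated=" + doc.getFieldValue("[elevated]")));
    }
  }
}
{code}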

> typo in solrconfig.xml; EditorialMarkerFactory should be ElevatedMarkerFactory
> --
>
> Key: SOLR-4337
> URL: https://issues.apache.org/jira/browse/SOLR-4337
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
> Environment: solr 4.0 final on tomcat 7.0.34
>Reporter: David Morana
>Priority: Minor
>
> Sorry, I don't know how to classify this; it's either a bug or an improvement.
> The EditorialMarkerFactory doesn't exist; it should be the ElevatedMarkerFactory.
> See 
> [SOLR-4334|https://issues.apache.org/jira/browse/SOLR-4331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=13560701#comment-13560701]
>  for more info.
> So the documentation needs to be fixed in the solr wiki (doc transformers) 
> and in the default solrconfig.xml file like so:
> {noformat}
> - <transformer name="elevated" class="org.apache.solr.response.transform.EditorialMarkerFactory" />
> + <transformer name="elevated" class="org.apache.solr.response.transform.ElevatedMarkerFactory" />
> {noformat}
> Thank you,



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20883 - Still unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20883/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.StressHdfsTest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([7E86C44144CD0D9A]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.setupClass(StressHdfsTest.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestHdfsBackupRestoreCore

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([7E86C44144CD0D9A]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1207)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:840)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:746)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:616)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:105)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:63)
at 
org.apache.solr.handler.TestHdfsBackupRestoreCore.setupClass(TestHdfsBackupRestoreCore.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 

[jira] [Commented] (SOLR-4323) Unterminated lst element for example freq spellchecker

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248112#comment-16248112
 ] 

ASF subversion and git services commented on SOLR-4323:
---

Commit 6b5fbd3265e6819469e1b70a68908767fc14dd87 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6b5fbd3 ]

SOLR-4323: fix unterminated lst element in techproducts example solrconfig


> Unterminated lst element for example freq spellchecker
> --
>
> Key: SOLR-4323
> URL: https://issues.apache.org/jira/browse/SOLR-4323
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.0
>Reporter: Jack Krupansky
> Fix For: 7.2, master (8.0)
>
>
> If I uncomment the "freq" spellchecker in the Solr 4.0 example 
> solrconfig.xml, I get a nasty XML load error because it has an unterminated 
> lst element.
> The exception:
> {code}
> Jan 19, 2013 11:56:07 PM org.apache.solr.common.SolrException log
> SEVERE: Exception during parsing file: 
> solrconfig.xml:org.xml.sax.SAXParseException; systemId: 
> solrres:/solrconfig.xml; lineNumber: 1267; columnNumber: 5; The element type 
> "lst" must be terminated by the matching end-tag "".
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown 
> Source)
> at org.apache.solr.core.Config.<init>(Config.java:121)
> at org.apache.solr.core.Config.<init>(Config.java:73)
> at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:117)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:776)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:534)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
> at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
> {code}
> From Solr 4.0 example solrconfig.xml:
> {code}
> 
> {code}
> Which should be:
> {code}
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4323) Unterminated lst element for example freq spellchecker

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-4323.
-
   Resolution: Fixed
 Assignee: Cassandra Targett
Fix Version/s: master (8.0)
   7.2

> Unterminated lst element for example freq spellchecker
> --
>
> Key: SOLR-4323
> URL: https://issues.apache.org/jira/browse/SOLR-4323
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.0
>Reporter: Jack Krupansky
>Assignee: Cassandra Targett
> Fix For: 7.2, master (8.0)
>
>
> If I uncomment the "freq" spellchecker in the Solr 4.0 example 
> solrconfig.xml, I get a nasty XML load error because it has an unterminated 
> lst element.
> The exception:
> {code}
> Jan 19, 2013 11:56:07 PM org.apache.solr.common.SolrException log
> SEVERE: Exception during parsing file: 
> solrconfig.xml:org.xml.sax.SAXParseException; systemId: 
> solrres:/solrconfig.xml; lineNumber: 1267; columnNumber: 5; The element type 
> "lst" must be terminated by the matching end-tag "".
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown 
> Source)
> at org.apache.solr.core.Config.<init>(Config.java:121)
> at org.apache.solr.core.Config.<init>(Config.java:73)
> at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:117)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:776)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:534)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
> at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
> {code}
> From Solr 4.0 example solrconfig.xml:
> {code}
> 
> {code}
> Which should be:
> {code}
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4323) Unterminated lst element for example freq spellchecker

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248110#comment-16248110
 ] 

ASF subversion and git services commented on SOLR-4323:
---

Commit eec25eacceec39a6e5afad8a7d03339a2d89415c in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=eec25ea ]

SOLR-4323: fix unterminated lst element in techproducts example solrconfig


> Unterminated lst element for example freq spellchecker
> --
>
> Key: SOLR-4323
> URL: https://issues.apache.org/jira/browse/SOLR-4323
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.0
>Reporter: Jack Krupansky
>
> If I uncomment the "freq" spellchecker in the Solr 4.0 example 
> solrconfig.xml, I get a nasty XML load error because it has an unterminated 
> lst element.
> The exception:
> {code}
> Jan 19, 2013 11:56:07 PM org.apache.solr.common.SolrException log
> SEVERE: Exception during parsing file: 
> solrconfig.xml:org.xml.sax.SAXParseException; systemId: 
> solrres:/solrconfig.xml; lineNumber: 1267; columnNumber: 5; The element type 
> "lst" must be terminated by the matching end-tag "".
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown
>  Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(Unknown 
> Source)
> at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(Unknown 
> Source)
> at 
> com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(Unknown 
> Source)
> at org.apache.solr.core.Config.<init>(Config.java:121)
> at org.apache.solr.core.Config.<init>(Config.java:73)
> at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:117)
> at org.apache.solr.core.CoreContainer.create(CoreContainer.java:776)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:534)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
> at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
> {code}
> From Solr 4.0 example solrconfig.xml:
> {code}
> 
> {code}
> Which should be:
> {code}
> 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248089#comment-16248089
 ] 

Mike Drob commented on LUCENE-8047:
---

Thanks for the clarification, Uwe!

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole Lucene codebase and found eight instances of 
> string comparison in which strings are compared using ==/!= instead of 
> equals().
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}
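
For readers following along, a minimal standalone demo (not Lucene code) of the 
distinction the report and the clarification above are about: == compares object 
identity and is only reliable against a deliberate sentinel object, while 
equals() compares character content.

{code:java}
// Standalone demo: reference equality vs. equals() for strings.
public class StringCompareDemo {
  public static void main(String[] args) {
    String literal = "acronym";
    String built = new String("acronym");        // same characters, different object

    System.out.println(literal == built);        // false: different references
    System.out.println(literal.equals(built));   // true: same character content

    // Identity comparison is only reliable against a known sentinel object,
    // e.g. a private static final String used as a flag and never re-created.
    final String SENTINEL = new String("flag");
    String type = SENTINEL;
    System.out.println(type == SENTINEL);        // true: same object on purpose
  }
}
{code}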



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4181) grouping response format inconsistent in distributed search when no docs match group.query

2017-11-10 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=13531410#comment-13531410
 ] 

Cassandra Targett edited comment on SOLR-4181 at 11/10/17 10:17 PM:


The way to demonstrate this bug (and/or that the bug is fixed) is to remove 
this line in TestDistributedGrouping...

{noformat}
handle.put(t1 + ":this_will_never_match", SKIP); // :TODO: SOLR-4181
{noformat}

...which currently tells the test framework to ignore that value when 
recursively comparing the response structures, since it's only in the single node 
response, and not the distributed response.


was (Author: hossman):
The way to demonstrate this bug (and/or that the bug is fixed) is to remove 
this line in TestDistributedGrouping...

{noformat}
handle.put(t1 + ":this_will_never_match", SKIP); // :TODO: SOLR-4181
{nofromat}

...which currently tells the test framework to ignore that value when 
recursively comparing the response structures, since it's only in the single node 
response, and not the distributed response.

> grouping response format inconsistent in distributed search when no docs 
> match group.query
> --
>
> Key: SOLR-4181
> URL: https://issues.apache.org/jira/browse/SOLR-4181
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> if a group.query does not match any documents, then in distributed search 
> (aka: solrcloud) that group.query will be completely excluded from the final 
> result.
> This is inconsistent with single node grouping behavior, where the 
> group.query will be included in the final result (in the order that it's part 
> of the request) with a numFound=0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4139) TestReplicationHandler fails if preventDoubleWrite is enabled in MockDirectoryWrapper

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-4139:

Component/s: Tests

> TestReplicationHandler fails if preventDoubleWrite is enabled in 
> MockDirectoryWrapper
> -
>
> Key: SOLR-4139
> URL: https://issues.apache.org/jira/browse/SOLR-4139
> Project: Solr
>  Issue Type: Bug
>  Components: Tests
>Reporter: Shai Erera
>
> Currently, if preventDoubleWrite is enabled in MockDirectoryWrapper, 
> TestReplicationHandler fails with this exception:
> {noformat}
> [junit4:junit4]   2>Caused by: java.io.IOException: file
> "index.properties" was already written to
> [junit4:junit4]   2>at
> org.apache.lucene.store.MockDirectoryWrapper.createOutput(MockDirectoryWrapper.java:411)
> [junit4:junit4]   2>at
> org.apache.solr.handler.SnapPuller.modifyIndexProps(SnapPuller.java:872)
> [junit4:junit4]   2>... 11 more
> {noformat}
> I disabled that setting in MockDirectoryFactory, but I suggest that someone 
> who understands the code better have a look at it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4107) Parse Error message lost from 3.5.0 to 3.6.1

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4107?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-4107:

Component/s: (was: search)
 clients - java

> Parse Error message lost from 3.5.0 to 3.6.1
> 
>
> Key: SOLR-4107
> URL: https://issues.apache.org/jira/browse/SOLR-4107
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 3.6.1
>Reporter: Pierre Gossé
>Priority: Minor
>
> QueryComponent.prepare builds a SolrException from the ParseException 
> encountered, using a constructor that forces the error message to null.
> in 3.5.0:
> public SolrException(ErrorCode code, Throwable th) {
>   super(th);
>   this.code=code.code;
>   logged=true;
> }
> in 3.6.1:
> public SolrException(ErrorCode code, Throwable th) {
>   this(code, null, th, (th instanceof SolrException) ? ((SolrException)th).logged : false);
> }
> calling:
> public SolrException(ErrorCode code, String msg, Throwable th, boolean alreadyLogged) {
>   super(msg,th);
>   this.code=code.code;
>   logged=alreadyLogged;
> }
> I don't think this is a desired behaviour, so I guessed this should be 
> corrected by changing line 93 to 
>   this(code, th.getMessage(), th, (th instanceof SolrException) ? ((SolrException)th).logged : false);
> I'll try to add a patch later today.
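
A quick way to see the reported difference (a hypothetical snippet, not part of 
the issue or the promised patch) is to wrap a cause with the two-argument 
constructor and inspect getMessage():

{code:java}
// Hypothetical check: shows how the 3.6.1 two-argument constructor quoted above
// drops the cause's message.
import org.apache.solr.common.SolrException;

public class LostMessageDemo {
  public static void main(String[] args) {
    Exception cause = new Exception("Cannot parse 'price:[10 TO'");
    SolrException e = new SolrException(SolrException.ErrorCode.BAD_REQUEST, cause);

    // With the 3.6.1 constructor shown above (msg forced to null) this prints "null";
    // with the proposed th.getMessage() change it would print the parse error text.
    System.out.println(e.getMessage());
    System.out.println(e.getCause().getMessage()); // the original text is still on the cause
  }
}
{code}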



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-3934) Error in Pom generation

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-3934.
-
   Resolution: Fixed
Fix Version/s: 5.0

I don't know when this was fixed, but it's correct now.

> Error in Pom generation
> ---
>
> Key: SOLR-3934
> URL: https://issues.apache.org/jira/browse/SOLR-3934
> Project: Solr
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 4.1
> Environment: ant
>Reporter: Stephane Gamard
>  Labels: maven
> Fix For: 5.0
>
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Generating maven's pom with ant produces the wrong parent element for 
> lucene/core/src/java. 
> Produces:
> {code:xml}
>   <parent>
>     <groupId>org.apache.lucene</groupId>
>     <artifactId>lucene-parent</artifactId>
>     <version>4.1-SNAPSHOT</version>
>     <relativePath>../pom.xml</relativePath>
>   </parent>
> {code}
> Should be:
> {code:xml}
>   <parent>
>     <groupId>org.apache.lucene</groupId>
>     <artifactId>lucene-parent</artifactId>
>     <version>4.1-SNAPSHOT</version>
>     <relativePath>../../../pom.xml</relativePath>
>   </parent>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 300 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/300/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([776F6FCEC1247424:223F875C6DDDBBD4]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:850)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[jira] [Reopened] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-11635:
---

Needs more refinement; the config changes are overkill.

> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> There are also a couple of other improvements, in particular a caution about why 
> buffering should be disabled most of the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3687) Implement tests that exercise and assert that the example features work properly

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-3687:

Component/s: Tests

> Implement tests that exercise and assert that the example features work 
> properly
> 
>
> Key: SOLR-3687
> URL: https://issues.apache.org/jira/browse/SOLR-3687
> Project: Solr
>  Issue Type: Task
>  Components: Tests
>Reporter: Erik Hatcher
>
> The example configuration that Solr ships with wires in a number of contrib 
> components and has other bells and whistles like the /browse search view.
> Tests should be created that assert that at least the basics, such as all the 
> steps of our official tutorial and the /browse view, work as expected, 
> exactly as integrated under example/ (or whatever we refactor it to, as 
> has been discussed).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 789 - Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/789/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at http://127.0.0.1:35979/solr/stale_state_test_col_shard1_replica_n1: 
Expected mime type application/octet-stream but got text/html. Error 404: HTTP ERROR 404. 
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update (Powered by Jetty:// 9.3.20.v20170531)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://127.0.0.1:35979/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html.

Error 404
HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update.
Reason: Can not find: /solr/stale_state_test_col_shard1_replica_n1/update
Powered by Jetty:// 9.3.20.v20170531

at 
__randomizedtesting.SeedInfo.seed([B677B775260CECA:BF56E39FB189B8E6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247995#comment-16247995
 ] 

David Smiley commented on LUCENE-8042:
--

Really nice!

> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate to it.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.
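
A rough sketch of the shape such an interface could take, based only on the 
description above (the real API is in the attached patches and is not reproduced 
here):

{code:java}
// Illustrative sketch only; the method and interface names follow the issue
// description, not the committed code.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;

/** Something whose per-segment computations can be cached. */
public interface SegmentCachable {

  /**
   * Returns a cache helper for the given segment, or null if values produced
   * for this segment must not be cached (e.g. they depend on external state).
   */
  IndexReader.CacheHelper getCacheHelper(LeafReaderContext context);
}
{code}

A Weight or DoubleValuesSource wrapping such an object could then simply forward 
its own cache-helper lookup to the wrapped instance.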



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 232 - Still Unstable

2017-11-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/232/

12 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests

Error Message:
Could not load collection from ZK: testAsyncOperations

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
testAsyncOperations
at 
__randomizedtesting.SeedInfo.seed([91F0F6AD7672CCD4:75B4CA1AD0DA820B]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1172)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:692)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1206)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:848)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
at 
org.apache.solr.cloud.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests(CollectionsAPIAsyncDistributedZkTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4284 - Still unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4284/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:58966/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:58966/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:58966/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(http://127.0.0.1:58966/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([9B14CCB53C6159B4:D361B8013A527621]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:549)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:460)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+29) - Build # 20882 - Failure!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20882/
Java: 64bit/jdk-10-ea+29 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testLastPublishedStateIsActive

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:46259

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:46259
at 
__randomizedtesting.SeedInfo.seed([F44F69E291F55A63:1C2D577336F8583F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Commented] (SOLR-11571) Add diff Stream Evaluator to support time series differencing

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247887#comment-16247887
 ] 

ASF subversion and git services commented on SOLR-11571:


Commit 52bb2bb0b5e78c935c22a5d141d4714d4d46f086 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=52bb2bb ]

SOLR-11571: Add diff Stream Evaluator to support time series differencing


> Add diff Stream Evaluator to support time series differencing
> -
>
> Key: SOLR-11571
> URL: https://issues.apache.org/jira/browse/SOLR-11571
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11571
>
>
> This ticket adds support for time series differencing to Solr's statistical 
> expression library.
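
For context on the feature itself: time series differencing replaces each value 
in a series with its change from the previous value, which is commonly used to 
remove trend before further analysis. The standalone Java sketch below only 
illustrates the arithmetic involved; it is not the evaluator's implementation, 
and the class and method names are made up for illustration.

{code:java}
// Illustrative sketch only - not Solr code. First-order differencing:
// out[i] = in[i + 1] - in[i], so the result has one fewer element.
public class DiffSketch {
  static double[] diff(double[] series) {
    double[] out = new double[series.length - 1];
    for (int i = 0; i < out.length; i++) {
      out[i] = series[i + 1] - series[i];
    }
    return out;
  }

  public static void main(String[] args) {
    double[] daily = {100, 103, 101, 106};
    // prints [3.0, -2.0, 5.0]
    System.out.println(java.util.Arrays.toString(diff(daily)));
  }
}
{code}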



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11571) Add diff Stream Evaluator to support time series differencing

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247881#comment-16247881
 ] 

ASF subversion and git services commented on SOLR-11571:


Commit 9ea9a85339549ecaeda7f6b71d4c4c3d2a5a4637 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9ea9a85 ]

SOLR-11571: Add diff Stream Evaluator to support time series differencing


> Add diff Stream Evaluator to support time series differencing
> -
>
> Key: SOLR-11571
> URL: https://issues.apache.org/jira/browse/SOLR-11571
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11571
>
>
> This ticket adds support for time series differencing to Solr's statistical 
> expression library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11571) Add diff Stream Evaluator to support time series differencing

2017-11-10 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247880#comment-16247880
 ] 

Joel Bernstein commented on SOLR-11571:
---

Thanks [~Mathew]! Patch looks great, I will commit shortly.

> Add diff Stream Evaluator to support time series differencing
> -
>
> Key: SOLR-11571
> URL: https://issues.apache.org/jira/browse/SOLR-11571
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.2
>
> Attachments: SOLR-11571
>
>
> This ticket adds support for time series differencing to Solr's statistical 
> expression library.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247854#comment-16247854
 ] 

Nikolay Martynov edited comment on LUCENE-8048 at 11/10/17 6:03 PM:


Just to clarify:
* On Linux, directory {{fsync}} does work, but it doesn't currently solve the 
problem because this {{fsync}} happens after the segment list has been created. 
Essentially, the guarantees fsync gives (weak as they are) concern the state 
seen after a failure: before fsync you can see changes in any combination; 
after fsync you are guaranteed to see exactly what was written.
* This can potentially affect any FS that uses non-trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out-of-order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think generally this is about syncing file data with the directory, not 
syncing the directory as a whole on rename.


was (Author: mar-kolya):
Just to clarify:
* On Linux directory {{fsync}} does work, but it doesn't solve the problem 
because this {{fsync}} happens after segment list has been created. Essentially 
guarantees about fsync (weak as they are) are in case of failure: before fsync 
you can see changes in any combination, after fsync you are guaranteed to see 
exactly what was written.
* This can potentially affect any FS that uses non trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out of order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think generally this is about syncing file data with directory, not 
syncing directory as a whole on rename.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.
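
To make the sequence above concrete, here is a minimal, self-contained Java 
sketch of the "write, sync, atomic rename, sync directory" pattern the issue 
describes. This is not Lucene's actual commit code; the file names are made up, 
and the directory fsync uses the file-handle workaround discussed later in this 
thread, which is only known to be effective on POSIX-like systems.

{code:java}
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class CommitSequenceSketch {
  // Write a file and fsync its contents.
  static void writeAndSync(Path file, String content) throws IOException {
    Files.write(file, content.getBytes(StandardCharsets.UTF_8));
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
      ch.force(true);
    }
  }

  // Directory fsync via a read handle on the directory itself.
  // Known to work on Linux; fails (and is skipped) on Windows.
  static void syncDirectory(Path dir) throws IOException {
    try (FileChannel ch = FileChannel.open(dir, StandardOpenOption.READ)) {
      ch.force(true);
    } catch (IOException e) {
      // e.g. Windows: directories cannot be fsynced.
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("index");
    writeAndSync(dir.resolve("_0.seg"), "segment data");  // write + sync segment
    writeAndSync(dir.resolve("segments.tmp"), "_0.seg");  // write + sync segment list
    Files.move(dir.resolve("segments.tmp"), dir.resolve("segments"),
        StandardCopyOption.ATOMIC_MOVE);                  // rename list of segments
    syncDirectory(dir);                                   // sync index directory last
  }
}
{code}

The point of the issue is the last call: without syncing the directory after 
the rename, a crash can leave the renamed segments list visible while some of 
the segment files are not.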



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Nikolay Martynov (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247854#comment-16247854
 ] 

Nikolay Martynov commented on LUCENE-8048:
--

Just to clarify:
* On Linux directory {{fsync}} does work, but it doesn't solve the problem 
because this {{fsync}} happens after segment list has been created. Essentially 
guarantees about fsync (weak as they are) are in case of failure: before fsync 
you can see changes in any combination, after fsync you are guaranteed to see 
exactly what was written.
* This can potentially affect any FS that uses non trivial storage for 
directories (which is pretty much everything these days). Word on the internet 
is that btrfs is capable of doing out of order directory writes.
* 'kernel automatically detects rename pattern' - I think this only works on 
some FSs (ext4) and only if certain mount options are present (auto_da_alloc). 
And I think generally this is about syncing file data with directory, not 
syncing directory as a whole on rename.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11629) CloudSolrClient.Builder should accept a zk host

2017-11-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247844#comment-16247844
 ] 

Jason Gerlowski commented on SOLR-11629:


I like the idea here, but it's a little tricky for {{CloudSolrClient}}, since 
zk-host and Solr-URL are both String values.

Java won't let you have two constructors with the same signature, so the 
straightforward approach won't compile:

{code}
public Builder(String solrUrl) {...}
public Builder(String zkHost) {...}
{code}

We could do something fancier, like introduce SolrUrl and ZkHost wrapper types 
around String, but that would require users to wrap their String literal in a 
{{new SolrUrl("")}} declaration just to work around the type system.  It would 
clarify which parameters are "required", but it also adds some user irritation.  
I'm ambivalent on whether or not it's worth it.

If no code solution presents itself here, maybe our best bet is just some 
Javadoc improvement.
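
One possible way around the identical-signature limitation, sketched below 
purely for discussion (this is not an existing SolrJ API, and the names are 
hypothetical): keep a single private constructor and expose intent-revealing 
static factory methods, so callers still pass a plain String but state whether 
it is a ZooKeeper host or a Solr URL.

{code:java}
// Hypothetical sketch only - CloudSolrClient.Builder does not have these methods.
public class ClientBuilderSketch {
  private final String zkHost;
  private final String solrUrl;

  private ClientBuilderSketch(String zkHost, String solrUrl) {
    this.zkHost = zkHost;
    this.solrUrl = solrUrl;
  }

  // Same parameter type in both cases, but the method names disambiguate,
  // which overloaded constructors cannot do.
  public static ClientBuilderSketch forZkHost(String zkHost) {
    return new ClientBuilderSketch(zkHost, null);
  }

  public static ClientBuilderSketch forSolrUrl(String solrUrl) {
    return new ClientBuilderSketch(null, solrUrl);
  }

  @Override
  public String toString() {
    return zkHost != null ? "zkHost=" + zkHost : "solrUrl=" + solrUrl;
  }

  public static void main(String[] args) {
    System.out.println(forZkHost("localhost:9983"));
    System.out.println(forSolrUrl("http://localhost:8983/solr"));
  }
}
{code}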

> CloudSolrClient.Builder should accept a zk host
> ---
>
> Key: SOLR-11629
> URL: https://issues.apache.org/jira/browse/SOLR-11629
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Today we need to create an empty builder and then either pass zkHost or 
> withSolrUrl
> {code}
> SolrClient solrClient = new 
> CloudSolrClient.Builder().withZkHost("localhost:9983").build();
> solrClient.request(updateRequest, "gettingstarted");
> {code}
> What if we had two constructors, one that accepts a zkHost and one that 
> accepts a SolrUrl?
> The advantages that I can think of are:
> - It will be obvious to users that we support two mechanisms of creating a 
> CloudSolrClient. The SolrUrl option is cool, applications don't need to 
> know about ZooKeeper, and new users will learn about this. Maybe our 
> examples in the ref guide should use this? 
> - Today people can set both zkHost and solrUrl, but CloudSolrClient can only 
> utilize one of them
> HttpSolrClient's Builder accepts the host:
> {code}
> HttpSolrClient client = new 
> HttpSolrClient.Builder("http://localhost:8983/solr").build();
> client.request(updateRequest, "techproducts");
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11635.
---
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.2

> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247843#comment-16247843
 ] 

ASF subversion and git services commented on SOLR-11635:


Commit c758785c0fd59e672afdcb506989b08cc52fdaf9 in lucene-solr's branch 
refs/heads/branch_7x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c758785c ]

SOLR-11635: CDCR Source configuration example in the ref guide leaves out 
important settings

(cherry picked from commit 6e3d082)


> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247841#comment-16247841
 ] 

ASF subversion and git services commented on SOLR-11635:


Commit 6e3d082395fcb097b5a3545a65848788d1e2571f in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e3d082 ]

SOLR-11635: CDCR Source configuration example in the ref guide leaves out 
important settings


> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-10 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11635:
--
Summary: CDCR Source configuration example in the ref guide leaves out 
important settings  (was: The Source configuration example leaves out important 
settings for the Source configuratoin)

> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 885 - Still Failing

2017-11-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/885/

No tests ran.

Build Log:
[...truncated 28034 lines...]
ERROR: command execution failed.
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)
ERROR: lucene2 is offline; cannot locate JDK 1.8 (latest)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8042.
---
   Resolution: Fixed
 Assignee: Alan Woodward
Fix Version/s: 7.2

Thanks for the reviews Robert!

> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
> Fix For: 7.2
>
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.
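
As a rough illustration of the abstraction being proposed (the name and 
signature below are inferred from the description above, not the committed 
API), the shared interface might look something like this, with 
DoubleValuesSource, LongValuesSource and the spatial-extras classes 
implementing it:

{code:java}
// Sketch only - inferred from the issue description, not the actual interface.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;

public interface SegmentCachable {
  /**
   * Returns a cache helper that callers (e.g. Weights) can use to cache
   * per-segment results for the given context, or null if caching is unsafe.
   */
  IndexReader.CacheHelper getCacheHelper(LeafReaderContext ctx);
}
{code}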



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247808#comment-16247808
 ] 

ASF subversion and git services commented on LUCENE-8042:
-

Commit 317c9f359f3779725324fdb546fbb2ebe7fcf54c in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=317c9f3 ]

LUCENE-8042: Fix precommit and CHANGES


> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247810#comment-16247810
 ] 

ASF subversion and git services commented on LUCENE-8042:
-

Commit b5571031cab9199d7a74370f69d821f4676e2caa in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b557103 ]

LUCENE-8042: Fix precommit and CHANGES


> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247809#comment-16247809
 ] 

ASF subversion and git services commented on LUCENE-8042:
-

Commit 276e317e9424252d89df7596851c7cd3559d79b1 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=276e317 ]

LUCENE-8042: Add SegmentCachable interface


> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247807#comment-16247807
 ] 

ASF subversion and git services commented on LUCENE-8042:
-

Commit 6e4f9a62e7cc221dcb49788ab683c87f764f2f4a in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e4f9a6 ]

LUCENE-8042: Add SegmentCachable interface


> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247792#comment-16247792
 ] 

Uwe Schindler commented on LUCENE-8047:
---

@Mike: It uses {{TypeAttribute}} in the analysis chain behind the scenes, and 
that is a String-typed attribute, so an enum would not work. That is the reason 
it is done like that.

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole codebase of Lucene and found there are eight 
> practice issues of string comparison, in which strings are compared by using 
> ==/!= instead of equals( ).
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247757#comment-16247757
 ] 

Uwe Schindler edited comment on LUCENE-8047 at 11/10/17 5:17 PM:
-

I agree with Robert. The strings here are all final constants and there is no 
need to use equals - in fact the reference comparison is wanted here. The token 
type is used like an enum and should only ever be one of the constants given in 
the tokenizer/filter. The question/issue we see here has been raised quite 
often, because some code quality tools complain. But Lucene is very low-level 
and performance-critical code, so the developers are aware of the consequences. 
In earlier versions (before 4.0) this pattern was used much more often for 
"field names", as this was an ongoing comparison (did the field name change 
while iterating over terms). Lucene code contains more programming 
anti-patterns that should be fixed in high-level projects (like business code) 
but are done in Lucene for performance reasons! 

My last note about this one: using equals won't make it much worse nowadays, as 
the String.equals() method exits early if the references are equal. In 
addition, {{String#equals()}} is a HotSpot intrinsic - which also exits early 
on the same reference. So replacing it with equals() shouldn't be an issue here.


was (Author: thetaphi):
I agree with Robert. The strings here are all final constants and there is no 
need to use equals - in fact the reference comparison is wanted here. The token 
type is used like an enum and should always only be the constants given in the 
tokenizer/filter. The question/issue we have seen here was raised quite often, 
because some code quality tools complain. But Lucene is very low-level and 
performance-critatical code, so the developers are aware of the consequences. 
In earlier versions (before 4.0) this pattern was much more often used for 
"field names", as this was a ongoing comparison (did field name change while 
iterating over terms). Lucene code contains more programming-antipatterns that 
should be fixed in high level projects (like business code) but are done in 
Lucene for performance reasons! 

My last note about this one: Using equals wont make it much worse nowadays, as 
the String.equals() method early exists if the references are equal.

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole codebase of Lucene and found there are eight 
> practice issues of string comparison, in which strings are compared by using 
> ==/!= instead of equals( ).
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247788#comment-16247788
 ] 

Mike Drob commented on LUCENE-8047:
---

Should we replace the strings with enums?

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole codebase of Lucene and found there are eight 
> practice issues of string comparison, in which strings are compared by using 
> ==/!= instead of equals( ).
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11631) Schema API always has status 0

2017-11-10 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247778#comment-16247778
 ] 

Jason Gerlowski commented on SOLR-11631:


Varun recently noticed this was an issue for a lot of error cases in the 
Core Admin APIs too (SOLR-11551).  I wonder if there are any other APIs that 
don't set the "status" field appropriately.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}
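
Until the status code is fixed, a client has to inspect the {{errors}} element 
itself. A minimal, hypothetical parsing sketch of that workaround (not an 
official SolrJ helper):

{code:java}
import java.util.List;
import java.util.Map;

public class SchemaResponseCheck {
  // Hypothetical helper: treat a Schema API response as failed if it carries
  // a non-empty "errors" list, since responseHeader/status is always 0 today.
  static boolean failed(Map<String, Object> response) {
    Object errors = response.get("errors");
    return errors instanceof List && !((List<?>) errors).isEmpty();
  }

  public static void main(String[] args) {
    Map<String, Object> ok = Map.of("responseHeader", Map.of("status", 0));
    Map<String, Object> bad = Map.of(
        "responseHeader", Map.of("status", 0),
        "errors", List.of(Map.of(
            "errorMessages", List.of("Field 'YYY' already exists."))));
    System.out.println(failed(ok));   // false
    System.out.println(failed(bad));  // true
  }
}
{code}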



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247771#comment-16247771
 ] 

Uwe Schindler edited comment on LUCENE-8048 at 11/10/17 5:03 PM:
-

The guarantees about fsync are weak anyway (as Robert said), and on top of 
that, fsyncing directory metadata is a hack: the Java API does not allow doing 
it directly, you need a workaround with a file handle - but it works in our 
testing ([~mikemccand] had/has a test computer with a remote power switch to 
stress all this for weeks). The directory sync is at least documented in the 
Linux man pages; for other UNIX-like operating systems it is not defined (there 
is no POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know this 
about the usual file systems (ext4 and xfs, I think)
- In addition, because the atomic rename use case is very common in the Unix 
world for committing stuff, the kernel already does the right thing. If it sees 
an atomic rename, it automatically fsyncs the directory under certain 
conditions (read the source code). [~rcmuir], is this right? It's been a long 
time since I last looked at that code!
- On MacOSX/Solaris the same applies as for Linux, although they do not have 
the automatism in the kernel. And we don't know whether fsyncing the directory 
is really done for all file systems. The man page does not say anything and 
POSIX does not define it.
- On Windows, the fsync on the directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in a consistent state (it's documented). 
Happens-before also works.


was (Author: thetaphi):
The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). [~rcmuir], is this right? - it's long ago when I last looked at that 
code!
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247771#comment-16247771
 ] 

Uwe Schindler edited comment on LUCENE-8048 at 11/10/17 5:00 PM:
-

The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). [~rcmuir], is this right? - it's long ago when I last looked at that 
code!
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.


was (Author: thetaphi):
The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). Robert, is this right - it's long ago when I last looked at that code!
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247771#comment-16247771
 ] 

Uwe Schindler edited comment on LUCENE-8048 at 11/10/17 4:59 PM:
-

The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). Robert, is this right - it's long ago when I last looked at that code!
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.


was (Author: thetaphi):
The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). It "detects
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247771#comment-16247771
 ] 

Uwe Schindler commented on LUCENE-8048:
---

The guarantees about fsync are weak anyways (as Robert said), and on top - 
fsyncing directory metadata is a hack, the Java API does not allow to do it via 
API, you need a hack with a file handle - but it works in our testing 
([~mikemccand] had/has a test computer with a remote powerswitch to stress all 
this for weeks). The directory sync is at least documented in Linux Man Pages, 
for other operating systems it not defined (lack of POSIX standard for it).

In short:
- On Linux, the fsync on the directory really works, but we only know about 
usual file systems (ext4 and xfs I think)
- In addition because the atomic rename use case is very common in Unix world 
to commit stuff, the kernel already does the right thing. If it sees an atomic 
rename, it automatically fsyncs directory under certain conditions (read source 
code). It "detects
- On MacOSX/Solaris the same applies like for linux, although it does not have 
the automatism in kernel. And we don't know if fsyncing directory is really 
done for all file systems. The Man page does not say anything and POSIX does 
not define it.
- On Windows, the fsync on directory does not work at all (it is a no-op in 
Lucene -> we have a try-catch around it with an assertion on Windows in the 
exception block). But Windows file systems guarantee that after the atomic 
rename the directory is in an consistent state (it's documented). 
Happens-before also works.

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently when index is written to disk the following sequence of events is 
> taking place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leads to potential window of opportunity for system to crash 
> after 'rename list of segments' but before 'sync index directory' and 
> depending on exact filesystem implementation this may potentially lead to 
> 'list of segments' being visible in directory while some of the segments are 
> not.
> Solution to this is to sync index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows idea implemented. I'm fairly certain that I didn't find all the places 
> this may be potentially happening.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7871) Platform independent config file instead of solr.in.sh and solr.in.cmd

2017-11-10 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-7871:
--
Attachment: SOLR-7871.patch

Updated the patch to apply cleanly against the latest {{master}}.

I would appreciate any feedback anyone can offer.

> Platform independent config file instead of solr.in.sh and solr.in.cmd
> --
>
> Key: SOLR-7871
> URL: https://issues.apache.org/jira/browse/SOLR-7871
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.2.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: bin/solr
> Attachments: SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, 
> SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, 
> SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, SOLR-7871.patch, 
> SOLR-7871.patch
>
>
> Spinoff from SOLR-7043
> The config files {{solr.in.sh}} and {{solr.in.cmd}} are currently executable 
> batch files, but all they do is set environment variables for the start 
> scripts in the format {{key=value}}.
> Suggest instead having one central, platform-independent config file, e.g. 
> {{bin/solr.yml}} or {{bin/solrstart.properties}}, which is parsed by 
> {{SolrCLI.java}}.






[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247757#comment-16247757
 ] 

Uwe Schindler commented on LUCENE-8047:
---

I agree with Robert. The strings here are all final constants and there is no 
need to use equals - in fact the reference comparison is wanted here. The token 
type is used like an enum and should only ever be one of the constants given by 
the tokenizer/filter. The question/issue seen here is raised quite often, 
because some code quality tools complain about it. But Lucene is very low-level 
and performance-critical code, so the developers are aware of the consequences. 
In earlier versions (before 4.0) this pattern was used much more often for 
"field names", as that was an ongoing comparison (did the field name change 
while iterating over terms). Lucene code contains more programming 
anti-patterns that should be fixed in high-level projects (like business code) 
but are done in Lucene for performance reasons! 

My last note about this one: using equals won't make it much worse nowadays, as 
the String.equals() method exits early if the references are equal.
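
To make the point concrete, here is a tiny self-contained illustration 
(hypothetical constants, not the actual analyzer code): as long as only the 
constants themselves flow through, == is sufficient, and equals() merely adds a 
method call that short-circuits on the same reference check anyway.

{code:java}
// Illustrative only - not Lucene's analyzer code. TYPE_A/TYPE_B stand in for
// token type constants such as APOSTROPHE_TYPE or ACRONYM_TYPE.
public class TokenTypeComparison {
  static final String TYPE_A = "<TYPE_A>";
  static final String TYPE_B = "<TYPE_B>";

  static boolean isTypeA(String type) {
    // Safe here: callers only ever pass one of the constants above, never a
    // freshly constructed String, so reference equality is sufficient.
    return type == TYPE_A;
  }

  public static void main(String[] args) {
    System.out.println(isTypeA(TYPE_A)); // true
    System.out.println(isTypeA(TYPE_B)); // false
    // String.equals() starts with a reference check, so for these constants it
    // returns true immediately without comparing characters.
    System.out.println(TYPE_A.equals(TYPE_A)); // true
  }
}
{code}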

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole Lucene codebase and found eight questionable 
> string comparisons, in which strings are compared using ==/!= instead of 
> equals().
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}






[jira] [Created] (SOLR-11635) The Source configuration example leaves out important settings for the Source configuration

2017-11-10 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-11635:
-

 Summary: The Source configuration example leaves out important 
settings for the Source configuration
 Key: SOLR-11635
 URL: https://issues.apache.org/jira/browse/SOLR-11635
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson
Priority: Minor


If you blindly copy/paste the Source config from the example, your transaction 
logs on the Source replicas will not be managed correctly.

This issue also covers a couple of other improvements, in particular a caution 
about why buffering should be disabled most of the time.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 300 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/300/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestFileSwitchDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_EBD0BC67540A04C2-001\tempDir-005:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_EBD0BC67540A04C2-001\tempDir-005
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_EBD0BC67540A04C2-001\tempDir-005:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestFileSwitchDirectory_EBD0BC67540A04C2-001\tempDir-005

at __randomizedtesting.SeedInfo.seed([EBD0BC67540A04C2]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestMultiPassIndexSplitter

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001\index-NIOFSDirectory-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001\index-NIOFSDirectory-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001\index-NIOFSDirectory-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001\index-NIOFSDirectory-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.index.TestMultiPassIndexSplitter_55B5933D9AF98593-001

at __randomizedtesting.SeedInfo.seed([55B5933D9AF98593]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-11631) Schema API always has status 0

2017-11-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247686#comment-16247686
 ] 

David Smiley commented on SOLR-11631:
-

Similar issue... I'm creating a test in SOLR-11542 which calls this:
{code:java}
solrClient.request(new V2Request.Builder("/collections/" + configName + 
"/config/params")
.withMethod(SolrRequest.METHOD.POST)
.withPayload("{" +
"  'set' : {" +
"'_UPDATE' : {'processor':'inc,tolerant'}" +
"  }" +
"}").build());
{code}
I originally messed up and used 'update' instead of 'set' in the JSON, and this 
request completed without throwing an exception. So now I wrap this with:
{code:java}
  private void checkNoError(NamedList response) {
Object errors = response.get("errorMessages");
assertNull("" + errors, errors);
  }
{code}
Which I think kinda sucks.  Also notice the inconsistency in names: 
"errorMessages" here whereas "errors" for the Schema API.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11631) Schema API always has status 0

2017-11-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247683#comment-16247683
 ] 

Steve Rowe commented on SOLR-11631:
---

FYI TolerantUpdateProcessor puts the "errors" key in the responseHeader, and 
*throws* an exception if maxErrors is exceeded.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20880 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20880/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

8 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.RollingRestartTest

Error Message:
42 threads leaked from SUITE scope at org.apache.solr.cloud.RollingRestartTest: 
1) Thread[id=16426, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-RollingRestartTest] at java.lang.Thread.sleep(Native Method) 
at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)2) Thread[id=16416, 
name=qtp4693665-16416, state=RUNNABLE, group=TGRP-RollingRestartTest] 
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:100)
 at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:147)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
at java.lang.Thread.run(Thread.java:748)3) Thread[id=16419, 
name=qtp4693665-16419, state=TIMED_WAITING, group=TGRP-RollingRestartTest]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=16450, 
name=org.eclipse.jetty.server.session.HashSessionManager@321b74Timer, 
state=TIMED_WAITING, group=TGRP-RollingRestartTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)5) Thread[id=16443, 
name=qtp33483831-16443, state=TIMED_WAITING, group=TGRP-RollingRestartTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)6) Thread[id=16425, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-RollingRestartTest]
 at java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)7) Thread[id=16482, 
name=zkCallback-2585-thread-3, state=TIMED_WAITING, 
group=TGRP-RollingRestartTest] at 

[jira] [Updated] (LUCENE-8040) Optimize IndexSearcher.collectionStatistics

2017-11-10 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8040:
-
Attachment: LUCENE-8040.patch

LUCENE-8040.patch is the master branch version.

I *think* the 7x version is simply removing the null return condition when 
docCount is 0.

> Optimize IndexSearcher.collectionStatistics
> ---
>
> Key: LUCENE-8040
> URL: https://issues.apache.org/jira/browse/LUCENE-8040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: LUCENE-8040.patch, MyBenchmark.java, 
> lucenecollectionStatisticsbench.zip
>
>
> {{IndexSearcher.collectionStatistics(field)}} can do a fair amount of work 
> because with each invocation it will call {{MultiFields.getTerms(...)}}.  The 
> effects of this are aggravated for queries with many fields since each field 
> will want statistics, and also aggravated when there are many segments.






[jira] [Commented] (SOLR-11631) Schema API always has status 0

2017-11-10 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247644#comment-16247644
 ] 

Steve Rowe commented on SOLR-11631:
---

bq. I don't think we should throw SolrException . We should set appropriate 
status in the responseHeader if an errors key is present

Why is the errors key in the response body instead of the responseHeader?  
Seems sketchy to me to set status based on a body key that might be used for 
things not related to status.

BTW at present the only way to set the status is via the response exception - 
see {{SolrCore.postDecorateResponse()}}.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247641#comment-16247641
 ] 

David Smiley commented on SOLR-11595:
-

Rob, I'm not adding a cache here.  SolrIndexSearcher _already_ uses a 
SlowCompositeReaderWrapper.  I simply propose we use it in 
localCollectionStatistics().  Also, the design choices are different between 
Lucene's IndexSearcher and Solr's SolrIndexSearcher.  As you know, this is 
where Solr puts most of its caches.
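
For readers who haven't looked at it: below is a rough sketch of the kind of 
per-field caching the issue description attributes to SlowCompositeReaderWrapper. 
It is illustrative only (the class name and shape are made up), but it shows why 
reusing the already-cached instance avoids rebuilding the MultiTerms for every 
collectionStatistics() call.

{code:java}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.MultiFields;
import org.apache.lucene.index.Terms;

// Sketch only - not SlowCompositeReaderWrapper itself. The expensive
// MultiFields.getTerms() merge is done once per field and then reused.
class CachedFieldTerms {
  private final IndexReader reader;
  private final ConcurrentHashMap<String, Terms> byField = new ConcurrentHashMap<>();

  CachedFieldTerms(IndexReader reader) {
    this.reader = reader;
  }

  Terms terms(String field) throws IOException {
    Terms cached = byField.get(field);
    if (cached == null) {
      // Expensive with many segments: merges the per-segment Terms for this field.
      cached = MultiFields.getTerms(reader, field);
      if (cached != null) {
        byField.putIfAbsent(field, cached); // ConcurrentHashMap cannot hold nulls
      }
    }
    return cached;
  }
}
{code}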

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many (150) 
> fields in the query shows that building the MultiTerms here is expensive.  
> Fortunately it turns out that Solr already has a cached instance via 
> {{SlowCompositeReaderWrapper}} (using MultiFields, which has a 
> ConcurrentHashMap mapping field String to the MultiTerms).
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straightforward.






[jira] [Commented] (LUCENE-8047) Comparison of String objects using == or !=

2017-11-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247638#comment-16247638
 ] 

Robert Muir commented on LUCENE-8047:
-

These token type instances are intentionally using == with "type" and it is not 
a bug; it is totally safe, because they are comparing against the exact static 
final constants they are looking for. Some of these comparisons (e.g. 
CJKBigram) happen per-character and the others happen per-word. I don't think 
they should be changed to equals because it would only make performance worse.

The SUFFIX_CONDITION_REGEX_PATTERN comparison doesn't need equals either: it 
happens in a private method used only internally, which is only ever passed 
either SUFFIX_CONDITION_REGEX_PATTERN or PREFIX_CONDITION_REGEX_PATTERN. So it 
just wants to know whether it is processing suffixes or prefixes; there is no 
third possibility.

> Comparison of String objects using == or !=
> ---
>
> Key: LUCENE-8047
> URL: https://issues.apache.org/jira/browse/LUCENE-8047
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 7.0.1
> Environment: Ubuntu 14.04.5 LTS
>Reporter: song
>Priority: Minor
>  Labels: performance
>
> My tool has scanned the whole Lucene codebase and found eight questionable 
> string comparisons, in which strings are compared using ==/!= instead of 
> equals().
> analysis/common/src/java/org/apache/lucene/analysis/hunspell/Dictionary.java
> {code:java}
> conditionPattern == SUFFIX_CONDITION_REGEX_PATTERN
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/cjk/CJKBigramFilter.java
> {code:java}
>   if (type == doHan || type == doHiragana || type == doKatakana || type == 
> doHangul) {
> {code}
> analysis/common/src/java/org/apache/lucene/analysis/standard/ClassicFilter.java
> {code:java}
>  if (type == APOSTROPHE_TYPE &&...){
>  } else if (type == ACRONYM_TYPE) {  
> {code}






[jira] [Resolved] (SOLR-3771) While using RSS indexing from Solr, we are getting error "Caused by: java.net.UnknownHostException" & indexing fail.

2017-11-10 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-3771.
-
Resolution: Incomplete

There isn't enough info in this issue to reproduce, and all signs point to a 
configuration error.

> While using RSS indexing from Solr, we are getting error "Caused by: 
> java.net.UnknownHostException" & indexing fail.
> 
>
> Key: SOLR-3771
> URL: https://issues.apache.org/jira/browse/SOLR-3771
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.6
> Environment: Solr Search engine using RSS data feeding
>Reporter: Nagaraj Molala
> Attachments: rss-data-config.xml, schema.xml, solrconfig.xml
>
>
> We are getting the error below. Please give us the solution as this is a 
> show-stopper for our application. Attached are the config files for your 
> reference.
>  
> https://issues.apache.org/jira/browse/SOLR 2:51 PM 
> Caused by: java.net.UnknownHostException: xx.abcd.abcd.com
>at java.net.PlainSocketImpl.connect(Unknown Source)
>at java.net.SocksSocketImpl.connect(Unknown Source)
>at java.net.Socket.connect(Unknown Source)
>at sun.net.NetworkClient.doConnect(Unknown Source)
>at sun.net.www.http.HttpClient.openServer(Unknown Source)
>at sun.net.www.http.HttpClient.openServer(Unknown Source)
>at sun.net.www.http.HttpClient.(Unknown Source)
>at sun.net.www.http.HttpClient.New(Unknown Source)
>at sun.net.www.http.HttpClient.New(Unknown Source)
>at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(Unknown
> Source)
>at sun.net.www.protocol.http.HttpURLConnection.plainConnect(Unknown 
> Sour
> ce)
>at sun.net.www.protocol.http.HttpURLConnection.connect(Unknown Source)
>at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown 
> So
> urce)
>at 
> org.apache.solr.handler.dataimport.URLDataSource.getData(URLDataSourc
> e.java:97)
>... 13 more
>  






[jira] [Commented] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247616#comment-16247616
 ] 

Robert Muir commented on LUCENE-8048:
-

Hi [~mar-kolya], I like the idea of the patch. Actually not much about fsync is 
guaranteed on any OS, anywhere. So I agree with having an extra dir sync before 
the rename, just to be safe, although I don't know which filesystems might be 
affected by this.



> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently, when the index is written to disk, the following sequence of events 
> takes place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leaves a window of opportunity for the system to crash 
> after 'rename list of segments' but before 'sync index directory', and 
> depending on the exact filesystem implementation this may potentially lead to 
> the 'list of segments' being visible in the directory while some of the 
> segments are not.
> The solution is to sync the index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows the idea implemented. I'm fairly certain that I didn't find all the 
> places where this may potentially be happening.






[jira] [Moved] (LUCENE-8048) Filesystems do not guarantee order of directories updates

2017-11-10 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir moved SOLR-11626 to LUCENE-8048:


 Security: (was: Public)
Lucene Fields: New
  Key: LUCENE-8048  (was: SOLR-11626)
  Project: Lucene - Core  (was: Solr)

> Filesystems do not guarantee order of directories updates
> -
>
> Key: LUCENE-8048
> URL: https://issues.apache.org/jira/browse/LUCENE-8048
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Nikolay Martynov
>
> Currently, when the index is written to disk, the following sequence of events 
> takes place:
> * write segment file
> * sync segment file
> * write segment file
> * sync segment file
> ...
> * write list of segments
> * sync list of segments
> * rename list of segments
> * sync index directory
> This sequence leaves a window of opportunity for the system to crash 
> after 'rename list of segments' but before 'sync index directory', and 
> depending on the exact filesystem implementation this may potentially lead to 
> the 'list of segments' being visible in the directory while some of the 
> segments are not.
> The solution is to sync the index directory after all segments have been 
> written. [This 
> commit|https://github.com/mar-kolya/lucene-solr/commit/58e05dd1f633ab9b02d9e6374c7fab59689ae71c]
>  shows the idea implemented. I'm fairly certain that I didn't find all the 
> places where this may potentially be happening.






[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247602#comment-16247602
 ] 

Robert Muir commented on LUCENE-8042:
-

+1, this really looks a lot better to me.

> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use DVS can delegate on.  
> This ended up with the same method being added to LongValuesSource, and some 
> of the similar objects in spatial-extras.  I think it makes sense to abstract 
> this out into a separate SegmentCachable interface.






[jira] [Commented] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247598#comment-16247598
 ] 

Robert Muir commented on SOLR-11595:


I don't think you should add such a cache to solr either, for the same reasons 
i am against it on the lucene issue.

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many (150) 
> fields in the query shows that building the MultiTerms here is expensive.  
> Fortunately it turns out that Solr already has a cached instance via 
> {{SlowCompositeReaderWrapper}} (using MultiFields, which has a 
> ConcurrentHashMap mapping field String to the MultiTerms).
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straightforward.






[jira] [Commented] (LUCENE-8040) Optimize IndexSearcher.collectionStatistics

2017-11-10 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247596#comment-16247596
 ] 

Robert Muir commented on LUCENE-8040:
-

Can you please upload a proper patch so it can be reviewed? Note the code needs 
to be different for master and 7.x

> Optimize IndexSearcher.collectionStatistics
> ---
>
> Key: LUCENE-8040
> URL: https://issues.apache.org/jira/browse/LUCENE-8040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: MyBenchmark.java, lucenecollectionStatisticsbench.zip
>
>
> {{IndexSearcher.collectionStatistics(field)}} can do a fair amount of work 
> because with each invocation it will call {{MultiFields.getTerms(...)}}.  The 
> effects of this are aggravated for queries with many fields since each field 
> will want statistics, and also aggravated when there are many segments.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7012 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7012/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.lucene50.TestBlockPostingsFormat2

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001\testDFBlockSize-002:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001\testDFBlockSize-002

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001\testDFBlockSize-002:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001\testDFBlockSize-002
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.codecs.lucene50.TestBlockPostingsFormat2_9D4128CA0B77F6FB-001

at __randomizedtesting.SeedInfo.seed([9D4128CA0B77F6FB]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.OutputWriterTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001\init-core-data-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_761623D217ED1D4A-001

at __randomizedtesting.SeedInfo.seed([761623D217ED1D4A]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 

Re: Release 7.0 process starts

2017-11-10 Thread David Smiley
Anshum (the RM):
I believe the release process includes instructions on transitioning Jira
"resolved" status to "closed".  I see Lucene 7.0 issues not yet marked
closed; maybe others.  This sort of thing happens often, I think... I
wonder whether there is any value in even having the distinction if it gets
forgotten and rarely noticed.

On Wed, Sep 20, 2017 at 5:36 PM Jan Høydahl  wrote:

> Now with docs in git this should be within reach with some discipline!
> If we don’t manage to get the official refGuide released simultaneously
> it’d be just one hour of work to convert the "Major Changes in Solr X”
> refguide page into a blog post that could be published on the CMS?
>
> PS: I just updated the “About versions” section of
> http://lucene.apache.org/solr/community.html
> This should probably be part of the RM instructions?
>
> Another observation is that the release announcement mail
> should be formatted with a max line length of 75 or something,
> so that it renders nicely as plain-text ASCII.
>
> Congrats on the release, everyone!
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 20. sep. 2017 kl. 21.29 skrev Cassandra Targett :
>
> Hijacking this thread a little bit now that Anshum has what he needs
> to release...I wanted to clarify my earlier point about the Ref Guide
> being in sync with the release notes.
>
> I'd like to see us evolve from our current model of Release Notes ->
> CHANGES.txt and provide an intermediary level of information geared to
> those upgrading. What I wish we could have done is add a link in the
> release notes to the upgrade notes in the Ref Guide instead of
> directing people to CHANGES.
>
> In order to do this, though, the Ref Guide needs to be released at
> just about the same time. Docs shouldn't be something that starts when
> someone proposes a release, but instead should be considered a critical part
> of the actual release. I think most of us nod our heads at that idea,
> but when a vote thread starts, we should be looking for typos instead
> of still chasing down information that could have been written/updated
> when the code change was made.
>
> People still need CHANGES.txt for their detailed upgrade planning. But
> we can do more to help them make initial plans for their upgrades
> without overwhelming them with detail. And, of course, with a major
> release like 7.0, it would be nice if they had documentation to go
> with it instead of waiting another X weeks.
>
> Enough of my soapbox - I hope by the time 8.0 is ready to go out we
> are able to publish the Ref Guide with the release. I thought now
> might be a good time to plug for it.
>
> Cassandra
>
> On Wed, Sep 20, 2017 at 12:41 PM, Anshum Gupta  wrote:
>
> I’ve added a note about the analytics component, and restructured the
> points.
>
> Thanks to everyone and please don’t make any more changes as I’m adding
> these to the news section and committing now.
>
> -Anshum
>
>
>
> On Sep 20, 2017, at 10:30 AM, Anshum Gupta  wrote:
>
> +1 on keeping that section. I just read it wrong I guess, and assumed you
> wanted to remove that section.
>
> -Anshum
>
>
>
> On Sep 20, 2017, at 9:55 AM, Joel Bernstein  wrote:
>
> I think we should keep all the typical wording around upgrades. I'm just
> suggesting an arrangement of the highlights section.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Sep 20, 2017 at 12:43 PM, Anshum Gupta  wrote:
>
>
> Joel, I was actually asking if you meant removing the following section:
>
> Being a major release, Solr 7 removes many deprecated APIs, changes
> various parameter defaults and
> behavior. Some changes may require a re-index of your content. You are
> thus encouraged to thoroughly
> read the "Upgrade Notes" at
> http://lucene.apache.org/solr/7_0_0/changes/Changes.html or in the
> CHANGES.txt file accompanying the release.
>
>
> Uwe: I am ready with all my (website) changes, and just waiting on the
> Solr ‘news’ section that is a subset of the release notes. From the looks
> of
> it, we are done with the changes, and I can copy the relevant sections and
> commit the website changes. So yes, the release would happen on the 20th :)
>
> -Anshum
>
>
>
> On Sep 20, 2017, at 8:34 AM, Joel Bernstein  wrote:
>
> I think the release highlights are about what's exciting in the release.
> So leading with the most exciting features is the way to go. Informing
> people of changes that will affect them can be done in the upgrade notes in
> CHANGES.txt.
>
> What do other people think about this?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Sep 20, 2017 at 11:21 AM, Anshum Gupta  wrote:
>
>
> Also, I think it might make sense to add a line saying that the Ref Guide
> for 7.0 would be released soon.
>
> -Anshum
>
>
>
> On Sep 20, 2017, at 8:20 AM, Anshum Gupta  wrote:

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 787 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/787/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at 
https://127.0.0.1:36949/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html.Error 
404HTTP ERROR: 404 Problem accessing 
/solr/stale_state_test_col_shard1_replica_n1/update. Reason: Can not 
find: /solr/stale_state_test_col_shard1_replica_n1/update http://eclipse.org/jetty;>Powered by Jetty:// 9.3.20.v20170531 
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
https://127.0.0.1:36949/solr/stale_state_test_col_shard1_replica_n1: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/stale_state_test_col_shard1_replica_n1/update. 
Reason:
Can not find: 
/solr/stale_state_test_col_shard1_replica_n1/update
http://eclipse.org/jetty;>Powered by Jetty:// 
9.3.20.v20170531



at 
__randomizedtesting.SeedInfo.seed([52C158DA1A9AA452:E6F0C032F973D27E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale(CloudSolrClientTest.java:844)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 294 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/294/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([410B5F7EBEC2D368:50BF0599074BC79E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.NodeLostTriggerTest.testListenerAcceptance(NodeLostTriggerTest.java:253)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 11726 lines...]
   [junit4] Suite: 

[jira] [Commented] (SOLR-11631) Schema API always has status 0

2017-11-10 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247425#comment-16247425
 ] 

Noble Paul commented on SOLR-11631:
---

I don't think we should throw {{SolrException}}. We should set an appropriate 
status in the responseHeader if an {{errors}} key is present.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11595) optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields

2017-11-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247423#comment-16247423
 ] 

David Smiley commented on SOLR-11595:
-

I'll commit this patch for the Solr level, since it has an easy implementation 
given that it indirectly already has a cached MultiFields instance.  It'll be 
faster than what's happening at LUCENE-8040 based on Rob's feedback.

> optimize SolrIndexSearcher.localCollectionStatistics to use cached MultiFields
> --
>
> Key: SOLR-11595
> URL: https://issues.apache.org/jira/browse/SOLR-11595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.2
>
> Attachments: 
> SOLR_11595_optimize_SolrIndexSearcher_collectionStatistics.patch
>
>
> {{SolrIndexSearcher.localCollectionStatistics(field)}} simply calls Lucene's 
> {{IndexSearcher.collectionStatistics(field)}} which in turn calls 
> {{MultiFields.getTerms(reader, field)}}.  Profiling in an app with many (150) 
> fields in the query shows that building the MultiTerms here is expensive.  
> Fortunately it turns out that Solr already has a cached instance via 
> {{SlowCompositeReaderWrapper}} (using MultiFields, which has a 
> ConcurrentHashMap mapping field String to the MultiTerms).
> Perhaps this should be improved on the Lucene side... not sure.  But here on 
> the Solr side, the solution is straightforward.






[jira] [Commented] (LUCENE-8040) Optimize IndexSearcher.collectionStatistics

2017-11-10 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247417#comment-16247417
 ] 

David Smiley commented on LUCENE-8040:
--

Ok, I'll commit the raw impl then.  It's at least nice to no longer reference 
MultiFields and hence MultiTerms here.

> Optimize IndexSearcher.collectionStatistics
> ---
>
> Key: LUCENE-8040
> URL: https://issues.apache.org/jira/browse/LUCENE-8040
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 7.2
>
> Attachments: MyBenchmark.java, lucenecollectionStatisticsbench.zip
>
>
> {{IndexSearcher.collectionStatistics(field)}} can do a fair amount of work 
> because with each invocation it will call {{MultiFields.getTerms(...)}}.  The 
> effects of this are aggravated for queries with many fields since each field 
> will want statistics, and also aggravated when there are many segments.
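
A rough sketch of the idea in the title, assuming the per-leaf statistics are 
simply summed (handling of -1 / unavailable statistics is omitted for brevity); 
this illustrates the general approach, not the change that was committed:

{code:java}
// Illustration only: accumulate per-field statistics by walking the leaves
// directly, so no merged MultiTerms instance has to be built per invocation.
// Fields with unavailable (-1) statistics are not handled here.
import java.io.IOException;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.search.CollectionStatistics;

final class LeafwiseCollectionStats {
  static CollectionStatistics collectionStatistics(IndexReader reader, String field)
      throws IOException {
    long docCount = 0, sumTotalTermFreq = 0, sumDocFreq = 0;
    for (LeafReaderContext leaf : reader.leaves()) {
      Terms terms = leaf.reader().terms(field);
      if (terms == null) {
        continue; // field absent in this segment
      }
      docCount += terms.getDocCount();
      sumTotalTermFreq += terms.getSumTotalTermFreq();
      sumDocFreq += terms.getSumDocFreq();
    }
    return new CollectionStatistics(field, reader.maxDoc(), docCount, sumTotalTermFreq, sumDocFreq);
  }
}
{code}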



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11202) Implement a set-property command for AutoScaling API

2017-11-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-11202:


Assignee: Shalin Shekhar Mangar

> Implement a set-property command for AutoScaling API
> 
>
> Key: SOLR-11202
> URL: https://issues.apache.org/jira/browse/SOLR-11202
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 7.2, master (8.0)
>
>
> The Autoscaling API should support a {{set-property}} command so that 
> properties specific to autoscaling can be set/unset. Examples of such 
> properties are:
> # The scheduled delay between trigger invocations (currently defaults to 1s)
> # Min time between actions (currently defaults to 5s)
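
Purely for illustration, a sketch of what such a call might look like over 
HTTP. The endpoint path and the property name used below are placeholders, 
since the actual names would be decided in this issue; treat this as a shape, 
not the API:

{code:java}
// Hypothetical shape of a set-property request; the endpoint path and the
// property name are placeholders, not the actual autoscaling API.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SetAutoscalingProperty {
  public static void main(String[] args) throws Exception {
    // "triggerScheduleDelaySeconds" is an invented name for the scheduled delay
    // between trigger invocations mentioned above.
    String payload = "{ \"set-property\": { \"triggerScheduleDelaySeconds\": 5 } }";
    HttpURLConnection conn =
        (HttpURLConnection) new URL("http://localhost:8983/solr/admin/autoscaling").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}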



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+29) - Build # 20879 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20879/
Java: 64bit/jdk-10-ea+29 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:
Error from server at http://127.0.0.1:35345/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35345/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([A3AA4C8235DBE5EE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:132)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter

Error Message:
Error from server at http://127.0.0.1:36999/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36999/solr: create the collection time out:180s
at 
__randomizedtesting.SeedInfo.seed([A3AA4C8235DBE5EE:B9CD25FAABA0EB4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 

[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-10 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598-6_6-streamtests

Added another "experimental" patch, {{SOLR-11598-6_6-streamtests}}, against 
{{branch_6_6}} (with *nocommit*) so that stream expressions (unique & rollup) 
can now take more than 4 sort fields.

Please note that these patches are purely for experimental performance 
analysis.
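
For readers following along, here is a self-contained illustration (plain Java, 
not ExportWriter's actual SortDoc machinery) of the general idea in these 
patches: replacing a hard-coded four-field limit with a comparator chain built 
from however many sort fields were requested.

{code:java}
// Self-contained illustration of lifting a hard-coded sort-field limit into an
// n-field comparator chain; this is not ExportWriter's actual implementation.
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public final class NFieldSort {

  /** Builds one comparator from an arbitrary number of per-field comparators. */
  static <T> Comparator<T> chain(List<Comparator<T>> perField) {
    if (perField.isEmpty()) {
      throw new IllegalArgumentException("at least one sort field is required");
    }
    Comparator<T> result = perField.get(0);
    for (int i = 1; i < perField.size(); i++) { // no fixed cap such as "a max of 4 sorts"
      result = result.thenComparing(perField.get(i));
    }
    return result;
  }

  public static void main(String[] args) {
    // Rows are (lastName, firstName, age); any number of sort fields chains the same way.
    List<Object[]> rows = Arrays.asList(
        new Object[] {"smith", "ann", 41},
        new Object[] {"smith", "ann", 29},
        new Object[] {"adams", "bob", 35});
    Comparator<Object[]> byLast = Comparator.comparing(r -> (String) r[0]);
    Comparator<Object[]> byFirst = Comparator.comparing(r -> (String) r[1]);
    Comparator<Object[]> byAge = Comparator.comparing(r -> (Integer) r[2]);
    rows.sort(chain(Arrays.asList(byLast, byFirst, byAge)));
    rows.forEach(r -> System.out.println(Arrays.toString(r)));
  }
}
{code}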

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query, as I am bound by the 
> limitation of the export handler, which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one could ask why it can't be 
> any reasonable integer n, beyond which we know performance degrades; even then 
> it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10-dimensional rollups. I did not see any 
> limitation on sorting mentioned in the documentation, and we went ahead with 
> the installation of 6.6.1. Now we are blocked by this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> 

[jira] [Commented] (SOLR-11622) Bundled mime4j library not sufficient for Tika requirement

2017-11-10 Thread Advokat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247372#comment-16247372
 ] 

Advokat commented on SOLR-11622:


We have the same problem. E-Mails saved in *.eml / *.mht format do not seem to 
work at all right now.

> Bundled mime4j library not sufficient for Tika requirement
> --
>
> Key: SOLR-11622
> URL: https://issues.apache.org/jira/browse/SOLR-11622
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 7.1, 6.6.2
>Reporter: Karim Malhas
>Priority: Minor
>  Labels: build
>
> Version 7.2 of Apache James Mime4j, as bundled with the Solr binary releases, 
> does not match what is required by Apache Tika for parsing rfc2822 messages. 
> The master branch of james-mime4j seems to contain the missing Builder class:
> [https://github.com/apache/james-mime4j/blob/master/core/src/main/java/org/apache/james/mime4j/stream/MimeConfig.java
> ]
> This prevents importing rfc2822-formatted messages, for example:
> {{./bin/post -c dovecot -type 'message/rfc822' 'testdata/email_01.txt'
> }}
> This results in the following stack trace:
> java.lang.NoClassDefFoundError: 
> org/apache/james/mime4j/stream/MimeConfig$Builder
> at 
> org.apache.tika.parser.mail.RFC822Parser.parse(RFC822Parser.java:63)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> at 
> org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:280)
> at 
> org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:135)
> at 
> org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:228)
> at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2477)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at 
> 

[jira] [Resolved] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8014.
---
Resolution: Fixed

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014-tfidfsim.patch, LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.
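
A minimal sketch of the "hardcode computeSlopFactor to 1/(N+1)" part of the 
proposal; it shows the formula named in the issue, not the actual 
SloppyPhraseScorer code:

{code:java}
// Illustration of the 1/(N+1) slop factor named in the issue description;
// not the actual SloppyPhraseScorer implementation.
final class SlopFactor {
  /** 1/(N+1): an exact phrase (N = 0) scores 1.0, larger edit distances score less. */
  static float slopFactor(int matchLength) {
    return 1.0f / (matchLength + 1);
  }

  public static void main(String[] args) {
    for (int n = 0; n <= 4; n++) {
      System.out.println("matchLength=" + n + " -> " + slopFactor(n));
    }
  }
}
{code}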



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247365#comment-16247365
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit 281e84b988a518d2f14ed11e291d5fd2c4f18870 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=281e84b ]

LUCENE-8014: Remove TFIDFSimilarity.sloppyFreq() and scorePayload()


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014-tfidfsim.patch, LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247364#comment-16247364
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit faa4293090aee76706d0908e0417af6ba6ab96e5 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=faa4293 ]

LUCENE-8014: Deprecate TFIDFSimilarity.scorePayload()


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014-tfidfsim.patch, LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8014:
--
Attachment: LUCENE-8014-tfidfsim.patch

For the record, attaching a patch with the removal of 
TFIDFSimilarity.sloppyFreq() and scorePayload(), and the relevant test fixes.

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014-tfidfsim.patch, LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11165) Write documentation for autoscaling APIs, triggers, actions, listeners for Solr 7.1

2017-11-10 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-11165.
--
   Resolution: Fixed
 Assignee: Shalin Shekhar Mangar
Fix Version/s: (was: 7.2)
   7.1

> Write documentation for autoscaling APIs, triggers, actions, listeners for 
> Solr 7.1
> ---
>
> Key: SOLR-11165
> URL: https://issues.apache.org/jira/browse/SOLR-11165
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (8.0), 7.1
>
>
> This issue is to document all the new features and changes in autoscaling for 
> Solr 7.1



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1525 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1525/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.lucene.search.TestSimilarity.testSimilarity

Error Message:
expected:<2.0> but was:<1.0>

Stack Trace:
java.lang.AssertionError: expected:<2.0> but was:<1.0>
at 
__randomizedtesting.SeedInfo.seed([DE7CA3EBACD23F57:2FD3E3D3AC32AA4C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.TestSimilarity$4.collect(TestSimilarity.java:146)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.AssertingCollector$1.collect(AssertingCollector.java:56)
at 
org.apache.lucene.search.AssertingLeafCollector.collect(AssertingLeafCollector.java:52)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:293)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:236)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:69)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:645)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:72)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:449)
at 
org.apache.lucene.search.TestSimilarity.testSimilarity(TestSimilarity.java:137)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Reopened] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reopened LUCENE-8014:
---

Re-opening to deprecate and remove TFIDFSimilarity.scorePayload() as well

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-10 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598-6_6.patch
SOLR-11598-master.patch

[~aroopganguly],

I have attached patches against the {{master}} and {{branch_6_6}} branches 
supporting a maximum of 8 fields instead of the current 4, so that we can 
analyse how performance is affected. I have also included very basic, but 
effective, tests for {{ExportWriter}}.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6.patch, SOLR-11598-master.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query, as I am bound by the 
> limitation of the export handler, which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one could ask why it can't be 
> any reasonable integer n, beyond which we know performance degrades; even then 
> it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10-dimensional rollups. I did not see any 
> limitation on sorting mentioned in the documentation, and we went ahead with 
> the installation of 6.6.1. Now we are blocked by this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> 

[jira] [Commented] (LUCENE-8042) Add SegmentCachable interface

2017-11-10 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247328#comment-16247328
 ] 

Alan Woodward commented on LUCENE-8042:
---

[~rcmuir] are you happy with the last patch?

> Add SegmentCachable interface
> -
>
> Key: LUCENE-8042
> URL: https://issues.apache.org/jira/browse/LUCENE-8042
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
> Attachments: LUCENE-8042.patch, LUCENE-8042.patch, LUCENE-8042.patch, 
> LUCENE-8042.patch
>
>
> Following LUCENE-8017, I tried to add a getCacheHelper(LeafReaderContext) 
> method to DoubleValuesSource so that Weights that use a DVS can delegate to it.  
> This ended up with the same method being added to LongValuesSource and some 
> similar objects in spatial-extras.  I think it makes sense to abstract this 
> out into a separate SegmentCachable interface.
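
A sketch of the interface shape described above; the exact name, signature, and 
null semantics here are assumptions based on this description, not the 
committed API:

{code:java}
// Assumed shape of the SegmentCachable interface described in the issue;
// the attached patches may differ.
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;

public interface SegmentCachable {
  /**
   * Returns a cache helper identifying the per-segment data this object reads,
   * or null if results for the given segment must not be cached.
   */
  IndexReader.CacheHelper getCacheHelper(LeafReaderContext ctx);
}
{code}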



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247313#comment-16247313
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit 082116057676906a97cfbb088c58e6e238c6e737 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0821160 ]

LUCENE-8014: Also deprecate TFIDFSimilarity.sloppyFreq()


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247314#comment-16247314
 ] 

ASF subversion and git services commented on LUCENE-8014:
-

Commit 1aa049bb270a15c44cffdd503725005f86f8c3ff in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1aa049b ]

LUCENE-8014: Remove deprecated SimScorer methods


> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 786 - Still Unstable!

2017-11-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/786/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=17143, name=jetty-launcher-3221-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=17131, name=jetty-launcher-3221-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=17143, name=jetty-launcher-3221-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 231 - Still Unstable

2017-11-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/231/

1 tests failed.
FAILED:  org.apache.solr.schema.TestBulkSchemaConcurrent.test

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([1B7BEFA71B8DC534:932FD07DB571A8CC]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1172)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:692)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:130)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getTotalReplicas(AbstractFullDistribZkTestBase.java:487)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:440)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:333)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /collections/collection1/state.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
   

[jira] [Commented] (LUCENE-8014) Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor

2017-11-10 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247228#comment-16247228
 ] 

Alan Woodward commented on LUCENE-8014:
---

I ran tests on the wrong branch, so I missed a few places where sloppyFreq() is 
still being used in tests.  We need to deprecate TFIDFSimilarity.sloppyFreq() 
as well, and remove the tests that change the score by overriding it.

> Remove SimScorer.computeSlopFactor and SimScorer.computePayloadFactor
> -
>
> Key: LUCENE-8014
> URL: https://issues.apache.org/jira/browse/LUCENE-8014
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: master (8.0), 7.2
>
> Attachments: LUCENE-8014.patch
>
>
> This supersedes LUCENE-8013.
> We should hardcode computeSlopFactor to 1/(N+1) in SloppyPhraseScorer and 
> move computePayloadFactor to PayloadFunction so that all the payload scoring 
> logic is in a single place.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


