[GitHub] lucene-solr pull request #456: Corrected equals method of QueryValueSource

2018-09-26 Thread rozuur
Github user rozuur closed the pull request at:

https://github.com/apache/lucene-solr/pull/456
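For background, org.apache.lucene.queries.function.valuesource.QueryValueSource wraps a
Query plus a default float value, and a well-formed equals needs to compare both fields
(and the runtime class). The sketch below only illustrates that shape; it is not the
exact change from the closed pull request.

{noformat}
// Illustrative sketch only -- not the patch from PR #456.
@Override
public boolean equals(Object o) {
  if (this == o) return true;
  if (o == null || getClass() != o.getClass()) return false;
  QueryValueSource other = (QueryValueSource) o;
  return defVal == other.defVal && q.equals(other.q);
}

@Override
public int hashCode() {
  return q.hashCode() * 29 + Float.floatToIntBits(defVal);
}
{noformat}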





[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22930 - Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22930/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:35951/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:35879/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:35951/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:35879/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([C4F17A24AC57F3B1:6E3CA9D61B842661]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7537 - Still Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7537/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

13 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueueBlocking

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([5801941049BCDD5D:1DABE6690DEE6129]:0)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueueBlocking(TestSimDistributedQueue.java:102)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue
 {#2}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at 

[jira] [Commented] (LUCENE-7848) QueryBuilder.analyzeGraphPhrase does not handle gaps correctly

2018-09-26 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629705#comment-16629705
 ] 

Michael Gibney commented on LUCENE-7848:


A patch equivalent to the [^LUCENE-7848-delimOnly-token-offset.patch] of 
14/Jul/2017 has been merged with LUCENE-8395. I think the remaining problems 
related to this issue are more directly addressed by LUCENE-7398.

> QueryBuilder.analyzeGraphPhrase does not handle gaps correctly
> --
>
> Key: LUCENE-7848
> URL: https://issues.apache.org/jira/browse/LUCENE-7848
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 6.5, 6.6
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-7848-branching-spanOr.patch, 
> LUCENE-7848-delimOnly-token-offset.patch, LUCENE-7848.patch, 
> LUCENE-7848.patch, capture-3.png
>
>
> Position increments greater than 1 are ignored when the query builder creates 
> a graph phrase query. 
> Instead it should use SpanNearQuery.addGap for pos incr > 1.
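As a rough illustration of the suggested approach (not code taken from the attached
patches), Lucene's SpanNearQuery.Builder exposes addGap, which can account for a position
increment greater than 1 between phrase terms:

{noformat}
// Sketch: an ordered span-near "phrase" that honors a position gap.
// Classes are from org.apache.lucene.search.spans and org.apache.lucene.index.
// Assumes terms "foo" and "bar" in field "body" with a position increment of 2
// between them, i.e. one empty position to skip.
SpanNearQuery.Builder builder = new SpanNearQuery.Builder("body", true);
builder.addClause(new SpanTermQuery(new Term("body", "foo")));
builder.addGap(1); // posIncr - 1 skipped positions
builder.addClause(new SpanTermQuery(new Term("body", "bar")));
SpanNearQuery query = builder.build();
{noformat}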






[jira] [Commented] (SOLR-10199) Solr's Kerberos functionality does not work in Java9 due to dependency on hadoop's AuthenticationFilter which attempt access to JVM protected classes

2018-09-26 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629682#comment-16629682
 ] 

Cao Manh Dat commented on SOLR-10199:
-

I would like to mention here that I filed 
https://issues.apache.org/jira/browse/HADOOP-15681 three weeks ago with a patch. No 
one in the Hadoop community has left a comment there yet.

> Solr's Kerberos functionality does not work in Java9 due to dependency on 
> hadoop's AuthenticationFilter which attempt access to JVM protected classes
> -
>
> Key: SOLR-10199
> URL: https://issues.apache.org/jira/browse/SOLR-10199
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>  Labels: Java9
>
> (discovered this while working on test improvements for SOLR-8052)
> Our Kerberos based authn/authz features are all built on top of Hadoop's 
> {{AuthenticationFilter}} which in turn uses Hadoop's {{KerberosUtil}} -- but 
> this does not work on Java9/jigsaw JVMs because that class in turn attempts 
> to access {{sun.security.jgss.GSSUtil}} which is not exported by {{module 
> java.security.jgss}}
> This means that Solr users who depend on Kerberos will not be able to upgrade 
> to Java9, even if they do not use any Hadoop specific features of Solr.
> 
> Example log messages...
> {noformat}
>[junit4]   2> 6833 WARN  (qtp442059499-30) [] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: 
> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]   2> 6841 WARN  
> (TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[95A583AF82D1EBBE]) [] 
> o.a.h.c.p.ResponseProcessCookies Invalid cookie header: "Set-Cookie: 
> hadoop.auth=; Path=/; Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; 
> HttpOnly". Invalid 'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
> {noformat}
> (NOTE: HADOOP-14115 is the cause of the malformed cookie expiration)
> Ultimately the client gets a 403 error (as seen in a testcase with the patch from 
> SOLR-8052 applied and the java9 assume commented out)...
> {noformat}
>[junit4] ERROR   7.10s | TestSolrCloudWithKerberosAlt.testBasics <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:34687/solr: Expected mime type 
> application/octet-stream but got text/html. 
>[junit4]> 
>[junit4]>  content="text/html;charset=ISO-8859-1"/>
>[junit4]> Error 403 
>[junit4]> 
>[junit4]> 
>[junit4]> HTTP ERROR: 403
>[junit4]> Problem accessing /solr/admin/collections. Reason:
>[junit4]> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]> http://eclipse.org/jetty;>Powered by Jetty:// 
> 9.3.14.v20161028
>[junit4]> 
>[junit4]> 
> {noformat}
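A possible workaround on Java 9+, until a Hadoop-side fix such as HADOOP-15681 is
released, may be to export the internal JGSS package to unnamed modules at JVM startup;
whether this is sufficient for a given Solr/Hadoop combination is an assumption, not
something verified here.

{noformat}
# Hypothetical workaround; flag shown in the SOLR_OPTS form used by bin/solr.
SOLR_OPTS="$SOLR_OPTS --add-exports=java.security.jgss/sun.security.jgss=ALL-UNNAMED"
{noformat}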






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 163 - Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/163/

4 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:33086/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:33179/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:33086/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:33179/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([F0F3342719DA2AD9:5A3EE7D5AE09FF09]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Assigned] (SOLR-12811) Add enclosingDisk, radius and center Stream Evaluators

2018-09-26 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12811:
-

Assignee: Joel Bernstein

> Add enclosingDisk, radius and center Stream Evaluators
> --
>
> Key: SOLR-12811
> URL: https://issues.apache.org/jira/browse/SOLR-12811
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *enclosingDisk*, *radius* and *center* Stream 
> Evaluators. The enclosingDisk function will calculate the smallest circle 
> that encloses a 2D data set. The implementation is provided by Apache Commons 
> Math.






[jira] [Created] (SOLR-12811) Add enclosingDisk, radius and center Stream Evaluators

2018-09-26 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-12811:
-

 Summary: Add enclosingDisk, radius and center Stream Evaluators
 Key: SOLR-12811
 URL: https://issues.apache.org/jira/browse/SOLR-12811
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket will add the *enclosingDisk*, *radius* and *center* Stream 
Evaluators. The enclosingDisk function will calculate the smallest circle that 
encloses a 2D data set. The implementation is provided by Apache Commons Math.
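As a rough sketch of how the smallest enclosing circle might be computed with Commons
Math (the classes below are from the Commons Math 3 enclosing/geometry packages; how the
evaluators actually wire this up is an assumption, not the committed implementation):

{noformat}
import java.util.Arrays;
import java.util.List;
import org.apache.commons.math3.geometry.enclosing.EnclosingBall;
import org.apache.commons.math3.geometry.enclosing.WelzlEncloser;
import org.apache.commons.math3.geometry.euclidean.twod.DiskGenerator;
import org.apache.commons.math3.geometry.euclidean.twod.Euclidean2D;
import org.apache.commons.math3.geometry.euclidean.twod.Vector2D;

public class EnclosingDiskSketch {
  public static void main(String[] args) {
    // Smallest enclosing disk of a 2D point set (Welzl's algorithm).
    List<Vector2D> points = Arrays.asList(
        new Vector2D(0, 0), new Vector2D(4, 0), new Vector2D(2, 3));
    WelzlEncloser<Euclidean2D, Vector2D> encloser =
        new WelzlEncloser<>(1.0e-10, new DiskGenerator());
    EnclosingBall<Euclidean2D, Vector2D> disk = encloser.enclose(points);
    System.out.println("radius = " + disk.getRadius());  // -> radius evaluator
    System.out.println("center = " + disk.getCenter());  // -> center evaluator
  }
}
{noformat}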






[JENKINS] Lucene-Solr-Tests-master - Build # 2824 - Still Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2824/

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
shard2 is not consistent.  Got 11 from 
http://127.0.0.1:45389/collection1_shard2_replica_n61 (previous client) and got 
10 from http://127.0.0.1:44256/collection1_shard2_replica_n67

Stack Trace:
java.lang.AssertionError: shard2 is not consistent.  Got 11 from 
http://127.0.0.1:45389/collection1_shard2_replica_n61 (previous client) and got 
10 from http://127.0.0.1:44256/collection1_shard2_replica_n67
at 
__randomizedtesting.SeedInfo.seed([4C259C22B4978AC1:C471A3F81A6BE739]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1330)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1309)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud

2018-09-26 Thread Mine_Orange (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629666#comment-16629666
 ] 

Mine_Orange commented on SOLR-12807:


I see, thank you!

> out of memory error due to a lot of zk watchers in solr cloud 
> --
>
> Key: SOLR-12807
> URL: https://issues.apache.org/jira/browse/SOLR-12807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Mine_Orange
>Priority: Major
>
> Analyzing the dump file, we found a large number of watchers in the childWatches 
> of ZKWatchManager, nearly 1.8 GB. The znode being watched is 
> /overseer/collection-queue-work. We confirmed that this is not caused by frequent 
> use of the collection API, and that the network is normal. 
> The instance is the overseer leader of a Solr cluster and has not been restarted 
> for more than a year, so we suspect the watchers grow over time.
> Our Solr version is 6.1 and our ZooKeeper version is 3.4.9.






[jira] [Commented] (LUCENE-8353) FrenchLightStemmer dont work with ë, ö and ï

2018-09-26 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629637#comment-16629637
 ] 

Hoss Man commented on LUCENE-8353:
--

{quote}... Another option could be to add a version parameter to the 
constructor but it proved problematic in the past (LUCENE-5859). 
{quote}
It's almost like the exact situation I was worried about & described (when 
urging that we _add_ no arg constructors _in addition_ to the Version 
constructors instead of removing them outright) has come to pass: we would 
ideally like to change the "default" behavior of an analysis class to be 
"better" than the current behavior, but we also don't want to break existing 
code for existing users.

so now it seems like we either:
 * add {{FrenchLightStemmer2}} ...or maybe jump straight to 
{{FrenchLightStemmerHuperDuper}} ?
 * leave the default (bad) behavior as it is and add a {{void 
setIWantTheGoodVowelBehavior(boolean)}} option that new users can call if they 
are smart enough to know that they should
 * break backcompat and add a {{void setIWantTheOldVowelBehavior(boolean)}} 
option existing users can call if they are smart enough to know that they 
should.

Man ... LUCENE-5859 really is the gift that just keeps on giving isn't it?
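To make the second option above concrete, one back-compat-preserving shape could look
like the fragment below; the flag and method names are purely illustrative (nothing here
exists in Lucene as written), keeping the old behavior as the default and letting callers
opt in:

{noformat}
// Purely illustrative names; keeps the current (old) behavior by default.
class FrenchVowelFoldingSketch {
  private boolean foldDiaeresisVowels = false;   // old behavior unless opted in

  void setFoldDiaeresisVowels(boolean fold) {    // hypothetical opt-in switch
    this.foldDiaeresisVowels = fold;
  }

  char norm(char c) {
    if (!foldDiaeresisVowels) return c;          // legacy path untouched
    switch (c) {
      case 'ë': return 'e';
      case 'ï': return 'i';
      case 'ö': return 'o';
      default:  return c;
    }
  }
}
{noformat}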



> FrenchLightStemmer dont work with ë, ö and ï
> 
>
> Key: LUCENE-8353
> URL: https://issues.apache.org/jira/browse/LUCENE-8353
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Bruno CAILLAUD
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> ë, ö and ï are not handled by FrenchLightStemmer, so a search for, for example,
> Laicité does not match when the index contains Laïcité.
> I tried to fix this issue in https://github.com/apache/lucene-solr/pull/379






[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-09-26 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629567#comment-16629567
 ] 

Lucene/Solr QA commented on SOLR-5004:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} SOLR-5004 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-5004 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941331/SOLR-5004.01.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/191/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.
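For illustration, the requested call might eventually look like the following (the
parameter name numSubShards is an assumption here, not something this issue has settled
on; action, collection and shard are existing SPLITSHARD parameters):

{noformat}
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=collection1&shard=shard1&numSubShards=4"
{noformat}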






[jira] [Updated] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12767:
-
Attachment: SOLR-12767.patch

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch, 
> SOLR-12767.patch
>
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2; #1 is still valuable, 
> but there isn't much point in making the parameter an integer, since the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?
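For reference, the current opt-in behavior looks roughly like this; under the proposal
the achieved rf would be reported without passing min_rf (the response snippet is
illustrative, not captured from a real run):

{noformat}
# Current opt-in: ask Solr to report the achieved replication factor.
curl "http://localhost:8983/solr/collection1/update?min_rf=2&commit=true" \
     -H "Content-Type: application/json" -d '[{"id":"doc1"}]'

# Illustrative response header carrying the achieved rf:
# {"responseHeader":{"rf":2,"status":0,"QTime":12}}
{noformat}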






[jira] [Comment Edited] (SOLR-7036) Faster method for group.facet

2018-09-26 Thread Vasily Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629511#comment-16629511
 ] 

Vasily Volkov edited comment on SOLR-7036 at 9/26/18 11:12 PM:
---

[~erickerickson] could you please confirm this is fixed in Solr 7.2.1?

Thanks! ~


was (Author: vav802):
[~erickerickson] could you confirm this is fixed in Solr 7.2.1?

Thanks! ~

> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, SOLR-7036_zipped.zip, 
> jstack-output.txt, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.
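An example request using the parameter described above (field names are made up;
group.facet.method is the parameter added by this patch, not necessarily present in a
released version):

{noformat}
curl "http://localhost:8983/solr/collection1/select?q=*:*&group=true&group.field=brand_s&group.facet=true&facet=true&facet.field=color_s&group.facet.method=fc"
{noformat}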






[jira] [Commented] (SOLR-7036) Faster method for group.facet

2018-09-26 Thread Vasily Volkov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629511#comment-16629511
 ] 

Vasily Volkov commented on SOLR-7036:
-

[~erickerickson] could you confirm this is fixed in Solr 7.2.1

Thanks! ~

> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, SOLR-7036_zipped.zip, 
> jstack-output.txt, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.






[JENKINS] Lucene-Solr-Tests-7.x - Build # 896 - Still Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/896/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testSelectedCollections

Error Message:
Error from server at https://127.0.0.1:34032/solr: collection already exists: 
testSelected3

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:34032/solr: collection already exists: 
testSelected3
at 
__randomizedtesting.SeedInfo.seed([5283E1828E0E0A17:682D045BB06AD379]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testSelectedCollections(ComputePlanActionTest.java:462)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629486#comment-16629486
 ] 

Shawn Heisey commented on SOLR-12502:
-

bq.  But I don't have a good feel for how common it is today for users to reuse 
a single client across collections. 

I don't have a good feel for this either ... but I can say that I would 
strongly recommend the use of a minimal number of client objects.  For 
CloudSolrClient, I would use one object per cluster.  For HttpSolrClient, one 
object per Solr server or load balancer front end.

I don't automatically consider the presence of a large number of overloaded 
methods to be a problem.  It might be something that indicates the class needs 
some scrutiny, though.  If we deprecate all the methods that don't take a 
collection, and interpret a "null" value for that parameter in the same way as 
the removed methods, that would get rid of half of them.
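A minimal sketch of the reuse pattern being recommended, one long-lived CloudSolrClient
per cluster with the collection passed per request (SolrJ 7.x builder style; the
ZooKeeper address and collection names are placeholders):

{noformat}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SharedClientSketch {
  public static void main(String[] args) throws Exception {
    // One CloudSolrClient per cluster, reused across collections.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
             Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      client.add("collection_a", doc);   // collection chosen per request,
      client.commit("collection_a");     // not baked into the client
      client.add("collection_b", doc);
      client.commit("collection_b");
    }
  }
}
{noformat}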


> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0 which might not be very far away we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Any way I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2813 - Failure!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2813/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.ScheduledTriggerIntegrationTest.testScheduledTrigger

Error Message:
 null Live Nodes: [127.0.0.1:38201_solr, 127.0.0.1:40945_solr] Last available 
state: null

Stack Trace:
java.lang.AssertionError: 
null
Live Nodes: [127.0.0.1:38201_solr, 127.0.0.1:40945_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([F09483FB93D4ECDE:638FCB89CD29B7EA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerIntegrationTest.testScheduledTrigger(ScheduledTriggerIntegrationTest.java:83)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Collection not 

[JENKINS] Lucene-Solr-repro - Build # 1538 - Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1538/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/325/consoleText

[repro] Revision: 1ab6b8e5d8883aa21eeba23f8327f2b9431adb43

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=13026B663D0F2B63 -Dtests.multiplier=2 
-Dtests.locale=es-CR -Dtests.timezone=Asia/Taipei -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testEventFromRestoredState -Dtests.seed=13026B663D0F2B63 
-Dtests.multiplier=2 -Dtests.locale=zh-HK -Dtests.timezone=Pacific/Marquesas 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=13026B663D0F2B63 
-Dtests.multiplier=2 -Dtests.locale=zh-HK -Dtests.timezone=Pacific/Marquesas 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
03c9c04353ce1b5ace33fddd5bd99059e63ed507
[repro] git fetch
[repro] git checkout 1ab6b8e5d8883aa21eeba23f8327f2b9431adb43

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro]   CdcrBidirectionalTest
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestSimTriggerIntegration|*.CdcrBidirectionalTest" 
-Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=13026B663D0F2B63 -Dtests.multiplier=2 -Dtests.locale=zh-HK 
-Dtests.timezone=Pacific/Marquesas -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1317 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro]   1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro] git checkout 03c9c04353ce1b5ace33fddd5bd99059e63ed507

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 168 - Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/168/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
unexpected DELETENODE status: 
{responseHeader={status=0,QTime=7},status={state=notfound,msg=Did not find 
[search_rate_trigger3/12c3efca3da829T3rvifa1vaqxcivnj7ko1ocicc/0] in any tasks 
queue}}

Stack Trace:
java.lang.AssertionError: unexpected DELETENODE status: 
{responseHeader={status=0,QTime=7},status={state=notfound,msg=Did not find 
[search_rate_trigger3/12c3efca3da829T3rvifa1vaqxcivnj7ko1ocicc/0] in any tasks 
queue}}
at 
__randomizedtesting.SeedInfo.seed([D163BF8D5136E1F3:F3F1710F66FC6E8E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.lambda$testDeleteNode$5(SearchRateTriggerIntegrationTest.java:684)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:676)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 22928 - Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22928/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC

8 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:42173/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:38841/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:42173/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:38841/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([CA70FD94E3F434C5:60BD2E665427E115]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2077 - Still Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2077/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([9336AB152EE44632:99B514B8635F4D68]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:669)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: 

[JENKINS] Lucene-Solr-repro - Build # 1536 - Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1536/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/28/consoleText

[repro] Revision: 1ab6b8e5d8883aa21eeba23f8327f2b9431adb43

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=37256A5AA4376587 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=cs-CZ -Dtests.timezone=America/Glace_Bay -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=LeaderVoteWaitTimeoutTest 
-Dtests.method=testMostInSyncReplicasCanWinElection 
-Dtests.seed=37256A5AA4376587 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP -Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=LeaderVoteWaitTimeoutTest 
-Dtests.method=basicTest -Dtests.seed=37256A5AA4376587 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ja-JP -Dtests.timezone=Europe/Luxembourg -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=MathExpressionTest 
-Dtests.method=testOlsRegress -Dtests.seed=D4708ABB1C7B9D59 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Guayaquil -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
03c9c04353ce1b5ace33fddd5bd99059e63ed507
[repro] git fetch
[repro] git checkout 1ab6b8e5d8883aa21eeba23f8327f2b9431adb43

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimLargeCluster
[repro]   LeaderVoteWaitTimeoutTest
[repro]solr/solrj
[repro]   MathExpressionTest
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestSimLargeCluster|*.LeaderVoteWaitTimeoutTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=37256A5AA4376587 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=cs-CZ -Dtests.timezone=America/Glace_Bay -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 333250 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.MathExpressionTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=D4708ABB1C7B9D59 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-IQ -Dtests.timezone=America/Guayaquil -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 576 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.io.stream.MathExpressionTest
[repro]   0/5 failed: org.apache.solr.cloud.LeaderVoteWaitTimeoutTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro] git checkout 03c9c04353ce1b5ace33fddd5bd99059e63ed507

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1649 - Failure

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1649/

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([586FADBC61702FF:8DD2C50168EB6F07]:0)
at java.util.Arrays.copyOf(Arrays.java:3332)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
at java.lang.StringBuilder.append(StringBuilder.java:202)
at 
org.apache.http.client.utils.URLEncodedUtils.urlEncode(URLEncodedUtils.java:536)
at 
org.apache.http.client.utils.URLEncodedUtils.encodeFormFields(URLEncodedUtils.java:652)
at 
org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:404)
at 
org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:382)
at 
org.apache.http.client.entity.UrlEncodedFormEntity.<init>(UrlEncodedFormEntity.java:75)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.fillContentStream(HttpSolrClient.java:513)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:420)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:974)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:990)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:228)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:669)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:153)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)




Build Log:
[...truncated 13822 lines...]
   [junit4] Suite: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest
   [junit4]   2> 1521153 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[586FADBC61702FF]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.cloud.FullSolrCloudDistribCmdsTest_586FADBC61702FF-001/init-core-data-001
   [junit4]   2> 1521154 WARN  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[586FADBC61702FF]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=5 numCloses=5
   [junit4]   2> 1521154 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[586FADBC61702FF]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1521156 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[586FADBC61702FF]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-5776)
   [junit4]   2> 1521156 INFO  
(SUITE-FullSolrCloudDistribCmdsTest-seed#[586FADBC61702FF]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 1521159 INFO  
(TEST-FullSolrCloudDistribCmdsTest.test-seed#[586FADBC61702FF]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1521160 INFO  (Thread-3150) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 

[jira] [Commented] (SOLR-12809) Upgrading to a more recent Java (JDK 11?)

2018-09-26 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629257#comment-16629257
 ] 

Shawn Heisey commented on SOLR-12809:
-

My thoughts, implement or ignore as you see fit:

It will be interesting to see how Java fares as a viable platform, especially 
for open source, as Oracle makes it harder and harder to use Java without 
paying.

Next year when Oracle puts all support for Java 8 behind a pay wall, it's going 
to be a problem if users can't use Java 11 to run Lucene-based software like 
Solr.  We will need to have any problems we currently have fixed by then.  
Dependencies like Hadoop face a similar situation; we will need to be prepared 
to upgrade to new versions of those dependencies.

I think we should explicitly recommend OpenJDK for Lucene/Solr.  Before now, I 
have always recommended Oracle Java, and said that OpenJDK (as long as it's 7 
or later and compatible with the specific Solr version) should work well.  With 
Oracle requiring a paid license for production use of their Java 11 
implementation, I don't think we can recommend it.


> Upgrading to a more recent Java (JDK 11?)
> -
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629253#comment-16629253
 ] 

Tomás Fernández Löbbe commented on SOLR-12502:
--

Thanks for looking at this [~gerlowskija]. Reading all the comments, I’m a bit 
on the fence really. Yes, the {{SolrClient}} interface has many overloaded 
methods, but on the other hand it’s very clear what they do and they are clearly 
documented. I don’t think this is a source of confusion for our users, and I 
believe it helps the user keep relatively clean code (for example, if they have a 
list of docs they can just call {{add(list)}} and don’t need to implement any 
interfaces for that). I don’t want to block improvements here, but at the 
same time, I want to make sure we think very carefully about the backward 
compatibility implications of any change to these classes. Any change here can 
break not only all of the users’ code but also any blogs/examples out in the 
wild, which may cause much more confusion for users than the fact that 
SolrClient has many methods.

bq. using UpdateRequest as this builder-like type
I think this is a good idea, especially to prevent the proliferation of new 
methods in SolrClient

bq. the mere presence of a single-doc-add method steers people into misusing 
SolrJ
Right, but this is the first thing new users will see; I think we want to keep 
it very simple so as not to scare them away. I think some javadocs (or better docs in 
the Ref Guide) should be enough to guide people into batching.
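
For illustration, here is a minimal SolrJ sketch (mine, not part of this issue) of the batching style discussed above: one {{UpdateRequest}} carries the whole list of documents and the collection is named once at process() time, so none of the single-doc add() overloads are needed. The client URL, collection name, and field values are made up.

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class BatchingSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Build the batch up front instead of calling add() once per document.
      List<SolrInputDocument> docs = new ArrayList<>();
      for (int i = 0; i < 100; i++) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", Integer.toString(i));
        docs.add(doc);
      }
      // One request carries the whole batch; the target collection is given
      // once when the request is processed.
      UpdateRequest req = new UpdateRequest();
      req.add(docs);
      req.setCommitWithin(10000);
      UpdateResponse rsp = req.process(client, "myCollection");
      System.out.println("update status: " + rsp.getStatus());
    }
  }
}
{code}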

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0 which might not be very far away we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Any way I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12810) Add an option to split a shard evenly w.r.t. the documents

2018-09-26 Thread Anshum Gupta (JIRA)
Anshum Gupta created SOLR-12810:
---

 Summary: Add an option to split a shard evenly w.r.t. the documents
 Key: SOLR-12810
 URL: https://issues.apache.org/jira/browse/SOLR-12810
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Anshum Gupta
Assignee: Anshum Gupta


Split shard should have an option to create sub-shards on the basis of doc 
distribution i.e. in a way that creates evenly distributed sub-shards.

Right now, the split assumes uniform distribution of data over the hash range, 
but that might not always be true. Having a mechanism that makes a best effort 
for non-uniformly distributed data would be a useful addition.
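
For reference, this is roughly how a SPLITSHARD is issued from SolrJ today (sketch only, not part of the proposed change); the hash range is simply cut in half. The option described above would be an additional parameter on this call. The ZooKeeper address, collection, and shard names are made up.

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class SplitShardSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      // Today this splits shard1's hash range into two equal halves; the
      // proposal is an extra option (name not decided here) that would pick
      // split points from the actual document distribution instead.
      CollectionAdminRequest.SplitShard split = CollectionAdminRequest.splitShard("myCollection");
      split.setShardName("shard1");
      CollectionAdminResponse rsp = split.process(client);
      System.out.println(rsp.getResponse());
    }
  }
}
{code}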



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2823 - Still Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2823/

3 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer/collection-queue-work

Stack Trace:
org.apache.solr.common.SolrException: 
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer/collection-queue-work
at 
__randomizedtesting.SeedInfo.seed([D041A3E35F39E43D:58159C39F1C589C5]:0)
at 
org.apache.solr.cloud.ZkDistributedQueue.<init>(ZkDistributedQueue.java:124)
at 
org.apache.solr.cloud.ZkDistributedQueue.<init>(ZkDistributedQueue.java:114)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:155)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629211#comment-16629211
 ] 

Tomás Fernández Löbbe commented on SOLR-12767:
--

Thanks for the feedback, I'll do that. I'll update the docs and upload a new 
patch

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch
>
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> to go into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2, #1 is still valuable, 
> but there isn’t much point of making the parameter an integer, the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?
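
For illustration only (not from the attached patches), this is roughly how {{min_rf}} is passed from SolrJ today and how the achieved factor is read back; the {{"rf"}} response key and the collection name are assumptions here.

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class MinRfSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.setParam("min_rf", "2");  // today: opt in to replication factor tracking
      UpdateResponse rsp = req.process(client, "myCollection");

      // With the proposed change the achieved factor would always be present,
      // without having to send min_rf at all ("rf" key assumed here).
      Object rf = rsp.getResponse().get("rf");
      System.out.println("achieved replication factor: " + rf);
    }
  }
}
{code}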



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12809) Upgrading to a more recent Java (JDK 11?)

2018-09-26 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629208#comment-16629208
 ] 

Cassandra Targett commented on SOLR-12809:
--

I found [this list of open Jira issues with the label 
"Java9"|https://issues.apache.org/jira/browse/SOLR-10355?jql=project%20%3D%20SOLR%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20%3D%20Java9].
 Essentially it's Kerberos and Hadoop that are problematic. If test failures 
indicate other problems, they haven't been labeled or mentioned.

There are an [additional 12 
issues|https://issues.apache.org/jira/browse/SOLR-11579?jql=project%20%3D%20SOLR%20AND%20resolution%20%3D%20Unresolved%20AND%20environment%20~%20%22java%2010%22]
 that mention Java 10 in the Environment, but it's difficult to know which of 
those are caused by Java 10 or other bugs/user misconfigurations.

One thing I'd like to do is be more specific in the Ref Guide about the known 
issues that exist with JDK 9, 10 & 11, so I'd like to add a short note to the 
{{solr-system-requirements.adoc}} specifically mentioning Hadoop and Kerberos 
as problematic for Java 9+, without getting into details about what exactly is 
wrong, which JDK versions specifically, etc. If other general problems are 
identified, those can be added later. I may point readers to specific Jira 
issues for more information.

Ideally something similar could be added to CHANGES also - I think it would be 
helpful to make it more clear for users where Solr stands in support of Java 
versions.

> Upgrading to a more recent Java (JDK 11?)
> -
>
> Key: SOLR-12809
> URL: https://issues.apache.org/jira/browse/SOLR-12809
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 
> and 11 all have issues for Solr and Lucene IIUC.
> Also IIUC Oracle will start requiring commercial licenses for 11.
> This Jira is to discuss what we want to do going forward. Among the topics:
>  * Skip straight to 11, skipping 9 and 10? If so how to resolve current 
> issues?
>  * How much emphasis on OpenJDK vs. Oracle's version
>  * What to do about dependencies that don't work (for whatever reason) with 
> the version of Java we go with?
>  * ???
> This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 
> has had a GA release, I'd also like to have a record of where the current 
> issues are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10199) Solr's Kerberos functionality does not work in Java9 due to dependency on hadoop's AuthenticationFilter which attempt access to JVM protected classes

2018-09-26 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10199:
-
Summary: Solr's Kerberos functionality does not work in Java9 due to 
dependency on hadoop's AuthenticationFilter which attempt access to JVM 
protected classes  (was: Solr's Kerberos functionaliy does not work in Java9 
due to dependency on hadoop's AuthenticationFilter which attempt access to JVM 
protected classes)

> Solr's Kerberos functionality does not work in Java9 due to dependency on 
> hadoop's AuthenticationFilter which attempt access to JVM protected classes
> -
>
> Key: SOLR-10199
> URL: https://issues.apache.org/jira/browse/SOLR-10199
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>  Labels: Java9
>
> (discovered this while working on test improvements for SOLR-8052)
> Our Kerberos based authn/authz features are all built on top of Hadoop's 
> {{AuthenticationFilter}} which in turn uses Hadoop's {{KerberosUtil}} -- but 
> this does not work on Java9/jigsaw JVMs because that class in turn attempts 
> to access {{sun.security.jgss.GSSUtil}} which is not exported by {{module 
> java.security.jgss}}
> This means that Solr users who depend on Kerberos will not be able to upgrade 
> to Java9, even if they do not use any Hadoop specific features of Solr.
> 
> Example log messages...
> {noformat}
>[junit4]   2> 6833 WARN  (qtp442059499-30) [] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: 
> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]   2> 6841 WARN  
> (TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[95A583AF82D1EBBE]) [] 
> o.a.h.c.p.ResponseProcessCookies Invalid cookie header: "Set-Cookie: 
> hadoop.auth=; Path=/; Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; 
> HttpOnly". Invalid 'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
> {noformat}
> (NOTE: HADOOP-14115 is cause of malformed cookie expiration)
> ultimately the client gets a 403 error (as seen in a testcase with patch from 
> SOLR-8052 applied and java9 assume commented out)...
> {noformat}
>[junit4] ERROR   7.10s | TestSolrCloudWithKerberosAlt.testBasics <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:34687/solr: Expected mime type 
> application/octet-stream but got text/html. 
>[junit4]> 
>[junit4]>  content="text/html;charset=ISO-8859-1"/>
>[junit4]> Error 403 
>[junit4]> 
>[junit4]> 
>[junit4]> HTTP ERROR: 403
>[junit4]> Problem accessing /solr/admin/collections. Reason:
>[junit4]> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]> http://eclipse.org/jetty;>Powered by Jetty:// 
> 9.3.14.v20161028
>[junit4]> 
>[junit4]> 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Julien Massiera (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629146#comment-16629146
 ] 

Julien Massiera commented on SOLR-12798:


Hi [~noble.paul], [~arafalov], [~kwri...@metacarta.com]

I am a ManifoldCF user/committer, and you will find attached an example 
of an update request that is sent to Solr after a file has been analyzed by Tika 
(solr-update-request.txt), along with the corresponding original file.
I also have an entity extractor that produces so much metadata on files that it 
exceeds the URL limits.

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.
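
For reference, a stripped-down sketch (not ManifoldCF code) of the kind of request involved: a {{ContentStreamUpdateRequest}} against {{/update/extract}} where the file body and the {{literal.*}} metadata must travel in the POST body (multipart) rather than on the URL. The file name echoes the attachment on this issue; the base URL and field values are made up.

{code:java}
import java.io.File;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/FileShare").build()) {
      ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
      req.addFile(new File("HOT Balloon Trip_Ultra HD.jpg"), "image/jpeg");
      req.setParam("literal.id", "doc-1");
      // In the real crawler this is where several kilobytes of extracted
      // metadata fields get attached, which is what breaks the URL limit
      // if the parameters are not sent in the request body.
      req.setParam("literal.title", "HOT Balloon Trip");
      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
      client.request(req);
    }
  }
}
{code}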



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr examples with long metadata needed

2018-09-26 Thread Karl Wright
Awesome, thanks!

Karl

On Wed, Sep 26, 2018 at 12:58 PM Julien Massiera <
julien.massi...@francelabs.com> wrote:

> Hi Karl,
>
> Sorry for the delay; you will find below the Solr log that you asked for.
> You did not ask for it, but I will also reply on your Solr ticket
> with this log and attach the original file as well!
>
> INFO 2018-09-26T16:44:40,795 (qtp952486988-14) -
> Solr|Solr|update.processor.LogUpdateProcessorFactory|[c:FileShare
> s:shard1 r:core_node2 x:FileShare_shard1_replica_n1]
> o.a.s.u.p.LogUpdateProcessorFactory [FileShare_shard1_replica_n1]
> webapp=/solr path=/update/extract
>
> 

[jira] [Updated] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Julien Massiera (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Massiera updated SOLR-12798:
---
Attachment: HOT Balloon Trip_Ultra HD.jpg

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Julien Massiera (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Massiera updated SOLR-12798:
---
Attachment: solr-update-request.txt

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr examples with long metadata needed

2018-09-26 Thread Julien Massiera

Hi Karl,

Sorry for the delay; you will find below the Solr log that you asked for.
You did not ask for it, but I will also reply on your Solr ticket 
with this log and attach the original file as well!


INFO 2018-09-26T16:44:40,795 (qtp952486988-14) - 
Solr|Solr|update.processor.LogUpdateProcessorFactory|[c:FileShare 
s:shard1 r:core_node2 x:FileShare_shard1_replica_n1] 
o.a.s.u.p.LogUpdateProcessorFactory [FileShare_shard1_replica_n1] 
webapp=/solr path=/update/extract 

[jira] [Commented] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-26 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16629122#comment-16629122
 ] 

Mark Miller commented on SOLR-12767:


Patch looks good. I'd also enter this in CHANGES as a bug because proper 
eventual consistency is broken when you try to use this. In the past, I have 
used the same JIRA issue and made an improvement entry that describes the 
improvement and then a bug entry that describes the bug fix.

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch
>
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> to go into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2, #1 is still valuable, 
> but there isn’t much point of making the parameter an integer, the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12809) Upgrading to a more recent Java (JDK 11?)

2018-09-26 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12809:
-

 Summary: Upgrading to a more recent Java (JDK 11?)
 Key: SOLR-12809
 URL: https://issues.apache.org/jira/browse/SOLR-12809
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson


JDK 8 will be EOL early next year (except for "premier support"). JDK 9, 10 and 
11 all have issues for Solr and Lucene IIUC.

Also IIUC Oracle will start requiring commercial licenses for 11.

This Jira is to discuss what we want to do going forward. Among the topics:
 * Skip straight to 11, skipping 9 and 10? If so how to resolve current issues?
 * How much emphasis on OpenJDK vs. Oracle's version
 * What to do about dependencies that don't work (for whatever reason) with the 
version of Java we go with?
 * ???

This may turn into an umbrella Jira with sub-tasks of course. Since JDK 11 has 
had a GA release, I'd also like to have a record of where the current issues 
are to refer people to.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud

2018-09-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628968#comment-16628968
 ] 

Erick Erickson commented on SOLR-12807:
---

Possibly SOLR-10420?

> out of memory error due to a lot of zk watchers in solr cloud 
> --
>
> Key: SOLR-12807
> URL: https://issues.apache.org/jira/browse/SOLR-12807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Mine_Orange
>Priority: Major
>
> Analyzing the dump file, we found a lot of watchers in childWatches of 
> ZKWatchManager, nearly 1.8G; the znode of childWatches is 
> /overseer/collection-queue-work. We confirmed that this is not because of 
> frequent use of the collection API, and the network is normal. 
> The instance is the overseer leader of a Solr cluster and has not been restarted 
> for more than a year; we suspect that the watchers grow over time.
> Our Solr version is 6.1 and our ZooKeeper version is 3.4.9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5004) Allow a shard to be split into 'n' sub-shards

2018-09-26 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628946#comment-16628946
 ] 

Anshum Gupta commented on SOLR-5004:


[~varunthacker] - I thought I commented last night when I uploaded the patch, but 
it seems like I didn't. Here's a working patch + test. I'll commit it today if no 
one has issues.

> Allow a shard to be split into 'n' sub-shards
> -
>
> Key: SOLR-5004
> URL: https://issues.apache.org/jira/browse/SOLR-5004
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.3, 4.3.1
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-5004.01.patch, SOLR-5004.patch, SOLR-5004.patch
>
>
> As of now, a SPLITSHARD call is hardcoded to create 2 sub-shards from the 
> parent one. Accept a parameter to split into n sub-shards.
> Default it to 2 and perhaps also have an upper bound to it.
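
Sketch only: at the Collections API level this amounts to one extra parameter on SPLITSHARD. The parameter name below ({{numSubShards}}) and the collection/shard names are assumptions for illustration, not the committed API.

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class SplitIntoNSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "SPLITSHARD");
      params.set("collection", "myCollection");
      params.set("shard", "shard1");
      params.set("numSubShards", 4);  // hypothetical name for the new option
      GenericSolrRequest req =
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params);
      System.out.println(client.request(req));
    }
  }
}
{code}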



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-09-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628943#comment-16628943
 ] 

Erick Erickson commented on SOLR-12801:
---

{quote}You can only meaningful run tests with BadApples off, which means test 
coverage is minimal and shrinking and new problems are being added,
{quote}
Not exactly true. BadApple=true is the default; you have to deliberately 
disable those tests. Of course, if you do disable them in your environment, then the above is 
totally true.

What is also totally true is that the noise is such that I can break something 
and never see it because I haven't waded through each and every failure in 
BadApple'd tests to see if I introduced a legitimate failure in one of them.

I would be _thrilled_ if all the BadApple nonsense were no longer useful. As it 
is, what it's mainly recording is whether we're annotating more than we 
un-annotate as well as whether tests come and go over longer time frames. There 
are tests that fail for 4 weeks in a row, then succeed for 5 weeks, then fail 
for 6 weeks etc. That's the purpose of leaving the comments in when tests were 
annotated/unannotated.

The BadApple thrashing is not doing much of anything towards actually _fixing_ 
the issues. About all it's doing is making the extent of the problem more 
visible if anyone bothers to read the weekly e-mails.

I guess the net-net here is that I don't like the BadApple process much and 
would be glad to stop dealing with it altogether if we define a better process.

Sounds like a hot topic at Activate ;)

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4845 - Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4845/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.test

Error Message:
.response[3][id][0]:17!=14

Stack Trace:
junit.framework.AssertionFailedError: .response[3][id][0]:17!=14
at 
__randomizedtesting.SeedInfo.seed([6A36C0E174060DFB:E262FF3BDAFA6003]:0)
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareSolrResponses(BaseDistributedSearchTestCase.java:928)
at 
org.apache.solr.BaseDistributedSearchTestCase.compareResponses(BaseDistributedSearchTestCase.java:955)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:613)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:589)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:568)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkQueries(AbstractFullDistribZkTestBase.java:950)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.test(BasicDistributedZk2Test.java:102)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628919#comment-16628919
 ] 

Erick Erickson commented on SOLR-12502:
---

+1 to cleaning this up; this is a mess.

+1 to making the add-a-single-document path less attractive. Discouraging its use 
is all to the good.

Are you thinking of deprecating and then removing? If so, are we targeting removal 
in Solr 9?

I can help clean up the tests if we change the interface; I suspect there'll be 
a lot of changes there. How do we coordinate?

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0 which might not be very far away we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Any way I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr examples with long metadata needed

2018-09-26 Thread Karl Wright
Hi ManifoldCF Community,

I need one or two concrete examples of solr [INFO] log messages that
include very long metadata (>8192).  This is apparently critical for
getting the SolrJ team to be able to understand ManifoldCF's usage of
solr.  If you have such examples around, please be sure that the data
contained in the info URL is not confidential in any way.

(Julien, you were the last person to run into this -- hopefully that image
is still around and the metadata can be shared?)

Thanks in advance,
Karl


[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628912#comment-16628912
 ] 

Noble Paul commented on SOLR-12798:
---

bq.How many examples do you need to convince yourselves that we're not making 
this up?

Looks like you don't understand the objectives here. We design the SolrJ client and 
server with certain use cases in mind. While doing that, we assume that we meet 
the needs of most/all users. The fact that you had to implement a custom client 
suggests that either we have failed in that, or you have failed in understanding 
how SolrJ works. I'm sure you wouldn't open a ticket to waste our time. We 
have also come across many cases where users are "holding it wrong". That is 
why a specific example is useful. If we realize that there is a genuine use 
case that cannot be satisfied by the state-of-the-art SolrJ client, we will 
work towards improving our code so that you don't have to do the dirty work. 
The objective of Solr is not to support multipart form posts. It is designed 
to send in docs/commands and get out query results. The multipart mechanism is 
just a means to an end. Imagine Solr working on a non-HTTP standard; in that 
case we would still need to support all these use cases. So, please be patient while 
we try to get the details.

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628903#comment-16628903
 ] 

Alexandre Rafalovitch commented on SOLR-12798:
--

Karl, we totally believe you that it is happening. We just don't have enough 
knowledge about your use cases to easily visualize our side of it. I think one 
or two simple examples would be sufficient; no need to put out an all-points bulletin. 
Clearly, even though your use case was working for a long time, we somehow 
missed it in our tests/reasoning. So this discussion is explicitly trying to 
do better on it than last time.

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628879#comment-16628879
 ] 

Karl Wright commented on SOLR-12798:


{quote}
The data may be generic, but it has to be fed into Solr in one of the accepted 
parameters.
{quote}

Um, this stuff has been working for more than a decade.  Yes, we're using 
accepted parameters.

{quote}
This reason why we insist on an example is because we want to know which 
parameters are sent as part of query string.
{quote}

Ok, if that's what you need, I will put out an all points bulletin on the 
ManifoldCF user list for a Solr INFO message that contains an example of long 
metadata.  How many examples do you need to convince yourselves that we're not 
making this up?





> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628871#comment-16628871
 ] 

Adrien Grand commented on LUCENE-8493:
--

Thanks for tackling these release management tasks [~janhoy]!

> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.5.1, 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved LUCENE-8493.
-
Resolution: Fixed

> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.5.1, 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628840#comment-16628840
 ] 

Jan Høydahl commented on LUCENE-8493:
-

In the 5_5 branch I duplicated the changes entry under both 7.6.0 and 7.5.1. So 
if 7.6 is released instead of or before 7.5.1 then all is correct. However, if 
7.5.1 is released before 7.6.0 and 8.0.0, then we need to include this issue in 
the 7.5.1 section on those branches as well. But I think that is part of the 
pre-release job anyway.

> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.5.1, 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 325 - Still Failing

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/325/

No tests ran.

Build Log:
[...truncated 23296 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2430 links (1982 relative) to 3170 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.6.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Updated] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-8493:

Fix Version/s: 7.5.1

> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.5.1, 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628828#comment-16628828
 ] 

ASF subversion and git services commented on LUCENE-8493:
-

Commit 5e35f63a4e811a8b27a758f21fafc46208291e47 in lucene-solr's branch 
refs/heads/branch_7_5 from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5e35f63 ]

LUCENE-8493: Stop publishing insecure .sha1 files with releases

(cherry picked from commit 03c9c04353ce1b5ace33fddd5bd99059e63ed507)


> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628827#comment-16628827
 ] 

Noble Paul commented on SOLR-12798:
---

bq.there's no general answer to that question, because there's no one 
definitive example of metadata.

The data may be generic, but it has to be fed into Solr via one of the accepted 
parameters. The reason we insist on an example is that we want to know 
which parameters are sent as part of the query string. We also want to find out if 
you are using it wrong.

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628774#comment-16628774
 ] 

Jason Gerlowski edited comment on SOLR-12502 at 9/26/18 2:08 PM:
-

{quote}  Ehh; in common cases this adds complexity, I think. Simply adding one 
document means you now need to use SolrInputDocumentProvider
{quote}
True.  Though it's worth mentioning that the "common case" itself disobeys the 
community advice to use document-batching, and the mere presence of a 
single-doc-add method steers people into misusing SolrJ.  It may be the common 
case, and worth disincentivizing at the same time.  I'm not arguing for that I 
guess, just mentioning the point.

I like your suggestion of using UpdateRequest as this builder-like type, as 
opposed to inventing some new abstraction.  If I get a few minutes, I'll take a 
stab at seeing how this looks in a larger snippet and upload it here as an 
example for discussion.

Lastly, re: the collection.  I've noticed a few bug JIRAs crop up recently 
related to the ways collections can be specified in SolrJ.  Specifically, 
SOLR-12415 and SOLR-12803.  Some older bugs point to this being a problem 
historically too (SOLR-9362).  Maybe those build a good argument for changing 
how SolrClient handles collections, totally independent of the 
too-many-similar-methods discussion here.


was (Author: gerlowskija):
bq.  Ehh; in common cases this adds complexity, I think. Simply adding one 
document means you now need to use SolrInputDocumentProvider

 True.  Though it's worth mentioning that the "common case" itself disobeys the 
community advice to use document-batching, and the mere presence of a 
single-doc-add method steers people into misusing SolrJ.  It may be the common 
case, and worth disincentivizing at the same time.  I'm not arguing for that I 
guess, just mentioning the point.

I like your suggestion of using UpdateRequest as this builder-like type, as 
opposed to inventing some new abstraction.  If I get a few minutes, I'll take a 
stab at seeing how this looks in a larger snippet and upload it here as an 
example for discussion.

Lastly, re: the collection.  I've noticed a few bug JIRAs crop up recently 
related to the ways collections can be specified in SolrJ.  Specifically, 
SOLR-12415 and SOLR-12803.  Maybe those build a good argument for changing how 
SolrClient handles collections, totally independent of the 
too-many-similar-methods discussion here.
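To keep the discussion concrete, here is a minimal sketch of the UpdateRequest-as-builder 
usage referred to above (an illustration only, with placeholder collection and field 
names, not a proposal for the final API):

{code:java}
// Sketch only: all adds go through a single UpdateRequest instead of one of the
// many overloaded SolrClient#add methods; the collection is named once, on the request.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class UpdateRequestSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.setField("id", "1");
      doc.setField("title_s", "hello");

      new UpdateRequest()
          .add(doc)                            // batch one or many documents
          .setCommitWithin(5000)               // commitWithin instead of an explicit commit
          .process(client, "techproducts");    // collection passed exactly once
    }
  }
}
{code}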

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0 which might not be very far away we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Any way I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9362) ConcurrentUpdateSolrClient does not work unless core name is passed in constructor

2018-09-26 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-9362.
---
Resolution: Cannot Reproduce

> ConcurrentUpdateSolrClient does not work unless core name is passed in 
> constructor
> --
>
> Key: SOLR-9362
> URL: https://issues.apache.org/jira/browse/SOLR-9362
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 5.5.2
> Environment: SolrJ version 5.5.2 and Solr 5.5.2
>Reporter: Toby Hobson
>Priority: Minor
>
> With the standard HttpSolrClient I can use either:
> {code:java}
> new HttpSolrClient("http://localhost:8983/solr/mycore;)
> client.add(doc)
> {code}
> or 
> {code:java}
> new HttpSolrClient("http://localhost:8983/solr;)
> client.add("mycore", doc)
> {code}
> However  I cannot use
> {code:java}
> new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 100, 10)
> client.add("mycore", doc)
> {code}
> as I get an error:
> java.lang.RuntimeException: Invalid version (expected 2, but 60) or the data 
> in not in 'javabin' format
> {code:java}
> new ConcurrentUpdateSolrClient("http://localhost:8983/solr/mycore", 100, 10)
> client.add(doc)
> {code}
> works as expected



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12652) SolrMetricManager.overridableRegistryName should be removed; it doesn't work

2018-09-26 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628821#comment-16628821
 ] 

Lucene/Solr QA commented on SOLR-12652:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 57m 
16s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12652 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941309/SOLR-12652.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-130-generic #156~14.04.1-Ubuntu SMP Thu 
Jun 14 13:51:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 667b829 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/190/testReport/ |
| modules | C: solr solr/core U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/190/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SolrMetricManager.overridableRegistryName should be removed; it doesn't work
> 
>
> Key: SOLR-12652
> URL: https://issues.apache.org/jira/browse/SOLR-12652
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.1
>Reporter: David Smiley
>Priority: Minor
> Attachments: SOLR-12652.patch, SOLR-12652.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{SolrMetricManager.overridableRegistryName()}} method is a great idea 
> but unfortunately in practice I've found it doesn't really work; it seems 
> fundamentally flawed.  +I wish it could work+.  The main issue I think is 
> that the callers of SMM.registerGauge/registerMetric assume they can place a 
> gauge/metric and have it be the only one there (force==true).  But it won't 
> be if it's shared.  
> Another problem is in at least one of the reporters -- 
> {{JmxMetricsReporter.JmxListener#registerMBean}} will get in a race condition 
> to remove an already-registered MBean but in the process of removing it, 
> it'll already get removed concurrently by some other core working on the same 
> name.  This results in {{javax.management.InstanceNotFoundException}} logged 
> as a warning; nothing serious.  But I suspect conceptually there is a problem 
> since which MBean should "win"?  Shrug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9362) ConcurrentUpdateSolrClient does not work unless core name is passed in constructor

2018-09-26 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628820#comment-16628820
 ] 

Jason Gerlowski commented on SOLR-9362:
---

Though this bug may exist in Solr 5, I cannot reproduce this problem in recent 
versions of Solr (I tried 7.0 and 7.5.0).  I used the code snippet below:


{code:java}
@Test
public void cusc_bare_url_test() throws Exception {
  try (SolrClient client = new 
ConcurrentUpdateSolrClient.Builder("http://localhost:8983/solr;).build()) {
SolrInputDocument doc = new SolrInputDocument();
doc.setField("id", "value");
client.add("gettingstarted", doc);
client.commit("gettingstarted");
  }
}{code}
 

I'm going to close this out, but if someone manages a modern reproduction, feel 
free to re-open with details.

> ConcurrentUpdateSolrClient does not work unless core name is passed in 
> constructor
> --
>
> Key: SOLR-9362
> URL: https://issues.apache.org/jira/browse/SOLR-9362
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 5.5.2
> Environment: SolrJ version 5.5.2 and Solr 5.5.2
>Reporter: Toby Hobson
>Priority: Minor
>
> With the standard HttpSolrClient I can use either:
> {code:java}
> new HttpSolrClient("http://localhost:8983/solr/mycore;)
> client.add(doc)
> {code}
> or 
> {code:java}
> new HttpSolrClient("http://localhost:8983/solr;)
> client.add("mycore", doc)
> {code}
> However  I cannot use
> {code:java}
> new ConcurrentUpdateSolrClient("http://localhost:8983/solr", 100, 10)
> client.add("mycore", doc)
> {code}
> as I get an error:
> java.lang.RuntimeException: Invalid version (expected 2, but 60) or the data 
> in not in 'javabin' format
> {code:java}
> new ConcurrentUpdateSolrClient("http://localhost:8983/solr/mycore", 100, 10)
> client.add(doc)
> {code}
> works as expected



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628817#comment-16628817
 ] 

ASF subversion and git services commented on LUCENE-8493:
-

Commit ecd392a08d42975960d0cd5d5177061e6a7687f1 in lucene-solr's branch 
refs/heads/branch_7x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ecd392a ]

LUCENE-8493: Stop publishing insecure .sha1 files with releases

(cherry picked from commit 03c9c04353ce1b5ace33fddd5bd99059e63ed507)


> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums)]:
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628813#comment-16628813
 ] 

Alexandre Rafalovitch commented on SOLR-12798:
--

[~kwri...@metacarta.com] I am with Shalin on this. While I appreciate that MCF 
(which we do refer people to from Solr) is a very general framework, I think it 
would be very useful to have a concrete sample that shows what kind of 
information actually goes over the wire.

Specifically, an example that generates meaningful metadata and a body 
(multipart), both of which end up used in Solr. This would really help us 
to visualize the kind of use cases that are very obvious to your project. The 
linked example was about forcing multipart, so it was not quite representative. 
Similarly, Tika generates one part with all parameters. An example that has 2 
(3?) meaningful parts would be most helpful, I feel. And maybe even something 
that could go into a Solr test (so it does not need to be very long, just truly 
multipart).
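To make that concrete, here is a minimal sketch of the kind of request being asked for: a 
ContentStreamUpdateRequest carrying a body as a content stream plus "literal.*" metadata 
parameters far too long for a URL, which is exactly the case that should go out as a 
multipart POST. All paths, field names and values below are illustrative assumptions, not 
taken from ManifoldCF:

{code:java}
// Sketch only: one content stream (the document body) plus long literal parameters.
// If the parameters were forced onto the query string they would exceed URL limits.
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.ContentStreamBase;

public class MultipartPostSketch {
  public static void main(String[] args) throws Exception {
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
    req.addContentStream(new ContentStreamBase.StringStream("document body goes here"));

    // Build well over 8K of metadata, e.g. a long list of ACL tokens.
    StringBuilder acl = new StringBuilder();
    for (int i = 0; i < 1000; i++) {
      acl.append("allow_token_").append(i).append(' ');
    }
    req.setParam("literal.id", "doc-1");
    req.setParam("literal.allow_token_document", acl.toString());

    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      req.process(client, "mycollection");
    }
  }
}
{code}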

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11002) Error while posting data

2018-09-26 Thread Jason Gerlowski (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-11002.

Resolution: Invalid

> Error while posting data
> 
>
> Key: SOLR-11002
> URL: https://issues.apache.org/jira/browse/SOLR-11002
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.2
> Environment: linux redhat 7, solr 6.2
>Reporter: Naveen Kumar Gundala
>Priority: Major
> Fix For: 6.2.2
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> We regularly get exceptions while running SolrJ jobs; could you please 
> help me to resolve this?
> 2017-06-30 17:51:19.985 ERROR (qtp2012232625-241329) [c:mbrdaily s:shard12 
> r:core_node18 x:mbrdaily_shard12_replica1] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Invalid Number:
> BCC:u...@nribv5mhrrxuxztht34ek6ztnkteh6au1kojc8.burpcollaborator.net
> edm: a
> at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:537)
> at org.apache.solr.schema.FieldType.getFieldQuery(FieldType.java:753)
> at org.apache.solr.schema.TrieField.getFieldQuery(TrieField.java:496)
> at 
> org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:737)
> at 
> org.apache.solr.parser.SolrQueryParserBase.getFieldQuery(SolrQueryParserBase.java:385)
> at 
> org.apache.solr.parser.SolrQueryParserBase.handleQuotedTerm(SolrQueryParserBase.java:544)
> at org.apache.solr.parser.QueryParser.Term(QueryParser.java:419)
> at org.apache.solr.parser.QueryParser.Clause(QueryParser.java:186)
> at org.apache.solr.parser.QueryParser.Query(QueryParser.java:140)
> at 
> org.apache.solr.parser.QueryParser.TopLevelQuery(QueryParser.java:96)
> at 
> org.apache.solr.parser.SolrQueryParserBase.parse(SolrQueryParserBase.java:153)
> at org.apache.solr.search.LuceneQParser.parse(LuceneQParser.java:50)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:162)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:267)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2036)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:657)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:464)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:257)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:208)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> 

[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628803#comment-16628803
 ] 

ASF subversion and git services commented on LUCENE-8493:
-

Commit 03c9c04353ce1b5ace33fddd5bd99059e63ed507 in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=03c9c04 ]

LUCENE-8493: Stop publishing insecure .sha1 files with releases


> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums]):
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 22926 - Still unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22926/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

39 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message:
14 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) 
Thread[id=2289, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=959, name=zkConnectionManagerCallback-478-thread-1, state=WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=2282, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)4) 
Thread[id=962, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)5) 
Thread[id=956, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)6) 
Thread[id=958, 
name=TEST-StreamDecoratorTest.testExecutorStream-seed#[1A47E4168D375FAA]-EventThread,
 state=WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)7) 
Thread[id=2284, 
name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[1A47E4168D375FAA]-EventThread,
 state=WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@11/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@11/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)8) 
Thread[id=963, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)9) 
Thread[id=971, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)   10) 
Thread[id=973, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)   11) 
Thread[id=2283, 

[jira] [Comment Edited] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628774#comment-16628774
 ] 

Jason Gerlowski edited comment on SOLR-12502 at 9/26/18 1:24 PM:
-

bq.  Ehh; in common cases this adds complexity, I think. Simply adding one 
document means you now need to use SolrInputDocumentProvider

 True.  Though it's worth mentioning that the "common case" itself disobeys the 
community advice to use document-batching, and the mere presence of a 
single-doc-add method steers people into misusing SolrJ.  It may be the common 
case, and worth disincentivizing at the same time.  I'm not arguing for that I 
guess, just mentioning the point.

I like your suggestion of using UpdateRequest as this builder-like type, as 
opposed to inventing some new abstraction.  If I get a few minutes, I'll take a 
stab at seeing how this looks in a larger snippet and upload it here as an 
example for discussion.

Lastly, re: the collection.  I've noticed a few bug JIRAs crop up recently 
related to the ways collections can be specified in SolrJ.  Specifically, 
SOLR-12415 and SOLR-12803.  Maybe those build a good argument for changing how 
SolrClient handles collections, totally independent of the 
too-many-similar-methods discussion here.


was (Author: gerlowskija):
> Ehh; in common cases this adds complexity, I think. Simply adding one 
> document means you now need to use SolrInputDocumentProvider
True.  Though it's worth mentioning that the "common case" itself disobeys the 
community advice to use document-batching, and the mere presence of a 
single-doc-add method steers people into misusing SolrJ.  It may be the common 
case, and worth disincentivizing at the same time.  I'm not arguing for that I 
guess, just mentioning the point.

I like your suggestion of using UpdateRequest as this builder-like type, as 
opposed to inventing some new abstraction.  If I get a few minutes, I'll take a 
stab at seeing how this looks in a larger snippet and upload it here as an 
example for discussion.

Lastly, re: the collection.  I've noticed a few bug JIRAs crop up recently 
related to the ways collections can be specified in SolrJ.  Specifically, 
SOLR-12415 and SOLR-12803.  Maybe those build a good argument for changing how 
SolrClient handles collections, totally independent of the 
too-many-similar-methods discussion here.

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0, which might not be very far away, we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Anyway, I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-26 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628774#comment-16628774
 ] 

Jason Gerlowski commented on SOLR-12502:


> Ehh; in common cases this adds complexity, I think. Simply adding one 
> document means you now need to use SolrInputDocumentProvider
True.  Though it's worth mentioning that the "common case" itself disobeys the 
community advice to use document-batching, and the mere presence of a 
single-doc-add method steers people into misusing SolrJ.  It may be the common 
case, and worth disincentivizing at the same time.  I'm not arguing for that I 
guess, just mentioning the point.

I like your suggestion of using UpdateRequest as this builder-like type, as 
opposed to inventing some new abstraction.  If I get a few minutes, I'll take a 
stab at seeing how this looks in a larger snippet and upload it here as an 
example for discussion.

Lastly, re: the collection.  I've noticed a few bug JIRAs crop up recently 
related to the ways collections can be specified in SolrJ.  Specifically, 
SOLR-12415 and SOLR-12803.  Maybe those build a good argument for changing how 
SolrClient handles collections, totally independent of the 
too-many-similar-methods discussion here.
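
A rough sketch of the UpdateRequest-centric usage for comparison (illustration only; the base URL, collection name and field names are made-up examples, and the final API shape is exactly what is being discussed here):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class UpdateRequestSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      doc.addField("name_s", "example");

      // One request object instead of picking among ten SolrClient#add overloads:
      // batching (call add() repeatedly), commitWithin and the target collection
      // all live on the request itself.
      new UpdateRequest()
          .add(doc)
          .setCommitWithin(10000)
          .process(client, "techproducts");
    }
  }
}
{code}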

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also the UpdateRequest class is public so that means if a user is looking for 
> a custom combination they can always choose to do so by writing a couple of 
> lines of code.
> For 8.0, which might not be very far away, we can improve this situation
>  
> Quoting David from SOLR-11654
> {quote}Anyway, I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12708) Async collection actions should not hide failures

2018-09-26 Thread Mano Kovacs (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628739#comment-16628739
 ] 

Mano Kovacs commented on SOLR-12708:


Hello [~varunthacker], thank you for the review!

bq. I'm curious about the 10 minute latch countdown timeout. Shouldn't we wait 
forever here? 
I think if we waited forever, any downstream command that got stuck or never 
returned a result would keep this job hanging as well, and I would worry about 
robustness there. This part of the code creates a bunch of empty cores (one per 
shard) in parallel. On a larger, 200-300 shard cluster this might take longer 
than 10 minutes if the overseer queue is already behind, so 10 minutes might in 
fact be problematic. However, if the Overseer falls behind much more than that, 
it would seriously hurt the stability of the cluster anyway. I will increase 
this wait to an hour, if you agree, which leaves plenty of time for the overseer 
to process the core creation on a relatively large collection, but still ensures 
that the job gets cancelled if one task gets stuck.

bq. So here we're doing something different wrt success and failure . If the 
add replica call has a failure we're adding it back to the main response but if 
it's a success then we will end up skipping it ( at this point 
results.get("success") will always be null ) . 
I have to be honest and admit that I copied the full block from 
{{CreateShardCmd.java}}. I think the code is doing the right thing there. In 
both branches of the {{if}} the code checks whether the main {{results}} already 
has a success/failure node, and creates one if necessary. It then adds the 
corresponding {{addResult}} field into the main one. The only difference is 
that the failure case is handled before the {{if}} block.

bq. Can't we do this instead which will append the results directly to the main 
object? We do this for the remaining add replicas as the last step of the 
restore
Then we may let the downstream call override certain other fields that might 
already be populated. I think keeping the results isolated is less error-prone. 
I believe this was Dat's original intent in {{CreateShardCmd}} as well, but I'm 
not sure.
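
For reference, a minimal sketch of the bounded-wait pattern being discussed (the method name and exception are illustrative assumptions, not the actual RestoreCmd code):

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

class BoundedWaitSketch {
  // Wait for the async core-create tasks, but never forever: if one task is
  // stuck, the job fails after the timeout instead of hanging indefinitely.
  static void awaitCoreCreation(CountDownLatch latch) throws InterruptedException {
    if (!latch.await(1, TimeUnit.HOURS)) {  // 1 hour instead of the original 10 minutes
      throw new IllegalStateException("Timed out waiting for async core creation");
    }
  }
}
{code}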


> Async collection actions should not hide failures
> -
>
> Key: SOLR-12708
> URL: https://issues.apache.org/jira/browse/SOLR-12708
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Backup/Restore
>Affects Versions: 7.4
>Reporter: Mano Kovacs
>Assignee: Varun Thacker
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The async collection API may hide failures compared to the sync version. 
> [OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744]
>  structures errors differently in the response, which hides failures from most 
> evaluators. RestoreCmd did not receive, nor handle, async addReplica issues.
> Sample create collection sync and async result with invalid solrconfig.xml:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":32104},
> "failure":{
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup.",
> "localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://localhost:8983/solr: Error CREATEing SolrCore 
> 'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] 
> Caused by: The content of elements must consist of well-formed character data 
> or markup."}
> }
> {noformat}
> vs async:
> {noformat}
> {
> "responseHeader":{
> "status":0,
> "QTime":3},
> "success":{
> "localhost:8983_solr":{
> "responseHeader":{
> "status":0,
> "QTime":12}},
> "localhost:8983_solr":{
> 

Re: Release Announcement: General Availability of JDK 11

2018-09-26 Thread Rory O'Donnell

Thanks Uwe!


On 26/09/2018 13:09, Uwe Schindler wrote:


Hi Rory,

thanks for the information. I updated the Jenkins servers this 
morning. It’s now running tests with the final version.


I also installed JDK 12 build 12 for EA testing.

Uwe

*From:*Rory O'Donnell 
*Sent:* Wednesday, September 26, 2018 10:40 AM
*To:* dawid.we...@cs.put.poznan.pl; uwe.h.schind...@gmail.com
*Cc:* rory.odonn...@oracle.com; Dalibor Topic 
; Balchandra Vaidya 
; Muneer Kolarkunnu 
; dev@lucene.apache.org

*Subject:* Release Announcement: General Availability of JDK 11

Hi Uwe & Dawid,

*1) Release Announcement: General Availability of JDK 11 *

  * JDK 11, the reference implementation of Java 11 and the first
long-term support release produced under the six-month
rapid-cadence release model [1][2], is now Generally Available.
  * GPL-licensed OpenJDK builds from Oracle are available here:
https://jdk.java.net/11

This release includes seventeen features:

  * 181: Nest-Based Access Control
  * 309: Dynamic Class-File Constants
  * 315: Improve Aarch64 Intrinsics
  * 318: Epsilon: A No-Op Garbage Collector
  * 320: Remove the Java EE and CORBA Modules
  * 321: HTTP Client (Standard)
  * 323: Local-Variable Syntax for Lambda Parameters
  * 324: Key Agreement with Curve25519 and Curve448
  * 327: Unicode 10
  * 328: Flight Recorder
  * 329: ChaCha20 and Poly1305 Cryptographic Algorithms
  * 330: Launch Single-File Source-Code Programs
  * 331: Low-Overhead Heap Profiling
  * 332: Transport Layer Security (TLS) 1.3
  * 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental)
  * 335: Deprecate the Nashorn JavaScript Engine
  * 336: Deprecate the Pack200 Tools and API


2) Quality Outreach Report for September 2018 is available

  * Quality Outreach report September 2018



Thanks to everyone who contributed to JDK 11 by downloading and 
testing the early-access builds.
In particular the following developers who logged 18 issues in the 
JDK Bug System.


  * Netty
  * Eclipse Jetty
  * Apache Lucene
  * JUnit5
  * Apache Tomcat
  * Apache Ant
  * Apache POI
  * AssertJ
  * Eclipse Collections
  * Byte Buddy
  * RxJava

3) JDK 12 EA build 12, under both the GPL and Oracle EA licenses, is 
now available at http://jdk.java.net/11 .


  * Schedule, Status & Features

  o http://openjdk.java.net/projects/jdk/12/

  * Release Notes:

  o http://jdk.java.net/12/release-notes

Rgds,Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin,Ireland


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-09-26 Thread Charles Sanders (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628705#comment-16628705
 ] 

Charles Sanders commented on SOLR-5163:
---

[~dsmiley]  I ran 'ant clean test' on the code before creating the patch.  All 
the unit tests passed.  The code does check for an infinite alias loop using 
the existing check 'validateCyclicAliasing'.  I'm relying on that method to 
throw an exception and break the loop if one exists.

Sorry.  Maybe this is bigger than I am.  Maybe [~eribeiro] should finish this.  
He mentioned he had a patch candidate.  Thanks.

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch, SOLR-5163.patch, SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs an operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2018-09-26 Thread Jason Gerlowski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628680#comment-16628680
 ] 

Jason Gerlowski commented on SOLR-12801:


Hey Mark,

100% agreed the tests are a problem.  But can you clarify what you're opening 
this issue for?  Just asking as someone interested in the problem.

Are you proposing a new approach to solving the test flakiness?  Is this a Jira 
mainly intended to stoke more discussion, or a place to brainstorm?  Or is this 
an umbrella Jira to track all the smaller things you think are contributing to 
the larger problem?  Or something else altogether?

> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flakey tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-09-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628675#comment-16628675
 ] 

David Smiley commented on SOLR-5163:


Looks pretty good.  Though I don't think you tested this change?  Also, if 
there's an infinite loop in aliasing, what will happen?  I believe this 
resulted in a failure before but I wonder if now, due to this new code, Solr 
will loop forever?  Perhaps you should, in one shot, both do the recursive 
aliasing detection (moving it from where it is now) and validate the field?
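
To make that concrete, a rough sketch of the shape of such a check (illustration only, not the actual ExtendedDismaxQParser code; the method and message are made up):

{code:java}
import java.util.Map;

import org.apache.solr.common.SolrException;
import org.apache.solr.schema.IndexSchema;

class QfFieldValidationSketch {
  // While resolving qf entries and aliases, reject anything that is neither a
  // schema field (concrete or dynamic) nor a known alias, instead of silently
  // falling back to treating the query text as literal terms.
  static void validateQfField(IndexSchema schema, Map<String, ?> aliases, String field) {
    if (!aliases.containsKey(field) && schema.getFieldOrNull(field) == null) {
      throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
          "undefined field or alias in qf: " + field);
    }
  }
}
{code}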

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch, SOLR-5163.patch, SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs an operator



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12756) Refactor Assign and extract replica placement strategies out of it

2018-09-26 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628668#comment-16628668
 ] 

Shalin Shekhar Mangar commented on SOLR-12756:
--

This patch passes all tests. Now to remove the last few nocommits, such as 
unifying the identifyNodes and getNodesForNewReplicas methods and getting 
rid of the ZkNodeProps instance being passed around.
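
For readers following along, the general direction of "extracting placement strategies" might look something like the sketch below (the interface and method names are illustrative assumptions, not the contents of the attached patch):

{code:java}
import java.util.List;

import org.apache.solr.common.cloud.ClusterState;
import org.apache.solr.common.cloud.ReplicaPosition;

// Assign would delegate the "where do new replicas go?" decision to an
// implementation of this interface (e.g. legacy rule-based vs. policy-based).
interface ReplicaPlacementStrategySketch {
  List<ReplicaPosition> assignPositions(ClusterState clusterState,
                                        String collection,
                                        List<String> shardNames,
                                        int replicasPerShard);
}
{code}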

> Refactor Assign and extract replica placement strategies out of it
> --
>
> Key: SOLR-12756
> URL: https://issues.apache.org/jira/browse/SOLR-12756
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12756.patch, SOLR-12756.patch, SOLR-12756.patch
>
>
> While working on SOLR-12648, I found Assign class to be very complex. Many 
> methods have overlapping functionality, differ in side-effects and have 
> non-intuitive arguments. We should clean this up and extract replica 
> placement strategies out of that class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12756) Refactor Assign and extract replica placement strategies out of it

2018-09-26 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628668#comment-16628668
 ] 

Shalin Shekhar Mangar edited comment on SOLR-12756 at 9/26/18 12:10 PM:


This patch passes all tests. Now to remove the last few nocommits, such as 
merging the Assign2 and Assign classes, merging the identifyNodes and 
getNodesForNewReplicas methods, and getting rid of the ZkNodeProps instance 
being passed around.


was (Author: shalinmangar):
This patch passes all tests. Now to remove the last few nocommits such as 
trying to unify the identifyNodes and getNodesForNewReplicas method, getting 
rid of the ZkNodeProps instance being passed around.

> Refactor Assign and extract replica placement strategies out of it
> --
>
> Key: SOLR-12756
> URL: https://issues.apache.org/jira/browse/SOLR-12756
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12756.patch, SOLR-12756.patch, SOLR-12756.patch
>
>
> While working on SOLR-12648, I found Assign class to be very complex. Many 
> methods have overlapping functionality, differ in side-effects and have 
> non-intuitive arguments. We should clean this up and extract replica 
> placement strategies out of that class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Release Announcement: General Availability of JDK 11

2018-09-26 Thread Uwe Schindler
Hi Rory,

 

thanks for the information. I updated the Jenkins servers this morning. It’s 
now running tests with the final version.

I also installed JDK 12 build 12 for EA testing.

 

Uwe

 

From: Rory O'Donnell  
Sent: Wednesday, September 26, 2018 10:40 AM
To: dawid.we...@cs.put.poznan.pl; uwe.h.schind...@gmail.com
Cc: rory.odonn...@oracle.com; Dalibor Topic ; 
Balchandra Vaidya ; Muneer Kolarkunnu 
; dev@lucene.apache.org
Subject: Release Announcement: General Availability of JDK 11

 

Hi Uwe & Dawid,  

1) Release Announcement: General Availability of JDK 11 

*   JDK 11, the reference implementation of Java 11 and the first long-term 
support release produced under the six-month rapid-cadence release model 
[1][2], is now Generally Available. 
*   GPL-licensed OpenJDK builds from Oracle are available here: 
https://jdk.java.net/11 

This release includes seventeen features: 

*   181: Nest-Based Access Control
*   309: Dynamic Class-File Constants
*   315: Improve Aarch64 Intrinsics
*   318: Epsilon: A No-Op Garbage Collector
*   320: Remove the Java EE and CORBA Modules
*   321: HTTP Client (Standard)
*   323: Local-Variable Syntax for Lambda Parameters
*   324: Key Agreement with Curve25519 and Curve448
*   327: Unicode 10
*   328: Flight Recorder
*   329: ChaCha20 and Poly1305 Cryptographic Algorithms
*   330: Launch Single-File Source-Code Programs
*   331: Low-Overhead Heap Profiling
*   332: Transport Layer Security (TLS) 1.3
*   333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental)
*   335: Deprecate the Nashorn JavaScript Engine
*   336: Deprecate the Pack200 Tools and API


2) Quality Outreach Report for September 2018 is available

*   Quality Outreach report September 2018 

 

Thanks to everyone who contributed to JDK 11 by downloading and testing the 
early-access builds.
In particular the following developers who logged 18 issues in the JDK Bug 
System.

*   Netty
*   Eclipse Jetty
*   Apache Lucene
*   JUnit5
*   Apache Tomcat
*   Apache Ant
*   Apache POI
*   AssertJ
*   Eclipse Collections
*   Byte Buddy
*   RxJava

3) JDK 12 EA build 12, under both the GPL and Oracle EA licenses, is now 
available at http://jdk.java.net/11 .

*   Schedule, Status & Features

*   http://openjdk.java.net/projects/jdk/12/

*   Release Notes:

*   http://jdk.java.net/12/release-notes

Rgds,Rory

-- 
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin,Ireland


[jira] [Updated] (SOLR-12756) Refactor Assign and extract replica placement strategies out of it

2018-09-26 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12756:
-
Attachment: SOLR-12756.patch

> Refactor Assign and extract replica placement strategies out of it
> --
>
> Key: SOLR-12756
> URL: https://issues.apache.org/jira/browse/SOLR-12756
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12756.patch, SOLR-12756.patch, SOLR-12756.patch
>
>
> While working on SOLR-12648, I found Assign class to be very complex. Many 
> methods have overlapping functionality, differ in side-effects and have 
> non-intuitive arguments. We should clean this up and extract replica 
> placement strategies out of that class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628630#comment-16628630
 ] 

Karl Wright edited comment on SOLR-12798 at 9/26/18 11:49 AM:
--

[~shalinmangar], there's no general answer to that question, because there's no 
one definitive example of metadata.

I refer you to the project page for ManifoldCF here:

https://manifoldcf.apache.org/en_US/index.html#What+Is+Apache+ManifoldCF%3F

Just for fun, I dug up a ManifoldCF ticket related to this issue, involving the 
email connector:

https://issues.apache.org/jira/browse/CONNECTORS-1408




was (Author: kwri...@metacarta.com):
[~shalinmangar], there's no general answer to that question, because there's no 
one definitive example of metadata.

I refer you to the project page for ManifoldCF here:

https://manifoldcf.apache.org/en_US/index.html#What+Is+Apache+ManifoldCF%3F



> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because we get "pfountz 
> Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628630#comment-16628630
 ] 

Karl Wright commented on SOLR-12798:


[~shalinmangar], there's no general answer to that question, because there's no 
one definitive example of metadata.

I refer you to the project page for ManifoldCF here:

https://manifoldcf.apache.org/en_US/index.html#What+Is+Apache+ManifoldCF%3F



> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because we get "pfountz 
> Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7536 - Still Unstable!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7536/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([26B0FC478D1CB109]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([26B0FC478D1CB109]:0)




Build Log:
[...truncated 15100 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue_26B0FC478D1CB109-001\init-core-data-001
   [junit4]   2> 911667 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[26B0FC478D1CB109]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 911668 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[26B0FC478D1CB109]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 911669 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[26B0FC478D1CB109]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 911670 INFO  
(TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[26B0FC478D1CB109]) 
[] o.a.s.SolrTestCaseJ4 ###Starting testLocallyOffer
   [junit4]   2> 911896 INFO  
(TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[26B0FC478D1CB109]) 
[] o.a.s.SolrTestCaseJ4 ###Ending testLocallyOffer
   [junit4]   2> 911898 INFO  
(TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[26B0FC478D1CB109])
 [] o.a.s.SolrTestCaseJ4 ###Starting testDistributedQueue
   [junit4]   2> Sep 26, 2018 12:30:12 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> WARNING: Suite execution timed out: 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue
   [junit4]   2>  jstack at approximately timeout time 
   [junit4]   2> 
"TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[26B0FC478D1CB109]"
 ID=8836 TIMED_WAITING on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@751b8f3c
   [junit4]   2>at java.base@9.0.4/jdk.internal.misc.Unsafe.park(Native 
Method)
   [junit4]   2>- timed waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@751b8f3c
   [junit4]   2>at 
java.base@9.0.4/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
   [junit4]   2>at 
java.base@9.0.4/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104)
   [junit4]   2>at 
app//org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:194)
   [junit4]   2>at 
app//org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:167)
   [junit4]   2>at 
app//org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueue(TestSimDistributedQueue.java:74)
   [junit4]   2>at 
app//org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue(TestSimGenericDistributedQueue.java:36)
   [junit4]   2>at 
java.base@9.0.4/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
java.base@9.0.4/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>at 
java.base@9.0.4/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at 
java.base@9.0.4/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]   2>at 
app//com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
   [junit4]   2>at 
app//com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
   [junit4]   2>at 
app//com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
   [junit4]   2>at 
app//com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
   [junit4]   2>at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628586#comment-16628586
 ] 

Shalin Shekhar Mangar commented on SOLR-12798:
--

[~kwri...@metacarta.com] - One thing that wasn't very clear to me reading 
through the issue description and comments is what the metadata is for and why 
it is supposed to go through the request URL. I'd appreciate it if you could 
give an example of the metadata for my understanding. Thanks!
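
For context, the kind of request in question looks roughly like the sketch below (the file name, field names and values are made-up examples): a Solr Cell post where per-document metadata travels as literal.* parameters, which end up on the request URL unless the request is sent as a multipart POST.

{code:java}
import java.io.File;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractRequestSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
      req.addFile(new File("document.pdf"), "application/pdf");
      // Crawler metadata goes along as literal.* parameters; a real crawl can carry
      // many of these and easily exceed 4K in total, hence the need for multipart POST.
      req.setParam("literal.id", "doc-1");
      req.setParam("literal.allow_token_document", "group-a group-b group-c");
      client.request(req);
    }
  }
}
{code}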

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because we get "pfountz 
> Should not get here!" errors on the Solr side when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12808) Wrong highlighting using PatternReplaceCharFilterFactory

2018-09-26 Thread Federico Grillini (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628521#comment-16628521
 ] 

Federico Grillini commented on SOLR-12808:
--

I've filed this bug because the official documentation says:

{quote}
CharFilters can be chained like Token Filters and placed in front of a 
Tokenizer. CharFilters can add, change, or remove characters while preserving 
the original character offsets to support features like highlighting.
{quote}
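
As a small standalone illustration of the offset bookkeeping that quote refers to (this uses the Lucene analysis API directly with the sample value and pattern from this issue, not the schema above):

{code:java}
import java.io.StringReader;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.pattern.PatternReplaceCharFilter;

public class CharFilterOffsetSketch {
  public static void main(String[] args) throws Exception {
    String original = "00031665 0035 9";
    PatternReplaceCharFilter filter = new PatternReplaceCharFilter(
        Pattern.compile("^0*([0-9]+\\s+[0-9]+\\s+[0-9]+)$"), " $1",
        new StringReader(original));

    char[] buf = new char[64];
    int len = filter.read(buf, 0, buf.length);
    System.out.println("filtered text: '" + new String(buf, 0, len) + "'");
    // correctOffset() maps an offset in the filtered text back to the original
    // input; if this mapping is off, the highlighter marks the wrong characters.
    System.out.println("offset 1 maps back to original offset " + filter.correctOffset(1));
  }
}
{code}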

> Wrong highlighting using PatternReplaceCharFilterFactory
> 
>
> Key: SOLR-12808
> URL: https://issues.apache.org/jira/browse/SOLR-12808
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Affects Versions: 7.2.1, 7.4, 7.5
> Environment: Java: Oracle Corporation Java HotSpot(TM) 64-Bit Server 
> VM 1.8.0_162 25.162-b12
> OS: Linux Debian 8.11
>Reporter: Federico Grillini
>Priority: Major
> Attachments: text_analysis.png
>
>
> Hi,
> the default highlighter seems to work badly in conjunction with 
> PatternReplaceCharFilterFactory.
> My query is: {{verb_esame_num_tnv:(00031665 0035 9)}}
> The field type used by the field "verb_esame_num_tnv" is:
> {code:xml}
>  positionIncrementGap="100">
>
>pattern="^0*([0-9]+\s+[0-9]+\s+[0-9]+)$" replacement=" $1"/>
>replacement=" "/>
>   
>
> 
> {code}
> I've attached a screenshot of the text analysis.
> It seems that the highlighter uses the wrong offsets in the original text to 
> highlight the matched tokens.
> Hope this helps.
> Regards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12808) Wrong highlighting using PatternReplaceCharFilterFactory

2018-09-26 Thread Federico Grillini (JIRA)
Federico Grillini created SOLR-12808:


 Summary: Wrong highlighting using PatternReplaceCharFilterFactory
 Key: SOLR-12808
 URL: https://issues.apache.org/jira/browse/SOLR-12808
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: highlighter
Affects Versions: 7.5, 7.4, 7.2.1
 Environment: Java: Oracle Corporation Java HotSpot(TM) 64-Bit Server 
VM 1.8.0_162 25.162-b12
OS: Linux Debian 8.11
Reporter: Federico Grillini
 Attachments: text_analysis.png

Hi,
the default highlighter seems to work badly in conjunction with 
PatternReplaceCharFilterFactory.

My query is: {{verb_esame_num_tnv:(00031665 0035 9)}}

The field type used by the field "verb_esame_num_tnv" is:

{code:xml}

   
  
  
  
   

{code}

I've attached a screenshot of the text analysis.

It seems that the highlighter uses the wrong offsets in the original text to 
highlight the matched tokens.

Hope this helps.

Regards.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 895 - Still Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/895/

3 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:33590

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:33590
at 
__randomizedtesting.SeedInfo.seed([45EA633A637AE641:CDBE5CE0CD868BB9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:425)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1006)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22925 - Failure!

2018-09-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22925/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FF3885BFF79F8D85:7C4EDA4D21E68324]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.TestCloudRecovery.corruptedLogTest(TestCloudRecovery.java:201)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1996 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/temp/junit4-J0-20180926_074722_4644266551150281546859.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] OpenJDK 64-Bit Server VM 

Release Announcement: General Availability of JDK 11

2018-09-26 Thread Rory O'Donnell

Hi Uwe & Dawid,

1) Release Announcement: General Availability of JDK 11

 * JDK 11, the reference implementation of Java 11 and the first
   long-term support release produced under the six-month rapid-cadence
   release model [1][2], is now Generally Available.
 * GPL-licensed OpenJDK builds from Oracle are available here:
   https://jdk.java.net/11

This release includes seventeen features (a brief sketch of two of them follows the list):

 * 181: Nest-Based Access Control
 * 309: Dynamic Class-File Constants
 * 315: Improve Aarch64 Intrinsics
 * 318: Epsilon: A No-Op Garbage Collector
 * 320: Remove the Java EE and CORBA Modules
 * 321: HTTP Client (Standard)
 * 323: Local-Variable Syntax for Lambda Parameters
 * 324: Key Agreement with Curve25519 and Curve448
 * 327: Unicode 10
 * 328: Flight Recorder
 * 329: ChaCha20 and Poly1305 Cryptographic Algorithms
 * 330: Launch Single-File Source-Code Programs
 * 331: Low-Overhead Heap Profiling
 * 332: Transport Layer Security (TLS) 1.3
 * 333: ZGC: A Scalable Low-Latency Garbage Collector (Experimental)
 * 335: Deprecate the Nashorn JavaScript Engine
 * 336: Deprecate the Pack200 Tools and API

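As a brief, non-authoritative illustration of two of the features listed above
(JEP 321, the now-standard HTTP Client, and JEP 323, local-variable syntax for
lambda parameters), here is a minimal sketch that should compile and run on
JDK 11; the URL is only a placeholder:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.function.BinaryOperator;

public class Jdk11Sketch {
    public static void main(String[] args) throws Exception {
        // JEP 323: 'var' may now be used for lambda parameters.
        BinaryOperator<Integer> sum = (var a, var b) -> a + b;
        System.out.println("2 + 3 = " + sum.apply(2, 3));

        // JEP 321: java.net.http is a standard API as of JDK 11.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://jdk.java.net/11/")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP status: " + response.statusCode());
    }
}
{code}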

2) Quality Outreach Report for September 2018 is available

 * Quality Outreach report September 2018

Thanks to everyone who contributed to JDK 11 by downloading and testing 
the early-access builds. In particular, thanks to the following projects 
and their developers, who logged 18 issues in the JDK Bug System.


 * Netty
 * Eclipse Jetty
 * Apache Lucene
 * JUnit5
 * Apache Tomcat
 * Apache Ant
 * Apache POI
 * AssertJ
 * Eclipse Collections
 * Byte Buddy
 * RxJava

3) JDK 12 EA build 12, under both the GPL and Oracle EA licenses, is 
now available at http://jdk.java.net/12 .


 * Schedule, Status & Features
 o http://openjdk.java.net/projects/jdk/12/
 * Release Notes:
 o http://jdk.java.net/12/release-notes


Rgds, Rory

--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[jira] [Updated] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated SOLR-12798:
---
Issue Type: Improvement  (was: Bug)

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.
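
For reference, a minimal, hypothetical SolrJ sketch of the multipart-style
posting pattern described above: a ContentStreamUpdateRequest aimed at the Solr
Cell /update/extract handler, where the document body travels in the request
stream rather than in URL parameters. The base URL, file name, and literal
field are placeholders, not ManifoldCF's actual values:

{code:java}
import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractPostSketch {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
            ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
            // The document content is streamed in the POST body, so large
            // per-document metadata does not have to fit into the URL.
            req.addFile(new File("example.pdf"), "application/pdf");
            req.setParam("literal.id", "doc-1");
            req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
            client.request(req);
        }
    }
}
{code}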



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned SOLR-12798:
--

Assignee: Karl Wright

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628354#comment-16628354
 ] 

Karl Wright commented on SOLR-12798:


Ok, thanks for the clarification.

I will propose SolrJ changes to allow multipart form transport as a first-class 
citizen, using the ContentWriter construct, and attach those as a patch to this 
ticket.  The other fixes I will propose separately.  Or, if you want to tackle 
this, I'd be happy to hand it to you.  Please let me know.
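
To make the reference concrete, here is a rough, hypothetical sketch of a
ContentWriter-based payload, assuming RequestWriter.ContentWriter exposes
write(OutputStream) and getContentType() as in recent SolrJ; this is only an
illustration, not the patch being proposed:

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import org.apache.solr.client.solrj.request.RequestWriter;

// A minimal ContentWriter that streams a pre-built payload into the request body.
public class StringContentWriter implements RequestWriter.ContentWriter {
    private final String payload;
    private final String contentType;

    public StringContentWriter(String payload, String contentType) {
        this.payload = payload;
        this.contentType = contentType;
    }

    @Override
    public void write(OutputStream os) throws IOException {
        os.write(payload.getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public String getContentType() {
        return contentType;
    }
}
{code}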

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628338#comment-16628338
 ] 

Noble Paul commented on SOLR-12798:
---

{quote}So there's a fix for multipart post usage? Is this committed to master? 
How do you turn it on, or does it do this automatically?
{quote}

I never bothered with multipart post. I wanted to ensure that we don't write 
the docs to memory before we post to the server. That's the fix. As long as you 
can generate docs in a streaming fashion, there is no limit to the number of 
docs that we can write in a single request from the client.
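
A rough sketch of that streaming pattern from the client side, assuming
UpdateRequest.setDocIterator is available as in recent SolrJ; the base URL and
the document generator below are placeholders:

{code:java}
import java.util.Iterator;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class StreamingUpdateSketch {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
            // Documents are produced lazily; nothing is buffered up front.
            Iterator<SolrInputDocument> docs = new Iterator<SolrInputDocument>() {
                int i = 0;
                @Override public boolean hasNext() { return i < 1_000_000; }
                @Override public SolrInputDocument next() {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", "doc-" + i++);
                    return doc;
                }
            };
            UpdateRequest req = new UpdateRequest();
            req.setDocIterator(docs); // docs are written out as the request body streams
            client.request(req);
            client.commit();
        }
    }
}
{code}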

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud

2018-09-26 Thread Mine_Orange (JIRA)
Mine_Orange created SOLR-12807:
--

 Summary: out of memory error due to a lot of zk watchers in solr 
cloud 
 Key: SOLR-12807
 URL: https://issues.apache.org/jira/browse/SOLR-12807
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.1
Reporter: Mine_Orange


Analyzing the heap dump, we found a very large number of watchers in 
childWatches of ZKWatchManager, occupying nearly 1.8 GB; the znode being 
watched is /overseer/collection-queue-work. We have confirmed that this is not 
caused by frequent use of the collection API, and the network is normal.

The instance is the overseer leader of a Solr cluster and has not been 
restarted for more than a year; we suspect that the watchers grow over time.

Our Solr version is 6.1 and our ZooKeeper version is 3.4.9.
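
As a side note for anyone investigating a similar build-up: one way to confirm
watch counts from the ZooKeeper side is the "wchs" four-letter admin command. A
minimal sketch follows, assuming the command is enabled on the ensemble; host
and port are placeholders:

{code:java}
import java.io.InputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sends ZooKeeper's "wchs" four-letter command and prints the watch summary.
public class ZkWatchSummary {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 2181;
        try (Socket socket = new Socket(host, port)) {
            socket.getOutputStream().write("wchs\n".getBytes(StandardCharsets.US_ASCII));
            socket.getOutputStream().flush();
            InputStream in = socket.getInputStream();
            StringBuilder reply = new StringBuilder();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                reply.append(new String(buf, 0, n, StandardCharsets.US_ASCII));
            }
            System.out.print(reply); // e.g. connection, watch, and path counts
        }
    }
}
{code}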



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 28 - Still Unstable

2018-09-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/28/

4 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for new leader null Live Nodes: [127.0.0.1:33818_solr, 
127.0.0.1:34947_solr, 127.0.0.1:46091_solr] Last available state: 
DocCollection(collection1//collections/collection1/state.json/15)={
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{
"range":"8000-7fff",   "state":"active",   "replicas":{
"core_node62":{   "core":"collection1_shard1_replica_n61",
"base_url":"http://127.0.0.1:36480/solr",
"node_name":"127.0.0.1:36480_solr",   "state":"down",
"type":"NRT",   "force_set_state":"false"}, "core_node64":{
  "core":"collection1_shard1_replica_n63",
"base_url":"http://127.0.0.1:33818/solr",
"node_name":"127.0.0.1:33818_solr",   "state":"down",
"type":"NRT",   "force_set_state":"false"}, "core_node66":{
  "core":"collection1_shard1_replica_n65",
"base_url":"http://127.0.0.1:34947/solr",
"node_name":"127.0.0.1:34947_solr",   "state":"active",
"type":"NRT",   "force_set_state":"false"}}}},
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new leader
null
Live Nodes: [127.0.0.1:33818_solr, 127.0.0.1:34947_solr, 127.0.0.1:46091_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/15)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node62":{
          "core":"collection1_shard1_replica_n61",
          "base_url":"http://127.0.0.1:36480/solr",
          "node_name":"127.0.0.1:36480_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node64":{
          "core":"collection1_shard1_replica_n63",
          "base_url":"http://127.0.0.1:33818/solr",
          "node_name":"127.0.0.1:33818_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node66":{
          "core":"collection1_shard1_replica_n65",
          "base_url":"http://127.0.0.1:34947/solr",
          "node_name":"127.0.0.1:34947_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([37256A5AA4376587:9F3976E0667751AD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-26 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16628307#comment-16628307
 ] 

Karl Wright commented on SOLR-12798:


{quote}
this no longer is the case
{quote}

That's good news; I can change things in ManifoldCF accordingly, since we no 
longer have to enforce a maximum document size limit in that case.

{quote}
I have fixed this problem in the current SolrJ
{quote}

So there's a fix for multipart post usage?  Is this committed to master?  How 
do you turn it on, or does it do this automatically?

Once that's there, it would be straightforward to add my other fixes; I'm a 
Lucene/Solr committer now as well, so I can ticket and propose them and they 
will get done this time.


> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Priority: Major
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which 
> generate HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12806) when strict=false is specified prioritize node allocation using non strict rules

2018-09-26 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12806:
-

 Summary: when strict=false is specified prioritize node allocation 
using non strict rules
 Key: SOLR-12806
 URL: https://issues.apache.org/jira/browse/SOLR-12806
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Noble Paul


For instance, suppose a policy rule such as the following exists:
{code:java}
{"replica" : "#ALL", "freedisk" : "<500", "strict" : false}
{code}
 

If no nodes have {{freedisk}} of more than 500 GB, Solr ignores this rule 
completely and assigns nodes anyway. Ideally it should still prefer a node with 
{{freedisk}} of 450 GB over a node that has {{freedisk}} of 400 GB.
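
For context, a rough sketch of how such a rule can be installed from SolrJ via
the V2 autoscaling endpoint, assuming V2Request.Builder with withMethod and
withPayload as in SolrJ 7.x; the payload simply mirrors the example rule above
and the base URL is a placeholder:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.V2Request;

public class SetClusterPolicySketch {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            String payload = "{\"set-cluster-policy\": ["
                + "{\"replica\": \"#ALL\", \"freedisk\": \"<500\", \"strict\": false}"
                + "]}";
            // POST the policy to the cluster-wide autoscaling endpoint.
            V2Request req = new V2Request.Builder("/cluster/autoscaling")
                .withMethod(SolrRequest.METHOD.POST)
                .withPayload(payload)
                .build();
            client.request(req);
        }
    }
}
{code}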



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12806) when strict=false is specified, prioritize node allocation using non strict rules

2018-09-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12806:
--
Summary: when strict=false is specified, prioritize node allocation using 
non strict rules  (was: when strict=false is specified prioritize node 
allocation using non strict rules)

> when strict=false is specified, prioritize node allocation using non strict 
> rules
> -
>
> Key: SOLR-12806
> URL: https://issues.apache.org/jira/browse/SOLR-12806
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> For instance, suppose a policy rule such as the following exists:
> {code:java}
> {"replica" : "#ALL", "freedisk" : "<500", "strict" : false}
> {code}
>  
> If no nodes have {{freedisk}} of more than 500 GB, Solr ignores this rule 
> completely and assigns nodes anyway. Ideally it should still prefer a node with 
> {{freedisk}} of 450 GB over a node that has {{freedisk}} of 400 GB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org