[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 813 - Still Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/813/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=715, 
name=cdcr-replicator-201-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=715, name=cdcr-replicator-201-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1611391313648812032 != 1611391313647763456
at __randomizedtesting.SeedInfo.seed([7E5970A0C6AFE24E]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=13041, 
name=cdcr-replicator-6186-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=13041, name=cdcr-replicator-6186-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1611386873032212480 != 1611386872339103744
at __randomizedtesting.SeedInfo.seed([7E5970A0C6AFE24E]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13366 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_7E5970A0C6AFE24E-001/init-core-data-001
   [junit4]   2> 1430649 INFO  
(SUITE-CdcrBidirectionalTest-seed#[7E5970A0C6AFE24E]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 1430650 INFO  
(SUITE-CdcrBidirectionalTest-seed#[7E5970A0C6AFE24E]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 1430650 INFO  
(SUITE-CdcrBidirectionalTest-seed#[7E5970A0C6AFE24E]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 1430651 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[7E5970A0C6AFE24E]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 1430652 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[7E5970A0C6AFE24E]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_7E5970A0C6AFE24E-001/cdcr-cluster2-001
   [junit4]   2> 1430652 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[7E5970A0C6AFE24E]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1430652 INFO  (Thread-2782) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1430652 INFO  (Thread-2782) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1430654 ERROR (Thread-2782) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1430752 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[7E5970A0C6AFE24E]) [] 
o.a.s.c.ZkTestServer start zk server on port:49671
   [junit4]   2> 1430754 INFO  (zkConnectionManagerCallback-3365-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1430758 INFO  (jetty-launcher-3362-thread-1) [] 

[jira] [Commented] (SOLR-12718) StreamContext ctor should always take a SolrClientCache

2018-09-12 Thread Peter Cseh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611779#comment-16611779
 ] 

Peter Cseh commented on SOLR-12718:
---

Opened up a PR with a patch.
This change breaks everyone's existing usage of StreamContext. Should I add a 
constructor with no parameters and keep the setter as well, for compatibility 
reasons?
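
A minimal sketch of the backward-compatible shape being asked about (hypothetical; 
the real StreamContext carries much more state and the actual PR may differ):

{code:java}
import org.apache.solr.client.solrj.io.SolrClientCache;

// Hypothetical sketch only: a new constructor that takes the cache,
// while the existing no-arg constructor and setter stay for compatibility.
public class StreamContext {
  private SolrClientCache solrClientCache;

  public StreamContext() {
    // kept for existing callers; the cache stays null until set
  }

  public StreamContext(SolrClientCache solrClientCache) {
    this.solrClientCache = solrClientCache;
  }

  public void setSolrClientCache(SolrClientCache solrClientCache) {
    this.solrClientCache = solrClientCache;
  }

  public SolrClientCache getSolrClientCache() {
    return solrClientCache;
  }
}
{code}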

> StreamContext ctor should always take a SolrClientCache
> ---
>
> Key: SOLR-12718
> URL: https://issues.apache.org/jira/browse/SOLR-12718
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>  Labels: newdev, streaming
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {code:java}
> StreamExpression expression = StreamExpressionParser.parse(expr);
> TupleStream stream = new CloudSolrStream(expression, factory);
> SolrClientCache solrClientCache = new SolrClientCache();
> StreamContext streamContext = new StreamContext();
> streamContext.setSolrClientCache(solrClientCache);
> stream.setStreamContext(streamContext);
> List tuples = getTuples(stream);{code}
>  
> If we don't call {{streamContext.setSolrClientCache}} we will get an NPE. 
> Seems like we should always have the user pass a solrClientCache to 
> StreamContext's ctor?
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7862) Should BKD cells store their min/max packed values?

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611644#comment-16611644
 ] 

ASF subversion and git services commented on LUCENE-7862:
-

Commit 7c9b8b4b6167dce9ff6967d88a3a596e041671d6 in lucene-solr's branch 
refs/heads/branch_7_5 from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7c9b8b4 ]

LUCENE-7862:Change entry in NOTES.txt to the right lucene version


> Should BKD cells store their min/max packed values?
> ---
>
> Key: LUCENE-7862
> URL: https://issues.apache.org/jira/browse/LUCENE-7862
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 7.5, master (8.0)
>
> Attachments: LUCENE-7862.patch, LUCENE-7862.patch, LUCENE-7862.patch
>
>
> The index of the BKD tree already makes it possible to know the lower and 
> upper bounds of values in a given dimension. However, the actual range of 
> values might be narrower than what the index tells us, especially if 
> splitting on one dimension reduces the range of values in at least one other 
> dimension. For instance, this tends to be the case with range fields: since 
> we enforce that lower bounds are less than upper bounds, splitting on one 
> dimension will also affect the range of values in the other dimension.
> So I'm wondering whether we should store the actual range of values for each 
> dimension in leaf blocks; this would hopefully allow us to figure out that 
> either none or all values in a block match without having to check them all.
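
A hedged sketch of that idea (illustrative Java, not Lucene's actual BKD code), 
assuming each leaf block also recorded its actual per-dimension min/max:

{code:java}
// Hypothetical sketch: with per-leaf min/max stored, a 1D range query can decide
// "all match" or "none match" for a whole leaf without visiting every value.
final class LeafBoundsSketch {
  enum Relation { CELL_OUTSIDE_QUERY, CELL_INSIDE_QUERY, CELL_CROSSES_QUERY }

  static Relation compare(long queryMin, long queryMax, long leafMin, long leafMax) {
    if (leafMax < queryMin || leafMin > queryMax) {
      return Relation.CELL_OUTSIDE_QUERY;  // no value in this leaf can match
    }
    if (queryMin <= leafMin && leafMax <= queryMax) {
      return Relation.CELL_INSIDE_QUERY;   // every value matches; skip per-value checks
    }
    return Relation.CELL_CROSSES_QUERY;    // must still check the leaf's values one by one
  }
}
{code}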






[jira] [Commented] (LUCENE-7862) Should BKD cells store their min/max packed values?

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16611642#comment-16611642
 ] 

ASF subversion and git services commented on LUCENE-7862:
-

Commit 0789a77c2590f716fc3cedb247309068c3fc5d85 in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0789a77 ]

LUCENE-7862:Change entry in NOTES.txt to the right lucene version


> Should BKD cells store their min/max packed values?
> ---
>
> Key: LUCENE-7862
> URL: https://issues.apache.org/jira/browse/LUCENE-7862
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 7.5, master (8.0)
>
> Attachments: LUCENE-7862.patch, LUCENE-7862.patch, LUCENE-7862.patch
>
>
> The index of the BKD tree already makes it possible to know the lower and 
> upper bounds of values in a given dimension. However, the actual range of 
> values might be narrower than what the index tells us, especially if 
> splitting on one dimension reduces the range of values in at least one other 
> dimension. For instance, this tends to be the case with range fields: since 
> we enforce that lower bounds are less than upper bounds, splitting on one 
> dimension will also affect the range of values in the other dimension.
> So I'm wondering whether we should store the actual range of values for each 
> dimension in leaf blocks; this would hopefully allow us to figure out that 
> either none or all values in a block match without having to check them all.






[jira] [Resolved] (SOLR-12607) Investigate ShardSplitTest failures

2018-09-12 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12607.
--
Resolution: Fixed

I'm resolving this because all changes made here have gone into 7.5. I'll open a 
follow-up issue for the remaining test failures.

> Investigate ShardSplitTest failures
> ---
>
> Key: SOLR-12607
> URL: https://issues.apache.org/jira/browse/SOLR-12607
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.5, master (8.0)
>
>
> There have been many recent ShardSplitTest failures. 
> According to http://fucit.org/solr-jenkins-reports/failure-report.html
> {code}
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: testSplitWithChaosMonkey
> Failures: 72.32% (81 / 112)
> Class: org.apache.solr.cloud.api.collections.ShardSplitTest
> Method: test
> Failures: 26.79% (30 / 112)
> {code} 






[jira] [Comment Edited] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-09-12 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612403#comment-16612403
 ] 

Cao Manh Dat edited comment on SOLR-12639 at 9/12/18 4:12 PM:
--

Hi [~risdenk], all the terminology like Kerberos/NTLM/Spnego still spins my 
head when I read about it. But I have been working on back-porting the 
{{KerberosPlugin}} to use the HTTP/2-based HttpClient implementation 
(jira/http2), and the related tests are still passing so far (if you can test 
it manually and verify my changes, that would be great).

I also asked the Jetty community for a hand with adding SPNEGO authentication 
support to the Jetty client, as well as with reviewing my changes on 
jira/http2. They are making progress on that: 
[https://github.com/eclipse/jetty.project/issues/2868]. Therefore, by the time 
jira/http2 gets merged it will work as it does now over HTTP 1.1. Basically, 
what used to work will still work with {{Http2SolrClient}}.

{quote}

It looks like most servers will fall back to HTTP 1.1 if Kerberos 
authentication is required

{quote}

I don't think so, since all the steps of the authorization are done through 
HTTP headers, which are independent of the HTTP protocol version.


was (Author: caomanhdat):
Hi [~risdenk], all the terminology like Kerberos/NTLM/Spnego still spins my 
head when I read about it. But I have been working on back-porting the 
{{KerberosPlugin}} to use the HTTP/2-based HttpClient implementation 
(jira/http2), and the related tests are still passing so far (if you can test 
it manually and verify my changes, that would be great).

I also asked the Jetty community for a hand with adding SPNEGO authentication 
support to the Jetty client, as well as with reviewing my changes on 
jira/http2. They are making progress on that: 
[https://github.com/eclipse/jetty.project/issues/2868]. Therefore, by the time 
jira/http2 gets merged it will work as it does now over HTTP 1.1. Basically, 
what used to work will still work with {{Http2SolrClient}}.

> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket aims to replace/add HTTP/2 support in Solr by using the Jetty 
> HTTP client. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and Mark 
> Miller's starburst branch. I will try to keep jira/http2 as close to master 
> as possible (this will make merging in the future easier). At the same time, 
> changes in the starburst branch will be split into smaller/testable parts 
> and then pushed to the jira/http2 branch. 
> Anyone who is interested in HTTP/2 for Solr can use the jira/http2 branch, 
> but there is no backward-compatibility guarantee.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22846 - Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22846/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=23547, 
name=cdcr-replicator-7183-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=23547, name=cdcr-replicator-7183-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1611417101977780224 != 1611417101449297920
at __randomizedtesting.SeedInfo.seed([76261E2580EE0685]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14422 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_76261E2580EE0685-001/init-core-data-001
   [junit4]   2> 2647536 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[76261E2580EE0685]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_76261E2580EE0685-001/cdcr-cluster2-001
   [junit4]   2> 2647536 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[76261E2580EE0685]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2647537 INFO  (Thread-3876) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2647537 INFO  (Thread-3876) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2647538 ERROR (Thread-3876) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 2647637 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[76261E2580EE0685]) [] 
o.a.s.c.ZkTestServer start zk server on port:35315
   [junit4]   2> 2647639 INFO  (zkConnectionManagerCallback-9660-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 2647643 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 2647644 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 2647644 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 2647644 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 2647644 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@d7fd71{/solr,null,AVAILABLE}
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@31ff44{SSL,[ssl, 
http/1.1]}{127.0.0.1:35393}
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.e.j.s.Server Started @2647677ms
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=35393}
   [junit4]   2> 2647645 ERROR (jetty-launcher-9657-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 2647645 INFO  (jetty-launcher-9657-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2018-09-12T15:44:36.573Z
   [junit4]   2> 2647648 INFO  

[jira] [Updated] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12361:

Attachment: SOLR-12361_ref_guide.patch

> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.
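
A hedged sketch of what that proposal reads like in SolrJ terms (field names 
such as "reviews" are made up here; whether addField accepts a child document 
this way is exactly what this issue introduces, so treat it as an illustration 
of the new usage rather than established API):

{code:java}
SolrInputDocument child = new SolrInputDocument();
child.addField("id", "book-1-review-1");
child.addField("content_t", "A great read");

SolrInputDocument parent = new SolrInputDocument();
parent.addField("id", "book-1");
// Instead of parent.addChildDocument(child), the child is held as the value of a
// named field, so the "reviews" relationship to the parent is preserved.
parent.addField("reviews", child);
{code}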






[jira] [Comment Edited] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-09-12 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612403#comment-16612403
 ] 

Cao Manh Dat edited comment on SOLR-12639 at 9/12/18 4:09 PM:
--

Hi [~risdenk], all the terminology like Kerberos/NTLM/Spnego still spins my 
head when I read about it. But I have been working on back-porting the 
{{KerberosPlugin}} to use the HTTP/2-based HttpClient implementation 
(jira/http2), and the related tests are still passing so far (if you can test 
it manually and verify my changes, that would be great).

I also asked the Jetty community for a hand with adding SPNEGO authentication 
support to the Jetty client, as well as with reviewing my changes on 
jira/http2. They are making progress on that: 
[https://github.com/eclipse/jetty.project/issues/2868]. Therefore, by the time 
jira/http2 gets merged it will work as it does now over HTTP 1.1. Basically, 
what used to work will still work with {{Http2SolrClient}}.


was (Author: caomanhdat):
Hi [~risdenk], all the terminology like Kerberos/NTLM/Spnego still spins my 
head when I read about it. But I have been working on back-porting the 
{{KerberosPlugin}} to use the HTTP/2-based HttpClient implementation 
(jira/http2), and the related tests are still passing so far (if you can test 
it manually and verify my changes, that would be great).

I also asked the Jetty community for a hand with adding SPNEGO authentication 
support to the Jetty client, as well as with reviewing my changes on 
jira/http2. They are making progress on that. Therefore, by the time 
jira/http2 gets merged it will work as it does now over HTTP 1.1. Basically, 
what used to work will still work with {{Http2SolrClient}}.

> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket aims to replace/add HTTP/2 support in Solr by using the Jetty 
> HTTP client. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and Mark 
> Miller's starburst branch. I will try to keep jira/http2 as close to master 
> as possible (this will make merging in the future easier). At the same time, 
> changes in the starburst branch will be split into smaller/testable parts 
> and then pushed to the jira/http2 branch. 
> Anyone who is interested in HTTP/2 for Solr can use the jira/http2 branch, 
> but there is no backward-compatibility guarantee.






[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-09-12 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612419#comment-16612419
 ] 

David Smiley commented on SOLR-12362:
-

While working on a bit of documentation, I can't figure out why we need the 
"anonChildDocs" param flag here, yet not for XML.  I know there were some 
syntax ambiguities but I thought we solved them by looking for the unique key 
in a child map to differentiate an atomic update from a child doc.  The code 
makes it seem it's a mere matter of placing the doc on the SolrInputDocument 
anonymously vs. as a field; but it can't be just that, or we wouldn't have 
needed this in the first place (we didn't for XML).  Maybe it was only an 
issue when the document is not solr-update JSON but sliced with the "split" 
param?  Can you help remind me [~moshebla]?  I'm hoping that regardless of 
what the reason is, we can outright remove it in Solr 8.
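
A hedged sketch of the disambiguation described above (the helper name and 
signature are illustrative, not Solr's actual JsonLoader code):

{code:java}
import java.util.Map;

final class ChildDocSketch {
  // A nested map carrying the unique key field looks like a child document;
  // otherwise it is treated as an atomic-update instruction such as
  // {"set": ...} or {"add": ...}.
  static boolean looksLikeChildDoc(Map<String, Object> nested, String uniqueKeyField) {
    return nested.containsKey(uniqueKeyField);
  }
}
{code}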

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.






[jira] [Updated] (SOLR-12055) Enable async logging by default

2018-09-12 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12055:
--
Attachment: SOLR-12055-rollback.patch

> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12055-rollback.patch, 
> SOLR-12055-slh-interim1.patch, SOLR-12055-slh-interim1.patch, 
> SOLR-12055.patch, SOLR-12055.patch
>
>
> When SOLR-7887 is done, switching to async logging will be a simple change to 
> the config files for log4j2. This will reduce contention and increase 
> throughput generally, and logging throughput in particular.
> There's a discussion of the pros/cons here: 
> https://logging.apache.org/log4j/2.0/manual/async.html
> An alternative is to put a note in the Ref Guide about how to enable async 
> logging.
> I guess even if we enable async by default, the ref guide still needs a note 
> about how to _disable_ it.
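
For illustration only, one of the Log4j2 async options described on that manual 
page is the "all loggers async" context selector; this is a hedged example of 
that route, not necessarily the config-file change this issue will make to Solr:

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class AsyncLoggingExample {
  public static void main(String[] args) {
    // Equivalent to passing -Dlog4j2.contextSelector=... on the JVM command line;
    // it must be set before the first logger is created. Removing it disables
    // the all-async mode again.
    System.setProperty("log4j2.contextSelector",
        "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

    Logger log = LogManager.getLogger(AsyncLoggingExample.class);
    log.info("logged through the async ring buffer");
  }
}
{code}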






[jira] [Commented] (SOLR-12055) Enable async logging by default

2018-09-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612321#comment-16612321
 ] 

Erick Erickson commented on SOLR-12055:
---

Forgot to attach the rollback patch last week; it's there now. People can 
probably ignore it.

I'll be starting this up again locally today; anyone with knowledge here, 
please chime in.

> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12055-rollback.patch, 
> SOLR-12055-slh-interim1.patch, SOLR-12055-slh-interim1.patch, 
> SOLR-12055.patch, SOLR-12055.patch
>
>
> When SOLR-7887 is done, switching to async logging will be a simple change to 
> the config files for log4j2. This will reduce contention and increase 
> throughput generally, and logging throughput in particular.
> There's a discussion of the pros/cons here: 
> https://logging.apache.org/log4j/2.0/manual/async.html
> An alternative is to put a note in the Ref Guide about how to enable async 
> logging.
> I guess even if we enable async by default, the ref guide still needs a note 
> about how to _disable_ it.






[jira] [Commented] (SOLR-12639) Umbrella JIRA for adding support HTTP/2, jira/http2

2018-09-12 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612403#comment-16612403
 ] 

Cao Manh Dat commented on SOLR-12639:
-

Hi [~risdenk], all the terminology like Kerberos/NTLM/Spnego still spins my 
head when I read about it. But I have been working on back-porting the 
{{KerberosPlugin}} to use the HTTP/2-based HttpClient implementation 
(jira/http2), and the related tests are still passing so far (if you can test 
it manually and verify my changes, that would be great).

I also asked the Jetty community for a hand with adding SPNEGO authentication 
support to the Jetty client, as well as with reviewing my changes on 
jira/http2. They are making progress on that. Therefore, by the time 
jira/http2 gets merged it will work as it does now over HTTP 1.1. Basically, 
what used to work will still work with {{Http2SolrClient}}.

> Umbrella JIRA for adding support HTTP/2, jira/http2
> ---
>
> Key: SOLR-12639
> URL: https://issues.apache.org/jira/browse/SOLR-12639
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: http2
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> This ticket aims to replace/add HTTP/2 support in Solr by using the Jetty 
> HTTP client. All the work will be committed to the jira/http2 branch. This 
> branch will serve as a stepping stone between the master branch and Mark 
> Miller's starburst branch. I will try to keep jira/http2 as close to master 
> as possible (this will make merging in the future easier). At the same time, 
> changes in the starburst branch will be split into smaller/testable parts 
> and then pushed to the jira/http2 branch. 
> Anyone who is interested in HTTP/2 for Solr can use the jira/http2 branch, 
> but there is no backward-compatibility guarantee.






[JENKINS] Lucene-Solr-SmokeRelease-7.5 - Build # 1 - Failure

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.5/1/

No tests ran.

Build Log:
[...truncated 23300 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 25: section 
title out of sequence: expected level 3, got level 4
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 92: section 
title out of sequence: expected level 3, got level 4
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 138: section 
title out of sequence: expected level 3, got level 4
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 25: section 
title out of sequence: expected levels 0 or 1, got level 2
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 92: section 
title out of sequence: expected levels 0 or 1, got level 2
[asciidoctor:convert] asciidoctor: WARNING: simulations.adoc: line 138: section 
title out of sequence: expected levels 0 or 1, got level 2
 [java] Processed 2342 links (1893 relative) to 3144 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.5/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:

[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #302: POMs out of sync

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/302/

No tests ran.

Build Log:
[...truncated 19572 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:672: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/build.xml:411: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:2261:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:1719:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:650:
 Error deploying artifact 'org.apache.lucene:lucene-queries:jar': Error 
installing artifact's metadata: Error while deploying metadata: Error 
transferring file

Total time: 8 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Created] (SOLR-12766) When retrying internal requests, backoff only once for the full batch of retries

2018-09-12 Thread JIRA
Tomás Fernández Löbbe created SOLR-12766:


 Summary: When retrying internal requests, backoff only once for 
the full batch of retries
 Key: SOLR-12766
 URL: https://issues.apache.org/jira/browse/SOLR-12766
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe


We currently wait for each internal retry request ({{TOLEADER}} or 
{{FROMLEADER}} requests). This can cause a long wait time when retrying many 
requests and can time out the client. We should instead wait once and retry the 
full batch of errors.
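
A hypothetical sketch of the proposed shape (class and method names are 
illustrative, not Solr's actual retry code): back off once for the whole batch 
of failed forwards, then re-send each one, instead of sleeping before every 
individual retry.

{code:java}
import java.util.List;

final class BatchRetrySketch {
  static void retryFailedBatch(List<Runnable> failedForwards, int attempt)
      throws InterruptedException {
    long backoffMs = Math.min(2000L, 250L << attempt);  // capped exponential backoff
    Thread.sleep(backoffMs);                            // single wait for the batch
    for (Runnable resend : failedForwards) {
      resend.run();                                     // re-send one failed request
    }
  }
}
{code}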






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2060 - Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2060/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.metrics.reporters.solr.SolrShardReporterTest.test

Error Message:
Error from server at http://127.0.0.1:33247: At least one of the node(s) 
specified [127.0.0.1:49911_] are not currently active in [], no action taken.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:33247: At least one of the node(s) specified 
[127.0.0.1:49911_] are not currently active in [], no action taken.
at 
__randomizedtesting.SeedInfo.seed([D9B83986EA2BDE2C:51EC065C44D7B3D4]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:425)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1006)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612698#comment-16612698
 ] 

Cassandra Targett commented on SOLR-12361:
--

I think the patch is good and helps explain this type of document in better 
detail. The only change I would make is to the heading "Schema notes": 
capitalize "notes" -> "Notes" for headline case in our section titles.

> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.






[jira] [Commented] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612654#comment-16612654
 ] 

David Smiley commented on SOLR-12361:
-

In the attached patch (the one with the "ref_guide.patch" suffix), I gathered 
together issues related to nested docs, including this one, under "New 
Features".  I tweaked the wording a little.  I then updated 
"uploading-data-with-index-handlers.adoc" to show the old (anonymous) and new 
(labelled) ways of associating child docs in XML & JSON. [~ctargett] could you 
please review?  I hope this isn't too late for 7.5.

> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.






[jira] [Commented] (SOLR-12753) Async logging ring buffer and OOM error

2018-09-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612627#comment-16612627
 ] 

Erick Erickson commented on SOLR-12753:
---

Since I backed out the async logging, I'll probably fold this into 12055 when I 
check it back in after the 7.5 release.

> Async logging ring buffer and OOM error
> ---
>
> Key: SOLR-12753
> URL: https://issues.apache.org/jira/browse/SOLR-12753
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Affects Versions: 7.5
>Reporter: Andrzej Bialecki 
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12753.patch
>
>
> I’m using a simulated environment for autoscaling tests, which may create 
> some pretty degenerate cases (like collections with 50,000 replicas and 
> Policy calculations over these, times 500 nodes).
> I noticed that when I turned on debug logging I occasionally would get an OOM 
> error, and the heap dump showed that the biggest objects were a bunch of 
> extremely large strings in the async logger’s ring buffer. These strings were 
> admittedly extremely large (a million chars or so), but the previously used 
> sync logging didn’t have any issue with them, because they were consumed one 
> by one.
> For sure, Solr should not attempt to log multi-megabyte data. But I also feel 
> like the framework could perhaps help here by enforcing large but sane limits 
> on the maximum size of log messages.
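
A hedged sketch of the caller-side guard suggested above; this is not an 
existing Log4j2 or Solr facility, just an illustration of capping a message 
before it reaches the async appender's ring buffer:

{code:java}
final class LogMessageCap {
  static String capForLogging(String msg, int maxChars) {
    if (msg == null || msg.length() <= maxChars) {
      return msg;
    }
    return msg.substring(0, maxChars)
        + "... [" + (msg.length() - maxChars) + " chars truncated]";
  }
}
{code}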






[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 157 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/157/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testDistributions

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([63027E94C0B444C4:DCFD3F3E1E4EA458]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testDistributions(MathExpressionTest.java:1543)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15754 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> Creating dataDir: 

Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1640 - Failure

2018-09-12 Thread Dawid Weiss
This is an OOM and RAMDirectory -- should this test maybe suppress
RAMDirectory entirely?

D.
On Wed, Sep 12, 2018 at 4:17 PM Apache Jenkins Server
 wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1640/
>
> 2 tests failed.
> FAILED:  org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates
>
> Error Message:
> Java heap space
>
> Stack Trace:
> java.lang.OutOfMemoryError: Java heap space
> at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84)
> at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57)
> at 
> org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168)
> at 
> org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154)
> at 
> org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
> at 
> org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addBinaryField(Lucene70DocValuesConsumer.java:348)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:114)
> at 
> org.apache.lucene.index.ReadersAndUpdates.handleDVUpdates(ReadersAndUpdates.java:330)
> at 
> org.apache.lucene.index.ReadersAndUpdates.writeFieldUpdates(ReadersAndUpdates.java:570)
> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:626)
> at 
> org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:299)
> at 
> org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2600)
> at 
> org.apache.lucene.index.IndexWriter$$Lambda$110/1815498900.process(Unknown 
> Source)
> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5097)
> at 
> org.apache.lucene.index.IndexWriter.updateDocValues(IndexWriter.java:1783)
> at 
> org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates(TestBinaryDocValuesUpdates.java:1324)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>
>
> FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>
> Error Message:
> No live SolrServers available to handle this 
> request:[http://127.0.0.1:41156/solr/MoveReplicaHDFSTest_failed_coll_true, 
> http://127.0.0.1:46777/solr/MoveReplicaHDFSTest_failed_coll_true]
>
> Stack Trace:
> org.apache.solr.client.solrj.SolrServerException: No live SolrServers 
> available to handle this 
> request:[http://127.0.0.1:41156/solr/MoveReplicaHDFSTest_failed_coll_true, 
> http://127.0.0.1:46777/solr/MoveReplicaHDFSTest_failed_coll_true]
> at 
> __randomizedtesting.SeedInfo.seed([A78659BA58953D1C:D4B8A48EF46E8CC]:0)
> at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
> at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
> at 

[jira] [Updated] (SOLR-12766) When retrying internal requests, backoff only once for the full batch of retries

2018-09-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12766:
-
Attachment: SOLR-12766.patch

> When retrying internal requests, backoff only once for the full batch of 
> retries
> 
>
> Key: SOLR-12766
> URL: https://issues.apache.org/jira/browse/SOLR-12766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12766.patch
>
>
> We currently wait for each internal retry request ({{TOLEADER}} or 
> {{FROMLEADER}} requests). This can cause a long wait time when retrying many 
> requests and can time out the client. We should instead wait once and retry 
> the full batch of errors.
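
For illustration, here is a minimal, hypothetical sketch of the batching idea: back off once for a whole batch of retried updates instead of sleeping before every individual retry. The class and method names below are invented for the example and do not reflect Solr's actual implementation in the attached patch.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed behavior, not Solr code: one backoff per round
// of retries rather than one backoff per retried request.
class BatchRetrySketch {
  interface Update { boolean send(); }  // stands in for forwarding one update

  static void retryBatch(List<Update> failed, int maxRounds, long backoffMs)
      throws InterruptedException {
    List<Update> pending = new ArrayList<>(failed);
    for (int round = 0; round < maxRounds && !pending.isEmpty(); round++) {
      Thread.sleep(backoffMs);               // single backoff for the whole batch
      List<Update> stillFailing = new ArrayList<>();
      for (Update u : pending) {
        if (!u.send()) {
          stillFailing.add(u);               // retry again in the next round
        }
      }
      pending = stillFailing;
    }
  }
}
{code}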



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.5 - Build # 2 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.5/2/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
Timed out waiting for replica core_node48 (1536770121345) to replicate from 
leader core_node44 (0)

Stack Trace:
java.lang.AssertionError: Timed out waiting for replica core_node48 
(1536770121345) to replicate from leader core_node44 (0)
at 
__randomizedtesting.SeedInfo.seed([8B07B010EAEAAA35:3538FCA4416C7CD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForReplicationFromReplicas(AbstractFullDistribZkTestBase.java:2146)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:211)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-10290) New Publication Model for Solr Reference Guide

2018-09-12 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612614#comment-16612614
 ] 

Christine Poerschke commented on SOLR-10290:


Hello. Not sure if this is the best place to ask or if it is a known issue ... 
I stumbled across odd-looking javadocs links in the .pdf version of the Solr 
Reference Guide e.g. in 7.4 search for (say) {{SynonymMap.Builder.html}} or 
{{SynonymMap.Builder}} and it shows (literally)
{code}
#{pdf-lucene-javadocs}/analyzers- 
common/org/apache/lucene/analysis/synonym/SynonymMap.Builder.html[SynonymMap.Builder]
{code}
whereas the online equivalent is fine e.g. 
http://lucene.apache.org/solr/guide/7_4/filter-descriptions.html#synonym-graph-filter
 in this case. The issue seems to be specific to {{\{lucene-javadocs\}}}; i.e., 
the {{\{solr-javadocs\}}} links I checked were fine.

> New Publication Model for Solr Reference Guide
> --
>
> Key: SOLR-10290
> URL: https://issues.apache.org/jira/browse/SOLR-10290
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
> Attachments: sitemap.patch, sitemap.patch
>
>
> The current Solr Ref Guide is hosted at cwiki.apache.org, a Confluence 
> installation. There are numerous reasons to be dissatisfied with the current 
> setup, a few of which are:
> * Confluence as a product is no longer designed for our use case and our type 
> of content. 
> * The writing/editing experience is painful and a barrier for all users, who 
> need to learn a lot of Confluence-specific syntax just to help out with some 
> content. 
> * Non-committers can't really help improve documentation except to point out 
> problems and hope someone else fixes them.
> * We really can't maintain online documentation for different versions. Users 
> on versions other than the one that hasn't been released yet are only given a 
> PDF to work with.
> I made a proposal in Aug 2016 ([email 
> link|http://mail-archives.apache.org/mod_mbox/lucene-dev/201608.mbox/%3CCAKrJsP%3DqMLVZhb8xR2C27mfNFfEJ6b%3DPcjxPz4f3fq7G371B_g%40mail.gmail.com%3E])
>  to move the Ref Guide from Confluence to a new system that relies on 
> asciidoc-formatted text files integrated with the Solr source code. 
> This is an umbrella issue for the sub-tasks and related decisions to make 
> that proposal a reality. A lot of work has already been done as part of a 
> proof-of-concept, but there are many things left to do. Some of the items to 
> be completed include:
> * Creation of a branch and moving the early POC work I've done to the project
> * Conversion of the content and cleanup of unavoidable post-conversion issues
> * Decisions about location of source files, branching strategy and hosting 
> for online versions
> * Meta-documentation for publication process, beginner tips, etc. (whatever 
> else people need or want)
> * Integration of build processes with the broader project
> For reference, a demo of what the new ref guide might look like is currently 
> online at http://people.apache.org/~ctargett/RefGuidePOC/.
> Creation of sub-tasks to follow shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1451 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1451/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1640/consoleText

[repro] Revision: 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestBinaryDocValuesUpdates 
-Dtests.method=testTonsOfUpdates -Dtests.seed=946FDCB4526E941E 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-MX -Dtests.timezone=MST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=A78659BA58953D1C 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CO -Dtests.timezone=America/Cordoba -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5b96f89d2b038bff2ed3351887a87108f7cc6ea3
[repro] git fetch
[repro] git checkout 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]lucene/core
[repro]   TestBinaryDocValuesUpdates
[repro]solr/core
[repro]   MoveReplicaHDFSTest
[repro] ant compile-test

[...truncated 160 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestBinaryDocValuesUpdates" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=946FDCB4526E941E -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-MX -Dtests.timezone=MST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 423 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 3356 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.MoveReplicaHDFSTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=A78659BA58953D1C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CO -Dtests.timezone=America/Cordoba -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 86 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro]   2/5 failed: org.apache.lucene.index.TestBinaryDocValuesUpdates
[repro] git checkout 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10290) New Publication Model for Solr Reference Guide

2018-09-12 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612655#comment-16612655
 ] 

Cassandra Targett commented on SOLR-10290:
--

Thanks Christine, that must be a recent breakage. I will investigate and fix it.

> New Publication Model for Solr Reference Guide
> --
>
> Key: SOLR-10290
> URL: https://issues.apache.org/jira/browse/SOLR-10290
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
> Attachments: sitemap.patch, sitemap.patch
>
>
> The current Solr Ref Guide is hosted at cwiki.apache.org, a Confluence 
> installation. There are numerous reasons to be dissatisfied with the current 
> setup, a few of which are:
> * Confluence as a product is no longer designed for our use case and our type 
> of content. 
> * The writing/editing experience is painful and a barrier for all users, who 
> need to learn a lot of Confluence-specific syntax just to help out with some 
> content. 
> * Non-committers can't really help improve documentation except to point out 
> problems and hope someone else fixes them.
> * We really can't maintain online documentation for different versions. Users 
> on versions other than the one that hasn't been released yet are only given a 
> PDF to work with.
> I made a proposal in Aug 2016 ([email 
> link|http://mail-archives.apache.org/mod_mbox/lucene-dev/201608.mbox/%3CCAKrJsP%3DqMLVZhb8xR2C27mfNFfEJ6b%3DPcjxPz4f3fq7G371B_g%40mail.gmail.com%3E])
>  to move the Ref Guide from Confluence to a new system that relies on 
> asciidoc-formatted text files integrated with the Solr source code. 
> This is an umbrella issue for the sub-tasks and related decisions to make 
> that proposal a reality. A lot of work has already been done as part of a 
> proof-of-concept, but there are many things left to do. Some of the items to 
> be completed include:
> * Creation of a branch and moving the early POC work I've done to the project
> * Conversion of the content and cleanup of unavoidable post-conversion issues
> * Decisions about location of source files, branching strategy and hosting 
> for online versions
> * Meta-documentation for publication process, beginner tips, etc. (whatever 
> else people need or want)
> * Integration of build processes with the broader project
> For reference, a demo of what the new ref guide might look like is currently 
> online at http://people.apache.org/~ctargett/RefGuidePOC/.
> Creation of sub-tasks to follow shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10290) New Publication Model for Solr Reference Guide

2018-09-12 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612671#comment-16612671
 ] 

Cassandra Targett commented on SOLR-10290:
--

That was the tiniest of typos - a {{#}} instead of a {{$}}! Thanks! Here are 
the commits that fix it:

master: ad7f15d808232572c8755967559f440c742a2352
branch_7x: 1a8a6eafe0220a40935d2aa9f5d4cf0b6d2eaa4b
branch_7_5:  b20f0c703dbba466a3e92b57673310ee88c5ef20

> New Publication Model for Solr Reference Guide
> --
>
> Key: SOLR-10290
> URL: https://issues.apache.org/jira/browse/SOLR-10290
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
> Attachments: sitemap.patch, sitemap.patch
>
>
> The current Solr Ref Guide is hosted at cwiki.apache.org, a Confluence 
> installation. There are numerous reasons to be dissatisfied with the current 
> setup, a few of which are:
> * Confluence as a product is no longer designed for our use case and our type 
> of content. 
> * The writing/editing experience is painful and a barrier for all users, who 
> need to learn a lot of Confluence-specific syntax just to help out with some 
> content. 
> * Non-committers can't really help improve documentation except to point out 
> problems and hope someone else fixes them.
> * We really can't maintain online documentation for different versions. Users 
> on versions other than the one that hasn't been released yet are only given a 
> PDF to work with.
> I made a proposal in Aug 2016 ([email 
> link|http://mail-archives.apache.org/mod_mbox/lucene-dev/201608.mbox/%3CCAKrJsP%3DqMLVZhb8xR2C27mfNFfEJ6b%3DPcjxPz4f3fq7G371B_g%40mail.gmail.com%3E])
>  to move the Ref Guide from Confluence to a new system that relies on 
> asciidoc-formatted text files integrated with the Solr source code. 
> This is an umbrella issue for the sub-tasks and related decisions to make 
> that proposal a reality. A lot of work has already been done as part of a 
> proof-of-concept, but there are many things left to do. Some of the items to 
> be completed include:
> * Creation of a branch and moving the early POC work I've done to the project
> * Conversion of the content and cleanup of unavoidable post-conversion issues
> * Decisions about location of source files, branching strategy and hosting 
> for online versions
> * Meta-documentation for publication process, beginner tips, etc. (whatever 
> else people need or want)
> * Integration of build processes with the broader project
> For reference, a demo of what the new ref guide might look like is currently 
> online at http://people.apache.org/~ctargett/RefGuidePOC/.
> Creation of sub-tasks to follow shortly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612763#comment-16612763
 ] 

ASF subversion and git services commented on SOLR-12361:


Commit 6e8c05f6fe083544fb7f8fdd01df08ac54d7742e in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e8c05f ]

SOLR-12361: ref guide changes & CHANGES.txt organization


> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-09-12 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8496:
---
Attachment: LUCENE-8496.patch

> Explore selective dimension indexing in BKDReader/Writer
> 
>
> Key: LUCENE-8496
> URL: https://issues.apache.org/jira/browse/LUCENE-8496
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8496.patch
>
>
> This issue explores adding a new feature to BKDReader/Writer that enables 
> users to select a fewer number of dimensions to be used for creating the BKD 
> index than the total number of dimensions specified for field encoding. This 
> is useful for encoding dimensional data that is used for interpreting the 
> encoded field data but unnecessary (or not efficient) for creating the index 
> structure. One such example is {{LatLonShape}} encoding. The first 4 
> dimensions may be used to efficiently search/index the triangle using its 
> precomputed bounding box as a 4D point, and the remaining dimensions can be 
> used to encode the vertices of the tessellated triangle. This causes BKD to 
> act much like an R-Tree for shape data where search is distilled into a 4D 
> point (instead of a more expensive 6D point) and the triangle is encoded 
> using a portion of the remaining (non-indexed) dimensions. Fields that use 
> the full data range for indexing are not impacted and behave as they normally 
> would.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-09-12 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612826#comment-16612826
 ] 

Nicholas Knize commented on LUCENE-8496:


Initial patch provided:

The lion's share of the changes is made to {{FieldType}}, {{BKDWriter}}, and 
{{BKDReader}}.

* {{FieldType}} - split {{pointDimensionCount}} into two new integers that 
define {{pointDataDimensionCount}} and {{pointIndexDimensionCount}}. 
{{pointIndexDimensionCount}} must be <= {{pointDataDimensionCount}} and defines 
the first {{n}} dimensions that will be used to build the index. The remaining 
{{pointDataDimensionCount}} - {{pointIndexDimensionCount}} dimensions are 
ignored while building (e.g., split/merge) the index. Getter and Setter utility 
methods are added.

* {{BKDWriter}} - change {{writeIndex}} to encode and write {{numIndexDims}} in 
the 2 most significant bytes of the integer that formerly stored {{numDims}}; 
this provides simple backwards compatibility without requiring a change to 
{{FieldInfoFormat}}. Indexing methods are updated to only use the first 
{{numIndexDims}} while building the tree. Leaf nodes still use {{numDataDims}} 
for efficiently packing and compressing the leaf level data (data nodes). (A toy 
sketch of this packing follows the list below.)

* {{BKDReader}} - add version checking in the constructor to decode 
{{numIndexDims}} and {{numDataDims}} from the packed dimension integer. Update 
index reading methods to only look at the first {{numIndexDims}} while 
traversing the tree. {{numDataDims}} are still used for decoding leaf level 
data.

* API Changes - all instances of {{pointDimensionCount}} have been updated to 
{{pointDataDimensionCount}} and {{pointIndexDimensionCount}} to reflect the 
total number of dimensions and the number of dimensions used for creating the 
index, respectively.

* All POINT Tests and POINT based Fields have been updated to use the API 
changes.
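
As a toy illustration of the {{BKDWriter}}/{{BKDReader}} packing described above, the sketch below shows one way a data-dimension count and an index-dimension count could share a single integer, with the index count stored in the upper two bytes. This is an assumption-laden sketch only (a plain 16-bit split, no version checks or validation); the actual on-disk format is whatever the patch defines.

{code:java}
// Toy sketch, not the patch: pack numIndexDims into the high bytes of the int
// that previously held only numDims, and unpack both values on read.
class PackedDimsSketch {
  static int pack(int numDataDims, int numIndexDims) {
    assert numIndexDims <= numDataDims;
    return (numIndexDims << 16) | numDataDims;   // index dims in the 2 high bytes
  }

  static int dataDims(int packed)  { return packed & 0xFFFF; }
  static int indexDims(int packed) { return packed >>> 16; }

  public static void main(String[] args) {
    int packed = pack(7, 4);                     // e.g. 7 data dims, 4 index dims
    System.out.println(dataDims(packed) + " data dims, "
        + indexDims(packed) + " index dims");
  }
}
{code}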

Benchmarking
---

To benchmark the changes I updated {{LatLonShape}} (not included in this patch) 
and ran benchmark tests both with and without selective indexing. The results 
are below: 

6 dimension encoded {{LatLonShape}} w/o selective indexing
--
INDEX SIZE: 1.2795778876170516 GB
READER MB: 1.7928361892700195
BEST M hits/sec: 11.67378231920028
BEST QPS: 6.8635445274291715 for 225 queries, totHits=382688713

7 dimension LatLonShape encoding w/ 4 dimension selective indexing
---
INDEX SIZE: 2.1509012933820486 GB
READER MB: 1.8154268264770508
BEST M hits/sec: 17.018094815004627
BEST QPS: 10.005707519719927 for 225 queries, totHits=382688713

The gains are a little better than the difference between searching a 4D range 
vs a 6D range. The index size increased due to using 7 dimensions instead of 6, 
but I also switched over to a slightly bigger encoding size.

> Explore selective dimension indexing in BKDReader/Writer
> 
>
> Key: LUCENE-8496
> URL: https://issues.apache.org/jira/browse/LUCENE-8496
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8496.patch
>
>
> This issue explores adding a new feature to BKDReader/Writer that enables 
> users to select a fewer number of dimensions to be used for creating the BKD 
> index than the total number of dimensions specified for field encoding. This 
> is useful for encoding dimensional data that is used for interpreting the 
> encoded field data but unnecessary (or not efficient) for creating the index 
> structure. One such example is {{LatLonShape}} encoding. The first 4 
> dimensions may be used to efficiently search/index the triangle using its 
> precomputed bounding box as a 4D point, and the remaining dimensions can be 
> used to encode the vertices of the tessellated triangle. This causes BKD to 
> act much like an R-Tree for shape data where search is distilled into a 4D 
> point (instead of a more expensive 6D point) and the triangle is encoded 
> using a portion of the remaining (non-indexed) dimensions. Fields that use 
> the full data range for indexing are not impacted and behave as they normally 
> would.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12767) Deprecate min_rf

2018-09-12 Thread JIRA
Tomás Fernández Löbbe created SOLR-12767:


 Summary: Deprecate min_rf
 Key: SOLR-12767
 URL: https://issues.apache.org/jira/browse/SOLR-12767
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe


Currently the {{min_rf}} parameter does two things.
1. It tells Solr that the user wants to keep track of the achieved replication 
factor
2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery if 
the achieved replication factor is lower than the {{min_rf}} specified

#2 is intentional, and I believe the reason behind it is to prevent replicas 
from going into recovery during short hiccups (since the assumption is that the 
user is going to retry the request anyway). This is dangerous because if the 
user doesn’t retry (or retries a number of times but keeps failing), the 
replicas will be permanently inconsistent. Also, since we now retry updates 
from leaders to replicas, this behavior has less value: short temporary blips 
should be recovered by those retries anyway. 

I think we should remove the behavior described in #2. #1 is still valuable, 
but there isn’t much point in making the parameter an integer; the user is just 
telling Solr that they want the achieved replication factor, so it could be a 
boolean. In fact, we probably don’t even want to expose the parameter at all, 
and instead always keep track of the achieved factor and include it in the 
response. It’s not costly to calculate, so why keep two separate code paths?
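
For reference, a minimal SolrJ sketch of how a client requests the achieved replication factor today via {{min_rf}}. The collection name and the response key ({{rf}}) below are assumptions for illustration and should be verified against the actual response of the Solr version in use.

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

class MinRfSketch {
  // Index one document with min_rf set and return whatever Solr reports back
  // as the achieved replication factor (assumed here to be under the "rf" key).
  static Object addWithMinRf(SolrClient client, String collection) throws Exception {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");

    UpdateRequest req = new UpdateRequest();
    req.add(doc);
    req.setParam("min_rf", "2");   // today this also triggers behavior #2 above
    UpdateResponse rsp = req.process(client, collection);

    return rsp.getResponse().get("rf");
  }
}
{code}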



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612765#comment-16612765
 ] 

ASF subversion and git services commented on SOLR-12361:


Commit 136f0fee5b1c27047a14c2aa9fc89c23eb69fa08 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=136f0fe ]

SOLR-12361: ref guide changes & CHANGES.txt organization

(cherry picked from commit 6e8c05f6fe083544fb7f8fdd01df08ac54d7742e)


> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Add solr child documents as values inside SolrInputField

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612767#comment-16612767
 ] 

ASF subversion and git services commented on SOLR-12361:


Commit c19dc51e93ff3e11b975e557a7431e188ec007f9 in lucene-solr's branch 
refs/heads/branch_7_5 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c19dc51 ]

SOLR-12361: ref guide changes & CHANGES.txt organization

(cherry picked from commit 6e8c05f6fe083544fb7f8fdd01df08ac54d7742e)


> Add solr child documents as values inside SolrInputField
> 
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12361.patch, SOLR-12361.patch, SOLR-12361.patch, 
> SOLR-12361.patch, SOLR-12361_ref_guide.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to remove 
> _childDocuments, and incorporate the relationship between the parent and its 
> child documents, by holding the child documents inside a solrInputField, 
> inside of the document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-09-12 Thread Nicholas Knize (JIRA)
Nicholas Knize created LUCENE-8496:
--

 Summary: Explore selective dimension indexing in BKDReader/Writer
 Key: LUCENE-8496
 URL: https://issues.apache.org/jira/browse/LUCENE-8496
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Nicholas Knize


This issue explores adding a new feature to BKDReader/Writer that enables users 
to select fewer dimensions for creating the BKD index than the total number of 
dimensions specified for field encoding. This is useful for encoding dimensional 
data that is used for interpreting the encoded field data but is unnecessary (or 
not efficient) for creating the index structure. One such example is 
{{LatLonShape}} encoding. The first 4 dimensions may be used to efficiently 
search/index the triangle using its precomputed bounding 
box as a 4D point, and the remaining dimensions can be used to encode the 
vertices of the tessellated triangle. This causes BKD to act much like an 
R-Tree for shape data where search is distilled into a 4D point (instead of a 
more expensive 6D point) and the triangle is encoded using a portion of the 
remaining (non-indexed) dimensions. Fields that use the full data range for 
indexing are not impacted and behave as they normally would.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 814 - Still Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/814/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.lucene.index.TestAddIndexes.testAddIndicesWithSoftDeletes

Error Message:
Index: 2, Size: 2

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 2, Size: 2
at 
__randomizedtesting.SeedInfo.seed([B80C15673B91141:27D43E12C6BC3559]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at java.util.Collections$UnmodifiableList.get(Collections.java:1309)
at 
org.apache.lucene.index.TestAddIndexes.testAddIndicesWithSoftDeletes(TestAddIndexes.java:1455)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.UnloadDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:40258

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:40258
at 
__randomizedtesting.SeedInfo.seed([28027971633A8DD1:A05646ABCDC6E029]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1123 - Failure

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1123/

No tests ran.

Build Log:
[...truncated 23269 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2341 links (1892 relative) to 3146 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.


[JENKINS] Lucene-Solr-Tests-master - Build # 2807 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2807/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@2c8dc325,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@14e31142]
 expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@2c8dc325,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@14e31142]
 expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([69F3AA28CE054E1:635C6CD52E43FCE2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimComputePlanAction.testNodeAdded(TestSimComputePlanAction.java:314)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Commented] (SOLR-12767) Deprecate min_rf

2018-09-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612887#comment-16612887
 ] 

Erick Erickson commented on SOLR-12767:
---

{quote} #1 is still valuable, but there isn’t much point of making the 
parameter an integer, the user is just telling Solr that they want the achieved 
replication factor, so it could be a boolean,
{quote}
I question this. The scenario is this:
 * Someone can't re-index from source
 * They need to be really, really, really _sure_ the doc gets indexed

So even being guaranteed that the doc is replicated isn't enough in the 
unlikely scenario that the leader and the one replica that the doc happened to 
replicate to both die at the same time.

Maybe not enough of a window to allow for, but that's the concern.

 

> Deprecate min_rf
> 
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Major
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> to go into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2, #1 is still valuable, 
> but there isn’t much point of making the parameter an integer, the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+28) - Build # 22848 - Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22848/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseG1GC

24 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.extraction.ExtractingRequestHandlerTest

Error Message:
SOLR-12759 JDK 11 (1st release) and Tika 1.x can result in extracting dates in 
a bad format.

Stack Trace:
java.lang.AssertionError: SOLR-12759 JDK 11 (1st release) and Tika 1.x can 
result in extracting dates in a bad format.
at __randomizedtesting.SeedInfo.seed([8A729583D8268169]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.beforeClass(ExtractingRequestHandlerTest.java:44)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.extraction.ExtractingRequestHandlerTest

Error Message:
SOLR-12759 JDK 11 (1st release) and Tika 1.x can result in extracting dates in 
a bad format.

Stack Trace:
java.lang.AssertionError: SOLR-12759 JDK 11 (1st release) and Tika 1.x can 
result in extracting dates in a bad format.
at __randomizedtesting.SeedInfo.seed([8A729583D8268169]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.beforeClass(ExtractingRequestHandlerTest.java:44)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

Re: Lucene/Solr 7.5

2018-09-12 Thread Tomás Fernández Löbbe
Hi Jim,
I'd like to commit SOLR-12766 to 7.5. SOLR-11881 added retries for internal
requests, but the backoff time in cases with multiple updates can become
large and cause clients to time out. The change is minimal: just back off once
per retry batch instead of for every doc.

I'm testing a patch and plan to commit later today, if there aren't any
issues or objections.

On Wed, Sep 12, 2018 at 5:39 AM jim ferenczi  wrote:

> Thanks !
>
> Le mer. 12 sept. 2018 à 11:49, Adrien Grand  a écrit :
>
>> Hey Jim,
>>
>> I added you to the hudson-jobadmin group so that you can do it next time.
>>
>> Steve, thanks for taking care of setting up the builds!
>>
>> Le mar. 11 sept. 2018 à 17:32, jim ferenczi  a
>> écrit :
>>
>>> No worries at all Cassandra. What do you think of building the first RC
>>> on Friday and start the vote on Monday next week ? This will leave some
>>> room to finish the missing bits.
>>> Could someone help to setup the Jenkins releases build ? It seems that I
>>> cannot create jobs with my account.
>>>
>>> Le mar. 11 sept. 2018 à 14:08, Cassandra Targett 
>>> a écrit :
>>>
 Sorry, Jim, I should have replied yesterday about the state of things
 with the Ref Guide - it's close. I'm doing the last bit of big review I
 need to do and am nearly done with that, then I have a couple more small
 things done (including SOLR-12763 which I just created since I forgot to do
 it earlier). My goal is to be done by the end of my day today so you could
 do the RC tomorrow, but who knows what the day will bring work-wise, so
 I'll send another mail at the end of the day my time to let you know for
 sure.

 On Mon, Sep 10, 2018 at 9:07 AM jim ferenczi 
 wrote:

> I just fixed the invalid version (7.5.1) that I added in master and
> 7x. The next version on these branches should be 7.6.0, sorry for the 
> noise.
>
> Le lun. 10 sept. 2018 à 09:26, jim ferenczi 
> a écrit :
>
>> Hi,
>>
>> Feature freeze for 7.5 has started, I just created a branch_7_5.:
>>
>> * No new features may be committed to the branch.
>> * Documentation patches, build patches and serious bug fixes may be
>> committed to the branch. However, you should submit all patches you want 
>> to
>> commit to Jira first to give others the chance to review and possibly 
>> vote
>> against the patch. Keep in mind that it is our main intention to keep the
>> branch as stable as possible.
>> * All patches that are intended for the branch should first be
>> committed to the unstable branch, merged into the stable branch, and then
>> into the current release branch.
>> * Normal unstable and stable branch development may continue as
>> usual. However, if you plan to commit a big change to the unstable branch
>> while the branch feature freeze is in effect, think twice: can't the
>> addition wait a couple more days? Merges of bug fixes into the branch may
>> become more difficult.
>> * Only Jira issues with Fix version "7.5" and priority "Blocker" will
>> delay a release candidate build.
>>
>> I'll create the first RC later this week depending on the status of
>> the Solr ref guide. Cassandra, can you update the status when you think
>> that the ref guide is ready (no rush just a reminder that we need to sync
>> during this release ;) ) ?
>>
>> Cheers,
>> Jim
>>
>> Le mer. 5 sept. 2018 à 17:57, Erick Erickson 
>> a écrit :
>>
>>> Great, thanks!
>>> On Wed, Sep 5, 2018 at 8:44 AM jim ferenczi 
>>> wrote:
>>> >
>>> > Sure it can wait a few days. Let's cut the branch next Monday and
>>> we can sync with Cassandra to create the first RC when the ref guide is
>>> ready.
>>> >
>>> > Le mer. 5 sept. 2018 à 17:27, Erick Erickson <
>>> erickerick...@gmail.com> a écrit :
>>> >>
>>> >> Jim:
>>> >>
>>> >> I know it's the 11th hour, but WDYT about cutting the branch next
>>> >> Monday? We see a flurry of activity (announcing a release does
>>> >> that) and waiting to cut the branch might be easiest all
>>> 'round.
>>> >>
>>> >> Up to you of course, I can backport the test fixes I'd like for
>>> >> instance and I'd like to get the upgraded ZooKeeper in 7.5.
>>> >>
>>> >> Erick
>>> >> On Tue, Sep 4, 2018 at 1:04 PM Cassandra Targett <
>>> casstarg...@gmail.com> wrote:
>>> >> >
>>> >> > It's not so much the building of the RC as giving the content a
>>> detailed editorial review.
>>> >> >
>>> >> > The build/release process itself is well-documented and
>>> published with every Ref Guide:
>>> https://lucene.apache.org/solr/guide/how-to-contribute.html#building-publishing-the-guide.
>>> It was designed from the artifact process, so it's nearly identical as a
>>> process. It's really barely a burden.
>>> >> >
>>> >> > In terms of 

[JENKINS] Lucene-Solr-repro - Build # 1452 - Still Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1452/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/878/consoleText

[repro] Revision: 0789a77c2590f716fc3cedb247309068c3fc5d85

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testEventQueue -Dtests.seed=97B17FF999C129CC 
-Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH -Dtests.timezone=Etc/GMT0 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=97B17FF999C129CC 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=bg-BG 
-Dtests.timezone=NZ-CHAT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=97B17FF999C129CC 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ga-IE 
-Dtests.timezone=America/Tegucigalpa -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
6e8c05f6fe083544fb7f8fdd01df08ac54d7742e
[repro] git fetch
[repro] git checkout 0789a77c2590f716fc3cedb247309068c3fc5d85

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro]   TestSimLargeCluster
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestSimTriggerIntegration|*.TestSimLargeCluster|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=97B17FF999C129CC -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=Etc/GMT0 -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 34380 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=97B17FF999C129CC -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=bg-BG -Dtests.timezone=NZ-CHAT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 17061 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 6e8c05f6fe083544fb7f8fdd01df08ac54d7742e

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 1454 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1454/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1123/consoleText

[repro] Revision: 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=305DF77B50560394 
-Dtests.multiplier=2 -Dtests.locale=es-US -Dtests.timezone=Australia/NSW 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=305DF77B50560394 -Dtests.multiplier=2 
-Dtests.locale=el-CY -Dtests.timezone=IST -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
6e8c05f6fe083544fb7f8fdd01df08ac54d7742e
[repro] git fetch
[repro] git checkout 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestSimLargeCluster
[repro] ant compile-test

[...truncated 3423 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.TestSimLargeCluster" 
-Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=305DF77B50560394 -Dtests.multiplier=2 -Dtests.locale=es-US 
-Dtests.timezone=Australia/NSW -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 139046 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro] git checkout 6e8c05f6fe083544fb7f8fdd01df08ac54d7742e

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7520 - Failure!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7520/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14333 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp\junit4-J0-20180913_003051_043606143125996590.sysout
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  EXCEPTION_ACCESS_VIOLATION (0xc005) at 
pc=0x6706b12f, pid=25144, tid=38000
   [junit4] #
   [junit4] # JRE version: OpenJDK Runtime Environment (10.0+46) (build 10+46)
   [junit4] # Java VM: OpenJDK 64-Bit Server VM (10+46, mixed mode, tiered, 
compressed oops, serial gc, windows-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [jvm.dll+0x46b12f]
   [junit4] #
   [junit4] # No core dump will be written. Minidumps are not enabled by 
default on client versions of Windows
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\hs_err_pid25144.log
   [junit4] #
   [junit4] # Compiler replay data is saved as:
   [junit4] # 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\replay_pid25144.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 1094 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
C:\Users\jenkins\tools\java\64bit\jdk-10\bin\java.exe -XX:+UseCompressedOops 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\heapdumps
 -ea -esa --illegal-access=deny -Dtests.prefix=tests 
-Dtests.seed=53DEB07C69BF77C8 -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=8.0.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=1 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Dcommon.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene 
-Dclover.db.dir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\clover\db
 
-Djava.security.policy=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\tools\junit4\solr-tests.policy
 -Dtests.LUCENE_VERSION=8.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0
 
-Djunit4.tempDir=C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\temp
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=2 -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dtests.filterstacks=true -Dtests.leaveTemporary=false -Dtests.badapples=false 
-classpath 

[jira] [Updated] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-12 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12767:
-
Summary: Deprecate min_rf parameter and always include the achieved rf in 
the response  (was: Deprecate min_rf)

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Major
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2. #1 is still valuable, 
> but there isn’t much point in making the parameter an integer; the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12767) Deprecate min_rf

2018-09-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612978#comment-16612978
 ] 

Tomás Fernández Löbbe commented on SOLR-12767:
--

I'm sorry, I think I wasn't clear enough. Right now the way this parameter 
works is: The user provides a {{min_rf}} parameter, that's an integer. Solr 
replies back with the "achieved" replication factor and echoes back whatever 
{{min_rf}} was in the request. It doesn't do anything else with the value of 
{{min_rf}} (if you discount the skip-recovery behavior I mentioned in the 
description of this Jira as #2, that I believe is wrong). So, instead of 
{{min_rf}} being an integer, it could just be a parameter like 
{{returnAchievedReplicationFactor=true}}, and Solr would return the same value 
as today. But even then, I think we should just always return the achieved 
replication factor and let the user do as they please with that value. In the 
scenario from your question, that would mean logging and retrying later any time 
the achieved replication factor is < N.
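
A rough SolrJ sketch of that client-side flow (illustrative only; it assumes the
achieved factor comes back under an "rf" key in the response header, as it does
today when min_rf is set, and the URL/collection names are just examples):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class AchievedRfSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");

      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      // Today the client has to opt in via min_rf; the proposal is to always return rf.
      req.setParam("min_rf", "2");

      UpdateResponse rsp = req.process(client);
      Object achievedRf = rsp.getResponseHeader().get("rf");

      // Client-side policy, not Solr's: log and retry later if rf is below the target.
      if (achievedRf instanceof Number && ((Number) achievedRf).intValue() < 2) {
        System.out.println("Achieved rf " + achievedRf + " below target, queueing for retry");
      }
    }
  }
}
{code}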

> Deprecate min_rf
> 
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Major
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2. #1 is still valuable, 
> but there isn’t much point in making the parameter an integer; the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-09-12 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612972#comment-16612972
 ] 

mosh commented on SOLR-12362:
-

{quote}I know there were some syntax ambiguities but I thought we solved them 
by looking for the unique key in a child map to differentiate an atomic update 
from a child doc.
{quote}
If I recall correctly, we used this to allow a grace period before child 
documents are stored inside SolrInputField.
{quote}{@link CommonParams#ANONYMOUS_CHILD_DOCS} Defaults to true.
{quote}
This seemed like a major change at the time, so we decided to set it to true by 
default, to give it time until we get this up to scratch.

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.
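
To make the two shapes concrete, a small sketch (the keyed form at the end is
the intended usage being discussed here, not necessarily the current API
contract):

{code:java}
import org.apache.solr.common.SolrInputDocument;

public class ChildDocSketch {
  public static void main(String[] args) {
    SolrInputDocument parent = new SolrInputDocument();
    parent.addField("id", "book-1");

    SolrInputDocument comment = new SolrInputDocument();
    comment.addField("id", "comment-1");
    comment.addField("text_t", "nice read");

    // Existing, anonymous relationship: the child has no named slot on the parent.
    parent.addChildDocument(comment);

    // Keyed relationship: the child is stored under a field name ("comments" here),
    // so JsonLoader can preserve which key the child came from.
    SolrInputDocument reply = new SolrInputDocument();
    reply.addField("id", "reply-1");
    parent.addField("comments", reply);
  }
}
{code}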



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12362) JSON loader should save the relationship of children

2018-09-12 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612972#comment-16612972
 ] 

mosh edited comment on SOLR-12362 at 9/13/18 3:53 AM:
--

{quote}I know there were some syntax ambiguities but I thought we solved them 
by looking for the unique key in a child map to differentiate an atomic update 
from a child doc.
{quote}
If I recall correctly, we used this to allow a grace period before child 
documents are stored inside SolrInputField.
{code:javascript}{@link CommonParams#ANONYMOUS_CHILD_DOCS} Defaults to true.
{code}
This seemed like a major change at the time, so we decided to set it to true by 
default, to give it time until we get this up to scratch.


was (Author: moshebla):
{quote}I know there were some syntax ambiguities but I thought we solved them 
by looking for the unique key in a child map to differentiate an atomic update 
from a child doc.
{quote}
If I recall correctly, we used this to allow a grace period before child 
documents are stored inside SolrInputField.
{quote}@link CommonParams#ANONYMOUS_CHILD_DOCS} Defaults to true.
{quote}
This seemed like a major change at the time, so we decided to set it to true by 
default, to give it time until we get this up to scratch.

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12766) When retrying internal requests, backoff only once for the full batch of retries

2018-09-12 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612988#comment-16612988
 ] 

Tomás Fernández Löbbe commented on SOLR-12766:
--

I'll merge into branch_7_5 tomorrow if there are no concerns.

> When retrying internal requests, backoff only once for the full batch of 
> retries
> 
>
> Key: SOLR-12766
> URL: https://issues.apache.org/jira/browse/SOLR-12766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12766.patch
>
>
> We currently wait for each internal retry request ({{TOLEADER}} or 
> {{FROMLEADER}} requests). This can cause a long wait time when retrying many 
> requests and can timeout the client. We should instead wait once and retry 
> the full batch of errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12766) When retrying internal requests, backoff only once for the full batch of retries

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612986#comment-16612986
 ] 

ASF subversion and git services commented on SOLR-12766:


Commit f76a424aa2b1a29eda229e0e7b292551d96e9d29 in lucene-solr's branch 
refs/heads/branch_7x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f76a424 ]

SOLR-12766: Improve backoff for internal retries

When retrying internal update requests, backoff only once for the full batch of 
retries
instead of for every request.
Make backoff linear with the number of retries


> When retrying internal requests, backoff only once for the full batch of 
> retries
> 
>
> Key: SOLR-12766
> URL: https://issues.apache.org/jira/browse/SOLR-12766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12766.patch
>
>
> We currently wait for each internal retry request ({{TOLEADER}} or 
> {{FROMLEADER}} requests). This can cause a long wait time when retrying many 
> requests and can timeout the client. We should instead wait once and retry 
> the full batch of errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613036#comment-16613036
 ] 

Erick Erickson commented on SOLR-12767:
---

Ah, ok. I vaguely remember being in that code at one point and it seemed kind 
of hacked together, so it likely could use some cleaning up.

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Major
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2. #1 is still valuable, 
> but there isn’t much point in making the parameter an integer; the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12768) Determine how _nest_path_ should be analyzed to support various use-cases

2018-09-12 Thread David Smiley (JIRA)
David Smiley created SOLR-12768:
---

 Summary: Determine how _nest_path_ should be analyzed to support 
various use-cases
 Key: SOLR-12768
 URL: https://issues.apache.org/jira/browse/SOLR-12768
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


We know we need {{\_nest\_path\_}} in the schema for the new nested documents 
support, and we loosely know what goes in it.  From a DocValues perspective, 
we've got it down; though we might tweak it.  From an indexing (text analysis) 
perspective, we're not quite sure yet, though we've got a test schema, 
{{schema-nest.xml}} with a decent shot at it.  Ultimately, how we index it will 
depend on the query/filter use-cases we need to support.  So we'll review some 
of them here.

TBD: Not sure if the outcome of this task is just a "decide" or whether we also 
potentially add a few tests for some of these cases, and/or whether we also add a 
FieldType to make declaring it as easy as a one-liner.  A FieldType would have 
other benefits too once we're ready to make querying on the path easier.
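
One concrete analysis to poke at, purely as a sketch (this is not what
schema-nest.xml currently does): hierarchical tokenization, so ancestor prefixes
of the path become searchable terms.

{code:java}
// Exploratory sketch only: tokenizes a nest path with Lucene's
// PathHierarchyTokenizer so "/comments/replies" also emits the "/comments" prefix.
import java.io.StringReader;
import org.apache.lucene.analysis.path.PathHierarchyTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class NestPathAnalysisSketch {
  public static void main(String[] args) throws Exception {
    PathHierarchyTokenizer tokenizer = new PathHierarchyTokenizer('/', '/');
    tokenizer.setReader(new StringReader("/comments/replies"));
    CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
    tokenizer.reset();
    while (tokenizer.incrementToken()) {
      System.out.println(term.toString()); // "/comments", then "/comments/replies"
    }
    tokenizer.end();
    tokenizer.close();
  }
}
{code}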



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2735 - Unstable!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2735/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

21 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.extraction.ExtractingRequestHandlerTest

Error Message:
SOLR-12759 JDK 11 (1st release) and Tika 1.x can result in extracting dates in 
a bad format.

Stack Trace:
java.lang.AssertionError: SOLR-12759 JDK 11 (1st release) and Tika 1.x can 
result in extracting dates in a bad format.
at __randomizedtesting.SeedInfo.seed([CB2703E0A2284453]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.beforeClass(ExtractingRequestHandlerTest.java:44)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.extraction.ExtractingRequestHandlerTest

Error Message:
SOLR-12759 JDK 11 (1st release) and Tika 1.x can result in extracting dates in 
a bad format.

Stack Trace:
java.lang.AssertionError: SOLR-12759 JDK 11 (1st release) and Tika 1.x can 
result in extracting dates in a bad format.
at __randomizedtesting.SeedInfo.seed([CB2703E0A2284453]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.handler.extraction.ExtractingRequestHandlerTest.beforeClass(ExtractingRequestHandlerTest.java:44)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-12766) When retrying internal requests, backoff only once for the full batch of retries

2018-09-12 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612985#comment-16612985
 ] 

ASF subversion and git services commented on SOLR-12766:


Commit 4a5b914eaa8683009191748bf6c0b1be14d59661 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a5b914 ]

SOLR-12766: Improve backoff for internal retries

When retrying internal update requests, backoff only once for the full batch of 
retries
instead of for every request.
Make backoff linear with the number of retries


> When retrying internal requests, backoff only once for the full batch of 
> retries
> 
>
> Key: SOLR-12766
> URL: https://issues.apache.org/jira/browse/SOLR-12766
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12766.patch
>
>
> We currently wait for each internal retry request ({{TOLEADER}} or 
> {{FROMLEADER}} requests). This can cause a long wait time when retrying many 
> requests and can timeout the client. We should instead wait once and retry 
> the full batch of errors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12727) Upgrade ZooKeeper dependency to 3.4.13

2018-09-12 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16613034#comment-16613034
 ] 

Erick Erickson commented on SOLR-12727:
---

I'm having two failures and have no idea where to start, so flying blind. Any 
clues?
 * SSLMigrationTest
 * SaslZkACLProvider

Of the two, only SaslZkACLProvider fails reliably, but here's a failing seed 
just in case:

 -Dtests.seed=64BB8764D5BC1EC3

And the stack trace is below. Pretty clearly auth isn't happening as it should, 
but I have no clue why. One thing I can say is that line 516 in SolrZkClient 
fails with this patch, and succeeds without it.
{code:java}
Object exists = exists(currentPath, watcher, retryOnConnLoss);{code}
No call is made to
{code:java}
SaslZkACLProvider{code}
methods with the patch, but methods there _are_ called without the patch.  
{{SaslZkACLProvider}} is created in both cases. It's late and things are going 
fuzzy so I'm giving up for the night.
{code:java}
NOTE: reproduce with: ant test  -Dtestcase=SaslZkACLProviderTest 
-Dtests.method=testSaslZkACLProvider -Dtests.seed=64BB8764D5BC1EC3 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ca-ES 
-Dtests.timezone=US/Mountain -Dtests.asserts=true -Dtests.file.encoding=UTF-8

org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
AuthFailed for /solr

at 
__randomizedtesting.SeedInfo.seed([64BB8764D5BC1EC3:604FD1BC4CF8BDCD]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$exists$2(SolrZkClient.java:305)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:305)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:516)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:419)
at 
org.apache.solr.cloud.SaslZkACLProviderTest.setUp(SaslZkACLProviderTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at com.car
{code}
 

 

 

 

> Upgrade ZooKeeper dependency to 3.4.13
> --
>
> Key: SOLR-12727
> URL: https://issues.apache.org/jira/browse/SOLR-12727
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4
>Reporter: Shawn Heisey
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12727.patch
>
>
> Upgrade ZK dependency to 3.4.13.  This fixes ZOOKEEPER-2184 which will make 
> the ZK client re-resolve the server hostnames when a connection fails.  This 
> will fix issues where a failed ZK container is replaced with a new one that 
> has a different IP address and DNS gets updated with the new address.
> Typically these upgrades do not require code changes, but that should be 
> verified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12765) Possibly incorrect format in JMX cache stats

2018-09-12 Thread Bojan Smid (JIRA)
Bojan Smid created SOLR-12765:
-

 Summary: Possibly incorrect format in JMX cache stats
 Key: SOLR-12765
 URL: https://issues.apache.org/jira/browse/SOLR-12765
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.4
Reporter: Bojan Smid


I posted a question on ML 
(https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3CCAGniRXR4Ps%3D03X0uiByCn5ecUT2VY4TLV4iNcxCde3dxBnmC-w%40mail.gmail.com%3E) 
but didn't get feedback. Since it looks like a possible bug, I am opening a 
ticket.

 
  It seems the format of cache mbeans changed with 7.4.0. From what I see, a 
similar change wasn't made for other mbeans, which may mean it was accidental 
and may be a bug.
 
  In Solr 7.3.* format was (each attribute on its own, numeric type):
 
mbean:
solr:dom1=core,dom2=gettingstarted,dom3=shard1,dom4=replica_n1,category=CACHE,scope=searcher,name=filterCache
 
attributes:
  lookups java.lang.Long = 0
  hits java.lang.Long = 0
  cumulative_evictions java.lang.Long = 0
  size java.lang.Long = 0
  hitratio java.lang.Float = 0.0
  evictions java.lang.Long = 0
  cumulative_lookups java.lang.Long = 0
  cumulative_hitratio java.lang.Float = 0.0
  warmupTime java.lang.Long = 0
  inserts java.lang.Long = 0
  cumulative_inserts java.lang.Long = 0
  cumulative_hits java.lang.Long = 0


 
  With 7.4.0 there is a single attribute "Value" (java.lang.Object):
 
mbean:
solr:dom1=core,dom2=gettingstarted,dom3=shard1,dom4=replica_n1,category=CACHE,scope=searcher,name=filterCache
 
attributes:
  Value java.lang.Object = \{lookups=0, evictions=0, cumulative_inserts=0, 
cumulative_hits=0, hits=0, cumulative_evictions=0, size=0, hitratio=0.0, 
cumulative_lookups=0, cumulative_hitratio=0.0, warmupTime=0, inserts=0}
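
For anyone who wants to reproduce what a JMX agent sees, a small standalone
sketch (the service URL and core name below are examples, not values from this
report):

{code:java}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CacheMBeanDump {
  public static void main(String[] args) throws Exception {
    JMXServiceURL url =
        new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:18983/jmxrmi");
    try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
      MBeanServerConnection conn = connector.getMBeanServerConnection();
      ObjectName name = new ObjectName(
          "solr:dom1=core,dom2=gettingstarted,dom3=shard1,dom4=replica_n1,"
              + "category=CACHE,scope=searcher,name=filterCache");
      // 7.3.x exposed one numeric attribute per stat ("hits", "lookups", ...);
      // 7.4.0 appears to expose a single composite "Value" attribute instead.
      for (String attr : new String[] {"hits", "lookups", "Value"}) {
        try {
          System.out.println(attr + " = " + conn.getAttribute(name, attr));
        } catch (Exception e) {
          System.out.println(attr + " -> not present: " + e.getClass().getSimpleName());
        }
      }
    }
  }
}
{code}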
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1640 - Failure

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1640/

2 tests failed.
FAILED:  org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.store.RAMFile.newBuffer(RAMFile.java:84)
at org.apache.lucene.store.RAMFile.addBuffer(RAMFile.java:57)
at 
org.apache.lucene.store.RAMOutputStream.switchCurrentBuffer(RAMOutputStream.java:168)
at 
org.apache.lucene.store.RAMOutputStream.writeBytes(RAMOutputStream.java:154)
at 
org.apache.lucene.store.MockIndexOutputWrapper.writeBytes(MockIndexOutputWrapper.java:141)
at 
org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addBinaryField(Lucene70DocValuesConsumer.java:348)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addBinaryField(PerFieldDocValuesFormat.java:114)
at 
org.apache.lucene.index.ReadersAndUpdates.handleDVUpdates(ReadersAndUpdates.java:330)
at 
org.apache.lucene.index.ReadersAndUpdates.writeFieldUpdates(ReadersAndUpdates.java:570)
at 
org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:626)
at 
org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:299)
at 
org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2600)
at 
org.apache.lucene.index.IndexWriter$$Lambda$110/1815498900.process(Unknown 
Source)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5097)
at 
org.apache.lucene.index.IndexWriter.updateDocValues(IndexWriter.java:1783)
at 
org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates(TestBinaryDocValuesUpdates.java:1324)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)


FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:41156/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:46777/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:41156/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:46777/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([A78659BA58953D1C:D4B8A48EF46E8CC]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+28) - Build # 787 - Failure!

2018-09-12 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/787/
Java: 64bit/jdk-11-ea+28 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueueBlocking

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([5917F333877E9C57:1CBD814AC32C2023]:0)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueueBlocking(TestSimDistributedQueue.java:102)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue
 {#2}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at 

[jira] [Commented] (SOLR-12761) Be able to configure “maxExpansions” for FuzzyQuery

2018-09-12 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612148#comment-16612148
 ] 

David Smiley commented on SOLR-12761:
-

I think this setting would be best as a request parameter rather than requiring 
SolrConfig changes. I know maxBooleanClauses is not done this way, but that's 
due to fundamental limitations of that particular setting at the Lucene level 
that thankfully don't apply here.

> Be able to configure “maxExpansions” for FuzzyQuery
> ---
>
> Key: SOLR-12761
> URL: https://issues.apache.org/jira/browse/SOLR-12761
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.3
>Reporter: Manuel Gübeli
>Priority: Minor
>
> We had an issue where we reached the expansion limit of the FuzzyQuery.
> Situation:
>  * Query «meier~» found «Meier»
>  * Query «mazer~» found «Meier»
>  * Query «maxer~» found «Meier»
>  * Query «mayer~» did *NOT* find «Meier»
> The parameter “maxBooleanClauses” does not help in this situation since the 
> “maxExpansions” of FuzzyQuery is never set in Solr and therefore the default 
> value of 50 is used. Details: “SolrQueryParserBase” calls the constructor 
> new FuzzyQuery(Term term, int maxEdits, int prefixLength), and therefore 
> FuzzyQuery always runs with the default values (defaultMaxExpansions = 50 and 
> defaultTranspositions = true).
> Suggestion: expose FuzzyQuery parameters in solrconfig.xml, e.g. 
>  1024
>  
> Addition would be:
>  0
>  50
>  true
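
For reference, the knob already exists on the Lucene FuzzyQuery constructor; a
small sketch of what a configurable parser would pass through (illustrative
values only, not proposed defaults):

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;

public class FuzzyExpansionsSketch {
  public static void main(String[] args) {
    Term term = new Term("name", "mayer");

    // What SolrQueryParserBase effectively does today: maxExpansions stays at 50.
    FuzzyQuery withDefaults = new FuzzyQuery(term, FuzzyQuery.defaultMaxEdits, 0);

    // What a configurable parser could do: pass maxExpansions (and transpositions)
    // explicitly via the five-argument constructor.
    FuzzyQuery widened = new FuzzyQuery(term, 2, 0, 1024, true);

    System.out.println(withDefaults + " vs " + widened);
  }
}
{code}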



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12765) Possibly incorrect format in JMX cache stats

2018-09-12 Thread Otis Gospodnetic (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16612047#comment-16612047
 ] 

Otis Gospodnetic commented on SOLR-12765:
-

[~ab] is this a bug?  If so, we could try to get you the patch/PR.

> Possibly incorrect format in JMX cache stats
> 
>
> Key: SOLR-12765
> URL: https://issues.apache.org/jira/browse/SOLR-12765
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Bojan Smid
>Priority: Major
>
> I posted a question on ML 
> [https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3CCAGniRXR4Ps%3D03X0uiByCn5ecUT2VY4TLV4iNcxCde3dxBnmC-w%40mail.gmail.com%3E|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3CCAGniRXR4Ps%3D03X0uiByCn5ecUT2VY4TLV4iNcxCde3dxBnmC-w%40mail.gmail.com%3E),]
>  , but didn't get feedback. Since it looks like a possible bug, I am opening 
> a ticket.
>  
>   It seems the format of cache mbeans changed with 7.4.0.  And from what I 
> see similar change wasn't made for other mbeans, which may mean it was 
> accidental and may be a bug.
>  
>   In Solr 7.3.* format was (each attribute on its own, numeric type):
>  
> mbean:
> solr:dom1=core,dom2=gettingstarted,dom3=shard1,dom4=replica_n1,category=CACHE,scope=searcher,name=filterCache
>  
> attributes:
>   lookups java.lang.Long = 0
>   hits java.lang.Long = 0
>   cumulative_evictions java.lang.Long = 0
>   size java.lang.Long = 0
>   hitratio java.lang.Float = 0.0
>   evictions java.lang.Long = 0
>   cumulative_lookups java.lang.Long = 0
>   cumulative_hitratio java.lang.Float = 0.0
>   warmupTime java.lang.Long = 0
>   inserts java.lang.Long = 0
>   cumulative_inserts java.lang.Long = 0
>   cumulative_hits java.lang.Long = 0
>  
>   With 7.4.0 there is a single attribute "Value" (java.lang.Object):
>  
> mbean:
> solr:dom1=core,dom2=gettingstarted,dom3=shard1,dom4=replica_n1,category=CACHE,scope=searcher,name=filterCache
>  
> attributes:
>   Value java.lang.Object = \{lookups=0, evictions=0, 
> cumulative_inserts=0, cumulative_hits=0, hits=0, cumulative_evictions=0, 
> size=0, hitratio=0.0, cumulative_lookups=0, cumulative_hitratio=0.0, 
> warmupTime=0, inserts=0}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.5

2018-09-12 Thread jim ferenczi
Thanks !

Le mer. 12 sept. 2018 à 11:49, Adrien Grand  a écrit :

> Hey Jim,
>
> I added you to the hudson-jobadmin group so that you can do it next time.
>
> Steve, thanks for taking care of setting up the builds!
>
> Le mar. 11 sept. 2018 à 17:32, jim ferenczi  a
> écrit :
>
>> No worries at all Cassandra. What do you think of building the first RC
>> on Friday and start the vote on Monday next week ? This will leave some
>> room to finish the missing bits.
>> Could someone help to setup the Jenkins releases build ? It seems that I
>> cannot create jobs with my account.
>>
>> Le mar. 11 sept. 2018 à 14:08, Cassandra Targett 
>> a écrit :
>>
>>> Sorry, Jim, I should have replied yesterday about the state of things
>>> with the Ref Guide - it's close. I'm doing the last bit of big review I
>>> need to do and am nearly done with that, then I have a couple more small
>>> things done (including SOLR-12763 which I just created since I forgot to do
>>> it earlier). My goal is to be done by the end of my day today so you could
>>> do the RC tomorrow, but who knows what the day will bring work-wise, so
>>> I'll send another mail at the end of the day my time to let you know for
>>> sure.
>>>
>>> On Mon, Sep 10, 2018 at 9:07 AM jim ferenczi 
>>> wrote:
>>>
 I just fixed the invalid version (7.5.1) that I added in master and 7x.
 The next version on these branches should be 7.6.0, sorry for the noise.

On Mon, Sep 10, 2018 at 09:26, jim ferenczi wrote:

> Hi,
>
> Feature freeze for 7.5 has started; I just created branch_7_5:
>
> * No new features may be committed to the branch.
> * Documentation patches, build patches and serious bug fixes may be
> committed to the branch. However, you should submit all patches you want 
> to
> commit to Jira first to give others the chance to review and possibly vote
> against the patch. Keep in mind that it is our main intention to keep the
> branch as stable as possible.
> * All patches that are intended for the branch should first be
> committed to the unstable branch, merged into the stable branch, and then
> into the current release branch.
> * Normal unstable and stable branch development may continue as usual.
> However, if you plan to commit a big change to the unstable branch while
> the branch feature freeze is in effect, think twice: can't the addition
> wait a couple more days? Merges of bug fixes into the branch may become
> more difficult.
> * Only Jira issues with Fix version "7.5" and priority "Blocker" will
> delay a release candidate build.
>
> I'll create the first RC later this week depending on the status of
> the Solr ref guide. Cassandra, can you update the status when you think
> that the ref guide is ready (no rush, just a reminder that we need to sync
> during this release ;))?
>
> Cheers,
> Jim
>
> On Wed, Sep 5, 2018 at 17:57, Erick Erickson wrote:
>
>> Great, thanks!
>> On Wed, Sep 5, 2018 at 8:44 AM jim ferenczi 
>> wrote:
>> >
>> > Sure it can wait a few days. Let's cut the branch next Monday and
>> we can sync with Cassandra to create the first RC when the ref guide is
>> ready.
>> >
>> > On Wed, Sep 5, 2018 at 17:27, Erick Erickson <erickerick...@gmail.com> wrote:
>> >>
>> >> Jim:
>> >>
>> >> I know it's the 11th hour, but WDYT about cutting the branch next
>> >> Monday? We see a flurry of activity (announcing a release does
>> >> that) and waiting to cut the branch might be easiest all
>> 'round.
>> >>
>> >> Up to you of course, I can backport the test fixes I'd like for
>> >> instance and I'd like to get the upgraded ZooKeeper in 7.5.
>> >>
>> >> Erick
>> >> On Tue, Sep 4, 2018 at 1:04 PM Cassandra Targett <
>> casstarg...@gmail.com> wrote:
>> >> >
>> >> > It's not so much the building of the RC as giving the content a
>> detailed editorial review.
>> >> >
>> >> > The build/release process itself is well-documented and
>> published with every Ref Guide:
>> https://lucene.apache.org/solr/guide/how-to-contribute.html#building-publishing-the-guide.
>> It was designed from the artifact process, so it's nearly identical as a
>> process. It's really barely a burden.
>> >> >
>> >> > In terms of preparing the content, there are a number of things
>> I do:
>> >> >
>> >> > First, I try to ensure that every issue in CHANGES.txt that
>> should be documented has been documented. That involves an intensive 
>> review
>> of CHANGES.txt and a comparison with commits to find what might be 
>> missing,
>> then chasing people down to see if they intend to make changes or not.
>> Assuming the person responds, then it's waiting for them to get their 
>> stuff
>> done. This is usually about 2-3 days of 

Re: Lucene/Solr 7.5

2018-09-12 Thread Adrien Grand
Hey Jim,

I added you to the hudson-jobadmin group so that you can do it next time.

Steve, thanks for taking care of setting up the builds!

On Tue, Sep 11, 2018 at 17:32, jim ferenczi wrote:

> No worries at all, Cassandra. What do you think of building the first RC on
> Friday and starting the vote on Monday next week? This will leave some
> room to finish the missing bits.
> Could someone help to set up the Jenkins release builds? It seems that I
> cannot create jobs with my account.
>
> On Tue, Sep 11, 2018 at 14:08, Cassandra Targett wrote:
>
>> Sorry, Jim, I should have replied yesterday about the state of things
>> with the Ref Guide - it's close. I'm doing the last bit of big review I
>> need to do and am nearly done with that; then I have a couple more small
>> things to finish (including SOLR-12763, which I just created since I forgot to do
>> it earlier). My goal is to be done by the end of my day today so you could
>> do the RC tomorrow, but who knows what the day will bring work-wise, so
>> I'll send another mail at the end of the day my time to let you know for
>> sure.
>>
>> On Mon, Sep 10, 2018 at 9:07 AM jim ferenczi 
>> wrote:
>>
>>> I just fixed the invalid version (7.5.1) that I added in master and 7x.
>>> The next version on these branches should be 7.6.0, sorry for the noise.
>>>
>>> On Mon, Sep 10, 2018 at 09:26, jim ferenczi wrote:
>>>
 Hi,

Feature freeze for 7.5 has started; I just created branch_7_5:

 * No new features may be committed to the branch.
 * Documentation patches, build patches and serious bug fixes may be
 committed to the branch. However, you should submit all patches you want to
 commit to Jira first to give others the chance to review and possibly vote
 against the patch. Keep in mind that it is our main intention to keep the
 branch as stable as possible.
 * All patches that are intended for the branch should first be
 committed to the unstable branch, merged into the stable branch, and then
 into the current release branch.
 * Normal unstable and stable branch development may continue as usual.
 However, if you plan to commit a big change to the unstable branch while
 the branch feature freeze is in effect, think twice: can't the addition
 wait a couple more days? Merges of bug fixes into the branch may become
 more difficult.
 * Only Jira issues with Fix version "7.5" and priority "Blocker" will
 delay a release candidate build.

 I'll create the first RC later this week depending on the status of the
 Solr ref guide. Cassandra, can you update the status when you think that
the ref guide is ready (no rush, just a reminder that we need to sync during
this release ;))?

 Cheers,
 Jim

On Wed, Sep 5, 2018 at 17:57, Erick Erickson wrote:

> Great, thanks!
> On Wed, Sep 5, 2018 at 8:44 AM jim ferenczi 
> wrote:
> >
> > Sure it can wait a few days. Let's cut the branch next Monday and we
> can sync with Cassandra to create the first RC when the ref guide is 
> ready.
> >
> > On Wed, Sep 5, 2018 at 17:27, Erick Erickson <erickerick...@gmail.com> wrote:
> >>
> >> Jim:
> >>
> >> I know it's the 11th hour, but WDYT about cutting the branch next
> >> Monday? We see a flurry of activity (announcing a release does
> >> that) and waiting to cut the branch might be easiest all 'round.
> >>
> >> Up to you of course, I can backport the test fixes I'd like for
> >> instance and I'd like to get the upgraded ZooKeeper in 7.5.
> >>
> >> Erick
> >> On Tue, Sep 4, 2018 at 1:04 PM Cassandra Targett <
> casstarg...@gmail.com> wrote:
> >> >
> >> > It's not so much the building of the RC as giving the content a
> detailed editorial review.
> >> >
> >> > The build/release process itself is well-documented and published
> with every Ref Guide:
> https://lucene.apache.org/solr/guide/how-to-contribute.html#building-publishing-the-guide.
> It was designed from the artifact process, so it's nearly identical as a
> process. It's really barely a burden.
> >> >
> >> > In terms of preparing the content, there are a number of things I
> do:
> >> >
> >> > First, I try to ensure that every issue in CHANGES.txt that
> should be documented has been documented. That involves an intensive 
> review
> of CHANGES.txt and a comparison with commits to find what might be 
> missing,
> then chasing people down to see if they intend to make changes or not.
> Assuming the person responds, then it's waiting for them to get their 
> stuff
> done. This is usually about 2-3 days of effort, before the waiting around
> for answers and/or commits.
> >> >
> >> > Then I review every commit and read it for clarity and correct
> English usage. Does it fit where someone 

[jira] [Commented] (LUCENE-8416) Add tokenized version of o.o. to Stempel stopwords

2018-09-12 Thread Peter Cseh (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611862#comment-16611862
 ] 

Peter Cseh commented on LUCENE-8416:


I've created a PR for this. I could not find any tests that should be changed 
after this.

> Add tokenized version of o.o. to Stempel stopwords
> --
>
> Key: LUCENE-8416
> URL: https://issues.apache.org/jira/browse/LUCENE-8416
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Trey Jones
>Priority: Trivial
>  Labels: easyfix, newbie
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Stempel stopword list ( 
> lucene-solr/lucene/analysis/stempel/src/resources/org/apache/lucene/analysis/pl/stopwords.txt
>  ) contains "o.o.", which is a good stopword (it's part of the abbreviation 
> for "limited liability company", which is "sp. z o.o." 
> (https://en.wiktionary.org/wiki/sp._z_o.o.)). However, the standard 
> tokenizer changes "o.o." to "o.o", so the stopword filter has no effect.
> Add "o.o" to the stopword list. (It's probably okay to leave "o.o." in the 
> list, though, in case a different tokenizer is used.)
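
A rough way to see the effect outside the full Stempel/PolishAnalyzer chain is a
simplified StandardTokenizer + StopFilter pipeline. This is only a sketch under
that assumption, not the actual analyzer used by the module:

  import java.io.IOException;
  import java.util.Arrays;
  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.CharArraySet;
  import org.apache.lucene.analysis.StopFilter;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.standard.StandardTokenizer;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

  public class StopwordTokenizationCheck {
    public static void main(String[] args) throws IOException {
      // With only "o.o." in the set the filter never matches, because the
      // tokenizer emits "o.o"; adding "o.o" (as proposed) removes the token.
      CharArraySet stopwords =
          new CharArraySet(Arrays.asList("o.o.", "o.o"), true);

      Analyzer analyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
          StandardTokenizer source = new StandardTokenizer();
          TokenStream result = new StopFilter(source, stopwords);
          return new TokenStreamComponents(source, result);
        }
      };

      try (TokenStream ts = analyzer.tokenStream("f", "sp. z o.o.")) {
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();
        while (ts.incrementToken()) {
          // Prints the tokens that survive; "o.o" is suppressed once "o.o"
          // is in the stop set.
          System.out.println(term.toString());
        }
        ts.end();
      }
    }
  }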



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1448 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1448/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/26/consoleText

[repro] Revision: 7fdbf0c016e75422ebc18147cadea4ca75ab0a1a

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=SharedFSAutoReplicaFailoverTest 
-Dtests.method=test -Dtests.seed=C7B681A9FBA6B54A -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=hu -Dtests.timezone=America/Nipigon -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CdcrReplicationHandlerTest 
-Dtests.method=testReplicationWithBufferedUpdates -Dtests.seed=C7B681A9FBA6B54A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=be-BY -Dtests.timezone=America/Louisville -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.method=test -Dtests.seed=C7B681A9FBA6B54A -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=th-TH -Dtests.timezone=US/East-Indiana -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.seed=C7B681A9FBA6B54A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=th-TH -Dtests.timezone=US/East-Indiana -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestStressCloudBlindAtomicUpdates 
-Dtests.method=test_dv -Dtests.seed=C7B681A9FBA6B54A -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-PY -Dtests.timezone=SST -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5b96f89d2b038bff2ed3351887a87108f7cc6ea3
[repro] git fetch
[repro] git checkout 7fdbf0c016e75422ebc18147cadea4ca75ab0a1a

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrReplicationHandlerTest
[repro]   SharedFSAutoReplicaFailoverTest
[repro]   TestStressCloudBlindAtomicUpdates
[repro]   HdfsRestartWhileUpdatingTest
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.CdcrReplicationHandlerTest|*.SharedFSAutoReplicaFailoverTest|*.TestStressCloudBlindAtomicUpdates|*.HdfsRestartWhileUpdatingTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=C7B681A9FBA6B54A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=be-BY -Dtests.timezone=America/Louisville -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 92134 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest
[repro]   0/5 failed: org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates
[repro]   0/5 failed: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout 5b96f89d2b038bff2ed3351887a87108f7cc6ea3

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 317 - Still Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/317/

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestLeaderInitiatedRecoveryThread.testPublishDownState

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([77C7B93B03977AA3:29BA1BC5A78A335E]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1316)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:732)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:148)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:131)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getTotalReplicas(AbstractFullDistribZkTestBase.java:495)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:448)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1006)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: 
KeeperErrorCode = Session expired for /collections/collection1/state.json
at 

[jira] [Commented] (LUCENE-7862) Should BKD cells store their min/max packed values?

2018-09-12 Thread Ignacio Vera (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16611645#comment-16611645
 ] 

Ignacio Vera commented on LUCENE-7862:
--

Thanks [~janhoy]!

> Should BKD cells store their min/max packed values?
> ---
>
> Key: LUCENE-7862
> URL: https://issues.apache.org/jira/browse/LUCENE-7862
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 7.5, master (8.0)
>
> Attachments: LUCENE-7862.patch, LUCENE-7862.patch, LUCENE-7862.patch
>
>
> The index of the BKD tree already allows to know lower and upper bounds of 
> values in a given dimension. However the actual range of values might be more 
> narrow than what the index tells us, especially if splitting on one dimension 
> reduces the range of values in at least one other dimension. For instance 
> this tends to be the case with range fields: since we enforce that lower 
> bounds are less than upper bounds, splitting on one dimension will also 
> affect the range of values in the other dimension.
> So I'm wondering whether we should store the actual range of values for each 
> dimension in leaf blocks, this will hopefully allow to figure out that either 
> none or all values match in a block without having to check them all.
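
As a rough illustration of the idea (a hedged sketch with hypothetical types, not
Lucene's actual PointValues/BKD reader API), per-leaf min/max bounds let a reader
classify a whole block before touching any of its values:

  // Hypothetical single-dimension sketch of per-leaf bounds checking.
  enum BlockRelation { DISJOINT, CONTAINS_ALL, OVERLAPS }

  final class LeafBlockBounds {
    final long min, max; // actual min/max of the values stored in this leaf block
    LeafBlockBounds(long min, long max) { this.min = min; this.max = max; }
  }

  final class RangeClassifier {
    private final long queryMin, queryMax;
    RangeClassifier(long queryMin, long queryMax) {
      this.queryMin = queryMin;
      this.queryMax = queryMax;
    }

    BlockRelation classify(LeafBlockBounds leaf) {
      if (leaf.max < queryMin || leaf.min > queryMax) {
        return BlockRelation.DISJOINT;     // no value can match: skip the block
      }
      if (queryMin <= leaf.min && leaf.max <= queryMax) {
        return BlockRelation.CONTAINS_ALL; // every value matches: no per-value checks
      }
      return BlockRelation.OVERLAPS;       // fall back to checking each value
    }
  }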



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8495) ComplexPhraseQuery.rewrite throws "Unknown query type:org.apache.lucene.search.SynonymQuery" when nested BooleanQuery contains a SynonymQuery

2018-09-12 Thread Bjarke Mortensen (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bjarke Mortensen updated LUCENE-8495:
-
Affects Version/s: master (8.0)

> ComplexPhraseQuery.rewrite throws "Unknown query 
> type:org.apache.lucene.search.SynonymQuery" when nested BooleanQuery contains 
> a SynonymQuery 
> --
>
> Key: LUCENE-8495
> URL: https://issues.apache.org/jira/browse/LUCENE-8495
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Affects Versions: 7.4, master (8.0)
>Reporter: Bjarke Mortensen
>Priority: Major
> Attachments: 
> 0001-Added-support-for-nested-synonym-queries-in-ComplexP.patch
>
>
> When using nested queries in complex phrases, and part of the query is a 
> SynonymQuery, an exception is thrown from addComplexPhraseClause:
> throw new IllegalArgumentException("Unknown query type:"
> + childQuery.getClass().getName());
> Examples ("dogs" and "tv" are synonyms):
> "(cats OR dogs) cigar"
> "platform* (video* OR tv)"~10
> The bug is similar in nature to LUCENE-8305, in that SynonymQuery support was 
> added to ComplexPhraseQueryParser (in LUCENE-7695), but was not expanded to 
> nested queries.
> The fix is similar to the one in LUCENE-8305, namely to add the same logic in 
> addComplexPhraseClause as in rewrite.
> See attached patch.
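
For readers who have not opened the patch, the missing branch presumably looks
something like the sketch below (a hedged approximation, not the attached patch
itself): each term of the SynonymQuery becomes a SpanTermQuery and the
alternatives are OR'ed together, mirroring what LUCENE-8305 did in rewrite():

  import java.util.ArrayList;
  import java.util.List;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.SynonymQuery;
  import org.apache.lucene.search.spans.SpanOrQuery;
  import org.apache.lucene.search.spans.SpanQuery;
  import org.apache.lucene.search.spans.SpanTermQuery;

  final class SynonymSpanConversion {
    // Convert a SynonymQuery into a span query usable inside a complex phrase clause.
    static SpanQuery toSpanQuery(SynonymQuery synonymQuery) {
      List<SpanQuery> alternatives = new ArrayList<>();
      for (Term term : synonymQuery.getTerms()) {
        alternatives.add(new SpanTermQuery(term));
      }
      return alternatives.size() == 1
          ? alternatives.get(0)
          : new SpanOrQuery(alternatives.toArray(new SpanQuery[0]));
    }
  }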



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 878 - Unstable

2018-09-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/878/

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at 
__randomizedtesting.SeedInfo.seed([97B17FF999C129CC:9D32C054D47A2296]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds(IndexSizeTriggerTest.java:669)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testSearchRate

Error Message:
{}

Stack Trace:
java.lang.AssertionError: {}
at 
__randomizedtesting.SeedInfo.seed([97B17FF999C129CC:CAF9617056078F83]:0)
at