[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 23607 - Still Unstable!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23607/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonShape.testRandomLineEncoding

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([B8138E730ADD21DA:550A36B1DEBB7BBB]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
at org.apache.lucene.document.TestLatLonShape.testRandomLineEncoding(TestLatLonShape.java:716)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


[jira] [Updated] (LUCENE-8668) Various JVM failures on PhaseIdealLoop::split_up

2019-01-31 Thread Dawid Weiss (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-8668:

Labels: jvm  (was: )

> Various JVM failures on PhaseIdealLoop::split_up
> 
>
> Key: LUCENE-8668
> URL: https://issues.apache.org/jira/browse/LUCENE-8668
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
>  Labels: jvm
> Attachments: hs_err_pid10534.log, replay_pid10534.log
>
>
> Shows up on Jenkins in various contexts and on various JVMs, but always on
> Uwe's Jenkins machine.
> Examples:
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7fe7a0b1a46c, pid=18527, tid=18552
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (12.0+23) (build 12-ea+23)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (12-ea+23, mixed mode, tiered, serial gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xce046c]  PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone .part.38]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7f8a1fcf713c, pid=8792, tid=8822
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, parallel gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone .part.39]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7f8cfcb0a13c, pid=27685, tid=27730
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (11.0+28) (build 11+28)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (11+28, mixed mode, tiered, g1 gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xd3e13c]  PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone .part.39]+0x47c
> {code}
> {code}
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7fad0dea1409, pid=10534, tid=10604
>[junit4] #
>[junit4] # JRE version: OpenJDK Runtime Environment (10.0.1+10) (build 10.0.1+10)
>[junit4] # Java VM: OpenJDK 64-Bit Server VM (10.0.1+10, mixed mode, tiered, compressed oops, concurrent mark sweep gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0xc48409]  PhaseIdealLoop::split_up(Node*, Node*, Node*) [clone .part.40]+0x619
> {code}






[jira] [Created] (LUCENE-8677) JVM SIGSEGV in Node::in

2019-01-31 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-8677:
---

 Summary: JVM SIGSEGV in Node::in
 Key: LUCENE-8677
 URL: https://issues.apache.org/jira/browse/LUCENE-8677
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss


Jenkins:
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/15

{code}
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x000105bee9d8, pid=85292, tid=18179
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0+181) (build 9+181)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (9+181, mixed mode, tiered, concurrent mark sweep gc, bsd-amd64)
   [junit4] # Problematic frame:
   [junit4] # [thread 208539 also had an error]
   [junit4] V  [libjvm.dylib+0x4f49d8]  Node::in(unsigned int) const+0x18
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/solr/build/solr-core/test/J0/hs_err_pid85292.log
   [junit4] # [ timer expired, abort... ]
{code}

No hs_err or replay log on the Jenkins page, though.







[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758049#comment-16758049
 ] 

Uwe Schindler commented on SOLR-9515:
-

IMHO, as we can now provide our own thread factory, we could use the same
approach as in the test framework for normal executors, where we have a thread
factory that gives our threads better names. This might also let us remove the
special cases in the thread leak detector.
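
For illustration, a minimal sketch of such a naming factory (the class and
prefix names here are made up, not the actual test-framework code; Lucene's
org.apache.lucene.util.NamedThreadFactory is the same idea, if memory serves):

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical naming thread factory, similar in spirit to what the test
// framework does for normal executors: every thread gets a recognizable,
// numbered name so the thread leak detector can attribute it.
public final class NamedThreadFactory implements ThreadFactory {
  private final String prefix;
  private final AtomicInteger counter = new AtomicInteger();

  public NamedThreadFactory(String prefix) {
    this.prefix = prefix;
  }

  @Override
  public Thread newThread(Runnable r) {
    Thread t = new Thread(r, prefix + "-" + counter.incrementAndGet());
    t.setDaemon(true); // don't keep the JVM alive on shutdown
    return t;
  }

  public static void main(String[] args) {
    // Usage: pass the factory wherever an executor is created.
    ExecutorService executor =
        Executors.newFixedThreadPool(4, new NamedThreadFactory("solr-hadoop"));
    executor.submit(() -> System.out.println(Thread.currentThread().getName()));
    executor.shutdown();
  }
}
{code}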

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






Re: [JENKINS] Lucene-Solr-Tests-master - Build # 3164 - Unstable

2019-01-31 Thread Dawid Weiss
This failure does reproduce for me.

On Fri, Feb 1, 2019 at 8:16 AM Apache Jenkins Server wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3164/
>
> 2 tests failed.
> FAILED:  org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding
> FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Re: Lucene/Solr 8.0

2019-01-31 Thread Adrien Grand
Nick, this change seems to be causing test failures. Can you have a look?

See e.g. https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/15/console.

On Fri, Feb 1, 2019 at 12:27 AM Nicholas Knize  wrote:
>
> Thank you Jim. LUCENE-8669 has been merged.
>
> - Nick
>
> On Wed, Jan 30, 2019 at 1:36 PM jim ferenczi  wrote:
>>
>> Sure Nick, I am not aware of other blockers for 7.7, so I'll start the
>> first RC when your patch is merged.
>> Kevin, this looks like a big change, so I am not sure it's a good idea to
>> rush it in for 8.0. Would it be safer to target another version, in order
>> to take some time to ensure that it's not breaking anything? I guess your
>> concern is that a change like this should happen in a major version, but I
>> wonder if it's worth the risk. I don't know this part of the code or the
>> implications of such a change, so I'll let you decide what we should do
>> here, but let's not delay the release if we realize that this change
>> requires more than a few days to be merged.
>>
>> On Wed, Jan 30, 2019 at 20:25, Nicholas Knize wrote:
>>>
>>> Hey Jim,
>>>
>>> I just added https://issues.apache.org/jira/browse/LUCENE-8669 along with a 
>>> pretty straightforward patch. This is a critical one that I think needs to 
>>> be in for 7.7 and 8.0. Can I set this as a blocker?
>>>
>>> On Wed, Jan 30, 2019 at 1:07 PM Kevin Risden  wrote:

 Jim,

 Since 7.7 needs to be released before 8.0, does that leave time to get
 SOLR-9515 (the Hadoop 3 upgrade) into 8.0? I have a PR updated, and it is
 currently under review.

 Should I set SOLR-9515 as a blocker for 8.0? I'm curious whether others feel
 this should make it into 8.0 or not.

 Kevin Risden

 On Tue, Jan 29, 2019 at 11:15 AM jim ferenczi wrote:
 >
 > I had to revert the version bump for 8.0 (8.1) on branch_8x because we 
 > don't handle two concurrent releases in our tests 
 > (https://issues.apache.org/jira/browse/LUCENE-8665).
 > Since we want to release 7.7 first, I created the Jenkins job for this
 > version only, and will build the first release candidate later this week
 > if there are no objections.
 > I'll restore the version bump for 8.0 when 7.7 is out.
 >
 >
 > On Tue, Jan 29, 2019 at 14:43, jim ferenczi wrote:
 >>
 >> Hi,
 >> Hearing no objection, I created the branches for 8.0 and 7.7. I'll now
 >> create the Jenkins tasks for these versions. Uwe, can you also add them
 >> to the Policeman's Jenkins job?
 >> This also means that the feature freeze phase has started for both
 >> versions (7.7 and 8.0):
 >>
 >> - No new features may be committed to the branch.
 >> - Documentation patches, build patches and serious bug fixes may be
 >>   committed to the branch. However, you should submit all patches you
 >>   want to commit to Jira first, to give others the chance to review and
 >>   possibly vote against the patch. Keep in mind that it is our main
 >>   intention to keep the branch as stable as possible.
 >> - All patches that are intended for the branch should first be committed
 >>   to the unstable branch, merged into the stable branch, and then into
 >>   the current release branch.
 >> - Normal unstable and stable branch development may continue as usual.
 >>   However, if you plan to commit a big change to the unstable branch
 >>   while the branch feature freeze is in effect, think twice: can't the
 >>   addition wait a couple more days? Merges of bug fixes into the branch
 >>   may become more difficult.
 >> - Only Jira issues with Fix version "X.Y" and priority "Blocker" will
 >>   delay a release candidate build.
 >>
 >>
 >> Thanks,
 >> Jim
 >>
 >>
 >> On Mon, Jan 28, 2019 at 13:54, Tommaso Teofili wrote:
 >>>
 >>> sure, thanks Jim!
 >>>
 >>> Tommaso
 >>>
 >>> On Mon, Jan 28, 2019 at 10:35, jim ferenczi wrote:
 >>> >
 >>> > Go ahead Tommaso, the branch is not created yet.
 >>> > The plan is to create the branches (7.7 and 8.0) tomorrow or Wednesday
 >>> > and to announce the feature freeze the same day.
 >>> > For blocker issues that are still open, this leaves another week to
 >>> > work on a patch, and we can update the status at the end of the week
 >>> > to decide whether we can start the first release candidate early next
 >>> > week. Would that work for you?
 >>> >
 >>> > On Mon, Jan 28, 2019 at 10:19, Tommaso Teofili wrote:
 >>> >>
 >>> >> I'd like to backport 
 >>> >> https://issues.apache.org/jira/browse/LUCENE-8659
 >>> >> (upgrade to OpenNLP 1.9.1) to 8x branch, if there's still time.
 >>> >>
 >>> >> Regards,
 >>> >> Tommaso
 >>> >>
 >>> >> On Mon, Jan 28, 2019 at 07:59, Adrien Grand wrote:

[JENKINS] Lucene-Solr-Tests-master - Build # 3164 - Unstable

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3164/

2 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([E92F1FD44199EFBE:2C3EDB8695100930]:0)
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at org.apache.lucene.document.TestLatLonShape.verifyEncoding(TestLatLonShape.java:774)
at org.apache.lucene.document.TestLatLonShape.testRandomPolygonEncoding(TestLatLonShape.java:726)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRetryUpdatesWhenClusterStateIsStale

Error Message:
Error from server at https://127.0.0.1:39042/solr/stale_state_test_col: No registered leader was found after waiting for 4000ms, collection: stale_state_test_col slice: shard1 saw state=DocCollection(stale_state_test_col//collections/stale_state_test_col/state.json/9)={
   "pullReplicas":"0",
   "replicationFactor":"1",
   "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{"core_node4":{

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16758034#comment-16758034
 ] 

Uwe Schindler commented on SOLR-9515:
-

I agree with Mark Miller. BTW, the commonPool and the ForkJoin pools are made
for CPU-intensive calculation, so in general they won't need any special
permissions. What Hadoop is doing here is very atypical: it misuses the
fork-join pool for I/O work, which generally just waits for I/O devices to
finish, so it's not a CPU-intensive job. This especially applies to the common
pool (everybody in Java says: whenever you do something with the common pool,
like stream.parallel(), never ever do I/O inside). The change in Java 9 now
unfortunately extends this to all fork-join pools.
I think we should file a bug to change Hadoop to use its own thread factory or
another pool.

bq. Definitely didn't work. From the ForkJoinPool javadocs
That sentence in the docs was only meant for the commonPool: when you change
its parallelism to 0, it does not create any threads at all, so
stream().parallel() does nothing. Hadoop is using its own ForkJoin pool, and
that's the one that breaks.

I am fine with the same approach for tests that we did for the Jetty server
(it's a test-only class here): fork it and fix it until Hadoop has fixed their
bug. We should really open a bug report.
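
A minimal sketch of that "own thread factory / own pool" idea (hypothetical
names, not Hadoop's actual code): a private ForkJoinPool whose workers carry
recognizable names, so nothing leaks into ForkJoinPool.commonPool() and leak
detectors can attribute the threads.

{code}
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;
import java.util.concurrent.atomic.AtomicInteger;

public final class NamedForkJoinPool {
  private static final AtomicInteger COUNTER = new AtomicInteger();

  // Build a private fork-join pool with named workers, instead of borrowing
  // the common pool for I/O-bound work.
  public static ForkJoinPool create(String prefix, int parallelism) {
    ForkJoinPool.ForkJoinWorkerThreadFactory factory = pool -> {
      // Anonymous subclass: ForkJoinWorkerThread's constructor is protected.
      ForkJoinWorkerThread worker = new ForkJoinWorkerThread(pool) {};
      worker.setName(prefix + "-" + COUNTER.incrementAndGet());
      return worker;
    };
    return new ForkJoinPool(parallelism, factory, null, false);
  }

  public static void main(String[] args) throws Exception {
    ForkJoinPool pool = create("hadoop-io", 4);
    pool.submit(() -> System.out.println(Thread.currentThread().getName())).get();
    pool.shutdown();
  }
}
{code}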

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[JENKINS] Lucene-Solr-SmokeRelease-8.x - Build # 15 - Failure

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-8.x/15/

No tests ran.

Build Log:
[...truncated 23465 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2467 links (2018 relative) to 3229 anchors in 247 files
 [echo] Validated Links & Anchors via: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked
[untar] Expanding: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/package/solr-8.0.0.tgz into /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-8.x/lucene/top-level-ivy-settings.xml

resolve:

[...the same ivy-availability-check / ivy-configure / resolve block repeats, truncated...]

Re: Solr Size Limitation upto 32 kb limitation

2019-01-31 Thread Kranthi Kumar K
Hi Team,


Thanks for the suggestions you've posted, but none of them has fixed our
issue. Could you please share any further suggestions to address it?


We'll be awaiting your reply.


Thanks,

Kranthi kumar.K


From: Michelle Ngo
Sent: Thursday, January 24, 2019 12:00:06 PM
To: Kranthi Kumar K; dev@lucene.apache.org; solr-u...@lucene.apache.org
Cc: Ananda Babu medida; Srinivasa Reddy Karri; Ravi Vangala; Suresh Malladi; 
Vijay Nandula
Subject: RE: Solr Size Limitation upto 32 kb limitation


Thanks @Kranthi Kumar K for following up



From: Kranthi Kumar K
Sent: Thursday, 24 January 2019 4:51 PM
To: dev@lucene.apache.org; solr-u...@lucene.apache.org
Cc: Ananda Babu medida; Srinivasa Reddy Karri; Michelle Ngo; Ravi Vangala; Suresh Malladi; Vijay Nandula
Subject: RE: Solr Size Limitation upto 32 kb limitation



Thank you Bernd Fehling for your suggested solution. I've tried it by changing
the field type and setting multiValued to true in the schema.xml file, i.e.,

change from:



Changed to:



After changing it, we are still unable to import files larger than 32 KB.
Please find the solution suggested by Bernd at the URL below:



http://lucene.472066.n3.nabble.com/Re-Solr-Size-Limitation-upto-32-kb-limitation-td4421569.html



Bernd Fehling, could you please suggest an alternative solution to resolve our
issue? It would help us a lot.



Please let me know for any questions.




Thanks & Regards,

Kranthi Kumar.K,

Software Engineer,

Ccube Fintech Global Services Pvt Ltd.,

Email/Skype: kranthikuma...@ccubefintech.com,

Mobile: +91-8978078449.





From: Kranthi Kumar K
Sent: Friday, January 18, 2019 4:22 PM
To: dev@lucene.apache.org; solr-u...@lucene.apache.org
Cc: Ananda Babu medida <anandababu.med...@ccubefintech.com>; Srinivasa Reddy Karri <srinivasareddy.ka...@ccubefintech.com>; Michelle Ngo <michelle@ccube.com.au>; Ravi Vangala <ravi.vang...@ccubefintech.com>
Subject: RE: Solr Size Limitation upto 32 kb limitation



Hi team,



Thank you Erick Erickson, Bernd Fehling, and Jan Hoydahl for your suggested
solutions. I've tried the suggested ones, and we are still unable to import
files having size > 32 KB; it displays the same error.



Below link has the suggested solutions. Please have a look once.



http://lucene.472066.n3.nabble.com/Solr-Size-Limitation-upto-32-KB-files-td4419779.html



  1.  As per Erick Erickson, I've changed the string type to a text-based
type, and the issue still occurs.

I’ve changed from :







Changed to:







If we do so, an error shows up in the log; please find the error in the
attachment.



If I change to:







It is not showing any error, but the issue still exists.



  1.  As per Jan Hoydahl, I have gone through the link you provided and
checked the 'requestParsers' tag in solrconfig.xml.

The requestParsers tag in our application is as follows:

''

The request parsers we are using and the ones in the link you provided are
similar, and we are still unable to import files larger than 32 KB.



  1.  As per Bernd Fehling: we are using Solr 4.10.2. You mentioned,

'If you are trying to add larger content then you have to "chop" that by
yourself and add it as multivalued. Can be done within a self written loader.'

I'm a newbie to Solr, and I didn't quite get what a 'self written loader' is.



Could you please provide us with sample code that would help us go further?
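
For what it's worth, a minimal sketch of what such a self-written loader could
look like with SolrJ is below. Everything here is illustrative: the core URL,
the multiValued field name content_chunks, and the chunk size are assumptions,
and in Solr 4.x the client class was HttpSolrServer rather than
HttpSolrClient. The point is only to chop content below Lucene's 32766-byte
per-term limit before indexing:

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical "self-written loader": chop long text into pieces that stay
// under Lucene's 32766-byte term limit and index them as values of a
// multiValued field (content_chunks is a made-up name for illustration).
public class ChunkingLoader {
  // Chunk by characters, well under the 32766-byte limit to leave room for
  // multi-byte UTF-8 characters.
  private static final int CHUNK_CHARS = 8000;

  static List<String> chop(String text) {
    List<String> chunks = new ArrayList<>();
    for (int i = 0; i < text.length(); i += CHUNK_CHARS) {
      chunks.add(text.substring(i, Math.min(text.length(), i + CHUNK_CHARS)));
    }
    return chunks;
  }

  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      String longText = args.length > 0 ? args[0] : "...";  // content > 32 KB
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      for (String chunk : chop(longText)) {
        doc.addField("content_chunks", chunk);  // multiValued field in schema.xml
      }
      client.add(doc);
      client.commit();
    }
  }
}

Chunking by characters keeps each value safely under the byte limit even with
multi-byte UTF-8; a real loader would probably split on whitespace so terms
aren't cut mid-word.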






Thanks & Regards,

Kranthi Kumar.K,

Software Engineer,

Ccube Fintech Global Services Pvt Ltd.,

Email/Skype: kranthikuma...@ccubefintech.com,

Mobile: +91-8978078449.





From: Kranthi Kumar K <kranthikuma...@ccubefintech.com>
Sent: Thursday, January 17, 2019 12:43 PM
To: dev@lucene.apache.org; solr-u...@lucene.apache.org
Cc: Ananda Babu medida <anandababu.med...@ccubefintech.com>; Srinivasa Reddy Karri <srinivasareddy.ka...@ccubefintech.com>; Michelle Ngo <michelle@ccube.com.au>
Subject: Re: Solr Size Limitation upto 32 kb limitation



Hi Team,



Can we have an update on the below issue? We are awaiting your reply.



Thanks,

Kranthi kumar.K



From: Kranthi Kumar K
Sent: Friday, January 4, 2019 5:01:38 PM
To: dev@lucene.apache.org
Cc: Ananda Babu medida; Srinivasa Reddy Karri
Subject: Solr Size Limitation upto 32 kb limitation



Hi team,



We are currently using Solr 4.2.1 in our project and everything is going well.
But recently we are facing an issue with Solr Data Import: it is not importing
files with size greater than 32766 bytes (i.e., 32 KB) 

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757960#comment-16757960
 ] 

Mark Miller commented on SOLR-9515:
---

You should file a Hadoop Jira and see what they say.

I was using a bunch of ForkJoinPools in the large test cleanup I did and ran
into issues like this, so I replaced them with a normal executor. That thing
is too special and, by far, more trouble than it's worth.
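
For illustration, a sketch of that replacement with invented task names: the
same fan-out on a plain fixed-size executor with invokeAll instead of a
ForkJoinPool, which keeps the threads ordinary and easy to name and shut down.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical example: run N independent tasks on a normal executor
// instead of a ForkJoinPool. Plain worker threads have none of the
// commonPool quirks.
public class NormalExecutorFanOut {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    try {
      List<Callable<String>> tasks = new ArrayList<>();
      for (int i = 0; i < 10; i++) {
        final int n = i;
        tasks.add(() -> "task-" + n + " ran on " + Thread.currentThread().getName());
      }
      // invokeAll blocks until every task finishes, like joining FJ tasks.
      for (Future<String> f : executor.invokeAll(tasks)) {
        System.out.println(f.get());
      }
    } finally {
      executor.shutdown();
      executor.awaitTermination(10, TimeUnit.SECONDS);
    }
  }
}
{code}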

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757955#comment-16757955
 ] 

Mark Miller commented on SOLR-13189:


Whoops, waiting for consistency isn't enough; you also have to wait for the
right total doc count. Updated patch.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[jira] [Updated] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-13189:
---
Attachment: SOLR-13189.patch

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[JENKINS] Lucene-Solr-SmokeRelease-7.7 - Build # 1 - Failure

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.7/1/

No tests ran.

Build Log:
[...truncated 23466 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2460 links (2011 relative) to 3224 anchors in 246 files
 [echo] Validated Links & Anchors via: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/solr/build/solr.tgz.unpacked
[untar] Expanding: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/solr/package/solr-7.7.0.tgz into /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.7/lucene/top-level-ivy-settings.xml

resolve:

[...the same ivy-availability-check / ivy-configure / resolve block repeats, truncated...]

[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757947#comment-16757947
 ] 

Mark Miller commented on SOLR-13189:


Here is a hack to that test.

If we want to handle any valid case when checking counts in a test, we have to
do what the ChaosMonkey tests have always done and wait for consistency
explicitly.
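
A sketch of such an explicit wait, with hypothetical names: the test supplies
a way to read each replica's doc count, and the helper polls until all counts
agree or a timeout expires, roughly what the ChaosMonkey-style checks do.

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Hypothetical helper: poll per-replica doc counts until all replicas agree,
// or fail after a timeout.
public final class ConsistencyWait {
  public static void waitForConsistency(Supplier<List<Long>> perReplicaCounts,
                                        long timeoutSec) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSec);
    while (true) {
      Set<Long> distinct = new HashSet<>(perReplicaCounts.get());
      if (distinct.size() == 1) return;  // every replica reports the same count
      if (System.nanoTime() > deadline) {
        throw new AssertionError("Replicas never became consistent: " + distinct);
      }
      Thread.sleep(250);  // back off before re-polling
    }
  }
}
{code}

As noted in the earlier comment, agreement alone isn't sufficient; the test
also has to check that the agreed count is the expected total.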

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[jira] [Updated] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-13189:
---
Attachment: SOLR-13189.patch

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch, SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 5040 - Failure!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5040/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 2041 lines...]
   [junit4] JVM J1: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/core/test/temp/junit4-J1-20190201_022806_3764652033111769156818.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/core/test/temp/junit4-J0-20190201_022806_3767535197494481032855.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 299 lines...]
   [junit4] JVM J0: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/test-framework/test/temp/junit4-J0-20190201_023855_8084652299839328869638.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/test-framework/test/temp/junit4-J1-20190201_023855_80817770586621094957161.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 1080 lines...]
   [junit4] JVM J1: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/common/test/temp/junit4-J1-20190201_024030_0915750081011133537222.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/common/test/temp/junit4-J0-20190201_024030_0914595294727667949992.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 259 lines...]
   [junit4] JVM J0: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/icu/test/temp/junit4-J0-20190201_024400_87616630813357863400386.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

   [junit4] JVM J1: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/icu/test/temp/junit4-J1-20190201_024400_8766519008512372375250.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 253 lines...]
   [junit4] JVM J1: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/temp/junit4-J1-20190201_024417_28910002153707718634124.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: /Users/jenkins/workspace/Lucene-Solr-master-MacOSX/lucene/build/analysis/kuromoji/test/temp/junit4-J0-20190201_024417_2896415414776877795607.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 162 lines...]
   [junit4] JVM J0: stderr was not empty, see: 

[JENKINS] Lucene-Solr-Tests-7.7 - Build # 3 - Unstable

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.7/3/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.security.hadoop.TestSolrCloudWithHadoopAuthPlugin

Error Message:
6 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestSolrCloudWithHadoopAuthPlugin:
   1) Thread[id=35244, name=apacheds, state=WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=35245, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   3) Thread[id=35247, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   4) Thread[id=35249, name=pool-74-thread-1, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   5) Thread[id=35246, name=groupCache.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
   6) Thread[id=35248, name=ou=system.data, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithHadoopAuthPlugin]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
        at 

[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk1.8.0_172) - Build # 115 - Unstable!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/115/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC

1 test failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value 
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "overlay":{
    "znodeVersion":0,
    "runtimeLib":{"colltest":{
        "name":"colltest",
        "version":1,  from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([36166B740804ED3F:EE5B4623FFD9489F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:590)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757921#comment-16757921
 ] 

Mark Miller commented on SOLR-13189:


Basically, this is another example in a long line of someone introducing or 
changing a feature and causing massive new instability.

I still intend to tackle that problem fully, and I have concrete plans and 
work already done, but I've got some side gigs too.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*
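
For reference, a minimal sketch of the usage pattern the issue wants pinned 
down (the percentage and the surrounding steps are illustrative, not taken 
from the attached patch):

{code:java}
// Fail roughly 20% of replica update requests for the duration of the test,
// then clear every injection point so later tests are unaffected.
TestInjection.failReplicaRequests = "true:20";
try {
  // ... index documents, commit, then verify that every update which
  // succeeded from the client's perspective survives on all replicas ...
} finally {
  TestInjection.reset();
}
{code}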



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757918#comment-16757918
 ] 

Mark Miller commented on SOLR-13189:


And it was just starting to feel good being away again ...

As an aside, that wait-for-recoveries call should be nixed because it's flaky 
after a collection create call. We need to use wait calls that specify the 
shards and replicas to wait for, like the SolrCloudTest tests do now.
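
A minimal sketch of such an explicit wait, as written in newer 
SolrCloudTestCase-style tests (the collection name and shape here are 
illustrative):

{code:java}
// Wait for a concrete cluster shape (2 shards, 4 active replicas in total)
// rather than issuing a generic "wait for recoveries" call.
// waitForState(...) and clusterShape(...) are SolrCloudTestCase helpers.
waitForState("Expected collection1 to have 2 shards and 4 replicas",
    "collection1", clusterShape(2, 4));
{code}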

What I would guess is happening here is that you are hitting the eventual 
consistency nature of the system.

In older versions these tests might have worked because, before the request 
returned to the client, the leader would have called the replica and told it 
to go into recovery. I believe we no longer make these calls (for good 
reason; HTTP calls tied to updates were no good). So a replica will only 
enter recovery when it realizes it should via ZooKeeper communication.

The system will be eventually consistent, but there is no promise it will be 
consistent even when all replicas are active. You must be willing to wait a 
short time for consistency, and this test does not.

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the clients perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23606 - Still Unstable!

2019-01-31 Thread Kevin Risden
Build started before the SOLR-9515 commit was reverted. Sorry for the noise.

Kevin Risden

On Thu, Jan 31, 2019 at 9:05 PM Policeman Jenkins Server
 wrote:
>
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23606/
> Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC
>
> 44 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
>
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
>
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> at __randomizedtesting.SeedInfo.seed([54029BC8F13A576C]:0)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
> at 
> org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at java.base/java.lang.Thread.run(Thread.java:844)
>
>
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
>
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
>
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> at __randomizedtesting.SeedInfo.seed([54029BC8F13A576C]:0)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
> at 
> org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757883#comment-16757883
 ] 

Lucene/Solr QA commented on SOLR-9515:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m  3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 
 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
7s{color} | {color:green} tools in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
33s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-9515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12957138/SOLR-9515.patch |
| Optional Tests |  checklicenses  validatesourcepatterns  ratsources  compile  
javac  unit  checkforbiddenapis  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / e4f202c |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/284/testReport/ |
| modules | C: lucene lucene/tools solr solr/core solr/test-framework U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/284/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23606 - Still Unstable!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23606/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

44 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([54029BC8F13A576C]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
at 
org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([54029BC8F13A576C]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
at 
org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757876#comment-16757876
 ] 

Kevin Risden commented on SOLR-9515:


So I see three ways forward in the short term, none of which I really like.
 # Disable Hadoop tests on JDK9+
 ## Part of this Hadoop 3 upgrade was to make it possible to run on JDK9+, 
potentially with Hadoop support.
 ## There are other Hadoop tests currently disabled with JDK9+ due to existing 
issues.
 # Copy and patch BlockPoolSlice to change how the ForkJoinPool is created
 ## BlockPoolSlice looks to be the only place where ForkJoinPool will cause 
issues for the tests. I am currently validating this.
 ## We would end up with 2 classes copied to make the Hadoop integration tests 
work.
 # Don't upgrade to Hadoop 3 until Hadoop is more SecurityManager friendly.

Long term it would be good to fix Hadoop to make it more SecurityManager 
friendly.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757855#comment-16757855
 ] 

Kevin Risden commented on SOLR-9515:


I found why this breaks on JDK9+ but not on JDK8. The 
DefaultForkJoinWorkerThreadFactory implementation changed to have no default 
permissions.

JDK8 - 
[http://hg.openjdk.java.net/jdk8/jdk8/jdk/file/687fd7c7986d/src/share/classes/java/util/concurrent/ForkJoinPool.java#l587]

JDK11 - 
http://hg.openjdk.java.net/jdk/jdk11/file/1ddf9a99e4ad/src/java.base/share/classes/java/util/concurrent/ForkJoinPool.java#l705
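
For contrast, a minimal sketch (not the Hadoop code itself; the pool size and 
variable names are illustrative) of constructing a pool with an explicit 
factory, which avoids the no-permissions workers the default JDK9+ factory 
creates under a SecurityManager:

{code:java}
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.ForkJoinWorkerThread;

// Explicit factory: workers are plain ForkJoinWorkerThread subclasses, so
// they keep the permissions granted by the test policy instead of the empty
// permission set the default factory applies when a SecurityManager is set.
ForkJoinPool.ForkJoinWorkerThreadFactory factory =
    pool -> new ForkJoinWorkerThread(pool) { };
ForkJoinPool addReplicaPool = new ForkJoinPool(4, factory, null, false);
{code}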

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757839#comment-16757839
 ] 

Kevin Risden commented on SOLR-9515:


ForkJoinPool was added in HDFS-13768

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757838#comment-16757838
 ] 

Kevin Risden commented on SOLR-9515:


Reverted the Hadoop 3 commit since it is broken with JDK9+ and the security 
manager. Hadoop uses ForkJoinPool, which by default creates threads that have 
no permissions. There is no easy way to work around the ForkJoinPool 
currently. Thinking about ways to move forward.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757834#comment-16757834
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit e4f202c1e30f7c7209f978d7733922245c33ab71 in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e4f202c ]

Revert "SOLR-9515: Update to Hadoop 3"

This reverts commit 6bb24673f422a4e4267bc22361bc9258809d5f60.


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23605 - Unstable!

2019-01-31 Thread Kevin Risden
Thanks to Uwe for some ideas. I found what the issue is. Not sure how
to fix it going forward. I will revert the SOLR-9515 commit later
tonight until I can find a solution.

Details are in SOLR-9515 comments.

Kevin Risden

On Thu, Jan 31, 2019 at 5:14 PM Kevin Risden  wrote:
>
> This is caused by SOLR-9515. I don't know how though. Details in
> comment here: https://issues.apache.org/jira/browse/SOLR-9515
>
> Still digging but would appreciate any ideas.
>
> Kevin Risden
>
> On Thu, Jan 31, 2019 at 4:58 PM Policeman Jenkins Server
>  wrote:
> >
> > Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605/
> > Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
> >
> > 45 tests failed.
> > FAILED:  
> > junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
> >
> > Error Message:
> > Timed out waiting for Mini HDFS Cluster to start
> >
> > Stack Trace:
> > java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> > at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> > at 
> > org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
> > at 
> > org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
> > at 
> > org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
> > at 
> > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> > Method)
> > at 
> > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at 
> > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
> > at 
> > com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> > at 
> > org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> > at 
> > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> > at 
> > com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> > at 
> > org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> > at 
> > org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> > at 
> > org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> > at 
> > com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> > at 
> > com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> > at java.base/java.lang.Thread.run(Thread.java:844)
> >
> >
> > FAILED:  
> > junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
> >
> > Error Message:
> > Timed out waiting for Mini HDFS Cluster to start
> >
> > Stack Trace:
> > java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> > at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> > at 
> > org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> > 

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757807#comment-16757807
 ] 

Kevin Risden commented on SOLR-9515:


Well, setting that key to 0 results in the following.
{code:java}
com.carrotsearch.randomizedtesting.UncaughtExceptionError:
 Captured an uncaught exception in thread: Thread[id=226, name=Thread-111, 
state=RUNNABLE, group=TGRP-HdfsRecoverLeaseTest]
Caused by: java.lang.IllegalArgumentException
at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
at 
java.base/java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2295)
at 
java.base/java.util.concurrent.ForkJoinPool.<init>(ForkJoinPool.java:2165)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.initializeAddReplicaPool(BlockPoolSlice.java:213)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.<init>(BlockPoolSlice.java:188)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:1041)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlockPool(FsVolumeImpl.java:1033)
at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$2.run(FsVolumeList.java:412)
{code}
Definitely didn't work. From the ForkJoinPool javadocs:

@throws IllegalArgumentException if parallelism less than or equal to zero, or 
greater than implementation limit

So yeah, not sure how to work around this.
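
A one-line illustration of that constraint (a hypothetical snippet, not from 
any patch): only the common pool treats a parallelism of 0 as "disabled"; an 
explicitly constructed pool rejects it.

{code:java}
// Throws IllegalArgumentException: the explicit ForkJoinPool constructors
// require parallelism > 0, unlike the common pool's system property.
ForkJoinPool pool = new ForkJoinPool(0);
{code}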

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757804#comment-16757804
 ] 

Kevin Risden commented on SOLR-9515:


Yeah, not sure how to work around it either. The pool is made explicitly by 
BlockPoolSlice, so none of the system properties take effect. I don't 
understand how this works on JDK8 though. I'll have to take a break and look 
at it again later tonight or tomorrow.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13017) SolrInputField.setValue method should not use supplied collection as backing value.

2019-01-31 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757802#comment-16757802
 ] 

Lucene/Solr QA commented on SOLR-13017:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
55s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12949713/SOLR-13017.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / edb0531 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/283/testReport/ |
| modules | C: solr solr/solrj U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/283/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SolrInputField.setValue method should not use supplied collection as backing 
> value.
> ---
>
> Key: SOLR-13017
> URL: https://issues.apache.org/jira/browse/SOLR-13017
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Charles Sanders
>Priority: Minor
> Attachments: SOLR-13017.patch
>
>
> The setValue method in SolrInputField takes an argument of Object.  If the 
> supplied object is a collection, then the collection is used as the backing 
> value for the field.  This can cause unexpected results when the collection 
> is used to initialize two or more different fields.
> Consider the example where a list of values 'a', 'b', 'c' is used to 
> initialize two fields in a SolrInputDocument.
> {noformat}
> List<String> lst = new ArrayList<>();
> lst.add("a");
> lst.add("b");
> lst.add("c");
> 
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField("alpha", lst);
> sid.addField("beta", lst);
> .
> .  {add more fields to doc}
> .
> sid.addField("beta", "blah");  // add another value to field 'beta'
> {noformat}
> Because the same list is used to initialize both fields 'alpha' and 'beta', 
> they not only contain the same values, but point to the same instance of the 
> list.  Therefore, if an additional value is added to one of the fields, both 
> will contain the value.
> In the example provided, the user would expect field 'alpha' to contain 
> values 'a', 'b', 'c', while field 'beta' should contain values 'a', 'b', 'c' 
> and 'blah'. But that is not the case. Both fields point to the same instance 
> of the list, so if a new value is added to either field, the list is updated 
> and both fields will contain the same values ('a', 'b', 'c', 'blah').
> This is not a bug, but the intended logic of the method based on the method 
> comment.
> {noformat}
> /**
>* Set the value for a field.  Arrays will be converted to a collection. If
>* a collection is 
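
A minimal sketch of the defensive copy the issue is asking for (an assumed 
shape of the fix, not the attached patch; it presumes the existing 'value' 
field and java.util imports):

{code:java}
// Hypothetical setValue variant: copy a supplied Collection so the field no
// longer shares a backing list with the caller or with other fields.
public void setValue(Object v) {
  if (v instanceof Collection) {
    value = new ArrayList<>((Collection<?>) v);
  } else {
    value = v;
  }
}
{code}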

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757805#comment-16757805
 ] 

Kevin Risden commented on SOLR-9515:


Ah, our comments overlapped; let me try that. That is pretty easy to set. 
Thanks for looking!

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757800#comment-16757800
 ] 

Uwe Schindler commented on SOLR-9515:
-

Set this one in the test setup: 
DFSConfigKeys.DFS_DATANODE_VOLUMES_REPLICA_ADD_THREADPOOL_SIZE_KEY to 0.
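
A minimal sketch of that suggestion as test-setup code (the surrounding 
MiniDFSCluster wiring is assumed; the attempt that ends in an 
IllegalArgumentException appears earlier in this digest):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Zero the datanode add-replica pool size before building the
// MiniDFSCluster, so BlockPoolSlice would not spin up its own ForkJoinPool
// under the test SecurityManager.
Configuration conf = new Configuration();
conf.setInt(DFSConfigKeys.DFS_DATANODE_VOLUMES_REPLICA_ADD_THREADPOOL_SIZE_KEY, 0);
{code}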

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757793#comment-16757793
 ] 

Uwe Schindler commented on SOLR-9515:
-

You found the issue. Not sure how to work around that. Maybe set parallelism 
to 0 in the test runner properties.
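
For the common pool, that would be a JVM property on the test runner (as 
noted elsewhere in this thread, it does not apply here because BlockPoolSlice 
builds its own pool):

{noformat}
-Djava.util.concurrent.ForkJoinPool.common.parallelism=0
{noformat}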

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757797#comment-16757797
 ] 

Uwe Schindler commented on SOLR-9515:
-

Or set the parallelism to 0 in the DFSConfig in test setup. That should be the 
easiest.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757791#comment-16757791
 ] 

Kevin Risden commented on SOLR-9515:


Well, looks like I found something promising. I don't know why this would 
work on JDK8 though.

[https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html]
{code:java}
If a SecurityManager is present and no factory is specified, then the default 
pool uses a factory supplying threads that have no Permissions enabled. The 
system class loader is used to load these classes. Upon any error in 
establishing these settings, default parameters are used. It is possible to 
disable or limit the use of threads in the common pool by setting the 
parallelism property to zero, and/or using a factory that may return null. 
However doing so may cause unjoined tasks to never be executed.
{code}
[https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/util/concurrent/ForkJoinPool.html]
{code:java}
If no thread factory is supplied via a system property, then the common pool 
uses a factory that uses the system class loader as the thread context class 
loader. In addition, if a SecurityManager is present, then the common pool uses 
a factory supplying threads that have no Permissions enabled. Upon any error in 
establishing these settings, default parameters are used. It is possible to 
disable or limit the use of threads in the common pool by setting the 
parallelism property to zero, and/or using a factory that may return null. 
However doing so may cause unjoined tasks to never be executed.{code}
From 
https://github.com/apache/hadoop/blob/branch-3.2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java#L213

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-8669.

Resolution: Fixed

> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline crossing polygons. 
> It is therefore up to the calling application / user to split dateline 
> crossing polygons into a {{MultiPolygon}} query with two search polygons that 
> share the dateline. This, however, does not produce expected results because 
> {{EdgeTree.internalComponentRelateTriangle}} does not differentiate between a 
> triangle that {{CROSSES}} or is {{WITHIN}} the target polygon. Therefore 
> {{MultiPolygon}} {{WITHIN}} queries that share the dateline behave as an 
> {{INTERSECT}} and will therefore produce incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> // index
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> // search
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757786#comment-16757786
 ] 

ASF subversion and git services commented on LUCENE-8669:
-

Commit be471ea91d53ae9b362f223e4fafecc612b4d309 in lucene-solr's branch 
refs/heads/branch_7_7 from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=be471ea ]

LUCENE-8669: Fix LatLonShape WITHIN queries that fail with Multiple search 
Polygons that share the dateline.


> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline crossing polygons. 
> It is therefore up to the calling application / user to split dateline 
> crossing polygons into a {{MultiPolygon}} query with two search polygons that 
> share the dateline. This, however, does not produce expected results because 
> {{EdgeTree.internalComponentRelateTriangle}} does not differentiate between a 
> triangle that {{CROSSES}} or is {{WITHIN}} the target polygon. Therefore 
> {{MultiPolygon}} {{WITHIN}} queries that share the dateline behave as an 
> {{INTERSECT}} and will therefore produce incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> // index
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.
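
To make the failure mode concrete, here is a minimal, hypothetical sketch (not 
Lucene's actual EdgeTree code) of how per-triangle relations must be combined 
for a WITHIN query: a triangle that merely CROSSES the search geometry must 
not be treated like one that is WITHIN it, otherwise the query degenerates 
into an INTERSECTS.
{code:java}
// Hypothetical sketch; the Relation enum and the combination rule are
// illustrative, not the real Lucene API.
enum Relation { DISJOINT, CROSSES, WITHIN }

final class WithinSketch {
  // A document matches WITHIN only if every indexed triangle lies fully
  // inside the search geometry. Treating CROSSES like WITHIN (the behavior
  // this issue describes) turns WITHIN into INTERSECTS semantics.
  static boolean matchesWithin(Relation[] triangleRelations) {
    for (Relation r : triangleRelations) {
      if (r != Relation.WITHIN) {
        return false; // DISJOINT or CROSSES: not fully contained
      }
    }
    return triangleRelations.length > 0;
  }
}
{code}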



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757778#comment-16757778
 ] 

Uwe Schindler commented on SOLR-9515:
-

Interestingly, the Solr test policy (in contrast to Lucene's) allows read 
access everywhere: first line with "<<ALL FILES>>"

I currently don't know what's going on. I'd like to help, but I am very 
busy: I have to fly to the FOSDEM conference tomorrow and have a phone 
conference before that, so I need to sleep.

The only thing you can do is try to debug it by setting a breakpoint 
there.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757783#comment-16757783
 ] 

Uwe Schindler commented on SOLR-9515:
-

More info here: 
https://docs.oracle.com/javase/8/docs/technotes/guides/security/troubleshooting-security.html

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757782#comment-16757782
 ] 

Uwe Schindler commented on SOLR-9515:
-

The only thing that might be an issue here: maybe Hadoop does some separate 
permission stuff which interferes with Lucene. In code you can always further 
restrict permissions (we have a test using that feature) in an 
AccessController closure.
I'd suggest enabling logging of permission checks: 
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/envvars003.html,
 e.g. {{-Djava.security.debug="access,failure"}}
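
A minimal sketch of such an AccessController closure (hedged: the permission, 
path, and class name are illustrative, not taken from the Solr test 
framework):
{code:java}
import java.io.FilePermission;
import java.security.*;

public class RestrictedSketch {
  public static void main(String[] args) {
    // Build a permission set far narrower than whatever the policy grants.
    PermissionCollection perms = new Permissions();
    perms.add(new FilePermission("/tmp/-", "read")); // illustrative grant

    AccessControlContext restricted = new AccessControlContext(
        new ProtectionDomain[] { new ProtectionDomain(null, perms) });

    // Inside this closure the effective permissions are intersected with
    // the restricted context: at most read access under /tmp survives.
    AccessController.doPrivileged((PrivilegedAction<Void>) () -> {
      // Any check beyond the restricted set throws AccessControlException.
      return null;
    }, restricted);
  }
}
{code}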

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757781#comment-16757781
 ] 

ASF subversion and git services commented on LUCENE-8669:
-

Commit 3e5bc5c2ebb66a189f3d791d23ccc23ba17543b6 in lucene-solr's branch 
refs/heads/branch_7x from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3e5bc5c ]

LUCENE-8669: Fix LatLonShape WITHIN queries that fail with Multiple search 
Polygons that share the dateline.


> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline-crossing polygons. 
> It is therefore up to the calling application / user to split a 
> dateline-crossing polygon into a {{MultiPolygon}} query with two search 
> polygons that share the dateline. This, however, does not produce the 
> expected results, because {{EdgeTree.internalComponentRelateTriangle}} does 
> not differentiate between a triangle that {{CROSSES}} and one that is 
> {{WITHIN}} the target polygon. {{MultiPolygon}} {{WITHIN}} queries that 
> share the dateline therefore behave as an {{INTERSECT}} and produce 
> incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> ///// index /////
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757779#comment-16757779
 ] 

Kevin Risden commented on SOLR-9515:


Yeah, I have been trying to debug it. I'll keep playing with it. Thanks. Glad 
it isn't something extremely obvious that I'm missing.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1675#comment-1675
 ] 

ASF subversion and git services commented on LUCENE-8669:
-

Commit fd92d54b38a7a7048e84ff20b2d26e6c05e116e7 in lucene-solr's branch 
refs/heads/branch_8_0 from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fd92d54 ]

LUCENE-8669: Fix LatLonShape WITHIN queries that fail with Multiple search 
Polygons that share the dateline.


> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline-crossing polygons. 
> It is therefore up to the calling application / user to split a 
> dateline-crossing polygon into a {{MultiPolygon}} query with two search 
> polygons that share the dateline. This, however, does not produce the 
> expected results, because {{EdgeTree.internalComponentRelateTriangle}} does 
> not differentiate between a triangle that {{CROSSES}} and one that is 
> {{WITHIN}} the target polygon. {{MultiPolygon}} {{WITHIN}} queries that 
> share the dateline therefore behave as an {{INTERSECT}} and produce 
> incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> ///// index /////
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757776#comment-16757776
 ] 

Kevin Risden commented on SOLR-9515:


Both of the following still exhibit issues:
{code:java}
ant test  -Dtestcase=HdfsRecoverLeaseTest -Dtests.seed=B3EBC148FC827CD8 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en-IE 
-Dtests.timezone=Asia/Ulan_Bator -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1 
-Dargs='-Djdk.io.permissionsUseCanonicalPath=true 
-Djdk.security.filePermCompat=true'
{code}
{code:java}
ant test  -Dtestcase=HdfsRecoverLeaseTest -Dtests.seed=B3EBC148FC827CD8 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=en-IE 
-Dtests.timezone=Asia/Ulan_Bator -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1 -Djdk.io.permissionsUseCanonicalPath=true 
-Djdk.security.filePermCompat=true
{code}

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757768#comment-16757768
 ] 

Uwe Schindler commented on SOLR-9515:
-

In Java 9 they changed the "normalization" of paths when FilePermissions are 
compared. The canonical path is no longer calculated (for performance 
reasons); the path is only made absolute, but not canonicalized, so if there 
is a symlink, bad things may happen. Not sure what exactly is happening here; 
to me it also looks fine.

Did you test this locally with Java 9+ during your development or did you only 
use Java 8?
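
A tiny hedged sketch of that behavior change (the paths and the symlink are 
purely illustrative):
{code:java}
import java.io.FilePermission;

public class CanonicalSketch {
  public static void main(String[] args) {
    // Suppose /data/link is a symlink to /data/real.
    FilePermission granted   = new FilePermission("/data/real/-", "read");
    FilePermission requested = new FilePermission("/data/link/file", "read");

    // Java 8 canonicalized both paths before comparing, so this could be
    // true; Java 9+ only makes paths absolute, so it stays false unless
    // -Djdk.io.permissionsUseCanonicalPath=true restores the old behavior.
    System.out.println(granted.implies(requested));
  }
}
{code}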

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757773#comment-16757773
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 10:40 PM:
--

I had tested with just JDK8. This does reproduce locally with JDK11. I checked 
the failure locally and don't see any symlinks in the path going down. I can 
revert the master commit since I don't have this working on JDK9+ yet. Sigh.

I was going to try a few configs from here: 
[https://docs.oracle.com/javase/10/security/permissions-jdk1.htm#JSSEC-GUID-83063225-0ACB-4909-9BAB-7F7D4E3749E2]


was (Author: risdenk):
I had tested with just JDK8. This does reproduce locally though with JDK11. I 
checked the failure locally and don't see any symlinks in the path going down. 
I can revert the master commit since I don't have this working on JDK9+ yet. 
Sigh.

I was going to try a few configs from here: 
https://docs.oracle.com/javase/10/security/permissions-jdk1.htm#JSSEC-GUID-83063225-0ACB-4909-9BAB-7F7D4E3749E2

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757773#comment-16757773
 ] 

Kevin Risden commented on SOLR-9515:


I had tested with just JDK8. This does reproduce locally though with JDK11. I 
checked the failure locally and don't see any symlinks in the path going down. 
I can revert the master commit since I don't have this working on JDK9+ yet. 
Sigh.

I was going to try a few configs from here: 
https://docs.oracle.com/javase/10/security/permissions-jdk1.htm#JSSEC-GUID-83063225-0ACB-4909-9BAB-7F7D4E3749E2

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757769#comment-16757769
 ] 

ASF subversion and git services commented on LUCENE-8669:
-

Commit fade1a091bfa2b7733c37b47a96ee8adbd3c8583 in lucene-solr's branch 
refs/heads/branch_8x from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fade1a0 ]

LUCENE-8669: Fix LatLonShape WITHIN queries that fail with Multiple search 
Polygons that share the dateline.


> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline-crossing polygons. 
> It is therefore up to the calling application / user to split a 
> dateline-crossing polygon into a {{MultiPolygon}} query with two search 
> polygons that share the dateline. This, however, does not produce the 
> expected results, because {{EdgeTree.internalComponentRelateTriangle}} does 
> not differentiate between a triangle that {{CROSSES}} and one that is 
> {{WITHIN}} the target polygon. {{MultiPolygon}} {{WITHIN}} queries that 
> share the dateline therefore behave as an {{INTERSECT}} and produce 
> incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> ///// index /////
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757765#comment-16757765
 ] 

Nicholas Knize commented on LUCENE-8669:


Thanks [~ivera]. I agree on the testing. With this one being a blocker, I'll 
go ahead and commit as is, then add some more thorough randomized testing 
beyond the simple explicit testing that is provided.

> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline-crossing polygons. 
> It is therefore up to the calling application / user to split a 
> dateline-crossing polygon into a {{MultiPolygon}} query with two search 
> polygons that share the dateline. This, however, does not produce the 
> expected results, because {{EdgeTree.internalComponentRelateTriangle}} does 
> not differentiate between a triangle that {{CROSSES}} and one that is 
> {{WITHIN}} the target polygon. {{MultiPolygon}} {{WITHIN}} queries that 
> share the dateline therefore behave as an {{INTERSECT}} and produce 
> incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> ///// index /////
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8669) LatLonShape WITHIN queries fail with Multiple search Polygons that share the dateline

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757767#comment-16757767
 ] 

ASF subversion and git services commented on LUCENE-8669:
-

Commit edb05314b315acf9abc4f9fdb3d30e17aff7feba in lucene-solr's branch 
refs/heads/master from Nicholas Knize
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=edb0531 ]

LUCENE-8669: Fix LatLonShape WITHIN queries that fail with Multiple search 
Polygons that share the dateline.


> LatLonShape WITHIN queries fail with Multiple search Polygons that share the 
> dateline
> -
>
> Key: LUCENE-8669
> URL: https://issues.apache.org/jira/browse/LUCENE-8669
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.0, 7.7
>Reporter: Nicholas Knize
>Assignee: Nicholas Knize
>Priority: Blocker
> Attachments: LUCENE-8669.patch
>
>
> {{LatLonShape.newPolygonQuery}} does not support dateline-crossing polygons. 
> It is therefore up to the calling application / user to split a 
> dateline-crossing polygon into a {{MultiPolygon}} query with two search 
> polygons that share the dateline. This, however, does not produce the 
> expected results, because {{EdgeTree.internalComponentRelateTriangle}} does 
> not differentiate between a triangle that {{CROSSES}} and one that is 
> {{WITHIN}} the target polygon. {{MultiPolygon}} {{WITHIN}} queries that 
> share the dateline therefore behave as an {{INTERSECT}} and produce 
> incorrect results.
> Consider the following test, for example:
> {code:java}
> // index
> // western poly
> Polygon indexPoly1 = new Polygon(
> new double[] {-7.5d, 15d, 15d, 0d, -7.5d},
> new double[] {-180d, -180d, -176d, -176d, -180d}
> );
> // eastern poly
> Polygon indexPoly2 = new Polygon(
> new double[] {15d, -7.5d, -15d, -10d, 15d, 15d},
> new double[] {180d, 180d, 176d, 174d, 176d, 180d}
> );
> ///// index /////
> Field[] fields = LatLonShape.createIndexableFields("test", indexPoly1);
> for (Field f : fields) {
>   doc.add(f);
> }
> fields = LatLonShape.createIndexableFields("test", indexPoly2);
> for (Field f : fields) {
>   doc.add(f);
> }
> writer.addDocument(doc);
> ///// search /////
> Polygon[] searchPoly = new Polygon[] {
> new Polygon(new double[] {-20d, 20d, 20d, -20d, -20d},
> new double[] {-180d, -180d, -170d, -170d, -180d}),
> new Polygon(new double[] {20d, -20d, -20d, 20d, 20d},
> new double[] {180d, 180d, 170d, 170d, 180d})
> };
> Query q = LatLonShape.newPolygonQuery("test", QueryRelation.WITHIN, 
> searchPoly);
> assertEquals(1, searcher.count(q));
> {code}
>  
> In the example above, a dateline-spanning polygon is indexed as a 
> {{MultiPolygon}} with two polygons that share the dateline. Similarly, a 
> polygon that spans the dateline is provided as two polygons that share the 
> dateline in a {{WITHIN}} query. The indexed polygon should be returned as a 
> match, but it is not.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23605 - Unstable!

2019-01-31 Thread Kevin Risden
This is caused by SOLR-9515. I don't know how though. Details in
comment here: https://issues.apache.org/jira/browse/SOLR-9515

Still digging but would appreciate any ideas.

Kevin Risden

On Thu, Jan 31, 2019 at 4:58 PM Policeman Jenkins Server
 wrote:
>
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605/
> Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC
>
> 45 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
>
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
>
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
> at 
> org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:564)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at java.base/java.lang.Thread.run(Thread.java:844)
>
>
> FAILED:  
> junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest
>
> Error Message:
> Timed out waiting for Mini HDFS Cluster to start
>
> Stack Trace:
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
> at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
> at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
> at 
> org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
> at 
> org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 

[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757746#comment-16757746
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 10:08 PM:
--

So this caused failures on JDK9+. Not sure how the below is possible 
currently, since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27])
 allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp
java.security.manager=org.apache.lucene.util.TestSecurityManager
java.security.policy=/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/tools/junit4/solr-tests.policy
{code}
So we should have read on the paths that HDFS is trying to use?
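
(A quick hedged sanity check of the FilePermission wildcard semantics, with 
illustrative paths: "*" matches only files directly in a directory, while "-" 
matches recursively, so the {{temp${/}-}} grant should imply the nested HDFS 
directory.)
{code:java}
import java.io.FilePermission;

public class WildcardSketch {
  public static void main(String[] args) {
    // "-" is recursive: it matches everything under temp, however deep.
    FilePermission tempRecursive =
        new FilePermission("/ws/solr-core/test/J0/temp/-", "read");
    // A deeply nested path like the failing HDFS data dir above.
    FilePermission nested = new FilePermission(
        "/ws/solr-core/test/J0/temp/hdfs/data/current/finalized", "read");

    System.out.println(tempRecursive.implies(nested)); // expected: true
  }
}
{code}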


was (Author: risdenk):
So this caused failures on JDK9+. Not sure how the below is possible 
currently, since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27])
 allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp
{code}
So we should have read on the paths 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 445 - Still Unstable

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/445/

5 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterDelete.testUpdatesOnDiskFull

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([DCF0B4DFB70AB6EA]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterDelete

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([DCF0B4DFB70AB6EA]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [ZkStateReader, 
SolrZkClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.ZkStateReader  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:328)  
at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:997)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:126)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:837)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:804)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$add$4(ScheduledTriggers.java:298)
  at 
org.apache.solr.cloud.autoscaling.NodeLostTrigger.run(NodeLostTrigger.java:185) 
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerWrapper.run(ScheduledTriggers.java:634)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.common.cloud.SolrZkClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:203)  
at org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:126)  at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:116)  at 
org.apache.solr.common.cloud.ZkStateReader.<init>(ZkStateReader.java:306)  at 
org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider.connect(ZkClientClusterStateProvider.java:160)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:399)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:997)
  at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)  at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)  at 
org.apache.solr.client.solrj.impl.SolrClientCloudManager.request(SolrClientCloudManager.java:115)
  at 
org.apache.solr.cloud.autoscaling.SystemLogListener.onEvent(SystemLogListener.java:126)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:837)
  at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$TriggerListeners.fireListeners(ScheduledTriggers.java:804)
  at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23605 - Unstable!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

45 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
at 
org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest

Error Message:
Timed out waiting for Mini HDFS Cluster to start

Stack Trace:
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at __randomizedtesting.SeedInfo.seed([B3EBC148FC827CD8]:0)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1428)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:915)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:518)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:477)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:108)
at 
org.apache.solr.cloud.hdfs.HdfsTestUtil.setupClass(HdfsTestUtil.java:61)
at 
org.apache.solr.cloud.hdfs.HDFSCollectionsAPITest.setupClass(HDFSCollectionsAPITest.java:50)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 

[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757746#comment-16757746
 ] 

Kevin Risden commented on SOLR-9515:


So this caused failures on JDK9+. Not sure how the below is possible 
currently, since solr-tests.policy allows read access to that path.

https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757746#comment-16757746
 ] 

Kevin Risden edited comment on SOLR-9515 at 1/31/19 9:48 PM:
-

So this caused failures on JDK9+. Not sure how the below is possible 
currently, since solr-tests.policy 
([https://github.com/apache/lucene-solr/blob/master/lucene/tools/junit4/solr-tests.policy#L27])
 allows read access to that path.

[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605]
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}
Snippet from solr-tests.policy:
{code:java}
permission java.io.FilePermission "<<ALL FILES>>", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}", "read,execute";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}temp${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${junit4.childvm.cwd}${/}jacoco.db", "write";
permission java.io.FilePermission "${junit4.tempDir}${/}*", 
"read,execute,write,delete";
permission java.io.FilePermission "${clover.db.dir}${/}-", 
"read,execute,write,delete";
permission java.io.FilePermission "${tests.linedocsfile}", "read";
permission java.nio.file.LinkPermission "hard";
{code}
Variables from run:
{code:java}
junit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0
junit.tempDir=/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/temp
{code}
So we should have read on the paths that HDFS is trying to use?


was (Author: risdenk):
So this caused failures on JDK9+. Not sure how the below is possible 
currently, since solr-tests.policy allows read access to that path.

https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23605
{code:java}
[junit4]   2> java.io.IOException: Failed to start sub tasks to add replica in 
replica map :java.security.AccessControlException: access denied 
("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J0/temp/solr.index.hdfs.CheckHdfsIndexTest_B3EBC148FC827CD8-001/tempDir-001/hdfsBaseDir/data/data3/current/BP-669531916-88.99.242.108-1548970895105/current/finalized"
 "read")
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.getVolumeMap(BlockPoolSlice.java:439)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.getVolumeMap(FsVolumeImpl.java:1003)
 ~[hadoop-hdfs-3.2.0.jar:?]
   [junit4]   2> at 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList$1.run(FsVolumeList.java:201)
 ~[hadoop-hdfs-3.2.0.jar:?]
{code}

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757728#comment-16757728
 ] 

ASF subversion and git services commented on SOLR-12999:


Commit 34da61e863c459ae48103a3f6e1dacb76f14cd23 in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=34da61e ]

SOLR-12999: Index replication could delete segments before downloading segments 
from master if there is not enough disk space


> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce disk capacity requirements of Solr, and it 
> would reduce some disk fragmentation when space gets tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since it would prevent the 
> removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought not to be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.
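To make the quoted proposal concrete, here is a minimal sketch of the 
delete-first step, assuming the remote file names have already been fetched. 
This is a hedged illustration, not the actual IndexFetcher code:

{code:java}
// Hedged sketch only, not the real IndexFetcher: given the master's file
// list, delete local index files the master does not have *before*
// downloading, so disk space is freed up front.
Set<String> remoteNames = new HashSet<>(remoteFileNames); // names from the master's filelist
for (String local : dir.listAll()) {                      // dir: org.apache.lucene.store.Directory
  boolean keep = remoteNames.contains(local)
      || local.startsWith(IndexFileNames.SEGMENTS)        // keep commit points
      || local.equals(IndexWriter.WRITE_LOCK_NAME);       // never touch the lock file
  if (!keep) {
    dir.deleteFile(local); // unsafe if an open SolrIndexSearcher still references it
  }
}
{code}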



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-8.x - Build # 12 - Failure

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-8.x/12/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.StressHdfsTest.test

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([8FB92E0E34FF0327:7ED11D49A036EDF]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:195)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:143)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:138)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:1032)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.createAndDeleteCollection(StressHdfsTest.java:164)
at 
org.apache.solr.cloud.hdfs.StressHdfsTest.test(StressHdfsTest.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-12291) OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on each node

2019-01-31 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12291:

Attachment: SOLR-12291.patch

> OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on 
> each node
> --
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12999) Index replication could delete segments first

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757727#comment-16757727
 ] 

ASF subversion and git services commented on SOLR-12999:


Commit d3c686aa242e8b6ff8363244dd6267e1e51ff4fa in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d3c686a ]

SOLR-12999: Index replication could delete segments before downloading segments 
from master if there is not enough disk space


> Index replication could delete segments first
> -
>
> Key: SOLR-12999
> URL: https://issues.apache.org/jira/browse/SOLR-12999
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: David Smiley
>Assignee: Noble Paul
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-12999.patch, SOLR-12999.patch
>
>
> Index replication could optionally delete files that it knows will not be 
> needed _first_.  This would reduce disk capacity requirements of Solr, and it 
> would reduce some disk fragmentation when space get tight.
> Solr (IndexFetcher) already grabs the remote file list, and it could see 
> which files it has locally, then delete the others.  Today it asks Lucene to 
> {{deleteUnusedFiles}} at the end.  This new mode would probably only be 
> useful if there is no SolrIndexSearcher open, since it would prevent the 
> removal of files.
> The motivating scenario is a SolrCloud replica that is going into full 
> recovery.  It ought to not be fielding searches.  The code changes would not 
> depend on SolrCloud though.
> This option would have some danger the user should be aware of.  If the 
> replication fails, leaving the local files incomplete/corrupt, the only 
> recourse is to try full replication again.  You can't just give up and field 
> queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8673) Use radix partitioning when merging dimensional points

2019-01-31 Thread Robert Muir (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8673:

Description: 
Following the advice of [~jpountz] in LUCENE-8623, I have investigated using 
radix selection when merging segments instead of sorting the data at the 
beginning. The results are pretty promising when running Lucene geo benchmarks:

 
||Approach||Index time (sec): Dev||Index Time (sec): Base||Index Time: 
Diff||Force merge time (sec): Dev||Force Merge time (sec): Base||Force Merge 
Time: Diff||Index size (GB): Dev||Index size (GB): Base||Index Size: 
Diff||Reader heap (MB): Dev||Reader heap (MB): Base||Reader heap: Diff
|points|241.5s|235.0s| 3%|157.2s|157.9s|-0%|0.55|0.55| 0%|1.57|1.57| 0%|
|shapes|416.1s|650.1s|-36%|306.1s|603.2s|-49%|1.29|1.29| 0%|1.61|1.61| 0%|
|geo3d|261.0s|360.1s|-28%|170.2s|279.9s|-39%|0.75|0.75| 0%|1.58|1.58| 0%|
 
edited: table formatting to be a jira table
 

In 2D the index throughput is more or less equal, but for higher dimensions the 
impact is quite big. In all cases the merging process requires much less disk 
space. I am attaching plots showing the different behaviour, and I am opening a 
pull request.
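For readers unfamiliar with the technique, one MSB radix-partition step looks 
roughly like the toy sketch below (plain byte[] keys and an on-heap scratch 
buffer; the actual BKD merge code in the pull request is of course more 
involved):

{code:java}
// Toy MSB radix select: moves the k-th smallest key (k is an absolute index
// in [from, to)) into place by bucketing on the byte at 'offset', recursing
// only into the bucket that contains k. Average O(n) vs O(n log n) sorting.
static void radixSelect(byte[][] points, int from, int to, int k, int offset, int bytesPerKey) {
  if (to - from <= 1 || offset == bytesPerKey) {
    return; // single element, or keys fully compared
  }
  int[] bucketStart = new int[257];
  for (int i = from; i < to; i++) {
    bucketStart[(points[i][offset] & 0xFF) + 1]++;            // histogram, shifted by one
  }
  for (int b = 1; b <= 256; b++) {
    bucketStart[b] += bucketStart[b - 1];                     // prefix sums -> bucket offsets
  }
  byte[][] scratch = new byte[to - from][];
  int[] cursor = new int[256];
  System.arraycopy(bucketStart, 0, cursor, 0, 256);
  for (int i = from; i < to; i++) {
    scratch[cursor[points[i][offset] & 0xFF]++] = points[i];  // counting-sort pass
  }
  System.arraycopy(scratch, 0, points, from, to - from);
  for (int b = 0; b < 256; b++) {                             // recurse into k's bucket only
    int lo = from + bucketStart[b], hi = from + bucketStart[b + 1];
    if (k >= lo && k < hi) {
      radixSelect(points, lo, hi, k, offset + 1, bytesPerKey);
      return;
    }
  }
}
{code}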

 

 

 

  was:
Following the advice of [~jpountz] in LUCENE-8623, I have investigated using 
radix selection when merging segments instead of sorting the data at the 
beginning. The results are pretty promising when running Lucene geo benchmarks:

 
{code:java}
||Approach||Index time (sec)||Force merge time (sec)||Index size (GB)||Reader 
heap (MB)||
          ||Dev||Base||Diff ||Dev  ||Base  ||diff   
||Dev||Base||Diff||Dev||Base||Diff ||
|points|241.5s|235.0s| 3%|157.2s|157.9s|-0%|0.55|0.55| 0%|1.57|1.57| 0%|
|shapes|416.1s|650.1s|-36%|306.1s|603.2s|-49%|1.29|1.29| 0%|1.61|1.61| 0%|
|geo3d|261.0s|360.1s|-28%|170.2s|279.9s|-39%|0.75|0.75| 0%|1.58|1.58| 0%|{code}
 

 

In 2D the index throughput is more or less equal, but for higher dimensions the 
impact is quite big. In all cases the merging process requires much less disk 
space. I am attaching plots showing the different behaviour, and I am opening a 
pull request.

 

 

 


> Use radix partitioning when merging dimensional points
> --
>
> Key: LUCENE-8673
> URL: https://issues.apache.org/jira/browse/LUCENE-8673
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: Geo3D.png, LatLonPoint.png, LatLonShape.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Following the advice of [~jpountz] in LUCENE-8623, I have investigated using 
> radix selection when merging segments instead of sorting the data at the 
> beginning. The results are pretty promising when running Lucene geo 
> benchmarks:
>  
> ||Approach||Index time (sec): Dev||Index Time (sec): Base||Index Time: 
> Diff||Force merge time (sec): Dev||Force Merge time (sec): Base||Force Merge 
> Time: Diff||Index size (GB): Dev||Index size (GB): Base||Index Size: 
> Diff||Reader heap (MB): Dev||Reader heap (MB): Base||Reader heap: Diff
> |points|241.5s|235.0s| 3%|157.2s|157.9s|-0%|0.55|0.55| 0%|1.57|1.57| 0%|
> |shapes|416.1s|650.1s|-36%|306.1s|603.2s|-49%|1.29|1.29| 0%|1.61|1.61| 0%|
> |geo3d|261.0s|360.1s|-28%|170.2s|279.9s|-39%|0.75|0.75| 0%|1.58|1.58| 0%|
>  
> edited: table formatting to be a jira table
>  
> In 2D the index throughput is more or less equal, but for higher dimensions 
> the impact is quite big. In all cases the merging process requires much less 
> disk space. I am attaching plots showing the different behaviour, and I am 
> opening a pull request.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12291) OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on each node

2019-01-31 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757719#comment-16757719
 ] 

Mikhail Khludnev commented on SOLR-12291:
-

[^SOLR-12291.patch] just replaces the map with a multimap. It has a test that 
checks per-core responses in the (clumsy) REQUESTSTATUS response. 
Also, I want to clarify the impact for users: async operations on large 
collections are broken.
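In other words, the shape of the fix is roughly the following (a hedged sketch 
with illustrative names, not the patch itself):

{code:java}
// Track every async request id submitted to a node, instead of overwriting
// a single id per node_name as the current Map<String,String> does.
Map<String, List<String>> requestMap = new HashMap<>();
requestMap.computeIfAbsent(nodeName, n -> new ArrayList<>()).add(asyncRequestId);

// Later, REQUESTSTATUS has to be polled for every id on that node,
// not just the one that happened to be written last:
for (String requestId : requestMap.getOrDefault(nodeName, Collections.emptyList())) {
  // poll REQUESTSTATUS for requestId ...
}
{code}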

> OverseerCollectionMessageHandler sliceCmd assumes only one replica exists on 
> each node
> --
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node
> When multiple replicas of a slice are on the same node we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name"
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8676) TestKoreanTokenizer#testRandomHugeStrings failure

2019-01-31 Thread Jim Ferenczi (JIRA)
Jim Ferenczi created LUCENE-8676:


 Summary: TestKoreanTokenizer#testRandomHugeStrings failure
 Key: LUCENE-8676
 URL: https://issues.apache.org/jira/browse/LUCENE-8676
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Jim Ferenczi


TestKoreanTokenizer#testRandomHugeStrings failed in CI with the following exception:

{noformat}
  [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
   [junit4]>at 
org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
   [junit4]>at 
org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
   [junit4]>at 
org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
   [junit4]>at 
org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: leaving temporary files
{noformat}

I am able to reproduce locally with:

{noformat}
ant test  -Dtestcase=TestKoreanTokenizer -Dtests.method=testRandomHugeStrings 
-Dtests.seed=8C5E2BE10F581CB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.7/test-data/enwiki.random.lines.txt
 -Dtests.locale=uk-UA -Dtests.timezone=Europe/Istanbul -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
{noformat}

After some investigation I found out that the position of the buffer is not 
updated when the maximum backtrace size is reached (1024).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13004) Integer overflow in total count in grouping results

2019-01-31 Thread ruchir choudhry (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757697#comment-16757697
 ] 

ruchir choudhry commented on SOLR-13004:


Can you please assign it to me? I can take this up. 

Thanks -Ruchir

> Integer overflow in total count in grouping results
> ---
>
> Key: SOLR-13004
> URL: https://issues.apache.org/jira/browse/SOLR-13004
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 5.5.3
>Reporter: Ian
>Priority: Minor
>
> When doing a grouping search in SolrCloud you can get a negative number for 
> the total found.
> This is caused by the accumulated total being held in an integer and not a 
> long.
>  
> example result:
> {{{ "responseHeader": { "status": 0, "QTime": 9231, "params": { "q": 
> "decade:200", "indent": "true", "fl": "decade", "wt": "json", "group.field": 
> "decade", "group": "true", "_": "1542773674247" } }, "grouped": { "decade": { 
> "matches": -629516788, "groups": [ { "groupValue": "200", "doclist": { 
> "numFound": -629516788, "start": 0, "maxScore": 1.9315376, "docs": [ { 
> "decade": "200" } ] } } ] } } }}}
>  
> {{result without grouping:}}
> {{{ "responseHeader": { "status": 0, "QTime": 1063, "params": { "q": 
> "decade:200", "indent": "true", "fl": "decade", "wt": "json", "_": 
> "1542773791855" } }, "response": { "numFound": 3665450508, "start": 0, 
> "maxScore": 1.9315376, "docs": [ { "decade": "200" }, { "decade": "200" }, { 
> "decade": "200" }, { "decade": "200" }, { "decade": "200" }, { "decade": 
> "200" }, { "decade": "200" }, { "decade": "200" }, { "decade": "200" }, { 
> "decade": "200" } ] } }}}
>  
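The wrap-around in the quoted numbers checks out exactly. Assuming, for 
illustration, two shards each reporting 1,832,725,254 matches (half of the 
3,665,450,508 total), an int accumulator overflows to the reported value, while 
a long does not:

{code:java}
int matches = 0;
for (int shard = 0; shard < 2; shard++) {
  matches += 1_832_725_254;      // per-shard numFound (illustrative split of the total)
}
System.out.println(matches);     // -629516788: wrapped past Integer.MAX_VALUE

long fixed = 0L;
for (int shard = 0; shard < 2; shard++) {
  fixed += 1_832_725_254L;       // accumulate in a long instead
}
System.out.println(fixed);       // 3665450508
{code}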



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-10.0.1) - Build # 113 - Unstable!

2019-01-31 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/113/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink

Error Message:
Error from server at http://127.0.0.1:43171/ok_woc/c: Could not find collection 
: shardSplitWithRule_link

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43171/ok_woc/c: Could not find collection : 
shardSplitWithRule_link
at 
__randomizedtesting.SeedInfo.seed([B57066E6C7D4C3DC:BF6CD37D4D48A579]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:650)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitShardWithRule(ShardSplitTest.java:661)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitShardWithRuleLink(ShardSplitTest.java:633)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[GitHub] jefferyyuan commented on issue #551: LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract

2019-01-31 Thread GitBox
jefferyyuan commented on issue #551: LUCENE-8662: Change 
TermsEnum.seekExact(BytesRef) to abstract
URL: https://github.com/apache/lucene-solr/pull/551#issuecomment-459465139
 
 
   Thanks @s1monw, added lucene/MIGRATE.txt and  lucene/CHANGES.txt.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jefferyyuan commented on a change in pull request #551: LUCENE-8662: Change TermsEnum.seekExact(BytesRef) to abstract

2019-01-31 Thread GitBox
jefferyyuan commented on a change in pull request #551: LUCENE-8662: Change 
TermsEnum.seekExact(BytesRef) to abstract
URL: https://github.com/apache/lucene-solr/pull/551#discussion_r252797675
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/TermsEnum.java
 ##
 @@ -65,13 +65,26 @@ public AttributeSource attributes() {
 NOT_FOUND
   };
 
-  /** Attempts to seek to the exact term, returning
-   *  true if the term is found.  If this returns false, the
-   *  enum is unpositioned.  For some codecs, seekExact may
-   *  be substantially faster than {@link #seekCeil}. */
-  public boolean seekExact(BytesRef text) throws IOException {
+  /**
+   * Attempts to seek to the exact term, returning true if the term is found. 
If this returns false, the enum is
+   * unpositioned. For some codecs, seekExact may be substantially faster than 
{@link #seekCeil}.
+   * 
+   * 
+   * This method is performance critical and the default implementation 
({@code defaultSeekExactImpl}) may be slow in
+   * some cases, so subclasses SHOULD provide their own implementation if possible.
+   * 
+   * @return true if the term is found; false otherwise (in which case the enum is 
unpositioned).
+   */
+  public abstract boolean seekExact(BytesRef text) throws IOException;
+
+  /**
+   * Default implementation for seekExact(BytesRef), which may be slow in some 
cases. 
+   * The abstract seekExact(BytesRef) method is performance critical; subclasses 
SHOULD provide their
+   * own implementation if possible.
+   */
+  public final boolean defaultSeekExactImpl(BytesRef text) throws IOException {
 
 Review comment:
   Thanks @s1monw and @dsmiley; I changed the code based on your suggestions : )
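For context, typical caller-side usage of the method under discussion (standard 
Lucene API on recent master; {{reader}} is assumed to be an open IndexReader):

{code:java}
Terms terms = MultiTerms.getTerms(reader, "body");
if (terms != null) {
  TermsEnum te = terms.iterator();
  if (te.seekExact(new BytesRef("lucene"))) {
    System.out.println("docFreq=" + te.docFreq()); // enum is positioned on the term
  }
  // else: the enum is unpositioned, per the javadoc above
}
{code}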


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-01-31 Thread Atri Sharma (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757614#comment-16757614
 ] 

Atri Sharma commented on LUCENE-8675:
-

Thanks for the comments.

Having a multi-shard approach makes sense, but a search is still bottlenecked 
by the largest segment it needs to scan. If there are many segments of that 
type, that might become a problem.

While I agree that range queries might not benefit directly from parallel 
scans, other queries (such as TermQueries) might benefit from a 
segment-parallel scan. In a typical ElasticSearch interactive use case, we see 
latency spikes when a large segment is hit. Such cases can be optimized with 
parallel scans.

We should have a method of deciding whether a scan should be parallelized or 
not, and then let the execution operator get a set of nodes to execute. That is 
probably outside the scope of this JIRA, but I wanted to open this thread to 
get the conversation going.
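For contrast, the parallelism Lucene offers today stops at segment (slice) 
granularity: passing an executor to IndexSearcher fans the search out across 
leaf slices, but one very large segment remains a single task, which is exactly 
the bottleneck described above. A minimal illustration:

{code:java}
ExecutorService pool = Executors.newFixedThreadPool(4);
IndexSearcher searcher = new IndexSearcher(reader, pool); // parallel across leaf slices
TopDocs top = searcher.search(new TermQuery(new Term("body", "lucene")), 10);
// ... a single huge segment is still searched by one thread
pool.shutdown();
{code}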

> Divide Segment Search Amongst Multiple Threads
> --
>
> Key: LUCENE-8675
> URL: https://issues.apache.org/jira/browse/LUCENE-8675
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Atri Sharma
>Priority: Major
>
> Segment search is a single threaded operation today, which can be a 
> bottleneck for large analytical queries which index a lot of data and have 
> complex queries which touch multiple segments (imagine a composite query with 
> range query and filters on top). This ticket is for discussing the idea of 
> splitting the search of a single segment across multiple threads based on 
> mutually exclusive document ID ranges.
> This will be a two phase effort, the first phase targeting queries returning 
> all matching documents (collectors not terminating early). The second phase 
> patch will introduce staged execution and will build on top of this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13210) TriLevelCompositeIdRoutingTest makes no sense -- can never fail

2019-01-31 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-13210:
---

  Assignee: Hoss Man
Attachment: SOLR-13210_demonstrate_broken_test.patch


I've attached a SOLR-13210_demonstrate_broken_test.patch that demonstrates how 
the test logic is so flawed that even if we mock out the queries to return the 
exact same hardcoded docs from every shard, it still passes. 

The logic as written is so weird, I'm not actually sure what the original 
intent of idMap is -- whether it was meant to contain the first 2 sections 
(app+user) of a TriLevel id, or just the first (app) section -- because based 
on my understanding of the contract for compositeIds, neither one is guaranteed 
to only exist in a single shard in the situation where a {{/numBits}} is 
specified -- as it is in this test.

[~shalinmangar] -- some guidance here would be appreciated

> TriLevelCompositeIdRoutingTest makes no sense -- can never fail
> ---
>
> Key: SOLR-13210
> URL: https://issues.apache.org/jira/browse/SOLR-13210
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-13210_demonstrate_broken_test.patch
>
>
> I recently tweaked TriLevelCompositeIdRoutingTest to lower the 
> node/shard count on TEST_NIGHTLY because it was constantly causing an OOM.
> While skimming this test I realized that (other than the OOM, or other 
> catastrophic failure in solr) it was guaranteed to never fail, regardless of 
> what bugs might exist in solr when routing an update/query:
> * it doesn't sanity check that any docs are returned from any query -- so if 
> commit does nothing and it gets no results from each of the shard queries, it 
> will still pass
> * the {{getKey()}} method -- which throws away anything after the last "!" in 
> a String -- is called redundantly on its own output to populate an {{idMap}} 
> ... but not before the first result is used to do a containsKey assertion on 
> that same {{idMap}}
> ** ie: if {{app42/7!user33!doc1234}} is a uniqueKey value, then 
> {{app42/7!user33}} is what the assert !containsKey checks the Map for, but 
> {{app42/7}} is what gets put in the Map



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13209) NullPointerException from call in org.apache.solr.search.SolrIndexSearcher.getDocSet

2019-01-31 Thread Cesar Rodriguez (JIRA)
Cesar Rodriguez created SOLR-13209:
--

 Summary: NullPointerException from call in 
org.apache.solr.search.SolrIndexSearcher.getDocSet
 Key: SOLR-13209
 URL: https://issues.apache.org/jira/browse/SOLR-13209
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (9.0)
 Environment: h1. Steps to reproduce

* Use a Linux machine.
* Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection and reproducing the bug

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html].

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
curl -v “URL_BUG”
{noformat}

Please check the issue description below to find the “URL_BUG” that will allow 
you to reproduce the issue reported.
Reporter: Cesar Rodriguez


Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/select?group=true
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
 java.lang.NullPointerException
at 
java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
at 
org.apache.solr.util.ConcurrentLRUCache.get(ConcurrentLRUCache.java:124)
at org.apache.solr.search.FastLRUCache.get(FastLRUCache.java:163)
at 
org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:792)
at 
org.apache.solr.search.Grouping$CommandQuery.createFirstPassCollector(Grouping.java:860)
at org.apache.solr.search.Grouping.execute(Grouping.java:327)
at 
org.apache.solr.handler.component.QueryComponent.doProcessGroupedSearch(QueryComponent.java:1408)
at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:365)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559)
[...]
{noformat}

Method {{org.apache.solr.search.SolrIndexSearcher.getDocSet()}}, at line 792, 
calls {{filterCache.get(absQ)}} where {{absQ}} is null. I think this null value 
in fact comes from the caller, but I don't fully follow the logic 
of the code.
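A defensive guard at the reported call site would look like the sketch below. 
This is a hedged illustration only; the proper fix may instead be to stop the 
grouping code from passing a null query down in the first place:

{code:java}
// Hedged sketch, not the actual fix: fail fast with a clear error instead of
// letting the cache lookup throw a NullPointerException.
if (absQ == null) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "grouping produced a null query; cannot build a DocSet");
}
DocSet answer = filterCache.get(absQ);
{code}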

To set up an environment to reproduce this bug, follow the description in the 
‘Environment’ field.

We automatically found this issue and ~70 more like this using [Diffblue 
Microservices Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find 
more information on this [fuzz testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13210) TriLevelCompositeIdRoutingTest makes no sense -- can never fail

2019-01-31 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13210:
---

 Summary: TriLevelCompositeIdRoutingTest makes no sense -- can 
never fail
 Key: SOLR-13210
 URL: https://issues.apache.org/jira/browse/SOLR-13210
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


I recently tweaked TriLevelCompositeIdRoutingTest to lower the node/shard 
count on TEST_NIGHTLY because it was constantly causing an OOM.

While skimming this test I realized that (other than the OOM, or other 
catastrophic failure in solr) it was guaranteed to never fail, regardless of what 
bugs might exist in solr when routing an update/query:
* it doesn't sanity check that any docs are returned from any query -- so if 
commit does nothing and it gets no results from each of the shard queries, it 
will still pass
* the {{getKey()}} method -- which throws away anything after the last "!" in a 
String -- is called redundantly on its own output to populate an {{idMap}} ... 
but not before the first result is used to do a containsKey assertion on that 
same {{idMap}}
** ie: if {{app42/7!user33!doc1234}} is a uniqueKey value, then 
{{app42/7!user33}} is what the assert !containsKey checks the Map for, but 
{{app42/7}} is what gets put in the Map (see the snippet below)
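The mismatch is easy to see in isolation (illustrative Java, assuming a 
getKey() that truncates at the last '!'):

{code:java}
String id = "app42/7!user33!doc1234";                            // a uniqueKey value
String checked = id.substring(0, id.lastIndexOf('!'));           // "app42/7!user33"
String stored  = checked.substring(0, checked.lastIndexOf('!')); // "app42/7"
// The assert !containsKey probe uses "app42/7!user33", but "app42/7" is what
// actually gets stored, so the lookup can never collide and the test never fails.
{code}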




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13092) Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson

2019-01-31 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-13092.
-
Resolution: Duplicate

Superseded by SOLR-9515

> Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson
> -
>
> Key: SOLR-13092
> URL: https://issues.apache.org/jira/browse/SOLR-13092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
>
> The pom.xml in the maven repository of dataimporthandler:
> https://repo1.maven.org/maven2/org/apache/solr/solr-dataimporthandler/7.6.0/solr-dataimporthandler-7.6.0.pom
> declares both com.fasterxml.jackson and org.codehaus.jackson. This is a bug 
> and it is stopping me from upgrading my app to fasterxml jackson.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13092) Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757581#comment-16757581
 ] 

Kevin Risden commented on SOLR-13092:
-

I'm pretty sure this is resolved by the Hadoop 3 upgrade, as the link suggests. 
Hadoop 3.2 only depends on the old jackson stuff for YARN, which we don't pull 
in, so that removes the dependency from the Solr side. I'm hopeful this sticks 
in the 8.0 release coming up. Marking as resolved since it is superseded by the 
Hadoop 3 upgrade ticket.

> Solr's Maven pom declares both org.codehaus.jackson and com.fasterxml.jackson
> -
>
> Key: SOLR-13092
> URL: https://issues.apache.org/jira/browse/SOLR-13092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Petar Tahchiev
>Priority: Major
>
> The pom.xml in the maven repository of dataimporthandler:
> https://repo1.maven.org/maven2/org/apache/solr/solr-dataimporthandler/7.6.0/solr-dataimporthandler-7.6.0.pom
> declares both com.fasterxml.jackson and org.codehaus.jackson. This is a bug 
> and it is stopping me from upgrading my app to fasterxml jackson.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-01-31 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757568#comment-16757568
 ] 

Adrien Grand commented on LUCENE-8675:
--

The best way to address such issues is on top of Lucene, by having multiple 
shards whose results can be merged with TopDocs#merge.

Parallelizing based on ranges of doc IDs is problematic for some queries. For 
instance, the cost of evaluating a range query over an entire segment or over 
only a specific range of doc IDs is exactly the same, given that it uses 
data structures that are organized by value rather than by doc ID.
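Concretely, the multi-shard merge is standard Lucene API today (illustrative 
snippet; {{searchers}} stands in for one IndexSearcher per shard):

{code:java}
TopDocs[] shardHits = new TopDocs[searchers.length];
for (int i = 0; i < searchers.length; i++) {
  shardHits[i] = searchers[i].search(query, 10);   // top-10 per shard, in parallel if desired
}
TopDocs merged = TopDocs.merge(10, shardHits);     // global top-10 across all shards
{code}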

> Divide Segment Search Amongst Multiple Threads
> --
>
> Key: LUCENE-8675
> URL: https://issues.apache.org/jira/browse/LUCENE-8675
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: Atri Sharma
>Priority: Major
>
> Segment search is a single threaded operation today, which can be a 
> bottleneck for large analytical queries which index a lot of data and have 
> complex queries which touch multiple segments (imagine a composite query with 
> range query and filters on top). This ticket is for discussing the idea of 
> splitting the search of a single segment across multiple threads based on 
> mutually exclusive document ID ranges.
> This will be a two phase effort, the first phase targeting queries returning 
> all matching documents (collectors not terminating early). The second phase 
> patch will introduce staged execution and will build on top of this patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757567#comment-16757567
 ] 

ASF subversion and git services commented on SOLR-9515:
---

Commit 6bb24673f422a4e4267bc22361bc9258809d5f60 in lucene-solr's branch 
refs/heads/master from Mark Robert Miller
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6bb2467 ]

SOLR-9515: Update to Hadoop 3

Signed-off-by: Kevin Risden 


> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459445817
 
 
   @uschindler Want to make sure you don't think I am trying to avoid fixing 
the forbiddenapis/source check stuff. I plan to fix it, but since it is 
test-only, I want to make sure this cleanup ends up in 8.0 if possible.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459445398
 
 
   Pushed to master. Will wait a few hours and then push to other branches.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757559#comment-16757559
 ] 

Kevin Risden commented on SOLR-9515:


Last patch is squashed commits from PR plus CHANGES.txt.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8655) No possibility to access to the underlying "valueSource" of a FunctionScoreQuery

2019-01-31 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/LUCENE-8655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gérald Quaire updated LUCENE-8655:
--
Attachment: (was: LUCENE-8655.patch)

> No possibility to access to the underlying "valueSource" of a 
> FunctionScoreQuery 
> -
>
> Key: LUCENE-8655
> URL: https://issues.apache.org/jira/browse/LUCENE-8655
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.6
>Reporter: Gérald Quaire
>Priority: Major
>  Labels: patch
>
> After LUCENE-8099, the "BoostedQuery" is deprecated by the use of the 
> "FunctionScoreQuery". With the BoostedQuery, it was possible to access at its 
> underlying "valueSource". But it is not the case with the class 
> "FunctionScoreQuery". It has got only a getter for the wrapped query,  
> For development of specific parsers, it would be necessary to access the 
> valueSource of a "FunctionScoreQuery". I suggest to add a new getter into the 
> class "FunctionScoreQuery" like below:
> {code:java}
>  /**
>    * @return the wrapped Query
>    */
>   public Query getWrappedQuery() {
>     return in;
>   }
>  /**
>    * @return the a source of scores
>    */
>   public DoubleValuesSource getValueSource() {
>     return source;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk edited a comment on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk edited a comment on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459441190
 
 
   > I'll try with Netbeans next.
   
   I tried with Netbeans 8.x and tests ran successfully. Netbeans 10.x hung on 
my machine loading the lucene-solr project, and I didn't track that down.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459441190
 
 
   > I'll try with Netbeans next.
   I tried with Netbeans 8.x and tests ran successfully. Netbeans 10.x hung on 
my machine, and I didn't track that down.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk commented on a change in pull request #553: SOLR-9515: Update to Hadoop 
3
URL: https://github.com/apache/lucene-solr/pull/553#discussion_r252772727
 
 

 ##
 File path: solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java
 ##
 @@ -67,6 +68,14 @@ public static MiniDFSCluster setupClass(String dir, boolean 
safeModeTesting, boo
 LuceneTestCase.assumeFalse("HDFS tests were disabled by 
-Dtests.disableHdfs",
   Boolean.parseBoolean(System.getProperty("tests.disableHdfs", "false")));
 
+// Checks that commons-lang3 FastDateFormat works with configured locale
 
 Review comment:
   Turns out only the locale "ja-JP-u-ca-japanese-x-lvariant-JP" causes this 
failure. I emailed the commons-user list about this; no response yet. I have 
not seen any failures in the past ~24 hours with this fix put in place.
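The guard described in the diff header above is essentially a format round-trip 
under the default locale; a hedged reconstruction follows (the committed code 
may differ):

{code:java}
// Hedged reconstruction of the described check, not the committed diff:
// skip HDFS tests when commons-lang3 FastDateFormat cannot handle the
// randomized default locale (observed with ja-JP-u-ca-japanese-x-lvariant-JP).
try {
  FastDateFormat.getInstance("EEE MMM dd HH:mm:ss zzz yyyy", Locale.getDefault())
      .format(new Date());
} catch (Exception e) {
  Assume.assumeNoException("FastDateFormat broken under locale " + Locale.getDefault(), e);
}
{code}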


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Priority: Blocker  (was: Major)

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Blocker
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-01-31 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Attachment: SOLR-9515.patch

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch, 
> SOLR-9515.patch, SOLR-9515.patch, SOLR-9515.patch
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13189) Need reliable example (Test) of how to use TestInjection.failReplicaRequests

2019-01-31 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757551#comment-16757551
 ] 

Hoss Man commented on SOLR-13189:
-

[~markrmil...@gmail.com] - any guidance/observations here to help me proceed?

> Need reliable example (Test) of how to use TestInjection.failReplicaRequests
> 
>
> Key: SOLR-13189
> URL: https://issues.apache.org/jira/browse/SOLR-13189
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-13189.patch
>
>
> We need a test that reliably demonstrates the usage of 
> {{TestInjection.failReplicaRequests}} and shows what steps a test needs to 
> take after issuing updates to reliably "pass" (finding all index updates that 
> succeeded from the client's perspective) even in the event of an (injected) 
> replica failure.
> As things stand now, it does not seem that any test using 
> {{TestInjection.failReplicaRequests}} passes reliably -- *and it's not clear 
> if this is due to poorly designed tests, or an indication of a bug in 
> distributed updates / LIR*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.0

2019-01-31 Thread Kevin Risden
Thanks Jim, I agree with your concerns. Reviews have been positive so
far. I am pretty confident that this won't break, given the amount of
testing I've done over the past few days. I am planning to merge the
change to master and 8.x later today. I'll keep an eye on Jenkins, and
if the first few runs look ok I'll look at merging it into 8.0.

Kevin Risden

On Wed, Jan 30, 2019 at 2:36 PM jim ferenczi  wrote:
>
> Sure Nick, I am not aware of other blockers for 7.7 so I'll start the first 
> RC when your patch is merged.
> Kevin, this looks like a big change so I am not sure if it's a good idea to 
> rush this in for 8.0. Would it be safer to target another version in order to 
> take some time to ensure that it's not breaking anything? I guess that your 
> concern is that a change like this should happen in a major version but I 
> wonder if it's worth the risk. I don't know this part of the code and the 
> implications of such a change so I let you decide what we should do here but 
> let's not delay the release if we realize that this change requires more than 
> a few days to be merged.
>
> On Wed, Jan 30, 2019 at 8:25 PM Nicholas Knize wrote:
>>
>> Hey Jim,
>>
>> I just added https://issues.apache.org/jira/browse/LUCENE-8669 along with a 
>> pretty straightforward patch. This is a critical one that I think needs to 
>> be in for 7.7 and 8.0. Can I set this as a blocker?
>>
>> On Wed, Jan 30, 2019 at 1:07 PM Kevin Risden wrote:
>>>
>>> Jim,
>>>
>>> Since 7.7 needs to be released before 8.0 does that leave time to get
>>> SOLR-9515 - Hadoop 3 upgrade into 8.0? I have a PR updated and it is
>>> currently under review.
>>>
>>> Should I set the SOLR-9515 as a blocker for 8.0? I'm curious if others
>>> feel this should make it into 8.0 or not.
>>>
>>> Kevin Risden
>>>
>>> On Tue, Jan 29, 2019 at 11:15 AM jim ferenczi  
>>> wrote:
>>> >
>>> > I had to revert the version bump for 8.0 (8.1) on branch_8x because we 
>>> > don't handle two concurrent releases in our tests 
>>> > (https://issues.apache.org/jira/browse/LUCENE-8665).
>>> > Since we want to release 7.7 first I created the Jenkins job for this 
>>> > version only and will build the first candidate for this version later 
>>> > this week if there are no objection.
>>> > I'll restore the version bump for 8.0 when 7.7 is out.
>>> >
>>> >
>>> > On Tue, Jan 29, 2019 at 2:43 PM jim ferenczi wrote:
>>> >>
>>> >> Hi,
>>> >> Hearing no objection I created the branches for 8.0 and 7.7. I'll now 
>>> >> create the Jenkins tasks for these versions. Uwe, can you also add them 
>>> >> to the Policeman's Jenkins job?
>>> >> This also means that the feature freeze phase has started for both 
>>> >> versions (7.7 and 8.0):
>>> >>
>>> >> No new features may be committed to the branch.
>>> >> Documentation patches, build patches and serious bug fixes may be 
>>> >> committed to the branch. However, you should submit all patches you want 
>>> >> to commit to Jira first to give others the chance to review and possibly 
>>> >> vote against the patch. Keep in mind that it is our main intention to 
>>> >> keep the branch as stable as possible.
>>> >> All patches that are intended for the branch should first be committed 
>>> >> to the unstable branch, merged into the stable branch, and then into the 
>>> >> current release branch.
>>> >> Normal unstable and stable branch development may continue as usual. 
>>> >> However, if you plan to commit a big change to the unstable branch while 
>>> >> the branch feature freeze is in effect, think twice: can't the addition 
>>> >> wait a couple more days? Merges of bug fixes into the branch may become 
>>> >> more difficult.
>>> >> Only Jira issues with Fix version "X.Y" and priority "Blocker" will 
>>> >> delay a release candidate build.
>>> >>
>>> >>
>>> >> Thanks,
>>> >> Jim
>>> >>
>>> >>
>>> >> On Mon, Jan 28, 2019 at 13:54, Tommaso Teofili 
>>> >>  wrote:
>>> >>>
>>> >>> sure, thanks Jim!
>>> >>>
>>> >>> Tommaso
>>> >>>
>>> >>> On Mon, Jan 28, 2019 at 10:35 jim ferenczi
>>> >>>  wrote:
>>> >>> >
>>> >>> > Go ahead Tommaso the branch is not created yet.
>>> >>> > The plan is to create the branches (7.7 and 8.0) tomorrow or 
>>> >>> > Wednesday and to announce the feature freeze the same day.
>>> >>> > For blocker issues that are still open, this leaves another week to 
>>> >>> > work on a patch, and we can update the status at the end of the week 
>>> >>> > in order to decide if we can start the first candidate build
>>> >>> > early next week. Would that work for you?
>>> >>> >
>>> >>> > On Mon, Jan 28, 2019 at 10:19, Tommaso Teofili 
>>> >>> >  wrote:
>>> >>> >>
>>> >>> >> I'd like to backport 
>>> >>> >> https://issues.apache.org/jira/browse/LUCENE-8659
>>> >>> >> (upgrade to OpenNLP 1.9.1) to 8x branch, if there's still time.
>>> >>> >>
>>> >>> >> Regards,
>>> >>> >> Tommaso
>>> >>> >>
>>> >>> >> On Mon, Jan 28, 2019 at 07:59 Adrien Grand
>>> >>> >>  wrote:
>>> >>> >> >
>>> >>> 

[GitHub] risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3

2019-01-31 Thread GitBox
risdenk commented on issue #553: SOLR-9515: Update to Hadoop 3
URL: https://github.com/apache/lucene-solr/pull/553#issuecomment-459435527
 
 
   Squashed the multiple commits into one and added a CHANGES.txt entry. Planning 
to push this to master and 8.x in a little bit. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8675) Divide Segment Search Amongst Multiple Threads

2019-01-31 Thread Atri Sharma (JIRA)
Atri Sharma created LUCENE-8675:
---

 Summary: Divide Segment Search Amongst Multiple Threads
 Key: LUCENE-8675
 URL: https://issues.apache.org/jira/browse/LUCENE-8675
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Atri Sharma


Segment search is a single-threaded operation today, which can be a bottleneck 
for large analytical queries that index a lot of data and run complex queries 
touching multiple segments (imagine a composite query with a range query and 
filters on top). This ticket is for discussing the idea of splitting a single 
segment's search amongst multiple threads based on mutually exclusive document 
ID ranges.

This will be a two-phase effort: the first phase targets queries that return 
all matching documents (collectors not terminating early); the second-phase 
patch will introduce staged execution and will build on top of this patch.
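
To make the proposal concrete, a minimal sketch (an illustration for this 
discussion, not an attached patch) of scoring disjoint docID ranges of one 
segment on separate threads, for the phase-one case where collectors do not 
terminate early:

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.search.*;

public class IntraSegmentSearchSketch {

  /** Counts matches in one segment by splitting [0, maxDoc) into slices. */
  public static int countMatches(IndexSearcher searcher, Query query,
                                 LeafReaderContext leaf, int slices,
                                 ExecutorService pool) throws Exception {
    Weight weight = searcher.createWeight(
        searcher.rewrite(query), ScoreMode.COMPLETE_NO_SCORES, 1f);
    int maxDoc = leaf.reader().maxDoc();
    int step = Math.max(1, maxDoc / slices);
    List<Future<Integer>> futures = new ArrayList<>();
    for (int min = 0; min < maxDoc; min += step) {
      final int lo = min;
      final int hi = Math.min(maxDoc, min + step);
      futures.add(pool.submit(() -> {
        // Each slice gets its own BulkScorer over a disjoint [lo, hi) range.
        BulkScorer scorer = weight.bulkScorer(leaf);
        if (scorer == null) {
          return 0; // no matches in this segment
        }
        TotalHitCountCollector collector = new TotalHitCountCollector();
        scorer.score(collector.getLeafCollector(leaf),
                     leaf.reader().getLiveDocs(), lo, hi);
        return collector.getTotalHits();
      }));
    }
    int total = 0;
    for (Future<Integer> f : futures) {
      total += f.get();
    }
    return total;
  }
}
{noformat}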



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-01-31 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757467#comment-16757467
 ] 

Michael Gibney commented on SOLR-12743:
---

Ah, ok; so I guess looking for "overlapping onDeckSearcher" in logs is not 
productive.

[~markus17], thanks for the extra information! A few more questions/thoughts:
 # Does a thread dump provide any useful information? e.g., if an autowarm (or 
other) thread is blocked somewhere?
 # When the problem manifests, is the service running under load heavy enough 
that inserts/cleanup _could_ potentially monopolize a lock?
 # What are your {{autoCommit}} (and {{autoSoftCommit}}, {{commitWithin}}, 
etc.) settings? Are you also running manual commits?
 # Looking only at the code in {{SolrCore}}, it looks like the only way to get 
"PERFORMANCE WARNING: Overlapping onDeckSearchers" errors in your log is to 
have {{maxWarmingSearchers}} set to > 1. You could try setting this to "2" 
(see the config sketch after this list) ... 
it's unlikely to hurt (in fact, unlikely to make a difference, per [~dsmiley]) 
– but there's a remote chance it could provide useful feedback.
 # I see you earlier noted that it's normal that two {{SolrIndexSearcher}}s 
should coexist immediately after a commit; so just to clarify, when you say it 
"immediately" leaks a {{SolrIndexSearcher}} instance, you mean it's hanging 
around longer than it should ...
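
For reference, a minimal solrconfig.xml sketch of the warming/commit settings 
asked about above (the values are illustrative assumptions, not 
recommendations):

{noformat}
<!-- hypothetical example values; maxWarmingSearchers lives under <query>,
     the commit settings under <updateHandler> -->
<query>
  <maxWarmingSearchers>2</maxWarmingSearchers>
</query>
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxTime>15000</maxTime>           <!-- hard commit every 15s -->
    <openSearcher>false</openSearcher> <!-- hard commits don't open searchers -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>60000</maxTime>           <!-- soft commit (new searcher) every 60s -->
  </autoSoftCommit>
</updateHandler>
{noformat}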

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-01-31 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757467#comment-16757467
 ] 

Michael Gibney edited comment on SOLR-12743 at 1/31/19 4:39 PM:


Ah, ok; so I guess looking for "overlapping onDeckSearcher" in logs is not 
productive.

[~markus17], thanks for the extra information! A few more questions/thoughts:
 # Does a thread dump provide any useful information? e.g., if an autowarm (or 
other) thread is blocked somewhere?
 # When the problem manifests, is the service running under load heavy enough 
that inserts/cleanup _could_ potentially monopolize a lock?
 # What are your {{autoCommit}} (and {{autoSoftCommit}}, {{commitWithin}}, 
etc.) settings? Are you also running manual commits?
 # Looking only at the code in {{SolrCore}}, it looks like the only way to get 
"PERFORMANCE WARNING: Overlapping onDeckSearchers" errors in your log is to 
have {{maxWarmingSearchers}} set to > 1. You could try setting this to "2" ... 
it's unlikely to hurt (in fact, unlikely to make a difference, per [~dsmiley]) 
– but there's a remote chance it could provide useful feedback.
 # I see you earlier noted that it's normal that two {{SolrIndexSearchers}} 
should coexist immediately after a commit; so just to clarify, when you say it 
"immediately" leaks a {{SolrIndexSearcher}} instance, you mean it's hanging 
around longer than it should ...


was (Author: mgibney):
Ah, ok; so I guess looking for "overlapping onDeckSearcher" in logs is not 
productive.

[~markus17], thanks for the extra information! A few more questions/thoughts:
 # Does a thread dump provide any useful information? e.g., if an autowarm (or 
other) thread is blocked somewhere?
 # When the problem manifests, is the service running under load heavy enough 
that inserts/cleanup _could_ potentially monopolize a lock?
 # What are your {{autoCommit}} (and {{autoSoftCommit}}, {{commitWithin}}, 
etc.) settings? Are you also running manual commits?
 # Looking only at the code in {{SolrCore}}, it looks like the only way to get 
"PERFORMANCE WARNING: Overlapping onDeckSearchers" errors in your log is to 
have {{maxWarmingSearchers}} set to > 1. You could try setting this to "2" ... 
it's unlikely to hurt (in fact, unlikely to make a difference, per [~dsmiley]) 
– but there's a remote chance it could provide useful feedback.
 # I see you earlier noted that it's normal that two {{SolrIndexSearcher}}s 
should coexist immediately after a commit; so just to clarify, when you say it 
"immediately" leaks a {{SolrIndexSearcher}} instance, you mean it's hanging 
around longer than it should ...

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest 

[jira] [Created] (SOLR-13208) Update dom4j in solr package due to security vulnerability

2019-01-31 Thread DW (JIRA)
DW created SOLR-13208:
-

 Summary: Update dom4j in solr package due to security vulnerability
 Key: SOLR-13208
 URL: https://issues.apache.org/jira/browse/SOLR-13208
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 7.6
Reporter: DW


The Solr package contains dom4j-1.6.1 in the server webapp component in 
server/solr-webapp/webapp/WEB-INF/lib/dom4j-1.6.1.jar

Please can you upgrade dom4j-1.6.1 to 2.1.1+ due to an open security 
vulnerability.

If you need the CVE number, let me know.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13203) RuntimeException causing a 500 response code for invalid user input

2019-01-31 Thread Johannes Kloos (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757319#comment-16757319
 ] 

Johannes Kloos commented on SOLR-13203:
---

Going over our examples where a 500 status code was returned, I noticed that a 
number of other exceptions are also used to report invalid user input, but 
yield 500 errors. In particular, I think it would make sense to have the 
following exceptions give a 400 status code, at least in some cases:
- SolrException
- NumberFormatException
- IllegalArgumentException
- IOException
- JSONParser.ParserException
- UnsupportedCharsetException

Additionally, I found one case of UnsupportedOperationException that should 
probably report a 400 instead of a 500. It is thrown at 
org.apache.lucene.search.Query.createWeight(Query.java:66).
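
A minimal sketch of the kind of mapping this suggests ({{parseUserInput}} is a 
hypothetical placeholder; {{SolrException.ErrorCode.BAD_REQUEST}} is the 
existing way to signal a 400):

{noformat}
// Hypothetical wrapper at a request-parsing boundary: input-validation
// failures become 400s instead of bubbling up as 500s.
try {
  parseUserInput(req);
} catch (NumberFormatException | IllegalArgumentException e) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "Invalid request parameter: " + e.getMessage(), e);
}
{noformat}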

> RuntimeException causing a 500 response code for invalid user input
> ---
>
> Key: SOLR-13203
> URL: https://issues.apache.org/jira/browse/SOLR-13203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Johannes Kloos
>Priority: Trivial
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/select?uf=fl=gen*,id=edismax
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.RuntimeException: dynamic field name must start or end with *
> at 
> org.apache.solr.search.ExtendedDismaxQParser$DynamicField.(ExtendedDismaxQParser.java:1610)
> {noformat}
> The DynamicField parser throws this RuntimeException to tell the user that 
> the given query is invalid. Sadly, the exception is never caught, so it 
> manifests as a 500 error instead of a 400 error.
> We found this issue and ~70 more like this using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more 
> information on this [fuzz testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] jpountz commented on a change in pull request #556: LUCENE-8673: Use radix sorting when merging dimensional points

2019-01-31 Thread GitBox
jpountz commented on a change in pull request #556: LUCENE-8673: Use radix 
sorting when merging dimensional points
URL: https://github.com/apache/lucene-solr/pull/556#discussion_r252697597
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/util/bkd/PointReader.java
 ##
 @@ -35,47 +33,11 @@
   /** Returns the packed byte[] value */
   public abstract byte[] packedValue();
 
-  /** Point ordinal */
-  public abstract long ord();
-
   /** DocID for this point */
   public abstract int docID();
 
-  /** Iterates through the next {@code count} ords, marking them in the 
provided {@code ordBitSet}. */
-  public void markOrds(long count, LongBitSet ordBitSet) throws IOException {
-for(int i=0;i

[JENKINS] Lucene-Solr-NightlyTests-7.7 - Build # 1 - Failure

2019-01-31 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.7/1/

5 tests failed.
FAILED:  org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([8C5E2BE10F581CB:90E6857D4E833D83]:0)
at 
org.apache.lucene.analysis.ko.KoreanTokenizer.add(KoreanTokenizer.java:334)
at 
org.apache.lucene.analysis.ko.KoreanTokenizer.parse(KoreanTokenizer.java:707)
at 
org.apache.lucene.analysis.ko.KoreanTokenizer.incrementToken(KoreanTokenizer.java:377)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:748)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:474)
at 
org.apache.lucene.analysis.ko.TestKoreanTokenizer.testRandomHugeStrings(TestKoreanTokenizer.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest

Error Message:
ObjectTracker 

[GitHub] dsmiley commented on a change in pull request #551: LUCENE-8662: Override seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum

2019-01-31 Thread GitBox
dsmiley commented on a change in pull request #551: LUCENE-8662: Override 
seekExact(BytesRef) in FilterLeafReader.FilterTermsEnum
URL: https://github.com/apache/lucene-solr/pull/551#discussion_r252692925
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/TermsEnum.java
 ##
 @@ -65,13 +65,26 @@ public AttributeSource attributes() {
 NOT_FOUND
   };
 
-  /** Attempts to seek to the exact term, returning
-   *  true if the term is found.  If this returns false, the
-   *  enum is unpositioned.  For some codecs, seekExact may
-   *  be substantially faster than {@link #seekCeil}. */
-  public boolean seekExact(BytesRef text) throws IOException {
+  /**
+   * Attempts to seek to the exact term, returning true if the term is found. 
If this returns false, the enum is
+   * unpositioned. For some codecs, seekExact may be substantially faster than 
{@link #seekCeil}.
+   * 
+   * 
+   * This method is performance critical and the default implementation 
+   * ({@code defaultSeekExactImpl}) may be slow in some cases, so subclasses 
+   * SHOULD provide their own implementation if possible.
+   * 
+   * @return true if the term is found; false otherwise (the enum is left 
+   * unpositioned).
+   */
+  public abstract boolean seekExact(BytesRef text) throws IOException;
+
+  /**
+   * Default implementation for seekExact(BytesRef), which may be slow in some 
+   * cases. The abstract seekExact(BytesRef) method is performance critical; 
+   * subclasses SHOULD have their own implementation if possible.
+   */
+  public final boolean defaultSeekExactImpl(BytesRef text) throws IOException {
 
 Review comment:
   Yes, and mention that particular one-liner in the javadocs for seekExact so 
it's clear how it should semantically behave.
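
For context, the one-liner in question is presumably the longstanding 
TermsEnum default:

{noformat}
public boolean seekExact(BytesRef text) throws IOException {
  return seekCeil(text) == SeekStatus.FOUND;
}
{noformat}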


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13178) ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs that are not objects

2019-01-31 Thread Johannes Kloos (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Johannes Kloos updated SOLR-13178:
--
Environment: 
Running on Unix, using a git checkout close to master.

h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
 * Request the URL above.

h2. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - for 
reference, I have attached a copy of the database.

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:

{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]
./bin/post -c films example/films/films.json
{noformat}

  was:
Running on Unix, using a git checkout close to master.
h2. Steps to reproduce
 * Build commit ea2c8ba of Solr as described in the section below.
 * Build the films collection as described below.
 * {{Start the server using the command “./bin/solr start -f -p 8983 -s 
/tmp/home”}}
 * Request the URL above.

h2. Compiling the server

{{git clone [https://github.com/apache/lucene-solr
 ]cd lucene-solr
 git checkout ea2c8ba
 ant compile
 cd solr
 ant server}}
h2. Building the collection

We followed Exercise 2 from the quick start tutorial 
([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - for 
reference, I have attached a copy of the database.

{{mkdir -p /tmp/home
 echo '' > 
/tmp/home/solr.xml}}

In one terminal start a Solr instance in foreground:

./bin/solr start -f -p 8983 -s /tmp/home

In another terminal, create a collection of movies, with no shards and no 
replication:

{{bin/solr create -c films
 curl -X POST -H 'Content-type:application/json' --data-binary '\{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
[http://localhost:8983/solr/films/schema]}}
 {{curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
[http://localhost:8983/solr/films/schema]}}
 {{./bin/post -c films example/films/films.json}}


> ClassCastExceptions in o.a.s.request.json.ObjectUtil for valid JSON inputs 
> that are not objects
> ---
>
> Key: SOLR-13178
> URL: https://issues.apache.org/jira/browse/SOLR-13178
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.5, master (9.0)
> Environment: Running on Unix, using a git checkout close to master.
> h2. Steps to reproduce
>  * Build commit ea2c8ba of Solr as described in the section below.
>  * Build the films collection as described below.
>  * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
>  * Request the URL above.
> h2. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h2. Building the collection
> We followed Exercise 2 from the quick start tutorial 
> ([http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2]) - 
> for reference, I have attached a copy of the database.
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> [http://localhost:8983/solr/films/schema]
> ./bin/post -c films 

[jira] [Commented] (SOLR-12743) Memory leak introduced in Solr 7.3.0

2019-01-31 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16757294#comment-16757294
 ] 

David Smiley commented on SOLR-12743:
-

My understanding of "Overlapping onDeckSearcher" is that it became impossible 
ever since Solr 6.something, in which commits block other commits instead of 
overlapping. Although that's configurable, it's good by default.

> Memory leak introduced in Solr 7.3.0
> 
>
> Key: SOLR-12743
> URL: https://issues.apache.org/jira/browse/SOLR-12743
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 7.3, 7.3.1, 7.4
>Reporter: Tomás Fernández Löbbe
>Priority: Critical
> Attachments: SOLR-12743.patch
>
>
> Reported initially by [~markus17]([1], [2]), but other users have had the 
> same issue [3]. Some of the key parts:
> {noformat}
> Some facts:
> * problem started after upgrading from 7.2.1 to 7.3.0;
> * it occurs only in our main text search collection, all other collections 
> are unaffected;
> * despite what i said earlier, it is so far unreproducible outside 
> production, even when mimicking production as good as we can;
> * SortedIntDocSet instances and ConcurrentLRUCache$CacheEntry instances are 
> both leaked on commit;
> * filterCache is enabled using FastLRUCache;
> * filter queries are simple field:value using strings, and three filter query 
> for time range using [NOW/DAY TO NOW+1DAY/DAY] syntax for 'today', 'last 
> week' and 'last month', but rarely used;
> * reloading the core manually frees OldGen;
> * custom URP's don't cause the problem, disabling them doesn't solve it;
> * the collection uses custom extensions for QueryComponent and 
> QueryElevationComponent, ExtendedDismaxQParser and MoreLikeThisQParser, a 
> whole bunch of TokenFilters, and several DocTransformers and due it being 
> only reproducible on production, i really cannot switch these back to 
> Solr/Lucene versions;
> * useFilterForSortedQuery is/was not defined in schema so it was default 
> (true?), SOLR-11769 could be the culprit, i disabled it just now only for the 
> node running 7.4.0, rest of collection runs 7.2.1;
> {noformat}
> {noformat}
> You were right, it was leaking exactly one SolrIndexSearcher instance on each 
> commit. 
> {noformat}
> And from Björn Häuser ([3]):
> {noformat}
> Problem Suspect 1
> 91 instances of "org.apache.solr.search.SolrIndexSearcher", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.981.148.336 (38,26%) bytes. 
> Biggest instances:
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6ffd47ea8 - 70.087.272 
> (1,35%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x79ea9c040 - 65.678.264 
> (1,27%) bytes. 
>         • org.apache.solr.search.SolrIndexSearcher @ 0x6855ad680 - 63.050.600 
> (1,22%) bytes. 
> Problem Suspect 2
> 223 instances of "org.apache.solr.util.ConcurrentLRUCache", loaded by 
> "org.eclipse.jetty.webapp.WebAppClassLoader @ 0x6807d1048" occupy 
> 1.373.110.208 (26,52%) bytes. 
> {noformat}
> More details in the email threads.
> [1] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201804.mbox/%3Czarafa.5ae201c6.2f85.218a781d795b07b1%40mail1.ams.nl.openindex.io%3E]
>  [2] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201806.mbox/%3Czarafa.5b351537.7b8c.647ddc93059f68eb%40mail1.ams.nl.openindex.io%3E]
>  [3] 
> [http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201809.mbox/%3c7b5e78c6-8cf6-42ee-8d28-872230ded...@gmail.com%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


