[jira] [Created] (SOLR-12493) Connection refused (Connection refused) when running dsetool reload_core

2018-06-15 Thread udkantheti (JIRA)
udkantheti created SOLR-12493:
-

 Summary: Connection refused (Connection refused) when running 
dsetool reload_core
 Key: SOLR-12493
 URL: https://issues.apache.org/jira/browse/SOLR-12493
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Reporter: udkantheti


I am getting Connection refused (Connection refused) when running 
reload_core with dsetool after we set up JMX. This started happening after the 
DSE upgrade to 5.0.12. Can someone please help with this?

 

thank you

kantheti



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 242 - Still Failing

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/242/

No tests ran.

Build Log:
[...truncated 24203 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2214 links (1764 relative) to 2989 anchors in 230 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[...identical resolve / ivy-availability-check / ivy-configure blocks repeated; truncated...]

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 669 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/669/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([101E3A7404FE733:62CAD525D980941E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.Assertio

[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10.0.1) - Build # 51 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/51/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost

Error Message:
did not finish processing all events in time: started=5, finished=4

Stack Trace:
java.lang.AssertionError: did not finish processing all events in time: 
started=5, finished=4
at 
__randomizedtesting.SeedInfo.seed([1961A9D5677685CF:A674672BE49CE049]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.doTestNodeLost(TestLargeCluster.java:522)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost(TestLargeCluster.java:375)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testNodeLost

Error Message:
did not finish processing all events in time: started=5, finished=4

Stack Trace:
jav

Re: Status of solr tests

2018-06-15 Thread Erick Erickson
Martin:

I have no idea how logging severity levels apply to unit tests that fail.
It's not a question of triaging logs, it's a matter of Jenkins junit test
runs reporting failures.
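(For reference, Martin's suggestion of filtering by java.util.logging Level would look roughly like the sketch below. The class and logger names are made up for illustration; and as noted above, Jenkins JUnit failure reports are not log records, so this does not actually apply to them.)

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

// Minimal, hypothetical sketch of severity-based filtering with
// java.util.logging: a handler set to SEVERE suppresses everything below it.
public class SeverityFilterSketch {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("triage");
        log.setUseParentHandlers(false); // don't double-log via the root handler

        ConsoleHandler handler = new ConsoleHandler();
        handler.setLevel(Level.SEVERE);  // drop WARNING, INFO, FINE, ...
        log.addHandler(handler);
        log.setLevel(Level.ALL);         // the logger itself passes everything on

        log.info("style mis-application");  // suppressed by the handler
        log.severe("real failure");         // emitted to stderr
    }
}
```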



On Fri, Jun 15, 2018 at 4:25 PM, Martin Gainty  wrote:

> Erick-
>
> It appears that style mis-applications may be categorised as INFO and
> are mixed in with SEVERE errors.
>
> Would it make sense to filter the errors based on severity?
>
>
> https://docs.oracle.com/javase/7/docs/api/java/util/logging/Level.html
> The Level class defines a set of standard logging levels that can be used
> to control logging output. The logging Level objects are ordered and are
> specified by ordered integers.
> If you know the severity, you can triage the SEVERE errors before working
> down to the INFO errors.
>
> WDYT?
> Martin
> __
>
>
>
>
> --
> *From:* Erick Erickson 
> *Sent:* Friday, June 15, 2018 1:05 PM
> *To:* dev@lucene.apache.org; Mark Miller
> *Subject:* Re: Status of solr tests
>
> Mark (and everyone).
>
> I'm trying to be somewhat conservative about what I BadApple, at this
> point it's only things that have failed every week for the last 4.
> Part of that conservatism is to avoid BadApple'ing tests that are
> failing and _should_ fail.
>
> I'm explicitly _not_ delving into any of the causes at all at this
> point, it's overwhelming until we reduce the noise as everyone knows.
>
> So please feel totally free to BadApple anything you know is flakey,
> it won't intrude on my turf ;)
>
> And since I realized I can also report tests that have _not_ failed in
> a month that _are_ BadApple'd, we can be a little freer with
> BadApple'ing tests since there's a mechanism for un-annotating them
> without a lot of tedious effort.
>
> FWIW.
>
> On Fri, Jun 15, 2018 at 9:09 AM, Mark Miller 
> wrote:
> > There is an okay chance I'm going to start making some improvements here
> > as well. I've been working on a very stable set of tests on my starburst
> > branch and will slowly bring in test fixes over time (I've already been
> > making some on that branch for important tests). We should currently be
> > defaulting to tests.badapples=false on all Solr test runs - it's a joke
> > to try and get a clean run otherwise, and even then somehow 4 or 5 tests
> > that fail somewhat commonly have so far avoided Erick's @BadApple hack
> > and slash. They are bad appled on my dev branch now, but that is
> > currently where any time I have is spent rather than on the main dev
> > branches.
> >
> > Also, too many flakey tests are introduced because devs are not beasting,
> > or not beasting well, before committing new heavy tests. Perhaps we could
> > add some docs around that.
> >
> > We have built-in beasting support; we need to emphasize that a couple of
> > passes on a new test is not sufficient to test its quality.
> >
> > - Mark
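(Mark's point that a couple of passes prove very little can be put in numbers with a back-of-the-envelope sketch; the 2% failure rate below is an assumed figure, not a measurement of any Solr test.)

```java
// Probability that a test which fails 2% of the time passes N consecutive
// runs: (1 - 0.02)^N. Two clean passes leave a ~96% chance of seeing
// nothing; hundreds of beast iterations make a flake near-certain to show.
public class BeastingOdds {
    public static void main(String[] args) {
        double failRate = 0.02;
        for (int n : new int[] {2, 50, 500}) {
            double allPass = Math.pow(1 - failRate, n);
            System.out.printf("runs=%d  P(no failure observed)=%.4f%n", n, allPass);
        }
    }
}
```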
> >
> > On Fri, Jun 15, 2018 at 9:46 AM Erick Erickson 
> > wrote:
> >>
> >> (Sigh) All very true. You're not alone in your frustration.
> >>
> >> I've been trying to at least BadApple tests that fail consistently, so
> >> another option could be to disable BadApple'd tests. My hope has been
> >> to get to the point of being able to reliably get clean runs, at least
> >> when BadApple'd tests are disabled.
> >>
> >> From that point I want to draw a line in the sand and immediately
> >> address tests that fail that are _not_ BadApple'd. At least then we'll
> >> stop getting _worse_. And then we can work on the BadApple'd tests.
> >> But as David says, that's not going to be any time soon. It's been a
> >> couple of months that I've been trying to just get the tests
> >> BadApple'd without even trying to fix any of them.
> >>
> >> It's particularly pernicious because with all the noise we don't see
> >> failures we _should_ see.
> >>
> >> So I don't have any good short-term answer either. We've built up a
> >> very large technical debt in the testing. The first step is to stop
> >> adding more debt, which is what I've been working on so far. And
> >> that's the easy part
> >>
> >> Siigghh
> >>
> >> Erick
> >>
> >>
> >> On Fri, Jun 15, 2018 at 5:29 AM, David Smiley  >
> >> wrote:
> >> > (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
> >> > Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and
> >> > are
> >> > trying to improve the stability of the Solr tests but even
> >> > optimistically
> >> > the practical reality is that it won't be good enough anytime soon.
> >> > When we
> >> > get there, we can reverse this.
> >> >
> >> > On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer
> >> > 
> >> > wrote:
> >> >>
> >> >> folks,
> >> >>
> >> >> I got more active working on IndexWriter and Soft-Deletes etc. in the
> >> >> last couple of weeks. It's a blast 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 242 - Failure

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/242/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/105)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":15740, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}, "core_node4":{   
"core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":15740,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.4659017324447632E-5,   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1529110277622827600", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":17240, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}, "core_node2":{   
"core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":17240,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529110277653672700",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":6,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13240,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2330710887908936E-5, 
  "SEARCHER.searcher.numDocs":6}, "core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":6,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13240,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2330710887908936E-5, 
  "SEARCHER.searcher.numDocs":6}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529110277653354950",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/105)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogRep

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1913 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1913/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([74CA149B394C6E27:2773562BDB5DFBDD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.se

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2137 - Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2137/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  org.apache.solr.update.AddBlockUpdateTest.testSolrNestedFieldsSingleVal

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([5A413468F2535FB2:920E5FF0B687DA40]:0)
at 
org.apache.solr.update.AddBlockUpdateTest.getNewClock(AddBlockUpdateTest.java:671)
	at org.apache.solr.update.AddBlockUpdateTest.indexSolrInputDocumentsDirectly(AddBlockUpdateTest.java:661)
	at org.apache.solr.update.AddBlockUpdateTest.testSolrNestedFieldsSingleVal(AddBlockUpdateTest.java:338)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:564)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.update.AddBlockUpdateTest.testSolrNestedFieldsList

Error Message:


Stack Trace:
java.lang.NullPointerException

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 689 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/689/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

12 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10003_solr, 
127.0.0.1:10002_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/120)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"1529115817519269600",   "replicas":{ 
"core_node1":{   "core":"testMixedBounds_collection_shard1_replica_n1", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":495,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":257740,
   "node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":495,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":257740,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529115817519744350",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}, "core_node9":{   
"core":"testMixedBounds_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529115817519646350",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}, "core_node8":{   
"core":"testMixedBounds_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10003_solr, 127.0.0.1:10002_solr]
Last available state: 
DocCo

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22260 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22260/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
events: [CapturedEvent{timestamp=2289018775676104, stage=STARTED, 
actionName='null', event={   "id":"821d80c94d3b0Tb949rjxna6lzcaji0yxyxxyxn",   
"source":"index_size_trigger2",   "eventTime":2289011621417904,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10005_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2289018770214804, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, context={}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=2289018845996654, stage=BEFORE_ACTION, 
actionName='compute_plan', event={   
"id":"821d80c94d3b0Tb949rjxna6lzcaji0yxyxxyxn",   
"source":"index_size_trigger2",   "eventTime":2289011621417904,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10005_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2289018770214804, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2}, 
config={   "trigger":"index_size_trigger2",   "stage":[ "STARTED", 
"ABORTED", "SUCCEEDED", "FAILED"],   "afterAction":[ 
"compute_plan", "execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=2289018891003504, stage=AFTER_ACTION, 
actionName='compute_plan', event={   
"id":"821d80c94d3b0Tb949rjxna6lzcaji0yxyxxyxn",   
"source":"index_size_trigger2",   "eventTime":2289011621417904,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10005_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2289018770214804, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.operations=[{class=org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard,
 method=GET, params.action=SPLITSHARD, 
params.collection=testSplitIntegration_collection, params.shard=shard1}], 
properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2, 
properties.AFTER_ACTION=[compute_plan]}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   
"class":"org.apac

[jira] [Resolved] (SOLR-11799) Fix NPE and class cast exceptions in the TimeSeriesStream

2018-06-15 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11799.
---
Resolution: Resolved

> Fix NPE and class cast exceptions in the TimeSeriesStream
> -
>
> Key: SOLR-11799
> URL: https://issues.apache.org/jira/browse/SOLR-11799
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11799.patch
>
>
> Currently the timeseries Streaming Expression will throw an NPE if there are 
> no results for a bucket and any function other than count(*) is used. It can 
> also throw class cast exceptions if the JSON facet API returns a long for any 
> function (other than count(*)), as it is always expecting a double.
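For illustration only (this is not the SOLR-11799 patch itself): the class cast goes away if the facet value is coerced through Number, and the empty-bucket NPE can be guarded at the same point. The class and method names below are made up for this sketch.

```java
import java.util.Arrays;
import java.util.List;

public class FacetValueCoercion {
    // Unsafe: throws ClassCastException when the bucket value is a Long.
    static double unsafe(Object bucketValue) {
        return (Double) bucketValue;
    }

    // Safe: works for Long, Integer and Double alike, and guards the
    // null case that an empty bucket produces.
    static double safe(Object bucketValue) {
        return bucketValue == null ? 0.0 : ((Number) bucketValue).doubleValue();
    }

    public static void main(String[] args) {
        // Values a JSON facet response might hand back for a bucket.
        List<Object> bucketValues = Arrays.asList(42L, 3.5, null);
        for (Object v : bucketValues) {
            System.out.println(safe(v)); // 42.0, 3.5, 0.0
        }
    }
}
```

The point of the coercion is that whole-number aggregates arrive as Long, so a direct cast to Double is the class-cast trap the issue describes.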



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12183) Refactor Streaming Expression test cases

2018-06-15 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-12183.
---
Resolution: Resolved

> Refactor Streaming Expression test cases
> 
>
> Key: SOLR-12183
> URL: https://issues.apache.org/jira/browse/SOLR-12183
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket will breakup the StreamExpressionTest into multiple smaller files 
> based on the following areas:
> 1) Stream Sources
> 2) Stream Decorators
> 3) Stream Evaluators (This may have to be broken up more in the future)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12280) Ref-Guide: Add Digital Signal Processing documentation

2018-06-15 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-12280.
---
Resolution: Resolved

> Ref-Guide: Add Digital Signal Processing documentation
> --
>
> Key: SOLR-12280
> URL: https://issues.apache.org/jira/browse/SOLR-12280
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
>
> This ticket will add a new section of documentation in the Math Expressions 
> docs covering the Digital Signal Processing functions. The main areas of 
> documentation coverage will include:
>  * Dot product
>  * Convolution
>  * Cross-correlation
>  * Find Delay
>  * Auto-correlation
>  * Fast Fourier Transform
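As background for the listed functions: convolution and cross-correlation share one core loop (cross-correlating a with b is convolving a with b reversed). This is a generic sketch with illustrative names, not the Math Expressions API.

```java
import java.util.Arrays;

public class Convolve {
    // Direct (time-domain) convolution of two sequences.
    static double[] conv(double[] a, double[] b) {
        double[] out = new double[a.length + b.length - 1];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b.length; j++) {
                out[i + j] += a[i] * b[j]; // each output sample is a sum of products
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
            conv(new double[]{1, 2, 3}, new double[]{0, 1, 0.5})));
        // [0.0, 1.0, 2.5, 4.0, 1.5]
    }
}
```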



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12198) Stream Evaluators should not copy matrices needlessly

2018-06-15 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-12198.
---
Resolution: Resolved

> Stream Evaluators should not copy matrices needlessly
> -
>
> Key: SOLR-12198
> URL: https://issues.apache.org/jira/browse/SOLR-12198
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12198.patch
>
>
> Currently several of the Stream Evaluators that work with matrices create 
> multiple copies of the underlying multi-dimensional arrays. This can 
> lead to excessive memory usage. This ticket will change these implementations 
> so that the multi-dimensional arrays that back a matrix are only copied 
> when the *copyOf* function is used.
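A minimal sketch of the intended copy semantics, using a hypothetical Matrix wrapper (the real Solr class differs): accessors hand out the backing arrays directly, and only copyOf pays for a deep copy.

```java
import java.util.Arrays;

public class Matrix {
    private final double[][] data;

    public Matrix(double[][] data) { this.data = data; }

    // No defensive copy: callers share the backing arrays,
    // so chained evaluators do not multiply memory usage.
    public double[][] getData() { return data; }

    // Explicit deep copy, mirroring the copyOf function's contract.
    public Matrix copyOf() {
        double[][] copy = new double[data.length][];
        for (int i = 0; i < data.length; i++) {
            copy[i] = Arrays.copyOf(data[i], data[i].length);
        }
        return new Matrix(copy);
    }

    public static void main(String[] args) {
        Matrix m = new Matrix(new double[][] {{1, 2}, {3, 4}});
        Matrix c = m.copyOf();
        c.getData()[0][0] = 99;                // mutating the copy...
        System.out.println(m.getData()[0][0]); // ...leaves the original intact: 1.0
    }
}
```

The trade-off is the usual one: sharing the backing arrays means mutation is visible to every holder, which is why an explicit copyOf escape hatch is kept.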



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11216) Make PeerSync more robust

2018-06-15 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514525#comment-16514525
 ] 

Cao Manh Dat commented on SOLR-11216:
-

Thanks, guys, for your reviews. This is a rough patch which needs to change/move 
things around to make it cleaner. To be more clear, the process of the new 
PeerSync (PeerSyncWithLeader) is:
* Replica gets its recent update versions
* Replica requests recent update versions + a fingerprint from the leader
* Replica requests missed updates (updates in the buffer tlog are considered 
missed updates) up to the leader's {{fingerprint.maxVersionEncountered}}
* Replica applies the missed updates, then compares its fingerprint with the 
leader's fingerprint from step 2

The reason for getting the fingerprint in step 2 is that we do not trust 
{{fingerprint.maxVersionSpecified}}. Therefore we must use the fingerprint of 
the leader with {{fingerprint.maxVersionSpecified==Long.MAX_VALUE}} (i.e., the 
fingerprint of the leader's index at the time of step 2). We may need to block 
updates between getting recent versions and computing the fingerprint on the 
leader's side, but let's do that later.

By requesting updates up to {{fingerprint.maxVersionEncountered}}, we make 
sure that after applying the updates, {{replica.maxVersionEncountered}} will 
equal the leader's, hence its fingerprint will be the same as the leader's.

Another optimization here is in step 3: instead of considering buffered updates 
as missed updates, we just need to note which buffered updates have to be applied 
in step 4.
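The steps above can be sketched roughly as follows; every name here is illustrative for this comment, not the actual PeerSyncWithLeader code.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PeerSyncSketch {
    // Step 3: request every leader update up to maxVersionEncountered that
    // the replica has neither applied nor already buffered in its tlog.
    static List<Long> computeMissed(List<Long> leaderVersions,
                                    Set<Long> applied, Set<Long> buffered,
                                    long leaderMaxVersionEncountered) {
        List<Long> missed = new ArrayList<>();
        for (long v : leaderVersions) {
            if (v <= leaderMaxVersionEncountered
                    && !applied.contains(v) && !buffered.contains(v)) {
                missed.add(v);
            }
        }
        return missed;
    }

    public static void main(String[] args) {
        // Scenario from this issue: replica applied 1-4, buffered 7-8,
        // leader has seen 1-9 (fingerprint.maxVersionEncountered == 9).
        List<Long> leader = Arrays.asList(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L);
        Set<Long> applied = new TreeSet<>(Arrays.asList(1L, 2L, 3L, 4L));
        Set<Long> buffered = new TreeSet<>(Arrays.asList(7L, 8L));

        List<Long> missed = computeMissed(leader, applied, buffered, 9L);
        System.out.println(missed); // [5, 6, 9]

        // Step 4: after applying missed + buffered updates the replica's
        // version set matches the leader's, so the fingerprints can agree.
        applied.addAll(missed);
        applied.addAll(buffered);
        System.out.println(applied.equals(new TreeSet<>(leader))); // true
    }
}
```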





> Make PeerSync more robust
> -
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> First of all, I will change the issue's title to a better name when I have one.
> When digging into SOLR-10126 I found a case that can make peerSync fail:
> * leader and replica receive updates 1 to 4
> * replica stops
> * replica misses updates 5, 6
> * replica starts recovery
> ## replica buffers updates 7, 8
> ## replica requests versions from the leader
> ## at the same time the leader receives update 9, so it will return updates 1 
> to 9 (for the versions request) when the replica gets recent versions (so it 
> will be 1,2,3,4,5,6,7,8,9)
> ## replica does peersync and requests updates 5, 6, 9 from the leader
> ## replica applies updates 5, 6, 9. Its index does not have updates 7, 8, and 
> maxVersionSpecified for the fingerprint is 9, therefore the fingerprint 
> comparison will fail
> My question here is: why does the replica request update 9 (step 6) while it 
> knows that updates with lower versions (updates 7, 8) are in its buffering 
> tlog? Should we request only updates lower than the lowest update in its 
> buffering tlog (< 7)?
> Someone may ask what happens if the replica never receives update 9. In that 
> case, the leader will put the replica into LIR state, so the replica will run 
> the recovery process again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Status of solr tests

2018-06-15 Thread Robert Muir
can we disable this bot already?

On Fri, Jun 15, 2018, 7:25 PM Martin Gainty  wrote:

> Erick-
>
> it appears that style misapplications may be categorised as INFO and
> are mixed in with SEVERE errors
>
> Would it make sense to filter the errors based on severity ?
>
>
> https://docs.oracle.com/javase/7/docs/api/java/util/logging/Level.html
> if you know Severity you can triage the SEVERE errors before working down
> to INFO errors
>
> WDYT?
> Martin
> __
>
>
>
>
> --
> *From:* Erick Erickson 
> *Sent:* Friday, June 15, 2018 1:05 PM
> *To:* dev@lucene.apache.org; Mark Miller
> *Subject:* Re: Status of solr tests
>
> Mark (and everyone).
>
> I'm trying to be somewhat conservative about what I BadApple, at this
> point it's only things that have failed every week for the last 4.
> Part of that conservatism is to avoid BadApple'ing tests that are
> failing and _should_ fail.
>
> I'm explicitly _not_ delving into any of the causes at all at this
> point, it's overwhelming until we reduce the noise as everyone knows.
>
> So please feel totally free to BadApple anything you know is flakey,
> it won't intrude on my turf ;)
>
> And since I realized I can also report tests that have _not_ failed in
> a month that _are_ BadApple'd, we can be a little freer with
> BadApple'ing tests since there's a mechanism for un-annotating them
> without a lot of tedious effort.
>
> FWIW.
>
> On Fri, Jun 15, 2018 at 9:09 AM, Mark Miller 
> wrote:
> > There is an okay chance I'm going to start making some improvements here
> as
> > well. I've been working on a very stable set of tests on my starburst
> branch
> > and will slowly bring in test fixes over time (I've already been making
> some
> > on that branch for important tests). We should currently be defaulting to
> > tests.badapples=false on all solr test runs - it's a joke to try and get
> a
> > clean run otherwise, and even then somehow 4 or 5 tests that fail
> somewhat
> > commonly have so far avoided Erick's @BadApple hack and slash. They are
> bad
> > appled on my dev branch now, but that is currently where any time I have
> is
> > spent rather than on the main dev branches.
> >
> > Also, too many flakey tests are introduced because devs are not beasting
> or
> > beasting well before committing new heavy tests. Perhaps we could add
> some
> > docs around that.
> >
> > We have built-in beasting support; we need to emphasize that a couple of
> > passes on a new test is not sufficient to test its quality.
> >
> > - Mark
> >
> > On Fri, Jun 15, 2018 at 9:46 AM Erick Erickson 
> > wrote:
> >>
> >> (Sg) All very true. You're not alone in your frustration.
> >>
> >> I've been trying to at least BadApple tests that fail consistently, so
> >> another option could be to disable BadApple'd tests. My hope has been
> >> to get to the point of being able to reliably get clean runs, at least
> >> when BadApple'd tests are disabled.
> >>
> >> From that point I want to draw a line in the sand and immediately
> >> address tests that fail that are _not_ BadApple'd. At least then we'll
> >> stop getting _worse_. And then we can work on the BadApple'd tests.
> >> But as David says, that's not going to be any time soon. It's been a
> >> couple of months that I've been trying to just get the tests
> >> BadApple'd without even trying to fix any of them.
> >>
> >> It's particularly pernicious because with all the noise we don't see
> >> failures we _should_ see.
> >>
> >> So I don't have any good short-term answer either. We've built up a
> >> very large technical debt in the testing. The first step is to stop
> >> adding more debt, which is what I've been working on so far. And
> >> that's the easy part
> >>
> >> Siigghh
> >>
> >> Erick
> >>
> >>
> >> On Fri, Jun 15, 2018 at 5:29 AM, David Smiley  >
> >> wrote:
> >> > (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
> >> > Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and
> >> > are
> >> > trying to improve the stability of the Solr tests but even
> >> > optimistically
> >> > the practical reality is that it won't be good enough anytime soon.
> >> > When we
> >> > get there, we can reverse this.
> >> >
> >> > On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer
> >> > 
> >> > wrote:
> >> >>
> >> >> folks,
> >> >>
> >> >> I got more active working on IndexWriter and Soft-Deletes etc. in the
> >> >> last couple of weeks. It's a blast again and I really enjoy it. The
> >> >> one thing that is IMO not acceptable is the status of solr tests. I
> >> >> tried so many times to get them passi

Re: Status of solr tests

2018-06-15 Thread Martin Gainty
Erick-


It appears that style misapplications may be categorised as INFO and
are mixed in with SEVERE errors.


Would it make sense to filter the errors based on severity ?


https://docs.oracle.com/javase/7/docs/api/java/util/logging/Level.html


if you know Severity you can triage the SEVERE errors before working down to 
INFO errors
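Martin's severity-based triage can be sketched with the java.util.logging API from the linked page: raise a handler's threshold so only SEVERE records get through. The counting handler below is purely for demonstration.

```java
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class SeverityFilterDemo {
    static int published = 0;   // records that survived the threshold

    public static void main(String[] args) {
        Logger log = Logger.getLogger("triage-demo");
        log.setUseParentHandlers(false);   // keep the demo self-contained
        Handler counter = new Handler() {
            @Override public void publish(LogRecord r) {
                // A custom Handler must apply its own level check.
                if (isLoggable(r)) published++;
            }
            @Override public void flush() {}
            @Override public void close() {}
        };
        counter.setLevel(Level.SEVERE);    // the triage threshold
        log.addHandler(counter);

        log.info("style misapplication");  // INFO < SEVERE: filtered out
        log.severe("real failure");        // SEVERE: counted
        System.out.println(published);     // 1
    }
}
```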


WDYT?

Martin
__




From: Erick Erickson 
Sent: Friday, June 15, 2018 1:05 PM
To: dev@lucene.apache.org; Mark Miller
Subject: Re: Status of solr tests

Mark (and everyone).

I'm trying to be somewhat conservative about what I BadApple, at this
point it's only things that have failed every week for the last 4.
Part of that conservatism is to avoid BadApple'ing tests that are
failing and _should_ fail.

I'm explicitly _not_ delving into any of the causes at all at this
point, it's overwhelming until we reduce the noise as everyone knows.

So please feel totally free to BadApple anything you know is flakey,
it won't intrude on my turf ;)

And since I realized I can also report tests that have _not_ failed in
a month that _are_ BadApple'd, we can be a little freer with
BadApple'ing tests since there's a mechanism for un-annotating them
without a lot of tedious effort.

FWIW.

On Fri, Jun 15, 2018 at 9:09 AM, Mark Miller  wrote:
> There is an okay chance I'm going to start making some improvements here as
> well. I've been working on a very stable set of tests on my starburst branch
> and will slowly bring in test fixes over time (I've already been making some
> on that branch for important tests). We should currently be defaulting to
> tests.badapples=false on all solr test runs - it's a joke to try and get a
> clean run otherwise, and even then somehow 4 or 5 tests that fail somewhat
> commonly have so far avoided Erick's @BadApple hack and slash. They are bad
> appled on my dev branch now, but that is currently where any time I have is
> spent rather than on the main dev branches.
>
> Also, too many flakey tests are introduced because devs are not beasting or
> beasting well before committing new heavy tests. Perhaps we could add some
> docs around that.
>
> We have built-in beasting support; we need to emphasize that a couple of passes
> on a new test is not sufficient to test its quality.
>
> - Mark
>
> On Fri, Jun 15, 2018 at 9:46 AM Erick Erickson 
> wrote:
>>
>> (Sg) All very true. You're not alone in your frustration.
>>
>> I've been trying to at least BadApple tests that fail consistently, so
>> another option could be to disable BadApple'd tests. My hope has been
>> to get to the point of being able to reliably get clean runs, at least
>> when BadApple'd tests are disabled.
>>
>> From that point I want to draw a line in the sand and immediately
>> address tests that fail that are _not_ BadApple'd. At least then we'll
>> stop getting _worse_. And then we can work on the BadApple'd tests.
>> But as David says, that's not going to be any time soon. It's been a
>> couple of months that I've been trying to just get the tests
>> BadApple'd without even trying to fix any of them.
>>
>> It's particularly pernicious because with all the noise we don't see
>> failures we _should_ see.
>>
>> So I don't have any good short-term answer either. We've built up a
>> very large technical debt in the testing. The first step is to stop
>> adding more debt, which is what I've been working on so far. And
>> that's the easy part
>>
>> Siigghh
>>
>> Erick
>>
>>
>> On Fri, Jun 15, 2018 at 5:29 AM, David Smiley 
>> wrote:
>> > (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
>> > Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and
>> > are
>> > trying to improve the stability of the Solr tests but even
>> > optimistically
>> > the practical reality is that it won't be good enough anytime soon.
>> > When we
>> > get there, we can reverse this.
>> >
>> > On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer
>> > 
>> > wrote:
>> >>
>> >> folks,
>> >>
>> >> I got more active working on IndexWriter and Soft-Deletes etc. in the
>> >> last couple of weeks. It's a blast again and I really enjoy it. The
>> >> one thing that is IMO not acceptable is the status of solr tests. I
>> >> tried so many times to get them passing on several different OSs but
> >> it seems this is pretty hopeless. It gets even worse: the
>> >> Lucene/Solr QA job literally marks every ticket I attach a patch to as
>> >> `-1` because of arbitrary solr tests, here is an example:
>> >>
>> >> || Reason || Tests ||
>> >> | Failed junit tests | solr.rest.TestManagedResourceStora

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4679 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4679/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
	at __randomizedtesting.SeedInfo.seed([561C724DFD7C7A30:35D744CF64B3091D]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.failNotEquals(Assert.java:647)
	at org.junit.Assert.assertEquals(Assert.java:128)
	at org.junit.Assert.assertEquals(Assert.java:472)
	at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegratio

[jira] [Comment Edited] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.

2018-06-15 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514420#comment-16514420
 ] 

Mikhail Khludnev edited comment on SOLR-12490 at 6/15/18 10:21 PM:
---

Here is a lower-impact approach, prompted by a chat with [~osavrasov]. The 
proposal is to introduce {{json.queries}}: it is like an arbitrary 
{{json.param}}, but it is parsed with the query DSL 

{code}
{
"query" : {
  "#top":{
  "parent": {
  "query": "sku-title:foo",
  "filters" : "$childFq", // non-json old style param reference 
  "which": "scope:product"
   }
}
}, // like .param but parsed with query dsl syntax 
"queries":{
 "childFq":[{ "#color" :"color:black" },
{ "#size" : "size:L" }]
},
"facet":{
   "sku_colors_in_prods":{
  "type" : "terms",
  "field" : "color",
  "domain" : {
   "excludeTags":["top",   // we need to drop top-level parent query
  "color"], // excluding one child filter clause
   "filter":[ 
  {"param":"childFq"}  // referring to .queries.childFq
   ]
   },
"facet": { // counting products
  "prod_count":"uniqueBlock(_root_)"
   }
   }
}
}
{code}
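As a quick illustration (not from the original comment: the collection and field names are carried over from the example above, and the client-side code is purely hypothetical), the proposed request body could be assembled and sanity-checked like this before POSTing it to Solr's JSON Request API:

```python
import json

# Sketch of the proposed request body: entries under "queries" would be
# parsed with the query DSL (unlike plain "params"), and facet domains
# refer to them via {"param": ...} while dropping tagged clauses with
# "excludeTags". All names here come from the example in the comment.
request_body = {
    "query": {
        "#top": {
            "parent": {
                "query": "sku-title:foo",
                "filters": "$childFq",   # non-JSON, old-style param reference
                "which": "scope:product",
            }
        }
    },
    "queries": {  # like json.param, but parsed with query DSL syntax
        "childFq": [{"#color": "color:black"}, {"#size": "size:L"}],
    },
    "facet": {
        "sku_colors_in_prods": {
            "type": "terms",
            "field": "color",
            "domain": {
                "excludeTags": ["top", "color"],
                "filter": [{"param": "childFq"}],  # refers to queries.childFq
            },
            "facet": {"prod_count": "uniqueBlock(_root_)"},  # counting products
        }
    },
}

# Serialize once to confirm the structure is valid JSON.
payload = json.dumps(request_body)
```

The round trip through `json.dumps` is only a structural check; actually executing the request would of course require a Solr build carrying the proposed {{json.queries}} support.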


was (Author: mkhludnev):
Here is few-impact approach induced by chat with [~osavrasov]. The proposal is 
to introduce {{json.queries}}, it's like arbitrary {{json.param}} but it's 
translated with query DSL 

{code}
{
"query" : {
  "#top":{
  "parent": {
  "query": "sku-title:foo",
  "filters" : "$childFq", // non-json old style param reference 
  "which": "scope:product"
   }
}
}, // like .param but parsed with query dsl syntax 
"queries":{
 "childFq":[{ "#color" :"color:black" },
{ "#size" : "size:L" }]
},
"facet":{
   "sku_colors_in_prods":{
  "type" : "terms",
  "field" : "color",
  "domain" : {
   "excludeTags":["top",   // we need to drop top-level parent query
  "color"], // excluding one child filter clause
   "filter":[ 
  {"param":"childFq"}  // referring to .queries.childFq
   ]
   },
"facet": { // counting products
  "prod_count":"uniqueBlock(_root_)"
   }
   }
}
}
{code}

> referring/excluding clauses from JSON query DSL in JSON facets. 
> 
>
> Key: SOLR-12490
> URL: https://issues.apache.org/jira/browse/SOLR-12490
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Reporter: Mikhail Khludnev
>Priority: Major
>
> It's a spin-off from the 
> [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720].
>  
> h2. Problem
> # after SOLR-9685 we can tag separate clauses in hairy queries such as 
> {{parent}} and {{bool}}
> # we can {{domain.excludeTags}}
> # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 
>
> # but we can refer only to separate params in {{domain.filter}}; it's not 
> possible to refer to separate clauses
> h2. Proposal 
> # tag child clauses multiple times
> {code}
> {
> "query" : {
>   "#top":{
>   "parent": {
>   "query": "sku-title:foo",
>   "filters" : [
>   "scope:sku",
> { "#sku,color" :  "color:black" }, // multiple tags
> { "#sku,size" : "size:L" }
> ],
>   "which": "scope:product"
>}
> }
> }
> }
> {code} 
> # refer to sku clauses, either by 
> ## (1) {{domain.filter.tag}} in addition to {{param}}, or
> ## (2) {{domain.includeTags}} mimicking {{excludeTags}}  
> {code}
> "facet":{
>   "sku_colors_in_prods":{
>   "type" : "terms",
>   "field" : "color",
>"domain" : {
>   "excludeTags":["top","color"],   // we need to drop top-level parent query
>   "filter":[ 
>   {"tag":"sku"}  // (1)
>],
>   "includeTags":"sku"  // (2)
>},
>   "facet":"uniqueBlock(_root_)"
>}
> }
> {code}  
> WDYT, [~osavrasov], [~ysee...@gmail.com]?
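The tag matching behind proposals (1) and (2) above is simple to picture; here is a hypothetical, non-authoritative Python sketch (the function name and data shapes are my own, not Solr's) of how clauses tagged like {{#sku,color}} could be selected by tag:

```python
# Hypothetical sketch of tag-based clause selection for the proposal above:
# keys like "#sku,color" carry comma-separated tags, and an includeTags-style
# lookup keeps every clause whose tag set intersects the requested tags.
def select_clauses(filters, include_tags):
    selected = []
    for clause in filters:
        if isinstance(clause, str):
            continue  # untagged plain query string, e.g. "scope:sku"
        # each tagged clause is a one-entry dict: {"#tag1,tag2": "query"}
        (key, query), = clause.items()
        tags = set(key.lstrip("#").split(","))
        if tags & set(include_tags):
            selected.append(query)
    return selected

filters = [
    "scope:sku",
    {"#sku,color": "color:black"},
    {"#sku,size": "size:L"},
]

sku_clauses = select_clauses(filters, ["sku"])      # both tagged clauses
color_clauses = select_clauses(filters, ["color"])  # only the color clause
```

Referring to {{"sku"}} selects both child clauses, while excluding or including {{"color"}} isolates just the one clause, which is exactly the granularity the proposal is after.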



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For addit

[jira] [Commented] (SOLR-12490) referring/excluding clauses from JSON query DSL in JSON facets.

2018-06-15 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514420#comment-16514420
 ] 

Mikhail Khludnev commented on SOLR-12490:
-

Here is a lower-impact approach, prompted by a chat with [~osavrasov]. The 
proposal is to introduce {{json.queries}}: it is like an arbitrary 
{{json.param}}, but it is parsed with the query DSL 

{code}
{
"query" : {
  "#top":{
  "parent": {
  "query": "sku-title:foo",
  "filters" : "$childFq", // non-json old style param reference 
  "which": "scope:product"
   }
}
}, // like .param but parsed with query dsl syntax 
"queries":{
 "childFq":[{ "#color" :"color:black" },
{ "#size" : "size:L" }]
},
"facet":{
   "sku_colors_in_prods":{
  "type" : "terms",
  "field" : "color",
  "domain" : {
   "excludeTags":["top",   // we need to drop top-level parent query
  "color"], // excluding one child filter clause
   "filter":[ 
  {"param":"childFq"}  // referring to .queries.childFq
   ]
   },
"facet": { // counting products
  "prod_count":"uniqueBlock(_root_)"
   }
   }
}
}
{code}

> referring/excluding clauses from JSON query DSL in JSON facets. 
> 
>
> Key: SOLR-12490
> URL: https://issues.apache.org/jira/browse/SOLR-12490
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting
>Reporter: Mikhail Khludnev
>Priority: Major
>
> It's a spin-off from the 
> [discussion|https://issues.apache.org/jira/browse/SOLR-9685?focusedCommentId=16508720&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16508720].
>  
> h2. Problem
> # after SOLR-9685 we can tag separate clauses in hairy queries such as 
> {{parent}} and {{bool}}
> # we can {{domain.excludeTags}}
> # we are looking for child faceting with exclusions, see SOLR-9510, SOLR-8998 
>
> # but we can refer only to separate params in {{domain.filter}}; it's not 
> possible to refer to separate clauses
> h2. Proposal 
> # tag child clauses multiple times
> {code}
> {
> "query" : {
>   "#top":{
>   "parent": {
>   "query": "sku-title:foo",
>   "filters" : [
>   "scope:sku",
> { "#sku,color" :  "color:black" }, // multiple tags
> { "#sku,size" : "size:L" }
> ],
>   "which": "scope:product"
>}
> }
> }
> }
> {code} 
> # refer to sku clauses, either by 
> ## (1) {{domain.filter.tag}} in addition to {{param}}, or
> ## (2) {{domain.includeTags}} mimicking {{excludeTags}}  
> {code}
> "facet":{
>   "sku_colors_in_prods":{
>   "type" : "terms",
>   "field" : "color",
>"domain" : {
>   "excludeTags":["top","color"],   // we need to drop top-level parent query
>   "filter":[ 
>   {"tag":"sku"}  // (1)
>],
>   "includeTags":"sku"  // (2)
>},
>   "facet":"uniqueBlock(_root_)"
>}
> }
> {code}  
> WDYT, [~osavrasov], [~ysee...@gmail.com]?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22259 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22259/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([EB8A523303CF771:6D7393A1A9F3845C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack T

[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+14) - Build # 635 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/635/
Java: 64bit/jdk-11-ea+14 -XX:-UseCompressedOops -XX:+UseSerialGC

13 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/102)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10001_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"152909818334647",   "replicas":{ 
"core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":17240, 
  "node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}, "core_node2":{   
"core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":17240,   "node_name":"127.0.0.1:10001_solr",  
 "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":1.605600118637085E-5,   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"152909818340749",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":13740,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.2796372175216675E-5, 
  "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"152909818340732",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}, "core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":23980,   "node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.2333115339279175E-5, 
  "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/102)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "auto

[jira] [Assigned] (SOLR-12398) Make JSON Facet API support Heatmap Facet

2018-06-15 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12398:
---

Assignee: David Smiley

> Make JSON Facet API support Heatmap Facet
> -
>
> Key: SOLR-12398
> URL: https://issues.apache.org/jira/browse/SOLR-12398
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, JSON Request API, spatial
>Reporter: Jaime Yap
>Assignee: David Smiley
>Priority: Major
>  Labels: heatmap
>
> The JSON query Facet API does not support heatmap facets. For companies that 
> have standardized on generating queries for the JSON query API, it is a 
> major wart to also have to support falling back to the param-encoding API in 
> order to make use of them.
> More importantly, given its more natural support for nested 
> subfacets, the JSON query Facet API would be able to compute more interesting 
> heatmap layers for each facet bucket, without resorting to the older (and 
> much more awkward) facet pivot syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 828 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/828/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.4/5/consoleText

[repro] Revision: 0a1fe1ed7d9e7a43bb1820caf205fff1934965dd

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=712BB0A4B11EF4A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Asia/Tokyo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=PeerSyncTest -Dtests.method=test 
-Dtests.seed=712BB0A4B11EF4A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-ZA -Dtests.timezone=Indian/Christmas -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=712BB0A4B11EF4A -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=mt -Dtests.timezone=Africa/Lome -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 0a1fe1ed7d9e7a43bb1820caf205fff1934965dd

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   PeerSyncTest
[repro]   SolrRrdBackendFactoryTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.PeerSyncTest|*.SolrRrdBackendFactoryTest|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.seed=712BB0A4B11EF4A -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-ZA -Dtests.timezone=Indian/Christmas -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 4514 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   1/5 failed: org.apache.solr.update.PeerSyncTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-15 Thread Mike Wingert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514323#comment-16514323
 ] 

Mike Wingert commented on SOLR-12458:
-

Updated to fix the "Check Licenses" failure

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-15 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: SOLR-12458.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-15 Thread Mike Wingert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514307#comment-16514307
 ] 

Mike Wingert commented on SOLR-12458:
-

I added a new patch with tests, and added support for storing the transaction log in ADLS.

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-15 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: SOLR-12458.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2564 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2564/

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10006_solr, 
127.0.0.1:10005_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/26)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10005_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10006_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"1529105880681551450",   "replicas":{ 
"core_node1":{   "core":"testMixedBounds_collection_shard1_replica_n1", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":495,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":257740,
   "node_name":"127.0.0.1:10005_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":495,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":257740,   
"node_name":"127.0.0.1:10006_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529105880682371200",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10005_solr",   
"base_url":"http://127.0.0.1:10005/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}, "core_node9":{   
"core":"testMixedBounds_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10006_solr",   
"base_url":"http://127.0.0.1:10006/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529105880682046900",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10006_solr",   
"base_url":"http://127.0.0.1:10006/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}, "core_node8":{   
"core":"testMixedBounds_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10005_solr",   
"base_url":"http://127.0.0.1:10005/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10006_solr, 127.0.0.1:10005_solr]
Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/26)={
  "

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2135 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2135/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10003_solr, 
127.0.0.1:10002_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/25)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"1529092676565647600",   "replicas":{ 
"core_node1":{   "core":"testMixedBounds_collection_shard1_replica_n1", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":495,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":257740,
   "node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":495,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":257740,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529092676565951800",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}, "core_node9":{   
"core":"testMixedBounds_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529092676565889450",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}, "core_node8":{   
"core":"testMixedBounds_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10003_solr, 127.0.0.1:10002_solr]
Last available state: 
DocCo

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7360 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7360/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

18 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([5EF0948C7CE6D0CC:677E2DCC53191932]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:309)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([5EF0948C7CE6D0CC:677E2DCC

[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-15 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514278#comment-16514278
 ] 

Erick Erickson commented on LUCENE-7976:


I'm not quite sure what's happening, but my two recent pushes don't seem to 
auto-add the git link to the JIRA.

Revision for master: 2519025fdafe55494448854c87e094b14f434b41

Revision for 7x: 9c4e315c1cb3495f400c179159836f568cd2989d

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (i.e., the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
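The 97.5G figure in the description follows from simple arithmetic: with a 5G max merged segment size, a segment becomes merge-eligible only once its live data would fit within half that budget (2.5G). A minimal sketch of that back-of-envelope calculation (the halving rule is an assumption inferred from the numbers quoted above, not a reading of the TieredMergePolicy source):

```java
public class MergeEligibility {
    // Assumed rule, consistent with the numbers in the discussion above:
    // a segment is eligible for merging once its live bytes would fit in
    // half of the max merged segment size.
    static double deletedGbNeeded(double segmentGb, double maxMergedGb) {
        double liveBudgetGb = maxMergedGb / 2.0;          // 2.5G for the 5G default
        return Math.max(0.0, segmentGb - liveBudgetGb);   // data that must be deleted first
    }

    public static void main(String[] args) {
        // The 100G forceMerged segment from the discussion, 5G default max size.
        System.out.println(deletedGbNeeded(100.0, 5.0)); // prints 97.5
    }
}
```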



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 827 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/827/

[...truncated 56 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2563/consoleText

[repro] Revision: 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=DAF8EE298DE6E14 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=pl-PL -Dtests.timezone=Asia/Bishkek -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testOverwriteOption -Dtests.seed=DAF8EE298DE6E14 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=be 
-Dtests.timezone=Africa/Abidjan -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2452 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.CloudSolrClientTest|*.TestCollectionStateWatchers" 
-Dtests.showOutput=onerror  -Dtests.seed=DAF8EE298DE6E14 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=be -Dtests.timezone=Africa/Abidjan 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2399 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7976:
---
Attachment: LUCENE-7976.patch

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (i.e., the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1047 - Still Failing

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1047/

No tests ran.

Build Log:
[...truncated 24156 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2227 links (1776 relative) to 3120 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml


[jira] [Commented] (SOLR-11216) Make PeerSync more robust

2018-06-15 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514244#comment-16514244
 ] 

Yonik Seeley commented on SOLR-11216:
-

{quote}
SolrQueryRequest req = new LocalSolrQueryRequest(core,
new ModifiableSolrParams()); 

request is not safely closed, is this intentional? won't this break the 
reference count mechanism?
{quote}

Yeah, it does look like it should be closed.  A SolrQueryRequest grabs a 
searcher reference on-demand, so that may be why it isn't causing an issue with 
any tests (the commit command doesn't grab a searcher reference with the 
provided request).  It should be fixed anyway though.
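The unclosed-request concern above is the standard reference-counting pitfall: a resource whose close() releases a reference leaks if any code path skips the release. A self-contained sketch of the pattern (RefCountedResource is a stand-in for the shape of the problem, not Solr's actual SolrQueryRequest API):

```java
public class RefCountedResource implements AutoCloseable {
    // A stand-in for a reference-counted object such as a searcher reference;
    // NOT Solr's actual API, just the shape of the leak being discussed.
    private int refCount = 1;            // one reference taken on construction

    public int refCount() { return refCount; }

    @Override
    public void close() { refCount--; }  // every acquisition needs this release

    public static void main(String[] args) {
        RefCountedResource req = new RefCountedResource();
        try {
            // ... code that uses the request and may throw ...
        } finally {
            req.close();                 // the fix: always release, even on failure
        }
        System.out.println(req.refCount()); // prints 0
    }
}
```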



> Make PeerSync more robust
> -
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> First of all, I will change the issue's title to a better name when I have one.
> When digging into SOLR-10126, I found a case that can make peerSync fail:
> * the leader and replica receive updates 1 to 4
> * the replica stops
> * the replica misses updates 5, 6
> * the replica starts recovery
> ## the replica buffers updates 7, 8
> ## the replica requests versions from the leader
> ## at the same time the leader receives update 9, so it returns updates from 1 
> to 9 (for the versions request) when the replica gets recent versions (so it 
> will be 1,2,3,4,5,6,7,8,9)
> ## the replica does peerSync and requests updates 5, 6, 9 from the leader
> ## the replica applies updates 5, 6, 9. Its index does not have updates 7, 8, 
> and maxVersionSpecified for the fingerprint is 9, so the fingerprint 
> comparison will fail
> My idea here is: why does the replica request update 9 (step 6) when it knows 
> that updates with lower versions (updates 7, 8) are in its buffering tlog? 
> Should we request only updates lower than the lowest update in its buffering 
> tlog (< 7)?
> Someone may ask: what if the replica never receives update 9? In that case, the 
> leader will put the replica into the LIR state, so the replica will run the 
> recovery process again.
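The proposal in the description reduces to a filter over the leader's recent versions: request only the missing versions that fall below the lowest version already sitting in the buffering tlog. A minimal sketch using the numbers from the scenario above (the method name is illustrative, not Solr's actual PeerSync API):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PeerSyncSketch {
    // Versions to request = leader versions we don't have, capped below the
    // lowest version in our buffering tlog (those are applied on replay).
    static Set<Long> versionsToRequest(List<Long> leaderVersions,
                                       Set<Long> haveVersions,
                                       Set<Long> bufferedVersions) {
        long lowestBuffered = bufferedVersions.stream()
                .min(Long::compare).orElse(Long.MAX_VALUE);
        Set<Long> result = new TreeSet<>();
        for (long v : leaderVersions) {
            if (v < lowestBuffered && !haveVersions.contains(v)) {
                result.add(v);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Scenario from the description: the replica has 1-4, buffers 7-8,
        // and the leader reports 1-9. The old behavior requested {5, 6, 9};
        // the proposal requests only {5, 6}. Update 9 either arrives later or,
        // per the description, the leader puts the replica into LIR and
        // recovery runs again.
        System.out.println(versionsToRequest(
                List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L),
                Set.of(1L, 2L, 3L, 4L),
                Set.of(7L, 8L))); // prints [5, 6]
    }
}
```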



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22258 - Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22258/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10001_solr, 
127.0.0.1:1_solr] Last available state: 
DocCollection(testMixedBounds_collection//clusterstate.json/145)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   "core":"testMixedBounds_collection_shard2_replica_n3", 
  "SEARCHER.searcher.maxDoc":0,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":10240, 
  "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":9.5367431640625E-6,   
"SEARCHER.searcher.numDocs":0}, "core_node4":{   
"core":"testMixedBounds_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:1_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"0-7fff",   "state":"active"}, "shard1":{   
"stateTimestamp":"1529085800204768250",   "replicas":{ 
"core_node1":{   "core":"testMixedBounds_collection_shard1_replica_n1", 
  "leader":"true",   "SEARCHER.searcher.maxDoc":495,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":257740,
   "node_name":"127.0.0.1:10001_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}, "core_node2":{   
"core":"testMixedBounds_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":495,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":257740,   
"node_name":"127.0.0.1:1_solr",   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":2.4003908038139343E-4,   
"SEARCHER.searcher.numDocs":495}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1529085800205197750",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}, "core_node9":{   
"core":"testMixedBounds_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":247,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":133740,   
"node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.245550811290741E-4,   
"SEARCHER.searcher.numDocs":247}}}, "shard1_0":{   "parent":"shard1",   
"stateTimestamp":"1529085800205103200",   "range":"8000-bfff",  
 "state":"active",   "replicas":{ "core_node7":{   
"leader":"true",   
"core":"testMixedBounds_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:10001_solr",   
"base_url":"http://127.0.0.1:10001/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}, "core_node8":{   
"core":"testMixedBounds_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":248,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":144480,   
"node_name":"127.0.0.1:1_solr",   
"base_url":"http://127.0.0.1:1/solr";,   "state":"active",   
"type":"NRT",   "INDEX.sizeInGB":1.3455748558044434E-4,   
"SEARCHER.searcher.numDocs":248}

Stack Trace:
java.lang.AssertionError: failed to create testMixedBounds_collection
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:1_solr]
Last available state: 
DocCollection(testMixedBounds

[jira] [Commented] (LUCENE-8004) IndexUpgraderTool should rewrite segments rather than forceMerge

2018-06-15 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514163#comment-16514163
 ] 

Erick Erickson commented on LUCENE-8004:


Made it not a blocker since I actually looked at the code and the call is 
forceMerge with maxSegments = 1. So this will still work as it does now even 
after I check in LUCENE-7976 today.

Once we get through JIRAs like LUCENE-8264 and SOLR-12259, making this tool 
work without creating one segment will, I hope, be simple so we can revisit 
this then.

> IndexUpgraderTool should rewrite segments rather than forceMerge
> 
>
> Key: LUCENE-8004
> URL: https://issues.apache.org/jira/browse/LUCENE-8004
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> Spinoff from LUCENE-7976. We help users get themselves into a corner by using 
> forceMerge on an index to rewrite all segments in the current Lucene format. 
> We should rewrite each individual segment instead. This would also help with 
> upgrading X-2->X-1, then X-1->X.
> Of course the preferred method is to re-index from scratch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8004) IndexUpgraderTool should rewrite segments rather than forceMerge

2018-06-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-8004:
---
Priority: Major  (was: Blocker)

> IndexUpgraderTool should rewrite segments rather than forceMerge
> 
>
> Key: LUCENE-8004
> URL: https://issues.apache.org/jira/browse/LUCENE-8004
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> Spinoff from LUCENE-7976. We help users get themselves into a corner by using 
> forceMerge on an index to rewrite all segments in the current Lucene format. 
> We should rewrite each individual segment instead. This would also help with 
> upgrading X-2->X-1, then X-1->X.
> Of course the preferred method is to re-index from scratch.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 668 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/668/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([E2DFEFC449422B49:DB51568466BDE2B7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([E2DFE

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 78 - Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/78/

8 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete

Error Message:
Error from server at 
http://127.0.0.1:54712/solr/testcollection_shard1_replica_n2: Expected mime 
type application/octet-stream but got text/html. Error 404: 
Can not find: /solr/testcollection_shard1_replica_n2/update 
HTTP ERROR 404. Problem accessing 
/solr/testcollection_shard1_replica_n2/update. Reason: Can not find: 
/solr/testcollection_shard1_replica_n2/update (Powered by Jetty // 9.4.10.v20180503)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:54712/solr/testcollection_shard1_replica_n2: 
Expected mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/testcollection_shard1_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason:
Can not find: 
/solr/testcollection_shard1_replica_n2/update (Powered by Jetty // 9.4.10.v20180503)




at 
__randomizedtesting.SeedInfo.seed([B4380E3F99975242:17C2A09A1E7FB8E7]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1015)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete(TestCollectionsAPIViaSolrCloudCluster.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.ran

Re: Status of solr tests

2018-06-15 Thread Erick Erickson
Mark (and everyone).

I'm trying to be somewhat conservative about what I BadApple, at this
point it's only things that have failed every week for the last 4.
Part of that conservatism is to avoid BadApple'ing tests that are
failing and _should_ fail.

I'm explicitly _not_ delving into any of the causes at all at this
point, it's overwhelming until we reduce the noise as everyone knows.

So please feel totally free to BadApple anything you know is flakey,
it won't intrude on my turf ;)

And since I realized I can also report tests that have _not_ failed in
a month that _are_ BadApple'd, we can be a little freer with
BadApple'ing tests since there's a mechanism for un-annotating them
without a lot of tedious effort.

FWIW.

On Fri, Jun 15, 2018 at 9:09 AM, Mark Miller  wrote:
> There is an okay chance I'm going to start making some improvements here as
> well. I've been working on a very stable set of tests on my starburst branch
> and will slowly bring in test fixes over time (I've already been making some
> on that branch for important tests). We should currently be defaulting to
> tests.badapples=false on all solr test runs - it's a joke to try and get a
> clean run otherwise, and even then somehow 4 or 5 tests that fail somewhat
> commonly have so far avoided Erick's @BadApple hack and slash. They are bad
> appled on my dev branch now, but that is currently where any time I have is
> spent rather than on the main dev branches.
>
> Also, too many flakey tests are introduced because devs are not beasting or
> beasting well before committing new heavy tests. Perhaps we could add some
> docs around that.
>
> We have built-in beasting support; we need to emphasize that a couple of
> passes on a new test are not sufficient to test its quality.
>
> - Mark
>
> On Fri, Jun 15, 2018 at 9:46 AM Erick Erickson 
> wrote:
>>
>> (Sigh) All very true. You're not alone in your frustration.
>>
>> I've been trying to at least BadApple tests that fail consistently, so
>> another option could be to disable BadApple'd tests. My hope has been
>> to get to the point of being able to reliably get clean runs, at least
>> when BadApple'd tests are disabled.
>>
>> From that point I want to draw a line in the sand and immediately
>> address tests that fail that are _not_ BadApple'd. At least then we'll
>> stop getting _worse_. And then we can work on the BadApple'd tests.
>> But as David says, that's not going to be any time soon. It's been a
>> couple of months that I've been trying to just get the tests
>> BadApple'd without even trying to fix any of them.
>>
>> It's particularly pernicious because with all the noise we don't see
>> failures we _should_ see.
>>
>> So I don't have any good short-term answer either. We've built up a
>> very large technical debt in the testing. The first step is to stop
>> adding more debt, which is what I've been working on so far. And
>> that's the easy part
>>
>> Siigghh
>>
>> Erick
>>
>>
>> On Fri, Jun 15, 2018 at 5:29 AM, David Smiley 
>> wrote:
>> > (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
>> > Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and
>> > are
>> > trying to improve the stability of the Solr tests but even
>> > optimistically
>> > the practical reality is that it won't be good enough anytime soon.
>> > When we
>> > get there, we can reverse this.
>> >
>> > On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer
>> > 
>> > wrote:
>> >>
>> >> folks,
>> >>
>> >> I got more active working on IndexWriter and Soft-Deletes etc. in the
>> >> last couple of weeks. It's a blast again and I really enjoy it. The
>> >> one thing that is IMO not acceptable is the status of solr tests. I
>> >> tried so many times to get them passing on several different OSs but
>> it seems this is pretty hopeless. It gets even worse: the
>> >> Lucene/Solr QA job literally marks every ticket I attach a patch to as
>> >> `-1` because of arbitrary solr tests, here is an example:
>> >>
>> >> || Reason || Tests ||
>> >> | Failed junit tests | solr.rest.TestManagedResourceStorage |
>> >> |   | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
>> >> |   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
>> >> |   | solr.client.solrj.impl.CloudSolrClientTest |
>> >> |   | solr.common.util.TestJsonRecordReader |
>> >>
>> >> Speaking to other committers I hear we should just disable this job.
>> >> Sorry, WTF?
>> >>
>> These tests seem to fail all the time, randomly, and over and over
>> again. This renders them entirely useless to me. I even invest
>> time (wrong, I invested) looking into whether they are caused by me or
>> whether I can do something about it. Yet someone could call me out for
>> being responsible for them as a committer, yes I am, hence this email. I
>> don't think I am obliged to fix them. These projects have 50+
>> committers and having a shared codebase doesn't mean everybody has to 
>> >>

[jira] [Commented] (SOLR-11216) Make PeerSync more robust

2018-06-15 Thread hamada (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514102#comment-16514102
 ] 

hamada commented on SOLR-11216:
---

General review comments, some not related to the patch but relevant in general.

PeerSyncWithLeader 

Use startingVersions.isEmpty() rather than size() == 0; the same applies at line 215.

The following try/finally can return, in which case proc is not closed. Is this 
intentional? If so, please add a comment to that effect.

Line 299: consider sizing the List properly to avoid the garbage side effect of 
growing the list; the same applies to line 317.

 

HttpShardHandler.java 

Replace if (urls.size()==0) { with if (urls.isEmpty()) {.

 

RecoveryStrategy.java

Lines 223, 235, and 613 (on 
core.getDeletionPolicy().getLatestCommit().getGeneration()) may result in an 
NPE.

Line 436: 

SolrQueryRequest req = new LocalSolrQueryRequest(core,
 new ModifiableSolrParams()); 

The request is not safely closed. Is this intentional? Won't this break the 
reference count mechanism?
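The proc/req closing concerns above are the classic early-return leak; a minimal try-with-resources sketch is below (the Proc class here is a hypothetical stand-in, not the actual Solr type):

```java
// Sketch of the leak pattern flagged above, using a hypothetical AutoCloseable.
// A plain try/finally that returns mid-body is easy to get wrong;
// try-with-resources closes the resource on every exit path, including returns.
class CloseSketch {
    static int closedCount = 0;

    static class Proc implements AutoCloseable {
        int run() { return 42; }
        @Override public void close() { closedCount++; }
    }

    static int safeRun() {
        try (Proc proc = new Proc()) {
            return proc.run(); // proc.close() still runs before the method returns
        }
    }

    public static void main(String[] args) {
        System.out.println(safeRun());
    }
}
```

For reference-counted objects like SolrQueryRequest, the same discipline applies: every acquisition path needs a matching close.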

 

 

> Make PeerSync more robust
> -
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> First of all, I will change the issue's title to a better name when I have one.
> When digging into SOLR-10126. I found a case that can make peerSync fail.
> * leader and replica receive update from 1 to 4
> * replica stop
> * replica miss updates 5, 6
> * replica start recovery
> ## replica buffer updates 7, 8
> ## replica request versions from leader, 
> ## in the same time leader receive update 9, so it will return updates from 1 
> to 9 (for request versions) when replica get recent versions ( so it will be 
> 1,2,3,4,5,6,7,8,9 )
> ## replica do peersync and request updates 5, 6, 9 from leader 
> ## replica apply updates 5, 6, 9. Its index does not have update 7, 8 and 
> maxVersionSpecified for fingerprint is 9, therefore compare fingerprint will 
> fail
> My idea here is: why does the replica request update 9 (step 6) when it knows 
> that updates with lower versions (updates 7, 8) are in its buffering tlog? 
> Should we request only updates lower than the lowest update in its buffering 
> tlog (< 7)?
> Someone may ask: what if the replica never receives update 9? In that case, 
> the leader will put the replica into LIR state, so the replica will run the 
> recovery process again.
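The proposed rule (request only missing updates below the lowest buffered version) can be sketched as follows; the class and method names are hypothetical illustrations, not Solr's actual PeerSync code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of the version-selection idea from the description above.
class PeerSyncSketch {
    // Given the versions the replica already has, the versions buffered in its
    // tlog during recovery, and the leader's recent versions, pick only the
    // missing versions BELOW the lowest buffered one. Higher versions
    // (7, 8, 9 in the scenario) are applied from the buffered tlog instead,
    // so the fingerprint comparison no longer sees a gap.
    static List<Integer> versionsToRequest(List<Integer> have,
                                           List<Integer> buffered,
                                           List<Integer> leaderRecent) {
        int lowestBuffered = buffered.isEmpty()
                ? Integer.MAX_VALUE : Collections.min(buffered);
        List<Integer> request = new ArrayList<>();
        for (int v : leaderRecent) {
            if (!have.contains(v) && !buffered.contains(v) && v < lowestBuffered) {
                request.add(v);
            }
        }
        return request;
    }

    public static void main(String[] args) {
        // Scenario from the issue: replica has 1-4, buffers 7-8, leader has 1-9.
        System.out.println(versionsToRequest(
                List.of(1, 2, 3, 4), List.of(7, 8),
                List.of(1, 2, 3, 4, 5, 6, 7, 8, 9))); // prints [5, 6]
    }
}
```

Note that update 9 is deliberately not requested: it arrives via the normal buffered path or, if it never does, LIR triggers recovery again.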






[jira] [Commented] (SOLR-11558) It would be nice if the Graph section of the Cloud tab in the Admin UI could give some more information about the replicas of a collection

2018-06-15 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514092#comment-16514092
 ] 

Erick Erickson commented on SOLR-11558:
---

Sounds good, let me know when you have something since I'm js/UI-challenged ;)

Looking at this a little, note Varun's comments on SOLR-11578, mainly that we 
don't really need to repeat redundant stuff. I.e. when you mouse over a replica 
you know already whether it's a leader or active etc. I don't have strong 
feelings about whether we should display more or not though so whatever you 
think appropriate and we can debate ;)

I think it makes sense to put things like shard ranges in a shard tooltip and 
collection-level stuff over the collection, but again whatever you think is 
easiest to use.



> It would be nice if the Graph section of the Cloud tab in the Admin UI could 
> give some more information about the replicas of a collection
> --
>
> Key: SOLR-11558
> URL: https://issues.apache.org/jira/browse/SOLR-11558
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
>Assignee: Erick Erickson
>Priority: Minor
>
> Right now it lists the nodes where they are hosted, the state and if they are 
> or not leader. I usually find the need to see more, like the replica and core 
> names and the replica type, and I find myself moving between this view and 
> the “tree” view. 
> I thought about two options:
> # A mouse over action that lists the additional information (after some time 
> of holding the mouse pointer on top of the replica)
> # Modify the click action to display this information (right now the click 
> sends you to the admin UI of that particular replica)
> The same could be done to display some extra information of the shard (like 
> active/inactive, routing range) and the collection (autoAddReplicas, 
> maxShardsPerNode, configset, etc)






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2134 - Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2134/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition

Error Message:
Error from server at http://127.0.0.1:36933/solr: Could not fully remove 
collection: movereplicatest_coll4

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36933/solr: Could not fully remove collection: 
movereplicatest_coll4
at 
__randomizedtesting.SeedInfo.seed([C224CA1E6F97D880:C8744568AF97B921]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.MoveReplicaHDFSFailoverTest.testOldReplicaIsDeletedInRaceCondition(MoveReplicaHDFSFailoverTest.java:195)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[jira] [Commented] (SOLR-11558) It would be nice if the Graph section of the Cloud tab in the Admin UI could give some more information about the replicas of a collection

2018-06-15 Thread Kevin Cowan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514079#comment-16514079
 ] 

Kevin Cowan commented on SOLR-11558:


[~erickerickson]   I will be happy to work on this in the near future. :)

> It would be nice if the Graph section of the Cloud tab in the Admin UI could 
> give some more information about the replicas of a collection
> --
>
> Key: SOLR-11558
> URL: https://issues.apache.org/jira/browse/SOLR-11558
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Tomás Fernández Löbbe
>Assignee: Erick Erickson
>Priority: Minor
>
> Right now it lists the nodes where they are hosted, the state and if they are 
> or not leader. I usually find the need to see more, like the replica and core 
> names and the replica type, and I find myself moving between this view and 
> the “tree” view. 
> I thought about two options:
> # A mouse over action that lists the additional information (after some time 
> of holding the mouse pointer on top of the replica)
> # Modify the click action to display this information (right now the click 
> sends you to the admin UI of that particular replica)
> The same could be done to display some extra information of the shard (like 
> active/inactive, routing range) and the collection (autoAddReplicas, 
> maxShardsPerNode, configset, etc)






[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-06-15 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514064#comment-16514064
 ] 

Michael McCandless commented on LUCENE-7976:


OK I just chatted w/ [~erickerickson] and indeed I was simply confused – the 
current TMP already has logic to not run multiple "max sized" merges, and so 
this patch isn't changing that.

 

+1 to push!

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, SOLR-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate, (think many 
> TB) solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name, suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.
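The proposed selection rule can be sketched numerically; the threshold parameter name and class are placeholders for illustration, not an actual TieredMergePolicy setting:

```java
// Numeric sketch of the proposed eligibility rule: any segment whose deleted
// percentage exceeds the threshold is merged or rewritten regardless of size.
class MergeEligibilitySketch {
    // deletedDocs / totalDocs, expressed as a percentage of the segment.
    static boolean eligible(long totalDocs, long deletedDocs,
                            double deletePctAllowed) {
        double pctDeleted = 100.0 * deletedDocs / totalDocs;
        return pctDeleted > deletePctAllowed;
    }

    public static void main(String[] args) {
        // 25% deleted at a 20% threshold: eligible regardless of segment size.
        System.out.println(eligible(1_000_000, 250_000, 20.0)); // prints true
    }
}
```

With the default threshold of 100 this rule never fires on its own, preserving today's behavior; lowering it trades extra I/O for bounded deleted-doc overhead even in forceMerged segments.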






[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified

2018-06-15 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514060#comment-16514060
 ] 

Varun Thacker commented on SOLR-11676:
--

Patch which keeps replicationFactor and nrtReplicas in sync . I'll see if I can 
improve where the checks are placed tomorrow

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-11676.patch, SOLR-11676.patch, SOLR-11676.patch, 
> SOLR-11676.patch
>
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}
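The behavior the patch aims for can be sketched in a few lines. The sketch below is a hypothetical standalone model, not the actual CollectionsHandler code — the method name and error message are made up:

```java
import java.util.*;

// Hypothetical model of the fix discussed in SOLR-11676: resolve
// replicationFactor and nrtReplicas to the same value at collection creation,
// instead of silently defaulting nrtReplicas to 1 when only
// replicationFactor was given.
public class ReplicaParams {
    static Map<String, Integer> resolve(Integer replicationFactor, Integer nrtReplicas) {
        // Reject conflicting explicit values rather than pick a winner.
        if (replicationFactor != null && nrtReplicas != null
                && !replicationFactor.equals(nrtReplicas)) {
            throw new IllegalArgumentException(
                "replicationFactor and nrtReplicas must agree when both are given");
        }
        int resolved = nrtReplicas != null ? nrtReplicas
                     : (replicationFactor != null ? replicationFactor : 1);
        Map<String, Integer> out = new LinkedHashMap<>();
        out.put("replicationFactor", resolved);
        out.put("nrtReplicas", resolved);
        return out;
    }

    public static void main(String[] args) {
        // replicationFactor=2, nrtReplicas unspecified -> both resolve to 2
        System.out.println(resolve(2, null)); // prints {replicationFactor=2, nrtReplicas=2}
    }
}
```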






[jira] [Updated] (SOLR-11676) nrt replicas is always 1 when not specified

2018-06-15 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11676:
-
Attachment: SOLR-11676.patch

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-11676.patch, SOLR-11676.patch, SOLR-11676.patch, 
> SOLR-11676.patch
>
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}






Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Varun Thacker
So let's keep *collection.configName* and *replicationFactor*.

If we were to think of this API today, would MODIFYCOLLECTION be where we
still put it?

It almost feels like a collection setting. Maybe Collection Properties
(SOLR-11960) is where it should live?


On Fri, Jun 15, 2018 at 4:58 PM, Erick Erickson 
wrote:

> re: collection.configName
>
> bq. Right and then basically we are giving a way for users to shoot
> themselves in the foot :)
>
> They can also delete their index files
>
> Seriously though, what if I have a bunch of collections sharing a
> configset then I need to specialize only one by _adding_ fields? I'd
> like to copy the configset to a new one and then point my collection
> at it. And with the UninvertingMergePolicy adding DV would be one such
> specialization.
>
> I've also seen time-series collections (let's say 30 days) where you
> _cannot_ reindex. But you want to modify your schema anyway. People
> have
> 1> defined a new field that's a variant of the old field
> 2> have their indexing program index to _both_ for 30 days
> 3> change the app to use the new field
> 4> change the indexing program to stop indexing to the old field
>
> Sure, the metadata for the field is still carried along but that's not
> a problem for a few fields.
>
> Point is it's dangerous to go changing your configset for an existing
> collection, sure. But I find the API a better option than having to
> manually edit your ZK nodes.
>
> FWIW
>
> On Fri, Jun 15, 2018 at 7:18 AM, Varun Thacker  wrote:
> > Hi Jan,
> >
> > I agree with how you're thinking of replicationFactor as basically being an
> > equivalent to nrtReplicas. Let's not change that.
> >
> > So is #7 the only real use for this API?
> >
> > On Fri, Jun 15, 2018 at 1:46 PM, Jan Høydahl 
> wrote:
> >>
> >> Do we have a v2 API for CREATE and MODIFYCOLLECTION? E.g.
> >>
> >> POST http://localhost:8983/api/c
> >> { modify-collection: { replicationFactor: 3 } }
> >>
> >> Perhaps we should focus on a decent v2 API and deprecate the old
> confusing
> >> one?
> >>
> >> wrt. replicationFactor / nrtReplica / pullReplicas / tlogReplicas, my
> wish
> >> is that replicationFactor keeps on living as today, only setting
> >> nrtReplicas, and is mutually exclusive to any of the three others. So
> if you
> >> have a collection with tlogReplicas defined, then modifying
> >> "replicationFactor" should throw and error. But if you only ever care
> about
> >> NRT replicas then you can keep using replicationFactor as before???
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >>
> >> 15. jun. 2018 kl. 13:22 skrev Varun Thacker :
> >>
> >> Today the Modify Collection supports the following properties to be
> >> modified
> >>
> >> maxShardsPerNode
> >> rule
> >> snitch
> >> policy
> >> collection.configName
> >> autoAddReplicas
> >> replicationFactor
> >>
> >> 1-4 seems something we should get rid of because we have the AutoScaling
> >> Policy framework?
> >>
> >> 5> Can anyone point out the use-case for this?
> >>
> >> 6> autoAddReplicas can be changed as a clusterprop and modify-collection
> >> API ? Hmm. Which one is supposed to win?
> >>
> >> 7> We need to allow a user to change replicationFactor. But how does
> this
> >> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
> >> sounds just confusing? Or allow changing all replica types ?
> >>
> >>
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Status of solr tests

2018-06-15 Thread Mark Miller
There is an okay chance I'm going to start making some improvements here as
well. I've been working on a very stable set of tests on my starburst
branch and will slowly bring in test fixes over time (I've already been
making some on that branch for important tests). We should currently be
defaulting to tests.badapples=false on all solr test runs - it's a joke to
try and get a clean run otherwise, and even then somehow 4 or 5 tests that
fail somewhat commonly have so far avoided Erick's @BadApple hack and
slash. They are bad appled on my dev branch now, but that is currently
where any time I have is spent rather than on the main dev branches.

Also, too many flakey tests are introduced because devs are not beasting or
beasting well before committing new heavy tests. Perhaps we could add some
docs around that.

We have built in beasting support, we need to emphasize that a couple
passes on a new test is not sufficient to test its quality.

- Mark

On Fri, Jun 15, 2018 at 9:46 AM Erick Erickson 
wrote:

> (Sigh) All very true. You're not alone in your frustration.
>
> I've been trying to at least BadApple tests that fail consistently, so
> another option could be to disable BadApple'd tests. My hope has been
> to get to the point of being able to reliably get clean runs, at least
> when BadApple'd tests are disabled.
>
> From that point I want to draw a line in the sand and immediately
> address tests that fail that are _not_ BadApple'd. At least then we'll
> stop getting _worse_. And then we can work on the BadApple'd tests.
> But as David says, that's not going to be any time soon. It's been a
> couple of months that I've been trying to just get the tests
> BadApple'd without even trying to fix any of them.
>
> It's particularly pernicious because with all the noise we don't see
> failures we _should_ see.
>
> So I don't have any good short-term answer either. We've built up a
> very large technical debt in the testing. The first step is to stop
> adding more debt, which is what I've been working on so far. And
> that's the easy part
>
> Siigghh
>
> Erick
>
>
> On Fri, Jun 15, 2018 at 5:29 AM, David Smiley 
> wrote:
> > (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
> > Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and
> are
> > trying to improve the stability of the Solr tests but even optimistically
> > the practical reality is that it won't be good enough anytime soon.
> When we
> > get there, we can reverse this.
> >
> > On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer <
> simon.willna...@gmail.com>
> > wrote:
> >>
> >> folks,
> >>
> >> I got more active working on IndexWriter and Soft-Deletes etc. in the
> >> last couple of weeks. It's a blast again and I really enjoy it. The
> >> one thing that is IMO not acceptable is the status of solr tests. I
> >> tried so many times to get them passing on several different OSs but
> >> it seems this is pretty hopeless. It gets even worse: the
> >> Lucene/Solr QA job literally marks every ticket I attach a patch to as
> >> `-1` because of arbitrary solr tests, here is an example:
> >>
> >> || Reason || Tests ||
> >> | Failed junit tests | solr.rest.TestManagedResourceStorage |
> >> |   | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
> >> |   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
> >> |   | solr.client.solrj.impl.CloudSolrClientTest |
> >> |   | solr.common.util.TestJsonRecordReader |
> >>
> >> Speaking to other committers I hear we should just disable this job.
> >> Sorry, WTF?
> >>
> >> These tests seem to fail all the time, randomly and over and over
> >> again. This renders the test as entirely useless to me. I even invest
> >> time (wrong, I invested) looking into it if they are caused by me or
> >> if I can do something about it. Yet, someone could call me out for
> >> being responsible for them as a committer, yes I am hence this email. I
> >> don't think I am obliged to fix them. These projects have 50+
> >> committers and having a shared codebase doesn't mean everybody has to
> >> take care of everything. I think we are at the point where if I work
> >> on Lucene I won't run solr tests at all otherwise there won't be any
> >> progress. On the other hand solr tests never pass I wonder if the solr
> >> code-base gets changes nevertheless? That is again a terrible
> >> situation.
> >>
> >> I spoke to varun and  anshum during buzzwords if they can give me some
> >> hints what I am doing wrong but it seems like the way it is. I feel
> >> terrible pushing stuff to our repo still seeing our tests fail. I get
> >> ~15 build failures from solr tests a day I am not the only one that
> >> has mail filters to archive them if there isn't a lucene tests in the
> >> failures.
> >>
> >> This is a terrible state folks, how do we fix it? It's the lucene land
> >> that get much love on the testing end but that also requires more work
> >> on it, I expect solr to do the same. That at the same time requires
> >> stop pushing new stuff until the situation is under control.

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1912 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1912/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

12 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([38D378B987002066:6B6A3A096511B59C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMixedBounds

Error Message:
failed to create testMixedBounds_collection Live Nodes: [127.0.0.1:10001_solr, 
127.0.0.1:1_solr] Last available state: 
DocCollection(testMixedBounds_collec

[jira] [Resolved] (SOLR-10428) CloudSolrClient: Qerying multiple collection aliases leads to SolrException: Collection not found

2018-06-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-10428.
---
   Resolution: Fixed
Fix Version/s: 7.2

> CloudSolrClient: Qerying multiple collection aliases leads to SolrException: 
> Collection not found
> -
>
> Key: SOLR-10428
> URL: https://issues.apache.org/jira/browse/SOLR-10428
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.4, 6.4.1, 6.4.2, 6.5, 7.0
>Reporter: Philip Pock
>Priority: Minor
> Fix For: 7.2
>
>
> We have multiple collections and an alias is created for each of them. e.g.:
> alias-a -> collection-a, alias-b -> collection-b
> We search in multiple collections by passing the aliases of the collections 
> in the collections parameter.
> {code}solrClient.query("alias-a,alias-b", params, 
> SolrRequest.METHOD.POST){code}
> The client can't find the collection and throws an Exception. Relevant parts 
> of the stacktrace using v6.5.0:
> {noformat}
> org.apache.solr.common.SolrException: Collection not found: collection-a
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1394)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1087)
>   at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
>   at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:974)
> {noformat}
> Everything works fine with a single alias.
> I think this issue was introduced with SOLR-9784. Please see my comment below.
> {code:title=org.apache.solr.client.solrj.impl.CloudSolrClient }
> Set<String> getCollectionNames(String collection) {
>   List<String> rawCollectionsList = StrUtils.splitSmart(collection, ",", true);
>   Set<String> collectionNames = new HashSet<>();
>   for (String collectionName : rawCollectionsList) {
>     if (stateProvider.getState(collectionName) == null) {
>       // I assume that collectionName should be passed to getAlias here
>       String alias = stateProvider.getAlias(collection);
>       if (alias != null) {
>         List<String> aliasList = StrUtils.splitSmart(alias, ",", true);
>         collectionNames.addAll(aliasList);
>         continue;
>       }
>       throw new SolrException(ErrorCode.BAD_REQUEST, "Collection not found: " + collectionName);
>     }
>     collectionNames.add(collectionName);
>   }
>   return collectionNames;
> }
> {code}
> The suggested change is similar to the previous revision: 
> https://github.com/apache/lucene-solr/commit/5650939a8d41b7bad584947a2c9dcedf3774b8de#diff-c8d54eacd46180b332c86c7ae448abaeL1301
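The suggested one-character fix — looking up the alias of each individual collectionName rather than of the whole comma-separated string — can be illustrated with a standalone sketch. The alias and state lookups below are stubbed out with plain maps (the real code consults a ClusterStateProvider), so all names here are illustrative:

```java
import java.util.*;

// Standalone sketch of the fix suggested in SOLR-10428: resolve the alias of
// each individual collectionName, not of the full comma-separated "collection"
// string, whose combined value ("alias-a,alias-b") never matches any alias.
public class AliasResolver {
    static Set<String> getCollectionNames(String collection,
                                          Map<String, String> aliases,
                                          Set<String> knownCollections) {
        Set<String> collectionNames = new HashSet<>();
        for (String collectionName : collection.split(",")) {
            if (!knownCollections.contains(collectionName)) {
                // Fixed: look up the alias for this one name, not the full string.
                String alias = aliases.get(collectionName);
                if (alias != null) {
                    collectionNames.addAll(Arrays.asList(alias.split(",")));
                    continue;
                }
                throw new IllegalArgumentException("Collection not found: " + collectionName);
            }
            collectionNames.add(collectionName);
        }
        return collectionNames;
    }

    public static void main(String[] args) {
        Map<String, String> aliases = Map.of("alias-a", "collection-a",
                                             "alias-b", "collection-b");
        Set<String> known = Set.of("collection-a", "collection-b");
        // With the fix, both aliases resolve; with the original code the
        // combined-string lookup returned null and this threw.
        System.out.println(getCollectionNames("alias-a,alias-b", aliases, known));
    }
}
```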






[jira] [Resolved] (SOLR-12492) Ability to control Spellcheck for particular docs

2018-06-15 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-12492.
---
Resolution: Invalid

This is not an appropriate use of Solr's JIRA, the issue tracker is not a 
support portal. We try to reserve the JIRA system for code issues rather than 
usage questions.

Please ask the question here: solr-u...@lucene.apache.org, see: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc

If the consensus there is that there are code issues, we can reopen this JIRA 
or create a new one.

> Ability to control Spellcheck for particular docs
> -
>
> Key: SOLR-12492
> URL: https://issues.apache.org/jira/browse/SOLR-12492
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Inigo Solomon
>Priority: Minor
>
> Scenario - Let's assume we have 1 docs in solr and around 100 employees in 
> an organisation. When a user queries the solr index, we filter the results 
> based on the department field and display them to the users. Now I wanted to 
> implement the spellcheck feature on that index. If spellcheck is implemented on 
> the solr index, when a user queries it, the number of docs reported 
> will be from the whole index, while the docs displayed will be fewer, as we have 
> filtered based on department.
>  
> Suggest me a way to fix this.






[jira] [Resolved] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-06-15 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-8106.

Resolution: Fixed

Thanks [~jpountz], resolved.

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106-part3.patch, 
> LUCENE-8106-part4.patch, LUCENE-8106.part5.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}






Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Erick Erickson
re: collection.configName

bq. Right and then basically we are giving a way for users to shoot
themselves in the foot :)

They can also delete their index files

Seriously though, what if I have a bunch of collections sharing a
configset then I need to specialize only one by _adding_ fields? I'd
like to copy the configset to a new one and then point my collection
at it. And with the UninvertingMergePolicy adding DV would be one such
specialization.

I've also seen time-series collections (let's say 30 days) where you
_cannot_ reindex. But you want to modify your schema anyway. People
have
1> defined a new field that's a variant of the old field
2> have their indexing program index to _both_ for 30 days
3> change the app to use the new field
4> change the indexing program to stop indexing to the old field

Sure, the metadata for the field is still carried along but that's not
a problem for a few fields.

Point is it's dangerous to go changing your configset for an existing
collection, sure. But I find the API a better option than having to
manually edit your ZK nodes.

FWIW

On Fri, Jun 15, 2018 at 7:18 AM, Varun Thacker  wrote:
> Hi Jan,
>
> I agree with how you're thinking of replicationFactor as basically being an
> equivalent to nrtReplicas. Let's not change that.
>
> So is #7 the only real use for this API?
>
> On Fri, Jun 15, 2018 at 1:46 PM, Jan Høydahl  wrote:
>>
>> Do we have a v2 API for CREATE and MODIFYCOLLECTION? E.g.
>>
>> POST http://localhost:8983/api/c
>> { modify-collection: { replicationFactor: 3 } }
>>
>> Perhaps we should focus on a decent v2 API and deprecate the old confusing
>> one?
>>
>> wrt. replicationFactor / nrtReplica / pullReplicas / tlogReplicas, my wish
>> is that replicationFactor keeps on living as today, only setting
>> nrtReplicas, and is mutually exclusive to any of the three others. So if you
>> have a collection with tlogReplicas defined, then modifying
>> "replicationFactor" should throw and error. But if you only ever care about
>> NRT replicas then you can keep using replicationFactor as before???
>>
>> --
>> Jan Høydahl, search solution architect
>> Cominvent AS - www.cominvent.com
>>
>> 15. jun. 2018 kl. 13:22 skrev Varun Thacker :
>>
>> Today the Modify Collection supports the following properties to be
>> modified
>>
>> maxShardsPerNode
>> rule
>> snitch
>> policy
>> collection.configName
>> autoAddReplicas
>> replicationFactor
>>
>> 1-4 seems something we should get rid of because we have the AutoScaling
>> Policy framework?
>>
>> 5> Can anyone point out the use-case for this?
>>
>> 6> autoAddReplicas can be changed as a clusterprop and modify-collection
>> API ? Hmm. Which one is supposed to win?
>>
>> 7> We need to allow a user to change replicationFactor. But how does this
>> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
>> sounds just confusing? Or allow changing all replica types ?
>>
>>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Status of solr tests

2018-06-15 Thread Erick Erickson
(Sigh) All very true. You're not alone in your frustration.

I've been trying to at least BadApple tests that fail consistently, so
another option could be to disable BadApple'd tests. My hope has been
to get to the point of being able to reliably get clean runs, at least
when BadApple'd tests are disabled.

From that point I want to draw a line in the sand and immediately
address tests that fail that are _not_ BadApple'd. At least then we'll
stop getting _worse_. And then we can work on the BadApple'd tests.
But as David says, that's not going to be any time soon. It's been a
couple of months that I've been trying to just get the tests
BadApple'd without even trying to fix any of them.

It's particularly pernicious because with all the noise we don't see
failures we _should_ see.

So I don't have any good short-term answer either. We've built up a
very large technical debt in the testing. The first step is to stop
adding more debt, which is what I've been working on so far. And
that's the easy part

Siigghh

Erick


On Fri, Jun 15, 2018 at 5:29 AM, David Smiley  wrote:
> (Sigh) I sympathize with your points Simon.  I'm +1 to modify the
> Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and are
> trying to improve the stability of the Solr tests but even optimistically
> the practical reality is that it won't be good enough anytime soon.  When we
> get there, we can reverse this.
>
> On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer 
> wrote:
>>
>> folks,
>>
>> I got more active working on IndexWriter and Soft-Deletes etc. in the
>> last couple of weeks. It's a blast again and I really enjoy it. The
>> one thing that is IMO not acceptable is the status of solr tests. I
>> tried so many times to get them passing on several different OSs but
>> it seems this is pretty hopeless. It gets even worse: the
>> Lucene/Solr QA job literally marks every ticket I attach a patch to as
>> `-1` because of arbitrary solr tests, here is an example:
>>
>> || Reason || Tests ||
>> | Failed junit tests | solr.rest.TestManagedResourceStorage |
>> |   | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
>> |   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
>> |   | solr.client.solrj.impl.CloudSolrClientTest |
>> |   | solr.common.util.TestJsonRecordReader |
>>
>> Speaking to other committers I hear we should just disable this job.
>> Sorry, WTF?
>>
>> These tests seem to fail all the time, randomly and over and over
>> again. This renders the test as entirely useless to me. I even invest
>> time (wrong, I invested) looking into it if they are caused by me or
>> if I can do something about it. Yet, someone could call me out for
>> being responsible for them as a committer, yes I am hence this email. I
>> don't think I am obliged to fix them. These projects have 50+
>> committers and having a shared codebase doesn't mean everybody has to
>> take care of everything. I think we are at the point where if I work
>> on Lucene I won't run solr tests at all otherwise there won't be any
>> progress. On the other hand solr tests never pass I wonder if the solr
>> code-base gets changes nevertheless? That is again a terrible
>> situation.
>>
>> I spoke to varun and  anshum during buzzwords if they can give me some
>> hints what I am doing wrong but it seems like the way it is. I feel
>> terrible pushing stuff to our repo still seeing our tests fail. I get
>> ~15 build failures from solr tests a day I am not the only one that
>> has mail filters to archive them if there isn't a lucene tests in the
>> failures.
>>
>> This is a terrible state folks, how do we fix it? It's the lucene land
>> that get much love on the testing end but that also requires more work
>> on it, I expect solr to do the same. That at the same time requires
>> stop pushing new stuff until the situation is under control. The
>> effort of marking stuff as bad apples isn't the answer, this requires
>> effort from the drivers behind this project.
>>
>> simon
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8359) Extend ToParentBlockJoinQuery with 'minimum matched children' functionality

2018-06-15 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513889#comment-16513889
 ] 

Yonik Seeley commented on LUCENE-8359:
--

I haven't had a chance to look at the patch, but +1 for the idea of adding the 
high level functionality!

> Extend ToParentBlockJoinQuery with 'minimum matched children' functionality 
> 
>
> Key: LUCENE-8359
> URL: https://issues.apache.org/jira/browse/LUCENE-8359
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Andrey Kudryavtsev
>Priority: Minor
>  Labels: lucene
> Attachments: LUCENE-8359
>
>
> I have hierarchical data in the index and a requirement like 'match parent only 
> if at least {{n}} of its children were matched'.  
> I used to solve it with a combination of lucene / solr tricks like 'frange' 
> filtering on the sum of matched children scores, so it's doable out of the box 
> with some effort right now. But it could also be solved by extending 
> {{ToParentBlockJoinQuery}} with a new numeric parameter; I tried to do 
> that in the attached patch. 
> Not sure if this should be in the main branch; just putting it here in case 
> someone has similar problems.
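The semantics being requested can be shown with a toy model, independent of Lucene's block-join machinery: a parent qualifies only when at least minChildren of its child docs match the child query. The class and data layout below are purely illustrative:

```java
import java.util.*;

// Toy model of 'minimum matched children' semantics: a parent matches only
// when at least minChildren of its children matched the child query.
// In the real ToParentBlockJoinQuery this count would be taken while
// iterating the child scorer over each parent's doc block.
public class MinChildrenJoin {
    // parents maps a parent id to the "matched" flag of each of its children.
    static List<String> matchingParents(Map<String, List<Boolean>> parents,
                                        int minChildren) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, List<Boolean>> e : parents.entrySet()) {
            long matched = e.getValue().stream().filter(b -> b).count();
            if (matched >= minChildren) out.add(e.getKey());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<Boolean>> parents = new LinkedHashMap<>();
        parents.put("p1", Arrays.asList(true, true, false)); // 2 matched children
        parents.put("p2", Arrays.asList(true, false, false)); // only 1
        System.out.println(matchingParents(parents, 2)); // prints [p1]
    }
}
```

With minChildren = 1 this degenerates to the existing ToParentBlockJoinQuery behavior, which is why a single extra numeric parameter is enough to express the feature.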






[jira] [Commented] (SOLR-11216) Make PeerSync more robust

2018-06-15 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513887#comment-16513887
 ] 

Cao Manh Dat commented on SOLR-11216:
-

Attached patch for Solution 2. It creates a new class, PeerSyncWithLeader, with 
some duplication of its original class (PeerSync), but what we gain here is an 
easier-to-understand flow (fewer flags), optimized for doing peerSync on 
recovery.
Any objections to this separation? [~shalinmangar] [~markrmil...@gmail.com]


> Make PeerSync more robust
> -
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> First of all, I will change the issue's title to a better name when I have one.
> While digging into SOLR-10126 I found a case that can make peerSync fail:
> * leader and replica receive updates 1 to 4
> * replica stops
> * replica misses updates 5, 6
> * replica starts recovery
> ## replica buffers updates 7, 8
> ## replica requests versions from the leader
> ## at the same time the leader receives update 9, so the leader returns 
> updates 1 to 9 for the versions request; the recent versions the replica 
> gets are 1,2,3,4,5,6,7,8,9
> ## replica does peerSync and requests updates 5, 6, 9 from the leader
> ## replica applies updates 5, 6, 9. Its index does not have updates 7, 8, and 
> maxVersionSpecified for the fingerprint is 9, so the fingerprint comparison 
> will fail
> My idea here is: why does the replica request update 9 (step 6) when it knows 
> that updates with lower versions (7, 8) are in its buffering tlog? Should we 
> request only updates lower than the lowest update in its buffering tlog (< 7)?
> Someone may ask: what if the replica never receives update 9? In that case, 
> the leader will put the replica into LIR state, so the replica will run the 
> recovery process again.
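The proposed rule (fetch only missing updates below the lowest version sitting in the buffering tlog) can be sketched as follows. The helper name and shapes are hypothetical, purely to illustrate the selection logic; this is not Solr's PeerSync API.

```java
import java.util.ArrayList;
import java.util.List;

public class PeerSyncSketch {
    /**
     * From the versions reported by the leader, request only those the replica
     * is missing AND that are lower than the lowest version in the replica's
     * buffering tlog. Higher versions (9 in the scenario above) are left to
     * buffered-update replay or a later recovery.
     */
    static List<Long> updatesToRequest(List<Long> leaderVersions,
                                       List<Long> replicaVersions,
                                       long lowestBuffered) {
        List<Long> toRequest = new ArrayList<>();
        for (long v : leaderVersions) {
            if (v < lowestBuffered && !replicaVersions.contains(v)) {
                toRequest.add(v);
            }
        }
        return toRequest;
    }

    public static void main(String[] args) {
        // Leader has 1..9; replica has 1..4 and is buffering 7, 8 (lowest buffered = 7).
        List<Long> leader = List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L);
        List<Long> replica = List.of(1L, 2L, 3L, 4L);
        System.out.println(updatesToRequest(leader, replica, 7L)); // only 5 and 6
    }
}
```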






[jira] [Updated] (SOLR-11216) Make PeerSync more robust

2018-06-15 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-11216:

Attachment: SOLR-11216.patch

> Make PeerSync more robust
> -
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> First of all, I will change the issue's title to a better name when I have one.
> While digging into SOLR-10126 I found a case that can make peerSync fail:
> * leader and replica receive updates 1 to 4
> * replica stops
> * replica misses updates 5, 6
> * replica starts recovery
> ## replica buffers updates 7, 8
> ## replica requests versions from the leader
> ## at the same time the leader receives update 9, so the leader returns 
> updates 1 to 9 for the versions request; the recent versions the replica 
> gets are 1,2,3,4,5,6,7,8,9
> ## replica does peerSync and requests updates 5, 6, 9 from the leader
> ## replica applies updates 5, 6, 9. Its index does not have updates 7, 8, and 
> maxVersionSpecified for the fingerprint is 9, so the fingerprint comparison 
> will fail
> My idea here is: why does the replica request update 9 (step 6) when it knows 
> that updates with lower versions (7, 8) are in its buffering tlog? Should we 
> request only updates lower than the lowest update in its buffering tlog (< 7)?
> Someone may ask: what if the replica never receives update 9? In that case, 
> the leader will put the replica into LIR state, so the replica will run the 
> recovery process again.






Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Varun Thacker
Hi Jan,

I agree with how you're thinking of replicationFactor as basically being
equivalent to nrtReplicas. Let's not change that.

So is #7 then the only real use for this API?

On Fri, Jun 15, 2018 at 1:46 PM, Jan Høydahl  wrote:

> Do we have a v2 API for CREATE and MODIFYCOLLECTION? E.g.
>
> POST http://localhost:8983/api/c
> { modify-collection: { replicationFactor: 3 } }
>
> Perhaps we should focus on a decent v2 API and deprecate the old confusing
> one?
>
> wrt. replicationFactor / nrtReplica / pullReplicas / tlogReplicas, my wish
> is that replicationFactor keeps on living as today, only setting
> nrtReplicas, and is mutually exclusive to any of the three others. So if
> you have a collection with tlogReplicas defined, then modifying
> "replicationFactor" should throw an error. But if you only ever care about
> NRT replicas then you can keep using replicationFactor as before???
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> 15. jun. 2018 kl. 13:22 skrev Varun Thacker :
>
> Today the Modify Collection supports the following properties to be
> modified
>
>1. maxShardsPerNode
>2. rule
>3. snitch
>4. policy
>5. collection.configName
>6. autoAddReplicas
>7. replicationFactor
>
> 1-4 seem like something we should get rid of, because we have the AutoScaling
> Policy framework?
>
> 5> Can anyone point out the use-case for this?
>
> 6> autoAddReplicas can be changed as a clusterprop and modify-collection
> API ? Hmm. Which one is supposed to win?
>
> 7> We need to allow a user to change replicationFactor. But how does this
> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
> sounds just confusing? Or allow changing all replica types ?
>
>
>


Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Varun Thacker
On Fri, Jun 15, 2018 at 2:44 PM, David Smiley 
wrote:

> +1 to get rid of #1, #2, #3, #7.
>
> Maybe I'm mistaken but I thought "policy" was a part of the auto scaling
> framework?
>

Yeah. And
http://lucene.apache.org/solr/guide/solrcloud-autoscaling-api.html#create-and-modify-cluster-policies
seems like the way to modify it. So I wonder why MODIFYCOLLECTION should
support it?
Maybe Noble , AB or Shalin could confirm?


> Maybe the capability for autoAddReplicas should be considered an aspect of
> the auto scaling framework instead of a collection setting, and thus we
> could remove it here?
>

Yeah I'd love for that to happen. It's even tied to triggers etc so seems
like it should be enabled/disabled via the autoscaling API

>
> I think the ability to modify collection.configName seems useful albeit
> rare to use in practice.  Perhaps you want to try out a bunch of changes
> and want to easily roll back.  You could create a config with those
> modifications, try it out, and if you don't like the results then point
> your config back to the original.  Although in practice it may not always
> be possible to just switch configs since a reindex may be required.
>

Right and then basically we are giving a way for users to shoot themselves
in the foot :)

>
>
> On Fri, Jun 15, 2018 at 7:22 AM Varun Thacker  wrote:
>
>> Today the Modify Collection supports the following properties to be
>> modified
>>
>>1. maxShardsPerNode
>>2. rule
>>3. snitch
>>4. policy
>>5. collection.configName
>>6. autoAddReplicas
>>7. replicationFactor
>>
>> 1-4 seem like something we should get rid of, because we have the AutoScaling
>> Policy framework?
>>
>> 5> Can anyone point out the use-case for this?
>>
>> 6> autoAddReplicas can be changed as a clusterprop and modify-collection
>> API ? Hmm. Which one is supposed to win?
>>
>> 7> We need to allow a user to change replicationFactor. But how does this
>> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
>> sounds just confusing? Or allow changing all replica types ?
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.
> solrenterprisesearchserver.com
>


[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2018-06-15 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513862#comment-16513862
 ] 

Cassandra Targett commented on SOLR-11200:
--

Yes, you can ignore those errors, they happen in every single build. It's hard 
to explain without getting deep into the weeds with Asciidoctor, but they only 
mean we picked a document type (because it's the best of the few available 
options) to structure the output document but our content doesn't 100% conform 
to the rules for that type. In a perfect world we'd be able to say "Yeah, I 
know, but I don't care, so don't tell me about it". For any errors we really 
care about, we've told the validation job to fail the build, so if it succeeds, 
you're fine.

> provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
> ---
>
> Key: SOLR-11200
> URL: https://issues.apache.org/jira/browse/SOLR-11200
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nawab Zada Asad iqbal
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-11200.patch, SOLR-11200.patch, SOLR-11200.patch
>
>
> This config can be useful while bulk indexing. Lucene introduced it 
> https://issues.apache.org/jira/browse/LUCENE-6119 . 






[jira] [Updated] (LUCENE-8359) Extend ToParentBlockJoinQuery with 'minimum matched children' functionality

2018-06-15 Thread Andrey Kudryavtsev (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kudryavtsev updated LUCENE-8359:
---
Attachment: LUCENE-8359

> Extend ToParentBlockJoinQuery with 'minimum matched children' functionality 
> 
>
> Key: LUCENE-8359
> URL: https://issues.apache.org/jira/browse/LUCENE-8359
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Andrey Kudryavtsev
>Priority: Minor
>  Labels: lucene
> Attachments: LUCENE-8359
>
>
> I have hierarchical data in the index and a requirement like 'match a parent 
> only if at least {{n}} of its children were matched'.
> I used to solve it with a combination of Lucene/Solr tricks like 'frange' 
> filtering by the sum of matched-children scores, so it's doable out of the box 
> with some effort right now. But it could also be solved by extending 
> {{ToParentBlockJoinQuery}} with a new numeric parameter, which I tried to do 
> in the attached patch. 
> Not sure if this should be in the main branch; just putting it here in case 
> someone has similar problems.






[jira] [Created] (LUCENE-8359) Extend ToParentBlockJoinQuery with 'minimum matched children' functionality

2018-06-15 Thread Andrey Kudryavtsev (JIRA)
Andrey Kudryavtsev created LUCENE-8359:
--

 Summary: Extend ToParentBlockJoinQuery with 'minimum matched 
children' functionality 
 Key: LUCENE-8359
 URL: https://issues.apache.org/jira/browse/LUCENE-8359
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Andrey Kudryavtsev


I have hierarchical data in the index and a requirement like 'match a parent 
only if at least {{n}} of its children were matched'.

I used to solve it with a combination of Lucene/Solr tricks like 'frange' 
filtering by the sum of matched-children scores, so it's doable out of the box 
with some effort right now. But it could also be solved by extending 
{{ToParentBlockJoinQuery}} with a new numeric parameter, which I tried to do 
in the attached patch.

Not sure if this should be in the main branch; just putting it here in case 
someone has similar problems.






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 688 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/688/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.update.MaxSizeAutoCommitTest_C23C90D1B1B8E77A-001/init-core-data-001/tlog/tlog.002,
 tlog size: ۱۳۰۲

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/Users/jenkins/workspace/Lucene-Solr-7.x-MacOSX/solr/build/solr-core/test/J1/temp/solr.update.MaxSizeAutoCommitTest_C23C90D1B1B8E77A-001/init-core-data-001/tlog/tlog.002,
 tlog size: ۱۳۰۲
at 
__randomizedtesting.SeedInfo.seed([C23C90D1B1B8E77A:D272752ECA16DE8B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:200)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.St
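The "۱۳۰۲" in the failure message above is 1302 rendered with Persian (Extended Arabic-Indic) digits: the Lucene test framework randomizes the JVM default locale, and java.util.Formatter localizes the digits of a %d conversion. A minimal illustration of the effect (not the actual test code; exact glyphs depend on the JDK's locale data):

```java
import java.util.Locale;

public class LocaleDigits {
    // %d uses the locale's zero digit, so the same number prints with
    // different glyphs depending on the (randomized) default locale.
    static String format(long n, Locale locale) {
        return String.format(locale, "%d", n);
    }

    public static void main(String[] args) {
        System.out.println(format(1302, new Locale("fa"))); // Persian digits, e.g. ۱۳۰۲
        System.out.println(format(1302, Locale.ROOT));      // 1302
    }
}
```

Assertion messages that embed such numbers are still correct; they just look surprising, which is worth knowing before suspecting a real size-bound violation.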

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1565 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1565/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.NRTCachingDirectory  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:358)  at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3091)  
at org.apache.solr.core.SolrCore.close(SolrCore.java:1612)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:1004)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:867)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1140)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:686)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.NRTCachingDirectory
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:358)
at 
org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3091)
at org.apache.solr.core.SolrCore.close(SolrCore.java:1612)
at org.apache.solr.core.SolrCore.(SolrCore.java:1004)
at org.apache.solr.core.SolrCore.(SolrCore.java:867)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1140)
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:686)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


at __randomizedtesting.SeedInfo.seed([96DD454BA974C6D2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:304)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apac
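The ObjectTracker message above reports a resource that was created but never released. The underlying idea can be sketched as a map from tracked objects to the stack trace captured at creation time; whatever is left at teardown is reported as a leak. This is a simplified model, not Solr's actual ObjectReleaseTracker code.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ReleaseTrackerSketch {
    // Maps each live resource to an exception capturing where it was tracked,
    // so a leak report can show the creation stack trace.
    private final Map<Object, Exception> tracked = new ConcurrentHashMap<>();

    public void track(Object o) {
        tracked.put(o, new Exception("tracked here"));
    }

    public void release(Object o) {
        tracked.remove(o);
    }

    /** Number of objects that were tracked but never released. */
    public int unreleasedCount() {
        return tracked.size();
    }
}
```

A test-teardown hook would assert `unreleasedCount() == 0` and print the stored stack traces otherwise, which is what produces the ObjectTrackerException text in the build log.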

[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513847#comment-16513847
 ] 

Simon Willnauer commented on LUCENE-8358:
-

commit bot might be behind so I'm adding this manually:

{noformat}
commit 772e171ac6e70c96295f65749d0d15339133b8a6 (HEAD -> master, apache/master)
Author: Simon Willnauer 
Date:   Fri Jun 15 10:44:26 2018 +0200

LUCENE-8358: Relax assertion in IW#writeSomeDocValuesUpdates

This assertion is too strict since we can see this situation if for instance
a ReadersAndUpdates instance gets written to disk concurrently and
readerpooling is off. This change also simplifies 
ReaderPool#getReadersByRam and
adds a test for it.
{noformat}


{noformat}
commit 20c1b7a24a8a42e5d266441270629698e35906b1 (apache/branch_7x, branch_7x)
Author: Simon Willnauer 
Date:   Fri Jun 15 10:44:26 2018 +0200

LUCENE-8358: Relax assertion in IW#writeSomeDocValuesUpdates

This assertion is too strict since we can see this situation if for instance
a ReadersAndUpdates instance gets written to disk concurrently and
readerpooling is off. This change also simplifies 
ReaderPool#getReadersByRam and
adds a test for it.
{noformat}


{noformat}
commit 97736b827e3fc821fb37f785b82242cc6e47f0ba (apache/branch_7_4, branch_7_4)
Author: Simon Willnauer 
Date:   Fri Jun 15 10:44:26 2018 +0200

LUCENE-8358: Relax assertion in IW#writeSomeDocValuesUpdates

This assertion is too strict since we can see this situation if for instance
a ReadersAndUpdates instance gets written to disk concurrently and
readerpooling is off. This change also simplifies 
ReaderPool#getReadersByRam and
adds a test for it.
{noformat}



> Asserting trips when IW tries to free ram by writing DV updates
> ---
>
> Key: LUCENE-8358
> URL: https://issues.apache.org/jira/browse/LUCENE-8358
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0), 7.5
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 7.4, master (8.0), 7.5
>
> Attachments: LUCENE-8358.patch, LUCENE-8358.patch, LUCENE-8358.patch
>
>
> This assertion is pretty new; I think we need to relax it, since there are 
> cases where this situation is valid, i.e. if a ReadersAndUpdates instance gets 
> written concurrently and reader pooling is off. That is just fine since this 
> is best-effort anyway. I will attach a patch.
> {noformat}
> 07:35:14[junit4] Suite: org.apache.lucene.index.TestBinaryDocValuesUpdates
> 07:35:14[junit4] IGNOR/A 0.01s J0 | 
> TestBinaryDocValuesUpdates.testTonsOfUpdates
> 07:35:14[junit4]> Assumption #1: 'nightly' test group is disabled 
> (@Nightly())
> 07:35:14[junit4]   1> TEST: isNRT=false 
> reader1=StandardDirectoryReader(segments_1:4 _0(7.5.0):c2)
> 07:35:14[junit4]   1> TEST: now reopen
> 07:35:14[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestBinaryDocValuesUpdates -Dtests.method=testUpdatesAreFlushed 
> -Dtests.seed=B8D5250C8CAA9010 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=en-IE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF8
> 07:35:14[junit4] FAILURE 0.04s J0 | 
> TestBinaryDocValuesUpdates.testUpdatesAreFlushed <<<
> 07:35:14[junit4]> Throwable #1: java.lang.AssertionError: Segment 
> [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:613)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:298)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2594)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5064)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.updateBinaryDocValue(IndexWriter.java:1742)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.TestBinaryDocValuesUpdates.testUpdatesAreFlushed(TestBinaryDocValuesUpdates.java:100)
> 07:35:14[junit4]> at java.lang.Thread.run(Thread.java:748)
> 07:35:14[junit4]   2> Jun 15, 2018 1:35:14 AM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> 07:35:14[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[Lucene Merge Thread #1,5,TGRP-TestBinaryDocValuesUpdates]
> 07:35:14[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalStateException: this writer hit an unrecoverable error; 
>

[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-06-15 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513785#comment-16513785
 ] 

David Smiley commented on SOLR-12362:
-

BTW I pushed a fix last night – 
[https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=4799bc5a4a36d4b550f69f8e2a233b857b6b0340]

I have no idea why I didn't see the error before... I think I was simply 
mistaken.

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.






Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread David Smiley
+1 to get rid of #1, #2, #3, #7.

Maybe I'm mistaken but I thought "policy" was a part of the auto scaling
framework?

Maybe the capability for autoAddReplicas should be considered an aspect of
the auto scaling framework instead of a collection setting, and thus we
could remove it here?

I think the ability to modify collection.configName seems useful albeit
rare to use in practice.  Perhaps you want to try out a bunch of changes
and want to easily roll back.  You could create a config with those
modifications, try it out, and if you don't like the results then point
your config back to the original.  Although in practice it may not always
be possible to just switch configs since a reindex may be required.

On Fri, Jun 15, 2018 at 7:22 AM Varun Thacker  wrote:

> Today the Modify Collection supports the following properties to be
> modified
>
>1. maxShardsPerNode
>2. rule
>3. snitch
>4. policy
>5. collection.configName
>6. autoAddReplicas
>7. replicationFactor
>
> 1-4 seem like something we should get rid of, because we have the AutoScaling
> Policy framework?
>
> 5> Can anyone point out the use-case for this?
>
> 6> autoAddReplicas can be changed as a clusterprop and modify-collection
> API ? Hmm. Which one is supposed to win?
>
> 7> We need to allow a user to change replicationFactor. But how does this
> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
> sounds just confusing? Or allow changing all replica types ?
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


Re: Status of solr tests

2018-06-15 Thread David Smiley
(Sigh) I sympathize with your points Simon.  I'm +1 to modify the
Lucene-side JIRA QA bot (Yetus) to not execute Solr tests.  We can and are
trying to improve the stability of the Solr tests but even optimistically
the practical reality is that it won't be good enough anytime soon.  When
we get there, we can reverse this.

On Fri, Jun 15, 2018 at 3:32 AM Simon Willnauer 
wrote:

> folks,
>
> I got more active working on IndexWriter and Soft-Deletes etc. in the
> last couple of weeks. It's a blast again and I really enjoy it. The
> one thing that is IMO not acceptable is the status of solr tests. I
> tried so many times to get them passing on several different OSes, but
> it seems this is pretty hopeless. It gets even worse: the
> Lucene/Solr QA job literally marks every ticket I attach a patch to as
> `-1` because of arbitrary solr tests, here is an example:
>
> || Reason || Tests ||
> | Failed junit tests | solr.rest.TestManagedResourceStorage |
> |   | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
> |   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
> |   | solr.client.solrj.impl.CloudSolrClientTest |
> |   | solr.common.util.TestJsonRecordReader |
>
> Speaking to other committers I hear we should just disable this job.
> Sorry, WTF?
>
> These tests seem to fail all the time, randomly, over and over
> again. This renders them entirely useless to me. I even invest
> time (well, invested) looking into whether they are caused by me or
> whether I can do something about them. Yet, someone could call me out for
> being responsible for them as a committer; yes I am, hence this email. I
> don't think I am obliged to fix them. These projects have 50+
> committers, and having a shared codebase doesn't mean everybody has to
> take care of everything. I think we are at the point where, if I work
> on Lucene, I won't run Solr tests at all, otherwise there won't be any
> progress. On the other hand, Solr tests never pass, so I wonder whether
> the Solr code-base gets changed nevertheless? That is again a terrible
> situation.
>
> I spoke to varun and  anshum during buzzwords if they can give me some
> hints what I am doing wrong but it seems like the way it is. I feel
> terrible pushing stuff to our repo still seeing our tests fail. I get
> ~15 build failures from solr tests a day, and I am not the only one that
> has mail filters to archive them if there isn't a lucene test in the
> failures.
>
> This is a terrible state folks, how do we fix it? It's the lucene land
> that gets much love on the testing end but that also requires more work
> on it, I expect solr to do the same. That at the same time requires
> stop pushing new stuff until the situation is under control. The
> effort of marking stuff as bad apples isn't the answer; this requires
> effort from the drivers behind this project.
>
> simon
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
--
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4678 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4678/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
events: [CapturedEvent{timestamp=2242412245321970, stage=STARTED, 
actionName='null', event={   "id":"7f77547f310c2T4yhwp4fg7qla7xoas3bx66dmo",   
"source":"index_size_trigger2",   "eventTime":2242407927320770,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10003_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2242412236049170, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, context={}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=2242412344412820, stage=BEFORE_ACTION, 
actionName='compute_plan', event={   
"id":"7f77547f310c2T4yhwp4fg7qla7xoas3bx66dmo",   
"source":"index_size_trigger2",   "eventTime":2242407927320770,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10003_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2242412236049170, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2}, 
config={   "trigger":"index_size_trigger2",   "stage":[ "STARTED", 
"ABORTED", "SUCCEEDED", "FAILED"],   "afterAction":[ 
"compute_plan", "execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=2242412391178870, stage=AFTER_ACTION, 
actionName='compute_plan', event={   
"id":"7f77547f310c2T4yhwp4fg7qla7xoas3bx66dmo",   
"source":"index_size_trigger2",   "eventTime":2242407927320770,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10003_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":2242412236049170, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.operations=[{class=org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard,
 method=GET, params.action=SPLITSHARD, 
params.collection=testSplitIntegration_collection, params.shard=shard1}], 
properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2, 
properties.AFTER_ACTION=[compute_plan]}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   
"class":"org.apache.solr.cloud.au

[JENKINS] Lucene-Solr-repro - Build # 826 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/826/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/82/consoleText

[repro] Revision: 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=CA9304A1DB77A685 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sl 
-Dtests.timezone=Iran -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=CA9304A1DB77A685 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sl 
-Dtests.timezone=Iran -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSolrCloudWithHadoopAuthPlugin 
-Dtests.seed=CA9304A1DB77A685 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP 
-Dtests.timezone=Brazil/DeNoronha -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch
[repro] git checkout 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestSolrCloudWithHadoopAuthPlugin
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.TestSolrCloudWithHadoopAuthPlugin" 
-Dtests.showOutput=onerror  -Dtests.seed=CA9304A1DB77A685 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sl 
-Dtests.timezone=Iran -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 25601 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.security.hadoop.TestSolrCloudWithHadoopAuthPlugin
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=CA9304A1DB77A685 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sl -Dtests.timezone=Iran 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 24663 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sl 
-Dtests.timezone=Iran -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 12216 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-12416) router.autoDeleteAge is not accepted in CREATEALIAS command

2018-06-15 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12416.
-
   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: 7.4

Thanks for chasing this down Joachim; it made my work easy!

https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=c22da7d7a9ab09f3e73dc675952c47c3516add97

> router.autoDeleteAge is not accepted in CREATEALIAS command
> ---
>
> Key: SOLR-12416
> URL: https://issues.apache.org/jira/browse/SOLR-12416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3.1
> Environment: Experimenting with a freshly downloaded Solr 7.3.1
>Reporter: Joachim Sauer
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12416.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I've been experimenting with time routed aliases, specifically with the 
> autoDeleteAge feature (SOLR-11925) and notice that the router.autoDeleteAge 
> parameter was silently ignored in the CREATEALIAS command.
>  
> Using ALIASPROP to set it worked just fine.
>  
> The problem seems to be that 
> [TimeRoutedAlias.OPTIONAL_ROUTER_PARAMS|https://github.com/apache/lucene-solr/blob/bf6503ba5871228692ca79f0b2204a935538e00a/solr/core/src/java/org/apache/solr/cloud/api/collections/TimeRoutedAlias.java#L83]
>  has not been updated when the autoDeleteAge property was added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 825 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/825/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.4/4/consoleText

[repro] Revision: 0a1fe1ed7d9e7a43bb1820caf205fff1934965dd

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=4CDA78824CDD90D1 
-Dtests.multiplier=2 -Dtests.locale=ar-EG -Dtests.timezone=America/Whitehorse 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch
[repro] git checkout 0a1fe1ed7d9e7a43bb1820caf205fff1934965dd

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=4CDA78824CDD90D1 -Dtests.multiplier=2 -Dtests.locale=ar-EG 
-Dtests.timezone=America/Whitehorse -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 7067 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7_4
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=4CDA78824CDD90D1 -Dtests.multiplier=2 -Dtests.locale=ar-EG 
-Dtests.timezone=America/Whitehorse -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 5847 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7_4:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Jan Høydahl
Do we have a v2 API for CREATE and MODIFYCOLLECTION? E.g.

POST http://localhost:8983/api/c  
{ modify-collection: { replicationFactor: 3 } }

Perhaps we should focus on a decent v2 API and deprecate the old confusing one?

wrt. replicationFactor / nrtReplica / pullReplicas / tlogReplicas, my wish is 
that replicationFactor keeps on living as today, only setting nrtReplicas, and 
is mutually exclusive to any of the three others. So if you have a collection 
with tlogReplicas defined, then modifying "replicationFactor" should throw an
error. But if you only ever care about NRT replicas then you can keep using 
replicationFactor as before???
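
A rough sketch of the mutual-exclusion check proposed above (hypothetical validation logic for illustration only, not an existing Solr API):

```java
public class ModifyCollectionValidation {
    // Hypothetical check: replicationFactor only sets nrtReplicas, so reject
    // modifying it whenever typed replicas (tlog/pull) are configured.
    static void validateReplicationFactorChange(Integer tlogReplicas, Integer pullReplicas) {
        boolean typedReplicasInUse = (tlogReplicas != null && tlogReplicas > 0)
                                  || (pullReplicas != null && pullReplicas > 0);
        if (typedReplicasInUse) {
            throw new IllegalArgumentException(
                "replicationFactor cannot be modified on a collection using tlogReplicas/pullReplicas");
        }
    }

    public static void main(String[] args) {
        validateReplicationFactorChange(null, null); // NRT-only collection: allowed
        try {
            validateReplicationFactorChange(2, null); // tlog replicas present: rejected
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```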

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 15. jun. 2018 kl. 13:22 skrev Varun Thacker :
> 
> Today the Modify Collection supports the following properties to be modified
> maxShardsPerNode
> rule
> snitch
> policy
> collection.configName
> autoAddReplicas
> replicationFactor
> 1-4 seems something we should get rid of because we have the AutoScaling 
> Policy framework?
> 
> 5> Can anyone point out the use-case for this?
> 
> 6> autoAddReplicas can be changed as a clusterprop and modify-collection API 
> ? Hmm. Which one is supposed to win?
> 
> 7> We need to allow a user to change replicationFactor. But how does this 
> help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this 
> sounds just confusing? Or allow changing all replica types ? 



[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-15 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513682#comment-16513682
 ] 

Lucene/Solr QA commented on SOLR-11985:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  3m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  3m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  3m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
50s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11985 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12927839/SOLR-11985.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 772e171 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/124/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/124/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which mean
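
The computed-value arithmetic in the description above can be sketched as a small standalone helper (hypothetical illustration, not Solr's actual policy implementation):

```java
public class ReplicaPercentage {
    // Mirrors the arithmetic above: the computed replica bound is
    // replicationFactor * shardsCovered * percent / 100.
    static double computedReplicas(int replicationFactor, int shardsCovered, double percent) {
        return replicationFactor * shardsCovered * percent / 100.0;
    }

    public static void main(String[] args) {
        // example 1: "shard":"#EACH" -> the rule is evaluated per shard
        System.out.println(computedReplicas(3, 1, 34.0)); // 1.02
        // example 2: no shard clause -> both shards are counted together
        System.out.println(computedReplicas(3, 2, 34.0)); // 2.04
    }
}
```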

Do we need the MODIFYCOLLECTION Api?

2018-06-15 Thread Varun Thacker
Today the Modify Collection supports the following properties to be modified

   1. maxShardsPerNode
   2. rule
   3. snitch
   4. policy
   5. collection.configName
   6. autoAddReplicas
   7. replicationFactor

1-4 seem like something we should get rid of because we have the AutoScaling
Policy framework?

5> Can anyone point out the use-case for this?

6> autoAddReplicas can be changed both as a clusterprop and via the
modify-collection API? Hmm. Which one is supposed to win?

7> We need to allow a user to change replicationFactor. But how does this
help? We have nrtReplicas / pullReplicas / tlogReplicas so changing this
sounds just confusing? Or should we allow changing all replica types?


[jira] [Commented] (LUCENE-8343) BlendedInfixSuggester bad score calculus for certain suggestion weights

2018-06-15 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513663#comment-16513663
 ] 

Alessandro Benedetti commented on LUCENE-8343:
--

Hi [~mikemccand], thanks for your review!
I followed your suggestions and updated the Pull Request (fixing a recent
merge conflict).
Feel free to check the additional comments in there.

I agree to bring this to 8.x.
When we are close to an acceptable status, let me know and I will go on with
refinements and double checks to be production ready.

> BlendedInfixSuggester bad score calculus for certain suggestion weights
> ---
>
> Key: LUCENE-8343
> URL: https://issues.apache.org/jira/browse/LUCENE-8343
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8343.patch, LUCENE-8343.patch, LUCENE-8343.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently the BlendedInfixSuggester returns a (long) score to rank the
> suggestions.
> This score is calculated as a multiplication between :
> long *Weight*: the suggestion weight, coming from a document field; it can
> be any long value (including 1, 0, ...)
> double *Coefficient*: 0 <= x <= 1, calculated based on the position match
> (earlier is better)
> The resulting score is a long, which means that at the moment, any weight<10 
> can bring inconsistencies.
> *Edge cases* 
> Weight = 1
> Score = 1 (if we have a match at the beginning of the suggestion) or 0 (for
> any other match)
> Weight = 0
> Score = 0 (independently of the position match coefficient)
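
A minimal sketch of the truncation problem described above (a simplified model for illustration, not the suggester's actual code path):

```java
public class BlendedScoreTruncation {
    // Simplified model of the score calculus above: a long weight multiplied
    // by a position coefficient in [0,1], with the product cast back to long.
    static long score(long weight, double coefficient) {
        return (long) (weight * coefficient);
    }

    public static void main(String[] args) {
        System.out.println(score(1, 1.0)); // match at the start: score 1
        System.out.println(score(1, 0.5)); // any later match truncates to 0
        System.out.println(score(0, 1.0)); // weight 0: always 0
    }
}
```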



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 824 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/824/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/241/consoleText

[repro] Revision: 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=D725B215CF28E4C2 -Dtests.multiplier=2 
-Dtests.locale=zh-SG -Dtests.timezone=MIT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=D725B215CF28E4C2 
-Dtests.multiplier=2 -Dtests.locale=bg-BG -Dtests.timezone=Europe/Zagreb 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=D725B215CF28E4C2 
-Dtests.multiplier=2 -Dtests.locale=bg-BG -Dtests.timezone=Europe/Zagreb 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=D725B215CF28E4C2 -Dtests.multiplier=2 
-Dtests.locale=fi -Dtests.timezone=Atlantic/Cape_Verde -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch
[repro] git checkout 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SolrRrdBackendFactoryTest
[repro]   CdcrBidirectionalTest
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.SolrRrdBackendFactoryTest|*.CdcrBidirectionalTest|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=D725B215CF28E4C2 -Dtests.multiplier=2 -Dtests.locale=zh-SG 
-Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 12779 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro]   2/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=D725B215CF28E4C2 -Dtests.multiplier=2 -Dtests.locale=bg-BG 
-Dtests.timezone=Europe/Zagreb -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 22981 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2132 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2132/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([5B6E8CBF0F06AE92:6CF578A137CA7336]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.renewDelegationToken(TestSolrCloudWithDelegationTokens.java:132)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.verifyDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:316)
at 
org.apache.solr.cloud.TestSolrCloudWithDelegationTokens.testDelegationTokenRenew(TestSolrCloudWithDelegationTokens.java:333)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapte

[GitHub] lucene-solr pull request #398: Lucene 8343 data type migration

2018-06-15 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/398#discussion_r195698769
  
--- Diff: 
lucene/suggest/src/java/org/apache/lucene/search/suggest/Lookup.java ---
@@ -53,7 +53,7 @@
 public final Object highlightKey;
 
 /** the key's weight */
-public final long value;
+public final double value;
--- End diff --

I agree, just added!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 634 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/634/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

21 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=11447, 
name=cdcr-replicator-6025-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=11447, name=cdcr-replicator-6025-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([B111D8F964DB]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.endToEndTest 
{seed=[B111D8F964DB:AB47B7CF4A1843D9]}

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.update.MaxSizeAutoCommitTest_B111D8F964DB-001\init-core-data-001\tlog\tlog.004,
 tlog size: 5574

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.update.MaxSizeAutoCommitTest_B111D8F964DB-001\init-core-data-001\tlog\tlog.004,
 tlog size: 5574
at 
__randomizedtesting.SeedInfo.seed([B111D8F964DB:AB47B7CF4A1843D9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.endToEndTest(MaxSizeAutoCommitTest.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.ca

[JENKINS] Lucene-Solr-repro - Build # 823 - Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/823/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/241/consoleText

[repro] Revision: 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=D0587EE810FE7C38 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-DE -Dtests.timezone=Etc/UCT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=D0587EE810FE7C38 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-DE -Dtests.timezone=Etc/UCT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=D0587EE810FE7C38 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-DE -Dtests.timezone=Etc/UCT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
228a84fd6db3ef5fc1624d69e1c82a1f02c51352
[repro] git fetch
[repro] git checkout 35a7e95bce54d53820ce3e0b1e2966f609a1c1d2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=D0587EE810FE7C38 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-DE -Dtests.timezone=Etc/UCT -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 8610 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 228a84fd6db3ef5fc1624d69e1c82a1f02c51352

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


Re: How do we interpret replicationFactor ?

2018-06-15 Thread Varun Thacker
If someone has 5 minutes could they please review my approach taken
in SOLR-11676

On Fri, Jun 15, 2018 at 12:24 PM, Varun Thacker  wrote:

> Thanks Tomás
>
> The approach I'm taking is that SolrJ never sets replicationFactor and keeps
> back-compat for older clients who would set both replicationFactor and
> nrtReplicas for the same thing
>
> I'm not going to remove it from cluster state just yet ( even with keeping
> back-compat ) . I'm thinking this parameter could mean an overarching
> replicationFactor ( used internally ) which would be a sum of all the
> replica types . We could use this info internally while external users
> would not be able to set it in the future
>
> On Fri, Jun 15, 2018 at 10:06 AM, Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> I think we should deprecate it. There were some concerns about this
>> because new users would understand quickly what "replicationFactor" is,
>> while "nrtReplicas" is not so intuitive, but I don't like having two ways
>> to do the same, and now that there are multiple types of replicas I think
>> it's better for the parameter to be explicit.
>> I would still keep accepting the parameter for backwards compatibility,
>> but maybe remove the internal use of it? Maybe even remove it from the
>> clusterstate (and again, make sure we can still read cluster states that
>> have it, for upgrades).
>>
>> On Thu, Jun 14, 2018 at 2:46 PM, Varun Thacker  wrote:
>>
>>> While working on SOLR-11676
>>>  a few questions came
>>> that weren't obvious
>>>
>>> Should a user be allowed to specify replicationFactor and nrtReplicas ?
>>> Today it's possible but my answer was it shouldn't be. What do others
>>> think?
>>>
>>> If everyone agrees the two shouldn't be specified then there is one
>>> problem while fixing this - SolrJ
>>>
>>> if (nrtReplicas != null) {
>>>   params.set( ZkStateReader.REPLICATION_FACTOR, nrtReplicas);// Keep both 
>>> for compatibility?
>>>   params.set( ZkStateReader.NRT_REPLICAS, nrtReplicas);
>>> }
>>>
>>> SolrJ sets both replicationFactor and nrtReplicas with the same value. So 
>>> if we simply put a check at the server saying "don't allow both parameters" 
>>> all SolrJ calls from older clients will fail
>>>
>>> The compromise would be for the server to check whether nrtReplicas and
>>> replicationFactor are equal, and only error out when they differ
>>>
>>>
>>> Second question, SolrJ doesn't allow a user to specify replicationFactor 
>>> but if you're using the API directly it's allowed.
>>>
>>> Do we plan on deprecating replicationFactor eventually in favour of 
>>> nrtReplicas ? If yes would 7.5 be a good place to start throwing a warning ?
>>>
>>>
>>>
>>
>
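The server-side compromise discussed in this thread — accept both parameters only when they carry the same value, so older SolrJ clients keep working — could be sketched roughly as follows. This is a hypothetical illustration; the method name and exception choice are not actual Solr code.

```java
public class ReplicaParamCheck {
  // Hypothetical back-compat check: older SolrJ clients send both
  // replicationFactor and nrtReplicas with the same value, which must
  // continue to be accepted; only conflicting values are rejected.
  static void validate(Integer replicationFactor, Integer nrtReplicas) {
    if (replicationFactor != null && nrtReplicas != null
        && !replicationFactor.equals(nrtReplicas)) {
      throw new IllegalArgumentException(
          "replicationFactor and nrtReplicas must match when both are set");
    }
    // Either parameter alone, or equal values, passes through unchanged.
  }
}
```

Under this sketch a request setting only one of the two parameters, or both with equal values, succeeds, which matches the compatibility behavior proposed above.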


[jira] [Updated] (SOLR-11676) nrt replicas is always 1 when not specified

2018-06-15 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11676:
-
Attachment: SOLR-11676.patch

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-11676.patch, SOLR-11676.patch, SOLR-11676.patch
>
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-Tests-7.4 - Build # 6 - Still Unstable

2018-06-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.4/6/

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([EED05680BFF4EE34:BD6914305DE57BCE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:406)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testNodeAddedTrigger

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([EED05680BFF4EE34:C8BAE8B9A4FB4158]:0)
at org.junit.Assert.

Re: How do we interpret replicationFactor ?

2018-06-15 Thread Varun Thacker
Thanks Tomás

The approach I'm taking is that SolrJ never sets replicationFactor and keeps
back-compat for older clients who would set both replicationFactor and
nrtReplicas for the same thing

I'm not going to remove it from cluster state just yet ( even with keeping
back-compat ) . I'm thinking this parameter could mean an overarching
replicationFactor ( used internally ) which would be a sum of all the
replica types . We could use this info internally while external users
would not be able to set it in the future

On Fri, Jun 15, 2018 at 10:06 AM, Tomás Fernández Löbbe <
tomasflo...@gmail.com> wrote:

> I think we should deprecate it. There were some concerns about this
> because new users would understand quickly what "replicationFactor" is,
> while "nrtReplicas" is not so intuitive, but I don't like having two ways
> to do the same, and now that there are multiple types of replicas I think
> it's better for the parameter to be explicit.
> I would still keep accepting the parameter for backwards compatibility,
> but maybe remove the internal use of it? Maybe even remove it from the
> clusterstate (and again, make sure we can still read cluster states that
> have it, for upgrades).
>
> On Thu, Jun 14, 2018 at 2:46 PM, Varun Thacker  wrote:
>
>> While working on SOLR-11676
>>  a few questions came
>> that weren't obvious
>>
>> Should a user be allowed to specify replicationFactor and nrtReplicas ?
>> Today it's possible but my answer was it shouldn't be. What do others
>> think?
>>
>> If everyone agrees the two shouldn't be specified then there is one
>> problem while fixing this - SolrJ
>>
>> if (nrtReplicas != null) {
>>   params.set( ZkStateReader.REPLICATION_FACTOR, nrtReplicas);// Keep both 
>> for compatibility?
>>   params.set( ZkStateReader.NRT_REPLICAS, nrtReplicas);
>> }
>>
>> SolrJ sets both replicationFactor and nrtReplicas with the same value. So if 
>> we simply put a check at the server saying "don't allow both parameters" all 
>> SolrJ calls from older clients will fail
>>
>> The compromise would be for the server to check whether nrtReplicas and
>> replicationFactor are equal, and only error out when they differ
>>
>>
>> Second question, SolrJ doesn't allow a user to specify replicationFactor but 
>> if you're using the API directly it's allowed.
>>
>> Do we plan on deprecating replicationFactor eventually in favour of 
>> nrtReplicas ? If yes would 7.5 be a good place to start throwing a warning ?
>>
>>
>>
>


[GitHub] lucene-solr pull request #398: Lucene 8343 data type migration

2018-06-15 Thread alessandrobenedetti
Github user alessandrobenedetti commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/398#discussion_r195689810
  
--- Diff: 
lucene/suggest/src/java/org/apache/lucene/search/suggest/InputIterator.java ---
@@ -34,7 +34,7 @@
 public interface InputIterator extends BytesRefIterator {
 
   /** A term's weight, higher numbers mean better suggestions. */
--- End diff --

Hi Michael,
The reason to allow for null at the InputIterator level is to distinguish 
it from an explicit 0 weight.
In the DocumentDictionary this translates into differentiating when the 
weight field was missing for the original document (NULL) as opposed to 
when the weight field was present with a 0 value.
At this level we just want to ensure that the same behavior is maintained 
when we build the auxiliary index : 
i.e. if the weight field was missing for the original document, I want it 
to be null for the auxiliary index as well.
How the different suggesters implementation will use this to return a 
suggestion score, I think will depend on a case by case scenario.
Did I misunderstand anything here ?
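The null-vs-zero distinction described above can be illustrated with a small sketch. The class and method names here are hypothetical, not the actual Lucene DocumentDictionary or InputIterator API.

```java
// Hypothetical model of a suggestion entry whose weight may be absent.
public class WeightedEntry {
  final String term;
  final Long weight; // null = weight field absent in the source document

  WeightedEntry(String term, Long weight) {
    this.term = term;
    this.weight = weight;
  }

  // A document with no weight field (null) is distinct from a document
  // whose weight field was explicitly set to 0.
  boolean hasExplicitWeight() {
    return weight != null;
  }
}
```

With a boxed `Long` instead of a primitive `long`, a missing weight field survives the round trip into the auxiliary index instead of being flattened to 0.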


---




Re: Lucene/Solr 7.4

2018-06-15 Thread Simon Willnauer
this issue is fixed

On Fri, Jun 15, 2018 at 10:54 AM, Simon Willnauer
 wrote:
> our CI found a failure, I opened a blocker and attached a patch:
> https://issues.apache.org/jira/browse/LUCENE-8358
>
> On Fri, Jun 15, 2018 at 9:15 AM, Simon Willnauer
>  wrote:
>> +1 for a first RC
>>
>> On Fri, Jun 15, 2018 at 9:08 AM, Adrien Grand  wrote:
>>> It looks like blockers are all resolved, please let me know if I am missing
>>> something. I will build a first RC on Monday.
>>>
>>> Le jeu. 14 juin 2018 à 15:02, Alan Woodward  a écrit :

 LUCENE-8357 is in.

 On 14 Jun 2018, at 09:27, Adrien Grand  wrote:

 +1

 Le jeu. 14 juin 2018 à 10:02, Alan Woodward  a écrit
 :
>
> Hi Adrien,
>
> If possible I’d like to get LUCENE-8357 in, which fixes a regression in
> Explanations for Solr’s boost queries.
>
> Alan
>
>
> On 13 Jun 2018, at 20:42, Adrien Grand  wrote:
>
> It is. In general I trust your judgement to only backport low-risk fixes.
>
> Le mer. 13 juin 2018 à 21:03, Steve Rowe  a écrit :
>>
>> Crap, I forgot to ask for inclusion of SOLR-12481, a doc-only fix
>> related to SOLR-12434.  I’m going to assume that it’s okay to backport as
>> well, I’ll go do that now.  Sorry for the churn.
>>
>> --
>> Steve
>> www.lucidworks.com
>>
>> > On Jun 13, 2018, at 1:49 PM, Steve Rowe  wrote:
>> >
>> > Thanks, done.
>> >
>> > --
>> > Steve
>> > www.lucidworks.com
>> >
>> >> On Jun 13, 2018, at 1:08 PM, Adrien Grand  wrote:
>> >>
>> >> OK to backport SOLR-12434.
>> >>
>> >> Le mer. 13 juin 2018 à 18:41, Steve Rowe  a écrit :
>> >> Adrien,
>> >>
>> >> Are you okay with backporting SOLR-12434 to 7.4?
>> >>
>> >> --
>> >> Steve
>> >> www.lucidworks.com
>> >>
>> >>> On Jun 12, 2018, at 9:37 AM, Steve Rowe  wrote:
>> >>>
>> >>> Done, thanks Adrien.
>> >>>
>> >>> --
>> >>> Steve
>> >>> www.lucidworks.com
>> >>>
>>  On Jun 12, 2018, at 9:23 AM, Adrien Grand 
>>  wrote:
>> 
>>  +1 to backport LUCENE-8278 to 7.4. Thanks Steve.
>> 
>>  Le lun. 11 juin 2018 à 23:21, Steve Rowe  a écrit
>>  :
>>  Adrien,
>> 
>>  Are you okay with including the fix on LUCENE-8278 in 7.4?
>> 
>>  --
>>  Steve
>>  www.lucidworks.com
>> 
>> > On Jun 11, 2018, at 11:24 AM, Adrien Grand 
>> > wrote:
>> >
>> > No worries Uwe, we'll wait. Enjoy Buzzwords!
>> >
>> > Le lun. 11 juin 2018 à 17:08, Uwe Schindler  a
>> > écrit :
>> > I still have this new security issue and to fix it finally
>> > everywhere, it requires API changes. So please wait, I am working 
>> > but
>> > buzzwords is so interesting! 🤯
>> >
>> > Uwe
>> >
>> >
>> > Am June 11, 2018 2:45:54 PM UTC schrieb David Smiley
>> > :
>> > It'd be nice to get in this bug
>> > https://issues.apache.org/jira/browse/LUCENE-8344 but is pending a 
>> > review.
>> >
>> > On Tue, Jun 5, 2018 at 4:24 AM Adrien Grand 
>> > wrote:
>> > Hi all,
>> >
>> > We released 7.3 two months ago on April 4th and we accumulated
>> > quite a number of features, enhancements and fixes that are not 
>> > released
>> > yet, so I'd like to start working on releasing Lucene/Solr 7.4.0.
>> >
>> > I propose to create the 7.4 branch later this week and build the
>> > first RC early next week if that works for everyone. Please let me 
>> > know if
>> > there are bug fixes that we think should make it to 7.4 and might
>> > not be
>> > ready by then.
>> >
>> > Adrien
>> > --
>> > Lucene/Solr Search Committer, Consultant, Developer, Author,
>> > Speaker
>> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> > http://www.solrenterprisesearchserver.com
>> >
>> > --
>> > Uwe Schindler
>> > Achterdiek 19, 28357 Bremen
>> > https://www.thetaphi.de
>> 
>> 
>> 
>>  -
>>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>  For additional commands, e-mail: dev-h...@lucene.apache.org
>> 
>> >>>
>> >>
>> >
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>

>>>


[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513579#comment-16513579
 ] 

Adrien Grand commented on LUCENE-8358:
--

+1 Let's put a comment to explain why you take a snapshot of the ram usage 
before pushing?

> Asserting trips when IW tries to free ram by writing DV updates
> ---
>
> Key: LUCENE-8358
> URL: https://issues.apache.org/jira/browse/LUCENE-8358
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0), 7.5
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 7.4, master (8.0), 7.5
>
> Attachments: LUCENE-8358.patch, LUCENE-8358.patch, LUCENE-8358.patch
>
>
> This assertion is pretty new. I think we need to relax it since there are 
> chances that this situation is valid, i.e. if a ReadersAndUpdates instance gets 
> concurrently written and readerpooling is off. That is just fine since this 
> is best effort anyway. I will attach a patch.
> {noformat}
> 07:35:14[junit4] Suite: org.apache.lucene.index.TestBinaryDocValuesUpdates
> 07:35:14[junit4] IGNOR/A 0.01s J0 | 
> TestBinaryDocValuesUpdates.testTonsOfUpdates
> 07:35:14[junit4]> Assumption #1: 'nightly' test group is disabled 
> (@Nightly())
> 07:35:14[junit4]   1> TEST: isNRT=false 
> reader1=StandardDirectoryReader(segments_1:4 _0(7.5.0):c2)
> 07:35:14[junit4]   1> TEST: now reopen
> 07:35:14[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestBinaryDocValuesUpdates -Dtests.method=testUpdatesAreFlushed 
> -Dtests.seed=B8D5250C8CAA9010 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=en-IE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF8
> 07:35:14[junit4] FAILURE 0.04s J0 | 
> TestBinaryDocValuesUpdates.testUpdatesAreFlushed <<<
> 07:35:14[junit4]> Throwable #1: java.lang.AssertionError: Segment 
> [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:613)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:298)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2594)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5064)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.updateBinaryDocValue(IndexWriter.java:1742)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.TestBinaryDocValuesUpdates.testUpdatesAreFlushed(TestBinaryDocValuesUpdates.java:100)
> 07:35:14[junit4]> at java.lang.Thread.run(Thread.java:748)
> 07:35:14[junit4]   2> Jun 15, 2018 1:35:14 AM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> 07:35:14[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[Lucene Merge Thread #1,5,TGRP-TestBinaryDocValuesUpdates]
> 07:35:14[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalStateException: this writer hit an unrecoverable error; 
> cannot merge
> 07:35:14[junit4]   2> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010]:0)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:704)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:684)
> 07:35:14[junit4]   2> Caused by: java.lang.IllegalStateException: this 
> writer hit an unrecoverable error; cannot merge
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4222)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4202)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4054)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662)
> 07:35:14[junit4]   2> Caused by: java.lang.AssertionError: Segment 
> [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]   2> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:613)
> 07:35:14[junit4]   2

[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513575#comment-16513575
 ] 

Simon Willnauer commented on LUCENE-8358:
-

[~jpountz] I think I fixed it now. it was broken before as well since that lock 
isn't protecting us from concurrent modifications to that value.
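The fix discussed here — reading a concurrently-updated value once into a local snapshot rather than re-reading it — can be sketched in isolation. This is a minimal illustration of the pattern, with hypothetical names; it is not the actual IndexWriter code.

```java
// Hypothetical tracker whose byte count is updated by other threads.
public class RamTracker {
  private volatile long ramBytesUsed;

  synchronized void update(long delta) {
    ramBytesUsed += delta; // read-modify-write guarded by the monitor
  }

  boolean overBudget(long budget) {
    // Single read: the comparison (and anything logged afterwards) uses one
    // consistent value even if another thread updates ramBytesUsed meanwhile.
    final long snapshot = ramBytesUsed;
    return snapshot > budget;
  }
}
```

Without the local snapshot, two reads of `ramBytesUsed` in the same check could observe different values, which is the kind of race the patch guards against.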


> Asserting trips when IW tries to free ram by writing DV updates
> ---
>
> Key: LUCENE-8358
> URL: https://issues.apache.org/jira/browse/LUCENE-8358
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0), 7.5
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 7.4, master (8.0), 7.5
>
> Attachments: LUCENE-8358.patch, LUCENE-8358.patch, LUCENE-8358.patch
>
>
> This assertion is pretty new. I think we need to relax it since there are 
> chances that this situation is valid, i.e. if a ReadersAndUpdates instance gets 
> concurrently written and readerpooling is off. That is just fine since this 
> is best effort anyway. I will attach a patch.
> {noformat}
> 07:35:14[junit4] Suite: org.apache.lucene.index.TestBinaryDocValuesUpdates
> 07:35:14[junit4] IGNOR/A 0.01s J0 | 
> TestBinaryDocValuesUpdates.testTonsOfUpdates
> 07:35:14[junit4]> Assumption #1: 'nightly' test group is disabled 
> (@Nightly())
> 07:35:14[junit4]   1> TEST: isNRT=false 
> reader1=StandardDirectoryReader(segments_1:4 _0(7.5.0):c2)
> 07:35:14[junit4]   1> TEST: now reopen
> 07:35:14[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestBinaryDocValuesUpdates -Dtests.method=testUpdatesAreFlushed 
> -Dtests.seed=B8D5250C8CAA9010 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=en-IE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF8
> 07:35:14[junit4] FAILURE 0.04s J0 | 
> TestBinaryDocValuesUpdates.testUpdatesAreFlushed <<<
> 07:35:14[junit4]> Throwable #1: java.lang.AssertionError: Segment 
> [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:613)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:298)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2594)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5064)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.IndexWriter.updateBinaryDocValue(IndexWriter.java:1742)
> 07:35:14[junit4]> at 
> org.apache.lucene.index.TestBinaryDocValuesUpdates.testUpdatesAreFlushed(TestBinaryDocValuesUpdates.java:100)
> 07:35:14[junit4]> at java.lang.Thread.run(Thread.java:748)
> 07:35:14[junit4]   2> Jun 15, 2018 1:35:14 AM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> 07:35:14[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[Lucene Merge Thread #1,5,TGRP-TestBinaryDocValuesUpdates]
> 07:35:14[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.IllegalStateException: this writer hit an unrecoverable error; 
> cannot merge
> 07:35:14[junit4]   2> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010]:0)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:704)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:684)
> 07:35:14[junit4]   2> Caused by: java.lang.IllegalStateException: this 
> writer hit an unrecoverable error; cannot merge
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4222)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4202)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4054)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662)
> 07:35:14[junit4]   2> Caused by: java.lang.AssertionError: Segment 
> [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]   2> at 
> __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]   2> at 
> org.apache.lucene.index.IndexWriter.writeSomeDocV
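
The race Simon describes above — a value read under one lock while writers modify it without taking that lock — is typically fixed by making the value itself thread-safe rather than by locking only on the read side. A minimal sketch under that assumption (class and method names here are illustrative, not Lucene's actual IndexWriter internals):

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a field updated by one thread without a lock is not
// protected by a different thread reading it under some other lock. Either
// both sides must synchronize on the same monitor, or the field must itself
// be thread-safe, e.g. an AtomicLong.
class RamAccountingSketch {
  private final AtomicLong ramBytesUsed = new AtomicLong();

  // Writers (e.g. indexing threads) update the counter atomically.
  void addBytes(long delta) {
    ramBytesUsed.addAndGet(delta);
  }

  // A reader deciding whether to flush doc-values updates sees a
  // consistent value without needing any external lock.
  boolean shouldWriteUpdates(long budgetBytes) {
    return ramBytesUsed.get() > budgetBytes;
  }
}
```

With an `AtomicLong`, the check is best-effort under concurrency (the value may change immediately after the read), which matches the "best-effort" framing in the issue description.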

[jira] [Updated] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8358:

Attachment: LUCENE-8358.patch


[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513567#comment-16513567
 ] 

Adrien Grand commented on LUCENE-8358:
--

Sorry I just realized my comment about locking does not apply. +1 to the patch


[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513563#comment-16513563
 ] 

Simon Willnauer commented on LUCENE-8358:
-

[~jpountz] I attached a new patch

[jira] [Updated] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8358:

Attachment: LUCENE-8358.patch


[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513551#comment-16513551
 ] 

Adrien Grand commented on LUCENE-8358:
--

The change makes sense to me. Should we sort outside of the synchronized block 
in ReaderPool and use a more meaningful variable name than `list`? Otherwise +1.
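
Adrien's suggestion — copy the collection while holding the monitor, then sort outside the synchronized block so the lock is not held during the (potentially expensive) sort — can be sketched as follows. The class and field names are hypothetical, not Lucene's actual ReaderPool internals:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch: snapshot per-segment RAM usage under the lock,
// sort the snapshot after releasing it.
class ReaderPoolSketch {
  private final Map<String, Long> ramBytesPerSegment = new HashMap<>();

  synchronized void track(String segmentName, long ramBytes) {
    ramBytesPerSegment.put(segmentName, ramBytes);
  }

  List<Map.Entry<String, Long>> segmentsByRamUsage() {
    final List<Map.Entry<String, Long>> bySize;
    synchronized (this) {
      // Only the copy happens under the lock...
      bySize = new ArrayList<>(ramBytesPerSegment.entrySet());
    }
    // ...the sort runs without holding it, biggest consumers first.
    bySize.sort(Comparator
        .comparingLong((Map.Entry<String, Long> e) -> e.getValue())
        .reversed());
    return bySize;
  }
}
```

The snapshot may be slightly stale by the time it is sorted, which is acceptable here since choosing which readers to write first is a best-effort heuristic anyway.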


Re: Lucene/Solr 7.4

2018-06-15 Thread Simon Willnauer
Our CI found a failure; I opened a blocker and attached a patch:
https://issues.apache.org/jira/browse/LUCENE-8358

On Fri, Jun 15, 2018 at 9:15 AM, Simon Willnauer
 wrote:
> +1 for a first RC
>
> On Fri, Jun 15, 2018 at 9:08 AM, Adrien Grand  wrote:
>> It looks like blockers are all resolved, please let me know if I am missing
>> something. I will build a first RC on Monday.
>>
>> Le jeu. 14 juin 2018 à 15:02, Alan Woodward  a écrit :
>>>
>>> LUCENE-8357 is in.
>>>
>>> On 14 Jun 2018, at 09:27, Adrien Grand  wrote:
>>>
>>> +1
>>>
>>> Le jeu. 14 juin 2018 à 10:02, Alan Woodward  a écrit
>>> :

 Hi Adrien,

 If possible I’d like to get LUCENE-8357 in, which fixes a regression in
 Explanations for Solr’s boost queries.

 Alan


 On 13 Jun 2018, at 20:42, Adrien Grand  wrote:

 It is. In general I trust your judgement to only backport low-risk fixes.

 Le mer. 13 juin 2018 à 21:03, Steve Rowe  a écrit :
>
> Crap, I forgot to ask for inclusion of SOLR-12481, a doc-only fix
> related to SOLR-12434.  I’m going to assume that it’s okay to backport as
> well, I’ll go do that now.  Sorry for the churn.
>
> --
> Steve
> www.lucidworks.com
>
> > On Jun 13, 2018, at 1:49 PM, Steve Rowe  wrote:
> >
> > Thanks, done.
> >
> > --
> > Steve
> > www.lucidworks.com
> >
> >> On Jun 13, 2018, at 1:08 PM, Adrien Grand  wrote:
> >>
> >> OK to backport SOLR-12434.
> >>
> >> Le mer. 13 juin 2018 à 18:41, Steve Rowe  a écrit :
> >> Adrien,
> >>
> >> Are you okay with backporting SOLR-12434 to 7.4?
> >>
> >> --
> >> Steve
> >> www.lucidworks.com
> >>
> >>> On Jun 12, 2018, at 9:37 AM, Steve Rowe  wrote:
> >>>
> >>> Done, thanks Adrien.
> >>>
> >>> --
> >>> Steve
> >>> www.lucidworks.com
> >>>
>  On Jun 12, 2018, at 9:23 AM, Adrien Grand 
>  wrote:
> 
>  +1 to backport LUCENE-8278 to 7.4. Thanks Steve.
> 
>  Le lun. 11 juin 2018 à 23:21, Steve Rowe  a écrit
>  :
>  Adrien,
> 
>  Are you okay with including the fix on LUCENE-8278 in 7.4?
> 
>  --
>  Steve
>  www.lucidworks.com
> 
> > On Jun 11, 2018, at 11:24 AM, Adrien Grand 
> > wrote:
> >
> > No worries Uwe, we'll wait. Enjoy Buzzwords!
> >
> > Le lun. 11 juin 2018 à 17:08, Uwe Schindler  a
> > écrit :
> > I still have this new security issue, and fixing it everywhere finally
> > requires API changes. So please wait, I am working on it, but
> > Buzzwords is so interesting! 🤯
> >
> > Uwe
> >
> >
> > Am June 11, 2018 2:45:54 PM UTC schrieb David Smiley
> > :
> > It'd be nice to get in this bug
> > https://issues.apache.org/jira/browse/LUCENE-8344 but is pending a 
> > review.
> >
> > On Tue, Jun 5, 2018 at 4:24 AM Adrien Grand 
> > wrote:
> > Hi all,
> >
> > We released 7.3 two months ago on April 4th and we accumulated
> > quite a number of features, enhancements and fixes that are not 
> > released
> > yet, so I'd like to start working on releasing Lucene/Solr 7.4.0.
> >
> > I propose to create the 7.4 branch later this week and build the
> > first RC early next week if that works for everyone. Please let me 
> > know if
> > that are bug fixes that we think should make it to 7.4 and might 
> > not be
> > ready by then.
> >
> > Adrien
> > --
> > Lucene/Solr Search Committer, Consultant, Developer, Author,
> > Speaker
> > LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> > http://www.solrenterprisesearchserver.com
> >
> > --
> > Uwe Schindler
> > Achterdiek 19, 28357 Bremen
> > https://www.thetaphi.de
> 
> 
> 
>  -
>  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>  For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> >>>
> >>
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

>>>
>>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7359 - Still Unstable!

2018-06-15 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7359/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180615054621005, index.20180615054621535, index.properties, replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180615054621005, index.20180615054621535, index.properties, replication.properties, snapshot_metadata]
  at __randomizedtesting.SeedInfo.seed([B27FC4C88533A90E:69D4C40E801BC0BD]:0)
  at org.junit.Assert.fail(Assert.java:93)
  at org.junit.Assert.assertTrue(Assert.java:43)
  at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
  at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
  at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:564)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
  at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
  at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
  at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
  at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
  at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  at com.carrotsearch.randomizedtesting.ThreadLeakControl$Stateme

[jira] [Commented] (LUCENE-8358) Asserting trips when IW tries to free ram by writing DV updates

2018-06-15 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513544#comment-16513544
 ] 

Simon Willnauer commented on LUCENE-8358:
-

I also fixed `ReaderPool#getReadersByRam`, which could create a PriorityQueue 
with a size of 0. I simplified it into a sorted list whose order is defined 
under the lock. I think that's cleaner.
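A minimal sketch of the approach described in the comment, assuming illustrative names (`ReadersByRamSketch`, `RamHolder`, and this `getReadersByRam` are hypothetical stand-ins, not Lucene's actual `ReaderPool` internals): snapshotting the pooled readers under the lock and sorting the copy sidesteps the PriorityQueue edge case where the pool is empty.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ReadersByRamSketch {
  // Illustrative stand-in for a pooled reader tracking its RAM usage.
  static final class RamHolder {
    final String name;
    final long ramBytesUsed;
    RamHolder(String name, long ramBytesUsed) {
      this.name = name;
      this.ramBytesUsed = ramBytesUsed;
    }
  }

  private final Object lock = new Object();
  private final List<RamHolder> readers = new ArrayList<>();

  void add(String name, long ramBytesUsed) {
    synchronized (lock) {
      readers.add(new RamHolder(name, ramBytesUsed));
    }
  }

  // Returns readers ordered by descending RAM usage. The snapshot and sort
  // happen under the lock, so the order is fixed at that point, and an empty
  // pool simply yields an empty list (a PriorityQueue of capacity 0 would throw).
  List<RamHolder> getReadersByRam() {
    synchronized (lock) {
      List<RamHolder> copy = new ArrayList<>(readers);
      copy.sort(Comparator.comparingLong((RamHolder r) -> r.ramBytesUsed).reversed());
      return copy;
    }
  }

  public static void main(String[] args) {
    ReadersByRamSketch pool = new ReadersByRamSketch();
    System.out.println(pool.getReadersByRam().size()); // empty pool: no exception
    pool.add("_0", 100);
    pool.add("_1", 300);
    pool.add("_2", 200);
    for (RamHolder r : pool.getReadersByRam()) {
      System.out.println(r.name + ":" + r.ramBytesUsed);
    }
  }
}
```

The sorted-copy form also avoids sizing a heap up front, which is where the size-0 constructor failure came from.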

> Asserting trips when IW tries to free ram by writing DV updates
> ---
>
> Key: LUCENE-8358
> URL: https://issues.apache.org/jira/browse/LUCENE-8358
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0), 7.5
>Reporter: Simon Willnauer
>Priority: Blocker
> Fix For: 7.4, master (8.0), 7.5
>
> Attachments: LUCENE-8358.patch
>
>
> This assertion is pretty new; I think we need to relax it, since there are 
> chances that this situation is valid, i.e. if a ReadersAndUpdates instance gets 
> concurrently written and reader pooling is off. That is just fine since this 
> is best effort anyway. I will attach a patch.
> {noformat}
> 07:35:14[junit4] Suite: org.apache.lucene.index.TestBinaryDocValuesUpdates
> 07:35:14[junit4] IGNOR/A 0.01s J0 | TestBinaryDocValuesUpdates.testTonsOfUpdates
> 07:35:14[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
> 07:35:14[junit4]   1> TEST: isNRT=false reader1=StandardDirectoryReader(segments_1:4 _0(7.5.0):c2)
> 07:35:14[junit4]   1> TEST: now reopen
> 07:35:14[junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestBinaryDocValuesUpdates -Dtests.method=testUpdatesAreFlushed -Dtests.seed=B8D5250C8CAA9010 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-IE -Dtests.timezone=SystemV/CST6 -Dtests.asserts=true -Dtests.file.encoding=UTF8
> 07:35:14[junit4] FAILURE 0.04s J0 | TestBinaryDocValuesUpdates.testUpdatesAreFlushed <<<
> 07:35:14[junit4]> Throwable #1: java.lang.AssertionError: Segment [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]> at __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]> at org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:613)
> 07:35:14[junit4]> at org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:298)
> 07:35:14[junit4]> at org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2594)
> 07:35:14[junit4]> at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5064)
> 07:35:14[junit4]> at org.apache.lucene.index.IndexWriter.updateBinaryDocValue(IndexWriter.java:1742)
> 07:35:14[junit4]> at org.apache.lucene.index.TestBinaryDocValuesUpdates.testUpdatesAreFlushed(TestBinaryDocValuesUpdates.java:100)
> 07:35:14[junit4]> at java.lang.Thread.run(Thread.java:748)
> 07:35:14[junit4]   2> Jun 15, 2018 1:35:14 AM com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler uncaughtException
> 07:35:14[junit4]   2> WARNING: Uncaught exception in thread: Thread[Lucene Merge Thread #1,5,TGRP-TestBinaryDocValuesUpdates]
> 07:35:14[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot merge
> 07:35:14[junit4]   2> at __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010]:0)
> 07:35:14[junit4]   2> at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:704)
> 07:35:14[junit4]   2> at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:684)
> 07:35:14[junit4]   2> Caused by: java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot merge
> 07:35:14[junit4]   2> at org.apache.lucene.index.IndexWriter._mergeInit(IndexWriter.java:4222)
> 07:35:14[junit4]   2> at org.apache.lucene.index.IndexWriter.mergeInit(IndexWriter.java:4202)
> 07:35:14[junit4]   2> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4054)
> 07:35:14[junit4]   2> at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:625)
> 07:35:14[junit4]   2> at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:662)
> 07:35:14[junit4]   2> Caused by: java.lang.AssertionError: Segment [_2(7.5.0):c1] is not dropped yet
> 07:35:14[junit4]   2> at __randomizedtesting.SeedInfo.seed([B8D5250C8CAA9010:8228ECC925943F29]:0)
> 07:35:14[junit4]   2> at org.apache.lucene.index.IndexWriter.writeS
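The relaxation described in the issue can be sketched as a simple predicate: only enforce the "segment is dropped" invariant when reader pooling is on, because without pooling a ReadersAndUpdates instance may legitimately be written concurrently and the write is best effort anyway. This is a hedged illustration under stated assumptions; the flag and method names are hypothetical, not IndexWriter's actual fields.

```java
public class RelaxedSegmentAssertSketch {
  // With reader pooling off, a segment observed as "not dropped yet" is a
  // valid transient state (concurrent best-effort write); only assert the
  // drop when pooling is on.
  static boolean segmentStateOk(boolean readerPooling, boolean segmentDropped) {
    return !readerPooling || segmentDropped;
  }

  public static void main(String[] args) {
    // pooling off: the concurrent-write state is tolerated
    assert segmentStateOk(false, false);
    // pooling on: the segment must already be dropped
    assert segmentStateOk(true, true);
    // pooling on without the drop is the case that tripped the test
    assert !segmentStateOk(true, false);
    System.out.println("ok");
  }
}
```

Run with `java -ea RelaxedSegmentAssertSketch` so the assertions are enabled.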
