Re: Is searching on docValues=true indexed=false fields trappy

2017-08-28 Thread Varun Thacker
Hi Erick,

I don't think all Solr queries support indexed=false and docValues=true.
For example SOLR-11190 added such support to graph queries recently.

I added this to the ref guide under the graph query parser section (I
noticed a typo there which I'll fix):

+The supported fieldTypes are point fields with docValues enabled or string
fields with indexed=true or docValues=true.
+For string fields which are indexed=false and docValues=true, please refer
to the javadocs for `DocValuesTermsQuery`
+for its performance characteristics; indexed=true will perform better
for most use-cases.
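For readers comparing the options, a minimal schema.xml sketch (field names are hypothetical, for illustration only) of the two configurations being discussed:

```xml
<!-- Hypothetical schema.xml fragments, not from the ref guide. -->

<!-- Preferred for searching and graph traversal: indexed string field. -->
<field name="edge_id" type="string" indexed="true" docValues="true"/>

<!-- Searchable, but lookups fall back to DocValuesTermsQuery, which amounts
     to a linear scan per segment unless intersected with a more selective
     clause, so it is usually far slower. -->
<field name="edge_id_dv_only" type="string" indexed="false" docValues="true"/>
```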

Maybe your wording conveys the message better?

On Tue, Aug 29, 2017 at 2:08 AM, Erick Erickson 
wrote:

> bq: What do you mean by 'be in the JVM'?
>
> I wasn't sure whether a more efficient search structure would be built
> in the JVM or not, i.e. building an inverted structure out of docValues
> there. But you're saying it's not; it's a linear scan of the uninverted
> structure out in the OS's memory.
>
> It would have been quite ironic if we started seeing a message like
> "inverting docValues field for searching" in the logs. Symmetrical to
> the background for docValues I'll admit... ;)
>
> Thanks for the confirmation.
>
> That leaves whether this is reasonable behavior or not. It feels like
> a documentation issue, something like
>
> 'While searching on fields having docValues="true", indexed="false" is
> possible, it is orders of magnitude slower than searching on fields
> with indexed="true". We _strongly_ recommend that any field that is
> used for searching be configured with indexed="true" '
>
> That's assuming that just disallowing searching on dv=true,
> indexed=false fields is not an option.
>
> WDYT?
>
> Erick
>
> On Mon, Aug 28, 2017 at 12:55 PM, Adrien Grand  wrote:
> > Indeed this will be a linear scan if it is not intersected with a
> selective
> > query, which is quite trappy.
> >
> > What do you mean by 'be in the JVM'?
> >
> > On Mon, Aug 28, 2017 at 21:49, Erick Erickson wrote:
> >>
> >> You can search on fields with DV=true and indexed=false. But IIUC this
> >> is a "table scan". Does it really make sense to support this?
> >>
> >> NOTE: Haven't checked the code, but even if we build an efficient
> >> structure, it would be in the JVM, correct?
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >
>
>
>


Re: Release 7.0 process starts

2017-08-28 Thread Varun Thacker
I don't think holding up the release process indefinitely until we stabilize
all the tests is an option. On the other hand, getting an RC to build is
pretty difficult (I am facing the same problem with 6.6.1) and I am sure
people will run into this while voting on the release.

We could identify the top two or three tests which fail regularly while
building the RC and either disable them or see if someone volunteers to fix
them?

On Tue, Aug 29, 2017 at 2:53 AM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> > Those flaky Solr tests are annoying since people will also run into
> failures when
> > checking the RC? Should we disable these tests on the 7.0 branch so that
> building
> > and verifying this RC isn't annoying to everybody working on this
> release?
>
> +1. If it is hampering the release process, I think we should either not
> release without fixing them, or disable them for release (building,
> verifying).
>
> On Mon, Aug 28, 2017 at 11:47 PM, Anshum Gupta  wrote:
>
>> Though those failing tests are annoying, I would not recommend disabling
>> them. We can manually ignore those failures when we are testing
>> things out, though.
>>
>> -Anshum
>>
>>
>>
>> On Aug 28, 2017, at 11:10 AM, Adrien Grand  wrote:
>>
>> Those flaky Solr tests are annoying since people will also run into
>> failures when checking the RC? Should we disable these tests on the 7.0
>> branch so that building and verifying this RC isn't annoying to everybody
>> working on this release?
>>
>> On Mon, Aug 28, 2017 at 19:23, Anshum Gupta wrote:
>>
>>> Thanks Adrien! It worked with a fresh clone, at least ant check-licenses
>>> worked, so I’m assuming the RC creation would work too.
>>> I’m running that, and it might take a couple of hours for me to create
>>> one, as a few SolrCloud tests are still a little flaky and fail
>>> occasionally.
>>>
>>> -Anshum
>>>
>>>
>>>
>>> On Aug 28, 2017, at 10:13 AM, Anshum Gupta  wrote:
>>>
>>> Adrien,
>>>
>>> Yes, ant check-licenses fails with the same error, and so does ant
>>> validate (from the root dir). This is after running ant clean -f.
>>>
>>> BUILD FAILED
>>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
>>> occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
>>> error occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62:
>>> JAR resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>>>
>>> I didn’t realize that the dependency was upgraded, and what confuses me
>>> is that the file actually exists.
>>>
>>> anshum$ ls analysis/icu/lib/icu4j-5
>>> icu4j-56.1.jar  icu4j-59.1.jar
>>>
>>> It seems like it’s something that git clean, ant clean clean-jars etc.
>>> didn’t fix. This is really surprising, but I’ll try checking out again
>>> and creating an RC (after checking the dependencies).
>>> I think ant should be responsible for cleaning this up, not git, so
>>> there’s something off there.
>>>
>>> -Anshum
>>>
>>>
>>>
>>> On Aug 28, 2017, at 8:51 AM, Adrien Grand  wrote:
>>>
>>> You mentioned you tried to run the script multiple times. Have you run
>>> git clean at some point? Maybe this is due to a stale working copy?
>>>
>>> On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
>>>
 Hi Anshum,

 Does running ant check-licenses from the Lucene directory fail as well?
 The error message that you are getting looks weird to me since Lucene 7.0
 depends on ICU 59.1, not 56.1 since https://issues.apache.org/jira
 /browse/LUCENE-7540.

 On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:

> A quick question, in case someone has an idea around what’s going on.
> When I run the following command:
>
> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local
> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
>
> I end up with the following error:
>
> BUILD FAILED
> /Users/anshum/workspace/lucene-solr/build.xml:117: The following
> error occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The
> following error occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62:
> JAR resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>
> Any idea as to what’s going on? This generally fails after the tests
> have run and the script has been running for about 45 minutes, and it’s
> consistent, i.e. every time the tests pass, the process fails with this
> error.
>
> I can also confirm that this file exists at
> lucene/analysis/icu/lib/icu4j-56.1.jar .
>
> Has anyone else seen this when working on the release?
>
> -Anshum
>
>
>
> 

[jira] [Commented] (LUCENE-7944) GPG Password checker throws error only when process has not been terminated

2017-08-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144766#comment-16144766
 ] 

Varun Thacker commented on LUCENE-7944:
---

Hi Anshum,

Did you also run into the same issue which prompted the change to all the 
remaining branches? 

Curious how others didn't run into this previously. 

> GPG Password checker throws error only when process has not been terminated
> ---
>
> Key: LUCENE-7944
> URL: https://issues.apache.org/jira/browse/LUCENE-7944
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Anshum Gupta
>
> GPG Password checker throws error only when process has not been terminated. 
> This is just a tracker JIRA for the commits.
> Here are the commits:
> master: 47e7fbc4dcf73486a58138b110ffa3b5191d651a
> branch_7x: d2216a66cfa0a962fb716a272959bef4894d121a
> branch_7_0: 5f7c00ed09fede00379a15eab3588e2134778994
> branch_6_6: 6dd958ae833f335859b1f973e85c63ece895e997 
> https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6dd958a
> *NOTE:* This is yet to be committed to 6x
> cc:  [~varunthacker]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)




[JENKINS-EA] Lucene-Solr-master-Windows (32bit/jdk-9-ea+181) - Build # 6854 - Still Failing!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6854/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseConcMarkSweepGC --illegal-access=deny

4 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter

Error Message:
Test requires that log-level is at-least INFO, but INFO is disabled

Stack Trace:
java.lang.AssertionError: Test requires that log-level is at-least INFO, but 
INFO is disabled
at 
__randomizedtesting.SeedInfo.seed([A4A19F29EC43F659:FB45B21E874F651C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.ensureLoggingConfiguredAppropriately(SolrSlf4jReporterTest.java:99)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:49)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

Error Message:

[jira] [Commented] (LUCENE-7942) For Geo3d paths, aggregating distance values using "+" is not adequate for squared distances

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144755#comment-16144755
 ] 

ASF subversion and git services commented on LUCENE-7942:
-

Commit 97562e801d89d004561fe475ccb98e87ccc8bb77 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=97562e8 ]

LUCENE-7942: Explicitly require conversion to 'aggregation' form before 
aggregating distances, plus require a conversion back.  This is more efficient 
than my initial commit for this ticket, since sqrt values will be cached for 
path segments, and will not need to be recomputed.


> For Geo3d paths, aggregating distance values using "+" is not adequate for 
> squared distances
> 
>
> Key: LUCENE-7942
> URL: https://issues.apache.org/jira/browse/LUCENE-7942
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.1
>
>
> The GeoStandardPath object aggregates distances segment by segment using 
> simple addition.  For some kinds of Distance computations, though, addition 
> is not an adequate way to do this.  The xxxSquaredDistance computations, for 
> example, do not produce true squared distances but rather a distance metric 
> that is a combination of both squared and linear.
> I propose adding support in Distance for aggregation, which would allow 
> distance calculators to compute an accurate distance (at some computational 
> cost) instead.






[jira] [Commented] (LUCENE-7942) For Geo3d paths, aggregating distance values using "+" is not adequate for squared distances

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144754#comment-16144754
 ] 

ASF subversion and git services commented on LUCENE-7942:
-

Commit 4f6cfd6d50df14f9f03ff3bd6b2b3a49c00f4dc8 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4f6cfd6 ]

LUCENE-7942: Explicitly require conversion to 'aggregation' form before 
aggregating distances, plus require a conversion back.  This is more efficient 
than my initial commit for this ticket, since sqrt values will be cached for 
path segments, and will not need to be recomputed.


> For Geo3d paths, aggregating distance values using "+" is not adequate for 
> squared distances
> 
>
> Key: LUCENE-7942
> URL: https://issues.apache.org/jira/browse/LUCENE-7942
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.1
>
>
> The GeoStandardPath object aggregates distances segment by segment using 
> simple addition.  For some kinds of Distance computations, though, addition 
> is not an adequate way to do this.  The xxxSquaredDistance computations, for 
> example, do not produce true squared distances but rather a distance metric 
> that is a combination of both squared and linear.
> I propose adding support in Distance for aggregation, which would allow 
> distance calculators to compute an accurate distance (at some computational 
> cost) instead.






[jira] [Commented] (LUCENE-7942) For Geo3d paths, aggregating distance values using "+" is not adequate for squared distances

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144753#comment-16144753
 ] 

ASF subversion and git services commented on LUCENE-7942:
-

Commit c01d692baca08a7929055a4d41ad2aae7b50661d in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c01d692 ]

LUCENE-7942: Explicitly require conversion to 'aggregation' form before 
aggregating distances, plus require a conversion back.  This is more efficient 
than my initial commit for this ticket, since sqrt values will be cached for 
path segments, and will not need to be recomputed.


> For Geo3d paths, aggregating distance values using "+" is not adequate for 
> squared distances
> 
>
> Key: LUCENE-7942
> URL: https://issues.apache.org/jira/browse/LUCENE-7942
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.1
>
>
> The GeoStandardPath object aggregates distances segment by segment using 
> simple addition.  For some kinds of Distance computations, though, addition 
> is not an adequate way to do this.  The xxxSquaredDistance computations, for 
> example, do not produce true squared distances but rather a distance metric 
> that is a combination of both squared and linear.
> I propose adding support in Distance for aggregation, which would allow 
> distance calculators to compute an accurate distance (at some computational 
> cost) instead.






[jira] [Commented] (SOLR-11250) Add new LTR model which loads the model definition from the external resource

2017-08-28 Thread Yuki Yano (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144734#comment-16144734
 ] 

Yuki Yano commented on SOLR-11250:
--

[~shalinmangar]
Thank you for your comment!

In my understanding, {{SolrResourceLoader}} uses {{URLClassLoader}} for loading
resources from the classpath, and it is difficult (maybe impossible?) to load
resources over protocols like {{http}} or {{ftp}}.

Do you have any idea how to load remote resources such as
"http://somewhere:80/mymodel.json" with {{SolrResourceLoader}}, or should I
restrict the locations of resources to the instance directory (i.e., users
should place these resources on the classpath before starting Solr)?
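For illustration only, a hypothetical helper (not Solr API, names made up): java.net.URL can already stream any scheme its protocol handlers support (file:, http:, ftp:), which is exactly what a classpath-oriented lookup cannot do:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;

public class UriModelLoader {
    // Hypothetical helper: read a model definition from any URI whose
    // scheme java.net.URL supports (file:, http:, ftp:, ...).
    static String readModel(String uri) throws IOException {
        try (InputStream in = new URL(uri).openStream();
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        // Demonstrate with a file: URI backed by a temp file.
        Path tmp = Files.createTempFile("myModel", ".json");
        Files.write(tmp, "{\"name\":\"myModel\"}".getBytes("UTF-8"));
        System.out.println(readModel(tmp.toUri().toString()));
    }
}
```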

> Add new LTR model which loads the model definition from the external resource
> -
>
> Key: SOLR-11250
> URL: https://issues.apache.org/jira/browse/SOLR-11250
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-11250_master.patch, SOLR-11250_master_v2.patch, 
> SOLR-11250_master_v3.patch, SOLR-11250.patch
>
>
> We add a new model which contains only the location of the external model
> and loads it during initialization.
> This makes large models, which are difficult to upload to ZooKeeper,
> available.
> The new model works as a wrapper around existing models, and delegates APIs
> to them.
> We add two classes in this patch:
> * {{ExternalModel}} : a base class for models with external resources.
> * {{URIExternalModel}} : an implementation of {{ExternalModel}} which loads
> the external model from the specified URI (e.g. file:, http:, etc.).
> For example, if you have a model on the local disk at
> "file:///var/models/myModel.json", the definition of {{URIExternalModel}}
> will look like the following.
> {code}
> {
>   "class" : "org.apache.solr.ltr.model.URIExternalModel",
>   "name" : "myURIExternalModel",
>   "features" : [],
>   "params" : {
> "uri" : "file:///var/models/myModel.json"
>   }
> }
> {code}
> If you use LTR with {{model=myURIExternalModel}}, the model of 
> {{myModel.json}} will be used for scoring documents.






[jira] [Comment Edited] (SOLR-11244) Query DSL for Solr

2017-08-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144698#comment-16144698
 ] 

Uwe Schindler edited comment on SOLR-11244 at 8/29/17 3:50 AM:
---

bq. The general downside to using local params as the convergence layer 
(besides double-parsing) is the loss of type info (e.g. was a parameter an 
integer, string, or map).

And error handling if unknown params are given. Currently everything is parsed
into local params, but the later query parsers cannot easily bail out on
invalid key names in the JSON. I know we could do the same as with analyzer
factories (removing the entries from the local params map), but that's not
easy to implement the current way. With a JSON parser for every query parser
this would be easier.

Nevertheless, I like the approach, although it double parses! Are we sure that 
we have no escaping problems anywhere with special characters?


was (Author: thetaphi):
bq. The general downside to using local params as the convergence layer 
(besides double-parsing) is the loss of type info (e.g. was a parameter an 
integer, string, or map).

And error handling if unknown params are given. Currently everything is arsed 
to local params, but the later query parsers cannot easily bail out on invalid 
key names in the JSON. I know we could do the same like with analyzer factories 
(removing the entries from the local params map), but thats not eay to 
implement in the current way. With having a JSON parser for every query parser 
this is easier.

Nevertheless, I like the approach, although it double parses! Are we sure that 
we have no escaping problems anywhere with special characters?

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, SOLR-11244.patch, SOLR-11244.patch, 
> Solr Query DSL - examples.html
>
>
> It would be great if Solr had a powerful query DSL. This ticket is an
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : {
> "lucene" : {
> "df" : "content",
> "query" : "solr lucene"
> }
> }
> }
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : "content:(solr lucene)"
> }
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> { 
> "query" : {
> "boost" : {
> "query" : {
> "lucene" : {
> "q.op" : "AND",
> "df" : "cat_s",
> "query" : "A"
> }
> },
> "b" : "log(popularity)"
> }
> }
> }
> {code}
> I call it a JSON Query Object (JQO), and it is defined as:
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as follows (note that the
> query argument from the local parameters is put as the value of the
> {{query}} field)
> {code}
> {
>   "dismax" : {
>  "qf" : "myfield",
>  "query" : "solr rocks"
>   }
> }
> {code}
> I will attach an HTML file containing more examples of the Query DSL.






[jira] [Commented] (SOLR-11244) Query DSL for Solr

2017-08-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144698#comment-16144698
 ] 

Uwe Schindler commented on SOLR-11244:
--

bq. The general downside to using local params as the convergence layer 
(besides double-parsing) is the loss of type info (e.g. was a parameter an 
integer, string, or map).

And error handling if unknown params are given. Currently everything is parsed
into local params, but the later query parsers cannot easily bail out on
invalid key names in the JSON. I know we could do the same as with analyzer
factories (removing the entries from the local params map), but that's not
easy to implement the current way. With a JSON parser for every query parser
this would be easier.

Nevertheless, I like the approach, although it double parses! Are we sure that 
we have no escaping problems anywhere with special characters?
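To make the type loss concrete, here is a hypothetical sketch (not from this patch) of the convergence: a typed JSON body collapses into an untyped local-params string, so a qparser can no longer tell that {{tie}} was a JSON number rather than a string:

{code}
// JSON request body: "tie" is a JSON number, "qf" is a JSON string
{ "dismax" : { "qf" : "myfield", "tie" : 0.1, "query" : "solr rocks" } }

// After convergence to local params, every value is just a string:
{!dismax qf=myfield tie=0.1}solr rocks
{code}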

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, SOLR-11244.patch, SOLR-11244.patch, 
> Solr Query DSL - examples.html
>
>
> It would be great if Solr had a powerful query DSL. This ticket is an
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : {
> "lucene" : {
> "df" : "content",
> "query" : "solr lucene"
> }
> }
> }
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : "content:(solr lucene)"
> }
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> { 
> "query" : {
> "boost" : {
> "query" : {
> "lucene" : {
> "q.op" : "AND",
> "df" : "cat_s",
> "query" : "A"
> }
> },
> "b" : "log(popularity)"
> }
> }
> }
> {code}
> I call it a JSON Query Object (JQO), and it is defined as:
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : ,
>  "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as follows (note that the
> query argument from the local parameters is put as the value of the
> {{query}} field)
> {code}
> {
>   "dismax" : {
>  "qf" : "myfield",
>  "query" : "solr rocks"
>   }
> }
> {code}
> I will attach an HTML file containing more examples of the Query DSL.






[jira] [Created] (SOLR-11295) JSON Qparser

2017-08-28 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-11295:
---

 Summary: JSON Qparser
 Key: SOLR-11295
 URL: https://issues.apache.org/jira/browse/SOLR-11295
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


SOLR-11244 makes existing qparsers accessible to the JSON Request API.
We should also make a QParser that serves as an entry point into that syntax.

{code}
fq={!json}{bool:{must:["query1","query2"]}}
{code}






[jira] [Commented] (SOLR-11244) Query DSL for Solr

2017-08-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16144696#comment-16144696
 ] 

Yonik Seeley commented on SOLR-11244:
-

The general downside to using local params as the convergence layer (besides 
double-parsing) is the loss of type info (e.g. was a parameter an integer, 
string, or map).
Of course this is really only an issue for future qparsers that would want to 
take advantage of the extra type info that JSON provides. The important 95% of 
this issue is the user-visible HTTP API, and that looks fine... a great 
integration between JSON and the existing qparsers.  This implementation also 
shouldn't hamper us much in the future if we wanted to change the underlying 
mechanisms, or just extend it to add more type info.

+1 to commit.


> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, SOLR-11244.patch, SOLR-11244.patch, 
> Solr Query DSL - examples.html
>
>
> It would be great if Solr had a powerful query DSL. This ticket is an
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : {
> "lucene" : {
> "df" : "content",
> "query" : "solr lucene"
> }
> }
> }
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
> "query" : "content:(solr lucene)"
> }
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> { 
> "query" : {
> "boost" : {
> "query" : {
> "lucene" : {
> "q.op" : "AND",
> "df" : "cat_s",
> "query" : "A"
> }
> },
> "b" : "log(popularity)"
> }
> }
> }
> {code}
> I call it a JSON Query Object (JQO), and it is defined as:
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example: "{!dismax 
> qf=myfield}solr rocks"
> - It can be a JSON object with a structure like this:
> {code}
> {
>   "query-parser-name" : {
>  "param1" : "value1",
>  "param2" : "value2",
>  "query" : <Json Query Object>,
>  "another-param" : <Json Query Object>
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as (note that the query 
> argument of the local parameters form is put as the value of the {{query}} field):
> {code}
> {
>   "dismax" : {
>  "qf" : "myfield",
>  "query" : "solr rocks"
>   }
> }
> {code}
> I will attach an HTML file containing more examples of the Query DSL.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 142 - Failure!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/142/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:44793/xul/d/collMinRf_1x3 due to: 
Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:44793/xul/d/collMinRf_1x3 due to: Path not found: /id; 
rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([76BB88580D00C1E3:FEEFB782A3FCAC1B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Commented] (SOLR-10628) Less verbose output from bin/solr commands

2017-08-28 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144680#comment-16144680
 ] 

Jason Gerlowski commented on SOLR-10628:


Ah, figured it out: the {{getLogLevelString}} method in the patch had a bug in it:

{code}
+  /**
+   * Return a string representing the current static ROOT logging level
+   * @return a string TRACE, DEBUG, WARN, ERROR or INFO representing current 
log level. Default is INFO
+   */
+  public static String getLogLevelString() {
+final Logger rootLogger = LoggerFactory.getLogger(Logger.ROOT_LOGGER_NAME);
+if (rootLogger.isTraceEnabled()) return "TRACE";
+else if (rootLogger.isDebugEnabled()) return "DEBUG";
+else if (rootLogger.isWarnEnabled()) return "WARN";
+else if (rootLogger.isErrorEnabled()) return "ERROR";
+else if (rootLogger.isInfoEnabled()) return "INFO";
+else return "INFO";
+  }
{code}

The above logic will return "WARN" when the log level is set to INFO (this is 
because {{isXEnabled}} returns true whenever the configured level is at or below 
X's severity). To correct this, the severity checks need to be ordered from 
least to most severe (the {{isInfoEnabled}} line needs to come above the 
{{isWarnEnabled}} line).
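
To illustrate why the ordering matters, here is a hedged, self-contained sketch 
(names like {{LogLevelDemo}} are illustrative, not the actual patch; it models 
slf4j's {{is*Enabled}} semantics with a plain enum rather than a real logger):

```java
// Minimal model of the bug: slf4j's isXEnabled() returns true whenever the
// configured root level is at or below X's severity, so the level checks
// must run from least to most severe to recover the configured level.
public class LogLevelDemo {

    // Ordered from least to most severe, mirroring slf4j levels.
    public enum Level { TRACE, DEBUG, INFO, WARN, ERROR }

    // Models Logger.isTraceEnabled()/isDebugEnabled()/... for a given
    // configured root level.
    static boolean isEnabled(Level configured, Level asked) {
        return asked.ordinal() >= configured.ordinal();
    }

    // The patch's ordering: WARN is tested before INFO, so a configured
    // level of INFO is misreported as "WARN".
    public static String buggyLevelString(Level configured) {
        if (isEnabled(configured, Level.TRACE)) return "TRACE";
        else if (isEnabled(configured, Level.DEBUG)) return "DEBUG";
        else if (isEnabled(configured, Level.WARN)) return "WARN";
        else if (isEnabled(configured, Level.ERROR)) return "ERROR";
        else return "INFO";
    }

    // Corrected ordering: least severe first, so the first level that is
    // enabled is exactly the configured one.
    public static String fixedLevelString(Level configured) {
        if (isEnabled(configured, Level.TRACE)) return "TRACE";
        else if (isEnabled(configured, Level.DEBUG)) return "DEBUG";
        else if (isEnabled(configured, Level.INFO)) return "INFO";
        else if (isEnabled(configured, Level.WARN)) return "WARN";
        else return "ERROR";
    }

    public static void main(String[] args) {
        System.out.println("buggy(INFO)  = " + buggyLevelString(Level.INFO));  // WARN
        System.out.println("fixed(INFO)  = " + fixedLevelString(Level.INFO));  // INFO
        System.out.println("fixed(ERROR) = " + fixedLevelString(Level.ERROR)); // ERROR
    }
}
```

With the checks in ascending severity, the first enabled level is the configured 
one, which is what the caching/reset code needs.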

I've tested this on single test runs and verified that the correct log-level is 
getting cached/reset.  Running longer test runs now before uploading a modified 
patch.

> Less verbose output from bin/solr commands
> --
>
> Key: SOLR-10628
> URL: https://issues.apache.org/jira/browse/SOLR-10628
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-10628-loglevel-fix_jan.patch, 
> SOLR-10628-loglevel-fix.patch, SOLR-10628.patch, SOLR-10628.patch, 
> SOLR-10628.patch, SOLR-10628.patch, SOLR-10628.patch, 
> solr_script_outputs.txt, updated_command_output.txt
>
>
> Creating a collection with {{bin/solr create}} today is too verbose:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2017-05-08 09:06:54.409; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Uploading 
> /Users/janhoy/git/lucene-solr/solr/server/solr/configsets/data_driven_schema_configs/conf
>  for config foo to ZooKeeper at localhost:9983
> Creating new collection 'foo' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE=foo=1=1=1=foo
> {
>   "responseHeader":{
> "status":0,
> "QTime":4178},
>   "success":{"192.168.127.248:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2959},
>   "core":"foo_shard1_replica1"}}}
> {noformat}
> A normal user doesn't need all this info. I propose to move all the details to 
> verbose mode ({{-V}}) and let the default be the following instead:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> Created collection 'foo' with 1 shard(s), 1 replica(s) using config-set 
> 'data_driven_schema_configs'
> {noformat}
> Error messages must of course still be verbose.






[jira] [Commented] (SOLR-10628) Less verbose output from bin/solr commands

2017-08-28 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144675#comment-16144675
 ] 

Jason Gerlowski commented on SOLR-10628:


I like the approach in the new patch (it's embarrassingly simpler than my 
approach), but when I apply it on top of {{master}} it doesn't seem to be 
resetting the log-level as expected.  Putting a print-statement at the very end 
of the SolrTestCaseJ4 AfterClass shows that the log-level is still WARN after 
BasicAuthIntegrationTest, or any of the others run.

Haven't wrapped my head around "why" yet...







[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+181) - Build # 20387 - Still Failing!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20387/
Java: 64bit/jdk-9-ea+181 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter

Error Message:
Test requires that log-level is at-least INFO, but INFO is disabled

Stack Trace:
java.lang.AssertionError: Test requires that log-level is at-least INFO, but 
INFO is disabled
at 
__randomizedtesting.SeedInfo.seed([A08EF24BBF3EE7A1:FF6ADF7CD43274E4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.ensureLoggingConfiguredAppropriately(SolrSlf4jReporterTest.java:99)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:49)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1521 lines...]
   [junit4] JVM J2: stderr was not empty, see: 

[jira] [Resolved] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-11209.

   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

Thanks Hrishikesh!

> Upgrade HttpClient to 4.5.3
> ---
>
> Key: SOLR-11209
> URL: https://issues.apache.org/jira/browse/SOLR-11209
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master (8.0), 7.1
>
>
> We have not upgraded the HttpClient version for a long time (since SOLR-6865 
> was committed). It may be a good idea to upgrade to the latest stable version 
> (which is 4.5.3).






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4153 - Failure!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4153/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([375EC011B19C0B54:BF0AFFCB1F6066AC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:908)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144626#comment-16144626
 ] 

ASF subversion and git services commented on SOLR-11209:


Commit 472e18342e238063dd5d76f0b3160103abb789b0 in lucene-solr's branch 
refs/heads/branch_7x from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=472e183 ]

SOLR-11209: Upgrade HttpClient to 4.5.3.








[jira] [Commented] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144621#comment-16144621
 ] 

ASF subversion and git services commented on SOLR-11209:


Commit db87e55750fb4f18407bab0463a5b262130ace3e in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db87e55 ]

SOLR-11209: Upgrade HttpClient to 4.5.3.








[JENKINS-EA] Lucene-Solr-7.x-Linux (32bit/jdk-9-ea+181) - Build # 330 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/330/
Java: 32bit/jdk-9-ea+181 -client -XX:+UseSerialGC --illegal-access=deny

3 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Doc with id=1 not found in http://127.0.0.1:44457/forceleader_test_collection 
due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:44457/forceleader_test_collection due to: Path not found: /id; 
rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([97CF256CEC84C487:715811ACD5063DE6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 142 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/142/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter

Error Message:
Collection not found: routeFieldColl

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: routeFieldColl
at 
__randomizedtesting.SeedInfo.seed([2AB5ECCDBBCAECEA:8283721024AB07B0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1139)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:822)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForHashRouter(CustomCollectionTest.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11276) Refactoring SolrZkClient + ConnectionManager + ConnectionStrategy

2017-08-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144505#comment-16144505
 ] 

Mark Miller commented on SOLR-11276:


I have not had time to closely review the patch, but I'm all for any 
simplification. For something like this though, it would be nice to add some 
additional testing to make sure we can be confident these changes don't break 
any behavior we rely on that may not easily show up in current tests.

> Refactoring SolrZkClient  + ConnectionManager + ConnectionStrategy
> --
>
> Key: SOLR-11276
> URL: https://issues.apache.org/jira/browse/SOLR-11276
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-11276.patch
>
>
> I found the OnReconnect mechanism of the current SolrZkClient very hard to follow.
> I think we should do some refactoring to make it cleaner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Windows (64bit/jdk1.8.0_144) - Build # 41 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/41/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([785D2111F3547409:AC186A481402C7F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12618 

[jira] [Updated] (LUCENE-7944) GPG Password checker throws error only when process has not been terminated

2017-08-28 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated LUCENE-7944:
-
Component/s: general/build

> GPG Password checker throws error only when process has not been terminated
> ---
>
> Key: LUCENE-7944
> URL: https://issues.apache.org/jira/browse/LUCENE-7944
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Anshum Gupta
>
> GPG Password checker throws error only when process has not been terminated. 
> This is just a tracker JIRA for the commits.
> Here are the commits:
> master: 47e7fbc4dcf73486a58138b110ffa3b5191d651a
> branch_7x: d2216a66cfa0a962fb716a272959bef4894d121a
> branch_7_0: 5f7c00ed09fede00379a15eab3588e2134778994
> branch_6_6: 6dd958ae833f335859b1f973e85c63ece895e997 
> https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6dd958a
> *NOTE:* This is yet to be committed to 6x
> cc:  [~varunthacker]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7944) GPG Password checker throws error only when process has not been terminated

2017-08-28 Thread Anshum Gupta (JIRA)
Anshum Gupta created LUCENE-7944:


 Summary: GPG Password checker throws error only when process has 
not been terminated
 Key: LUCENE-7944
 URL: https://issues.apache.org/jira/browse/LUCENE-7944
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Anshum Gupta


GPG Password checker throws error only when process has not been terminated. 
This is just a tracker JIRA for the commits.

Here are the commits:
master: 47e7fbc4dcf73486a58138b110ffa3b5191d651a
branch_7x: d2216a66cfa0a962fb716a272959bef4894d121a
branch_7_0: 5f7c00ed09fede00379a15eab3588e2134778994

branch_6_6: 6dd958ae833f335859b1f973e85c63ece895e997 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6dd958a

*NOTE:* This is yet to be committed to 6x

cc:  [~varunthacker]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.6-Linux (64bit/jdk1.8.0_144) - Build # 108 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/108/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=3861, name=jetty-launcher-520-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=3864, name=jetty-launcher-520-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=3861, name=jetty-launcher-520-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

Re: Release a 6.6.1

2017-08-28 Thread Anshum Gupta
I see the same problem with 7.0, so I’ll commit the same change to master, 
branch_7x, and branch_7_0 too. If this doesn’t work, we can revert it from all 
the branches.

-Anshum



> On Aug 25, 2017, at 10:58 PM, Varun Thacker  wrote:
> 
> Thanks Steve! I changed the exit condition and am now trying to build the RC 
> again
> 
> I had to commit the change on branch_6_6 - 
> https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6dd958a 
> .  If it 
> doesn't work I'll revert the change.
> 
> On Thu, Aug 24, 2017 at 4:11 AM, Steve Rowe wrote:
> Hi Varun,
> 
> I’m not sure what’s happening, but I’d guess that “result = p.poll()” in 
> runAndSendGPGPassword() indicates that the process hasn’t completed or has an 
> error condition.  Maybe print out "result"?  The Python docs say that None is 
> returned when the process hasn’t completed.
> 
> If the process hasn’t completed, then it’s just a matter of waiting until it 
> does.  The loop that looks for a password prompt has an exit condition of an 
> empty line returned from p.stdout.readline(), but maybe there should be an 
> additional exit condition: p.poll() returning non-None.
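Steve's suggested fix can be sketched like this (a hypothetical reconstruction; the real `runAndSendGPGPassword` in `dev-tools/scripts/buildAndPushRelease.py` and its exact prompt text may differ):

```python
import subprocess

def run_and_send_gpg_password(cmd, gpg_password):
    # Sketch of the password-prompt loop with the extra exit condition
    # Steve suggests: stop when p.poll() returns non-None (the process
    # has terminated), not only when readline() yields an empty line.
    p = subprocess.Popen(cmd, shell=True,
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
    while True:
        line = p.stdout.readline()
        if line == b'' or p.poll() is not None:
            break  # EOF on stdout, or the process already exited
        if b'password' in line.lower():  # actual prompt text is an assumption
            p.stdin.write((gpg_password + '\n').encode())
            p.stdin.flush()
    if p.wait() != 0:
        raise RuntimeError('FAILED: %s [see log /tmp/release.log]' % cmd)
```

With this shape, a process that exits without ever printing a prompt (the situation Steve describes) ends the loop promptly instead of hanging on `readline()`.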
> 
> --
> Steve
> www.lucidworks.com 
> 
> > On Aug 23, 2017, at 5:51 PM, Varun Thacker wrote:
> >
> > So I tried again and ran into the same issue -
> >
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 114, in prepare
> > runAndSendGPGPassword(cmd, gpgPassword)
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 60, in 
> > runAndSendGPGPassword
> > raise RuntimeError(msg)
> > RuntimeError: FAILED: ant -Dversion=6.6.1 -Dgpg.key=7F3DE8DA 
> > prepare-release [see log /tmp/release.log]
> >
> > At this point I took the runAndSendGPGPassword method and ran it from a 
> > separate Python program, and it passed. So I'm not sure why it fails when used 
> > directly from buildAndPushRelease, especially since the output shows build 
> > successful for the "prepare-release" target and the "sign-artifacts" target 
> > completed as well.
> >
> >
> >
> >
> > On Wed, Aug 23, 2017 at 9:11 PM, Ishan Chattopadhyaya wrote:
> > Cool, all the best! In cases where the release process completed without 
> > further errors (except for the one above), the generated artifacts 
> > eventually failed the smoke test (with a missing-signature-files error). 
> > So I had to redo them, making sure that the sign-artifacts step was not 
> > missed (I remember waiting for that step so I could respond to the password 
> > prompt).
> >
> > On Wed, Aug 23, 2017 at 4:42 PM, Varun Thacker wrote:
> > Hi Ishan,
> >
> > That's useful info!
> >
> > The failure I posted was from my second attempt with "export 
> > GPG_TTY=$(tty)" present. I was prompted for the password.
> >
> > Once I entered the password the "sign-artifacts:" phase looks to have 
> > completed. The attached output doesn't even show a failure.
> >
> > I'm going to give it another try in the meanwhile
> >
> > On Wed, Aug 23, 2017 at 4:20 PM, Ishan Chattopadhyaya wrote:
> > Varun, I had the same issue. Please see my notes in the end of the 
> > ReleaseToDo section.
> >
> > On Wed, Aug 23, 2017 at 1:43 PM, Varun Thacker wrote:
> > An update on the RC build : In the first couple of attempts a Solr test 
> > would fail so the process would get aborted.
> >
> > Then I hit "gpg: signing failed: Inappropriate ioctl for device" in the 
> > "prepare-release" phase. I was able to fix this by installing a couple of 
> > extra packages and following some instructions online.
> >
> > In the last attempt I hit this:
> >
> > Prepare release...
> >   git pull...
> >   git clone is clean
> >   git rev: f4fb90886690c829a062f4243a62825f810ad359
> >   Check DOAP files
> >   ant clean test validate documentation-lint
> >   lucene prepare-release
> > FAILED: ant -Dversion=6.6.1 -Dgpg.key=7F3DE8DA prepare-release [see log 
> > /tmp/release.log]
> > Traceback (most recent call last):
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 313, in 
> > main()
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 294, in main
> > rev = prepare(c.root, c.version, c.key_id, c.key_password)
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 114, in prepare
> > runAndSendGPGPassword(cmd, gpgPassword)
> >   File "dev-tools/scripts/buildAndPushRelease.py", line 60, in 
> > runAndSendGPGPassword
> > raise RuntimeError(msg)
> > RuntimeError: FAILED: ant -Dversion=6.6.1 -Dgpg.key=7F3DE8DA 
> > prepare-release [see log /tmp/release.log]
> >
> > Here's the release.log output : 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 145 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/145/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter

Error Message:
No 'solr.node' logs in: {numFound=0,start=0,docs=[]}

Stack Trace:
java.lang.AssertionError: No 'solr.node' logs in: {numFound=0,start=0,docs=[]}
at 
__randomizedtesting.SeedInfo.seed([B0CF161DC5A2333D:EF2B3B2AAEAEA078]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

Error Message:
Tests must be run with INFO level logging otherwise LogUpdateProcessor isn't 
used and can't be tested.

Stack Trace:
java.lang.AssertionError: Tests must be run 

Re: Release 7.0 process starts

2017-08-28 Thread Ishan Chattopadhyaya
> Those flaky Solr tests are annoying since people will also run into
> failures when checking the RC? Should we disable these tests on the 7.0
> branch so that building and verifying this RC isn't annoying to everybody
> working on this release?

+1. If it is hampering the release process, I think we should either not
release without fixing them, or disable them for release (building,
verifying).

On Mon, Aug 28, 2017 at 11:47 PM, Anshum Gupta  wrote:

> Though those failing tests are annoying, I would not recommend ignoring
> those tests. We can manually ignore those test failures when we are testing
> stuff out though.
>
> -Anshum
>
>
>
> On Aug 28, 2017, at 11:10 AM, Adrien Grand  wrote:
>
> Those flaky Solr tests are annoying since people will also run into
> failures when checking the RC? Should we disable these tests on the 7.0
> branch so that building and verifying this RC isn't annoying to everybody
> working on this release?
>
> On Mon, Aug 28, 2017 at 19:23, Anshum Gupta wrote:
>
>> Thanks Adrien! It worked with a fresh clone, at least ant check-licenses
>> worked, so I’m assuming the RC creation would work too.
>> I’m running that, and it might take a couple of hours for me to create
>> one, as a few SolrCloud tests are still a little flakey and they fail
>> occasionally.
>>
>> -Anshum
>>
>>
>>
>> On Aug 28, 2017, at 10:13 AM, Anshum Gupta  wrote:
>>
>> Adrien,
>>
>> Yes, ant check-licenses fails with the same error, and so does ant
>> validate (from the root dir). This is after running ant clean -f.
>>
>> BUILD FAILED
>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
>> error occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62:
>> JAR resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>>
>> I didn’t realize that the dependency was upgraded, and what confuses me
>> is that the file actually exists.
>>
>> anshum$ ls analysis/icu/lib/icu4j-5
>> icu4j-56.1.jar  icu4j-59.1.jar
>>
>> It seems like it’s something that git clean, ant clean clean-jars etc.
>> didn't fix. This is really surprising, but I'll try checking out again
>> and creating an RC (after checking for the dependencies).
>> I think ant should be responsible for cleaning this up, and not git so
>> there’s something off there.
>>
>> -Anshum
>>
>>
>>
>> On Aug 28, 2017, at 8:51 AM, Adrien Grand  wrote:
>>
>> You mentioned you tried to run the script multiple times. Have you run
>> git clean at some point? Maybe this is due to a stale working copy?
>>
>> On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
>>
>>> Hi Anshum,
>>>
>>> Does running ant check-licenses from the Lucene directory fail as well?
>>> The error message that you are getting looks weird to me since Lucene 7.0
>>> depends on ICU 59.1, not 56.1 since https://issues.apache.org/
>>> jira/browse/LUCENE-7540.
>>>
>>> On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:
>>>
 A quick question, in case someone has an idea around what’s going on.
 When I run the following command:

 python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local
 /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 

 I end up with the following error:

 BUILD FAILED
 /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
 occurred while executing this line:
 /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
 error occurred while executing this line:
 /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62:
 JAR resource does not exist: analysis/icu/lib/icu4j-56.1.jar

 Any idea as to what's going on? This generally fails after the tests
 have run and the script has been processing for about 45 minutes, and it's
 consistent, i.e. every time the tests pass, the process fails with
 this error.

 I can also confirm that this file exists at
 lucene/analysis/icu/lib/icu4j-56.1.jar.

 Has anyone else seen this when working on the release?

 -Anshum



 On Aug 23, 2017, at 4:21 AM, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:


 On 23 Aug 2017, at 13:06, Uwe Schindler  wrote:

 Keep in mind that there is also branch_7_0.


 Right, but the changes related to these issues were committed to master
 before branch_7_0 was created, and these specific issues are only about
 back-porting to 6x.


 Uwe

 On 23 August 2017 12:26:42 CEST, "Andrzej Białecki" <a...@getopt.org> wrote:
>
>
> On 23 Aug 2017, at 08:15, Anshum Gupta  wrote:
>
> I also found more 

[jira] [Updated] (SOLR-10628) Less verbose output from bin/solr commands

2017-08-28 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-10628:
---
Attachment: SOLR-10628-loglevel-fix_jan.patch

Have a look at {{SOLR-10628-loglevel-fix_jan.patch}}. It's a minimal solution that 
takes Jason's remember/restore logic and puts it into two lines in 
SolrTestCaseJ4.

As I read the comments here this should solve the test failures, but I have not 
done any beasting to validate. Hoss, what do you think?

> Less verbose output from bin/solr commands
> --
>
> Key: SOLR-10628
> URL: https://issues.apache.org/jira/browse/SOLR-10628
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-10628-loglevel-fix_jan.patch, 
> SOLR-10628-loglevel-fix.patch, SOLR-10628.patch, SOLR-10628.patch, 
> SOLR-10628.patch, SOLR-10628.patch, SOLR-10628.patch, 
> solr_script_outputs.txt, updated_command_output.txt
>
>
> Creating a collection with {{bin/solr create}} today is too verbose:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2017-05-08 09:06:54.409; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Uploading 
> /Users/janhoy/git/lucene-solr/solr/server/solr/configsets/data_driven_schema_configs/conf
>  for config foo to ZooKeeper at localhost:9983
> Creating new collection 'foo' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=foo&numShards=1&replicationFactor=1&maxShardsPerNode=1&collection.configName=foo
> {
>   "responseHeader":{
> "status":0,
> "QTime":4178},
>   "success":{"192.168.127.248:8983_solr":{
>   "responseHeader":{
> "status":0,
> "QTime":2959},
>   "core":"foo_shard1_replica1"}}}
> {noformat}
> A normal user doesn't need all this info. I propose to move all the details to 
> verbose mode ({{-V}}) and let the default be the following instead:
> {noformat}
> $ bin/solr create -c foo
> Connecting to ZooKeeper at localhost:9983 ...
> Created collection 'foo' with 1 shard(s), 1 replica(s) using config-set 
> 'data_driven_schema_configs'
> {noformat}
> Error messages must of course still be verbose.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Pathological index condition

2017-08-28 Thread Erick Erickson
bq: I guess the alternative would be to occasionally roll the dice and
decide to merge that kind of segment.

That's what I was getting at with the "autoCompact" idea in a more
deterministic manner.
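The 50% rule Erick and Walter are discussing can be sketched numerically. This is a toy model of TieredMergePolicy's selection rule as described in this thread, not Lucene's actual code:

```python
# Toy model of the TieredMergePolicy (TMP) rule under discussion:
# a segment whose live (non-deleted) bytes exceed half of the max
# merged segment size is never picked for a natural merge.

MAX_SEGMENT_GB = 5.0  # TMP's default max merged segment size, in GB

def merge_eligible(live_gb: float) -> bool:
    """Eligible for natural merging only while the live data fits
    within half of the max merged segment size."""
    return live_gb <= MAX_SEGMENT_GB / 2

# Jittering the threshold can't help here: any segment with more than
# 2.5G of live docs is deterministically stuck, however the 50% line
# is perturbed, because its live size never shrinks below it.
print(merge_eligible(4.75))  # False -- 4.75G live: ineligible forever
print(merge_eligible(0.25))  # True  -- small segment: still mergeable
```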



On Mon, Aug 28, 2017 at 1:32 PM, Walter Underwood  wrote:
> That makes sense.
>
> I guess the alternative would be to occasionally roll the dice and decide to
> merge that kind of segment.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> On Aug 28, 2017, at 1:28 PM, Erick Erickson  wrote:
>
> I don't think jitter would help. As long as a segment has > 50% max
> segment size "live" docs, it's forever ineligible for merging (outside
> optimize or expungeDeletes commands). So the "zone" is anything over
> 50%.
>
> Or I missed your point.
>
> Erick
>
> On Mon, Aug 28, 2017 at 12:50 PM, Walter Underwood
>  wrote:
>
> If this happens in a precise zone, how about adding some random jitter to
> the threshold? That tends to get this kind of lock-up unstuck.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> On Aug 28, 2017, at 12:44 PM, Erick Erickson 
> wrote:
>
> And one more thought (not very well thought out).
>
> A parameter on TMP (or whatever) that did <3> something like:
>
> a parameter 
> a parameter 
> On startup TMP takes the current timestamp
>
> *> Every minute (or whatever) it checks the current timestamp and if
>  is in between the last check time and now, do <2>.
>
> set the last checked time to the value from * above.
>
>
> Taking the current timestamp would keep from kicking off the compaction
> on startup, so we wouldn't need to keep some stateful information
> across restarts and wouldn't go into a compact cycle on startup.
>
> Erick
>
> On Sun, Aug 27, 2017 at 11:31 AM, Erick Erickson
>  wrote:
>
> I've been thinking about this a little more. Since this is an outlier,
> I'm loath to change the core TMP merge selection process. Say the max
> segment size is 5G. You'd be doing an awful lot of I/O to merge a
> segment with 4.75G "live" docs with one with 0.25G. Plus that doesn't
> really allow users who issue the tempting "optimize" command to
> recover; that one huge segment can hang around for a _very_ long time,
> accumulating lots of deleted docs. Same with expungeDeletes.
>
> I can think of several approaches:
>
> 1> despite my comment above, a flag that says something like "if a
> segment has > X% deleted docs, merge it with a smaller segment anyway
> respecting the max segment size. I know, I know this will affect
> indexing throughput, do it anyway".
>
> 2> A special op (or perhaps a flag on expungeDeletes) that would
> behave like <1> but on-demand rather than part of standard merging.
>
> In both of these cases, if a segment had > X% deleted docs but the
> live doc size for that segment was > the max seg size, rewrite it into
> a single new segment removing deleted docs.
>
> 3> some way to do the above on a schedule. My notion is something like
> a maintenance window at 1:00 AM. You'd still have a live collection,
> but (presumably) a way to purge the day's accumulation of deleted
> documents during off hours.
>
> 4> ???
>
> I probably like <2> best so far; I don't see this condition in the
> wild very often. It usually occurs during heavy re-indexing operations
> and often after an optimize or expungeDeletes has happened. <1> could
> get horribly pathological if the threshold was 1% or something.
>
> WDYT?
>
>
> On Wed, Aug 9, 2017 at 2:40 PM, Erick Erickson 
> wrote:
>
> Thanks Mike:
>
> bq: Or are you saying that each segment's 20% of not-deleted docs is
> still greater than 1/2 of the max segment size, and so TMP considers
> them ineligible?
>
> Exactly.
>
> Hadn't seen the blog, thanks for that. Added to my list of things to refer
> to.
>
> The problem we're seeing is that "in the wild" there are cases where
> people can now get satisfactory performance from huge numbers of
> documents, as in close to 2B (there was a question on the user's list
> about that recently). So allowing up to 60% deleted documents is
> dangerous in that situation.
>
> And the situation is exacerbated by optimizing (I know, "don't do that").
>
> Ah, well, the joys of people using this open source thing and pushing
> its limits.
>
> Thanks again,
> Erick
>
> On Tue, Aug 8, 2017 at 3:49 PM, Michael McCandless
>  wrote:
>
> Hi Erick,
>
> Some questions/answers below:
>
> On Sun, Aug 6, 2017 at 8:22 PM, Erick Erickson 
> wrote:
>
>
> Particularly interested if Mr. McCandless has any opinions here.
>
> I admit it took some work, but I can create an index that never merges
> and is 80% deleted documents using TieredMergePolicy.
>
> I'm trying to understand how indexes "in the wild" can have > 30%
> deleted 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 20386 - Failure!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20386/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 54158 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj1413748498
 [ecj-lint] Compiling 1124 source files to /tmp/ecj1413748498
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/CoreContainer.java
 (at line 1002)
 [ecj-lint] core = new SolrCore(this, dcore, coreConfig);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'core' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java
 (at line 234)
 [ecj-lint] dir = new BlockDirectory(path, hdfsDir, cache, null, 
blockCacheReadEnabled, false, cacheMerges, cacheReadOnce);
 [ecj-lint] 
^^
 [ecj-lint] Resource leak: 'dir' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 121)
 [ecj-lint] reader = cfiltfac.create(reader);
 [ecj-lint] 
 [ecj-lint] Resource leak: 'reader' is not closed at this location
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/AnalysisRequestHandlerBase.java
 (at line 145)
 [ecj-lint] return namedList;
 [ecj-lint] ^
 [ecj-lint] Resource leak: 'listBasedTokenStream' is not closed at this location
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java
 (at line 1282)
 [ecj-lint] DirectoryReader reader = s==null ? null : 
s.get().getIndexReader();
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'reader' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/sql/SolrTable.java
 (at line 517)
 [ecj-lint] ParallelStream parallelStream = new ParallelStream(zk, 
collection, tupleStream, numWorkers, comp);
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'parallelStream' is never closed
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/handler/sql/SolrTable.java
 (at line 743)
 [ecj-lint] ParallelStream parallelStream = new ParallelStream(zkHost, 
collection, tupleStream, numWorkers, comp);
 [ecj-lint]^^
 [ecj-lint] Resource leak: 'parallelStream' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 8. WARNING in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/highlight/DefaultSolrHighlighter.java
 (at line 578)
 [ecj-lint] tvWindowStream = new OffsetWindowTokenFilter(tvStream);
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'tvWindowStream' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 9. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/metrics/reporters/SolrSlf4jReporter.java
 (at line 21)
 [ecj-lint] import java.util.SortedMap;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.SortedMap is never used
 [ecj-lint] --
 [ecj-lint] 10. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/metrics/reporters/SolrSlf4jReporter.java
 (at line 24)
 [ecj-lint] import com.codahale.metrics.Counter;
 [ecj-lint]
 [ecj-lint] The import com.codahale.metrics.Counter is never used
 [ecj-lint] --
 [ecj-lint] 11. ERROR in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/core/src/java/org/apache/solr/metrics/reporters/SolrSlf4jReporter.java
 (at line 25)
 [ecj-lint] import com.codahale.metrics.Gauge;
 [ecj-lint]^^
 [ecj-lint] The import com.codahale.metrics.Gauge is never used
 [ecj-lint] --
 [ecj-lint] 12. ERROR in 

Re: Is searching on docValues=true indexed=false fields trappy

2017-08-28 Thread Erick Erickson
bq: What do you mean by 'be in the JVM'?

I wasn't sure if a more efficient searching structure would be built
in the JVM or not, building an inverted structure out of docValues
there. But you're saying not, it's a linear scan of the uninverted
structure out in the OS's memory.

It would have been quite ironic if we started seeing a message like
"inverting docValues field for searching" in the logs. Symmetrical to
the background for docValues I'll admit... ;)

Thanks for the confirmation.

That leaves whether this is reasonable behavior or not. It feels like
a documentation issue, something like

'While searching on fields having docValues="true", indexed="false" is
possible, it is orders of magnitude slower than searching on fields
with indexed="true". We _strongly_ recommend that any field that is
used for searching be configured with indexed="true" '

That's assuming that just disallowing searching on dv=true,
indexed=false fields is not an option.

WDYT?

Erick
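A toy contrast between the two access paths, for intuition. This is illustrative Python only, not Lucene's actual postings or docValues structures:

```python
# Toy contrast: inverted-index lookup vs. docValues column scan.
docs = {1: "red", 2: "blue", 3: "red", 4: "green"}  # docID -> value

# indexed=true: a term -> postings map, built once; lookup is cheap.
inverted = {}
for doc_id, term in docs.items():
    inverted.setdefault(term, []).append(doc_id)

def search_indexed(term):
    return inverted.get(term, [])

# indexed=false, docValues=true: no postings, so matching a term means
# scanning the per-document column -- the "table scan" above.
def search_docvalues(term):
    return [doc_id for doc_id, value in docs.items() if value == term]

print(search_indexed("red"))    # [1, 3] -- one hash lookup
print(search_docvalues("red"))  # [1, 3] -- same answer, O(N) work
```

Same results either way, which is exactly why the slow path is trappy: nothing in the response hints at the cost difference.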

On Mon, Aug 28, 2017 at 12:55 PM, Adrien Grand  wrote:
> Indeed this will be a linear scan if it is not intersected with a selective
> query, which is quite trappy.
>
> What do you mean by 'be in the JVM'?
>
> Le lun. 28 août 2017 à 21:49, Erick Erickson  a
> écrit :
>>
>> You can search on fields with DV=true and indexed=false. But IIUC this
>> is a "table scan". Does it really make sense to support this?
>>
>> NOTE: Haven't checked the code, but even if we build an efficient
>> structure, it would be in the JVM, correct?
>>
>




Re: Pathological index condition

2017-08-28 Thread Walter Underwood
That makes sense.

I guess the alternative would be to occasionally roll the dice and decide to 
merge that kind of segment.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Aug 28, 2017, at 1:28 PM, Erick Erickson  wrote:
> 
> I don't think jitter would help. As long as a segment has > 50% max
> segment size "live" docs, it's forever ineligible for merging (outside
> optimize or expungeDeletes commands). So the "zone" is anything over
> 50%.
> 
> Or I missed your point.
> 
> Erick
> 
> On Mon, Aug 28, 2017 at 12:50 PM, Walter Underwood
>  wrote:
>> If this happens in a precise zone, how about adding some random jitter to
>> the threshold? That tends to get this kind of lock-up unstuck.
>> 
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>> 
>> 
>> On Aug 28, 2017, at 12:44 PM, Erick Erickson 
>> wrote:
>> 
>> And one more thought (not very well thought out).
>> 
>> A parameter on TMP (or whatever) that did <3> something like:
>> 
>> a parameter 
>> a parameter 
>> On startup TMP takes the current timestamp
>> 
>> *> Every minute (or whatever) it checks the current timestamp and if
>>  is in between the last check time and now, do <2>.
>> 
>> set the last checked time to the value from * above.
>> 
>> 
>> Taking the current timestamp would keep from kicking off the compaction
>> on startup, so we wouldn't need to keep some stateful information
>> across restarts and wouldn't go into a compact cycle on startup.
>> 
>> Erick
>> 
>> On Sun, Aug 27, 2017 at 11:31 AM, Erick Erickson
>>  wrote:
>> 
>> I've been thinking about this a little more. Since this is an outlier,
>> I'm loathe to change the core TMP merge selection process. Say the max
>> segment size if 5G. You'd be doing an awful lot of I/O to merge a
>> segment with 4.75G "live" docs with one with 0.25G. Plus that doesn't
>> really allow users who issue the tempting "optimize" command to
>> recover; that one huge segment can hang around for a _very_ long time,
>> accumulating lots of deleted docs. Same with expungeDeletes.
>> 
>> I can think of several approaches:
>> 
>> 1> despite my comment above, a flag that says something like "if a
>> segment has > X% deleted docs, merge it with a smaller segment anyway
>> respecting the max segment size. I know, I know this will affect
>> indexing throughput, do it anyway".
>> 
>> 2> A special op (or perhaps a flag on expungeDeletes) that would
>> behave like <1> but on-demand rather than part of standard merging.
>> 
>> In both of these cases, if a segment had > X% deleted docs but the
>> live doc size for that segment was > the max seg size, rewrite it into
>> a single new segment removing deleted docs.
>> 
>> 3> some way to do the above on a schedule. My notion is something like
>> a maintenance window at 1:00 AM. You'd still have a live collection,
>> but (presumably) a way to purge the day's accumulation of deleted
>> documents during off hours.
>> 
>> 4> ???
>> 
>> I probably like <2> best so far; I don't see this condition in the
>> wild very often. It usually occurs during heavy re-indexing operations
>> and often after an optimize or expungeDeletes has happened. <1> could
>> get horribly pathological if the threshold was 1% or something.
>> 
>> WDYT?
>> 
>> 
>> On Wed, Aug 9, 2017 at 2:40 PM, Erick Erickson 
>> wrote:
>> 
>> Thanks Mike:
>> 
>> bq: Or are you saying that each segment's 20% of not-deleted docs is
>> still greater than 1/2 of the max segment size, and so TMP considers
>> them ineligible?
>> 
>> Exactly.
>> 
>> Hadn't seen the blog, thanks for that. Added to my list of things to refer
>> to.
>> 
>> The problem we're seeing is that "in the wild" there are cases where
>> people can now get satisfactory performance from huge numbers of
>> documents, as in close to 2B (there was a question on the user's list
>> about that recently). So allowing up to 60% deleted documents is
>> dangerous in that situation.
>> 
>> And the situation is exacerbated by optimizing (I know, "don't do that").
>> 
>> Ah, well, the joys of people using this open source thing and pushing
>> its limits.
>> 
>> Thanks again,
>> Erick
>> 
>> On Tue, Aug 8, 2017 at 3:49 PM, Michael McCandless
>>  wrote:
>> 
>> Hi Erick,
>> 
>> Some questions/answers below:
>> 
>> On Sun, Aug 6, 2017 at 8:22 PM, Erick Erickson 
>> wrote:
>> 
>> 
>> Particularly interested if Mr. McCandless has any opinions here.
>> 
>> I admit it took some work, but I can create an index that never merges
>> and is 80% deleted documents using TieredMergePolicy.
>> 
>> I'm trying to understand how indexes "in the wild" can have > 30%
>> deleted documents. I think the root issue here is that
>> TieredMergePolicy doesn't consider for merging any segments > 50% of
>> maxMergedSegmentMB of 

Re: Pathological index condition

2017-08-28 Thread Erick Erickson
I don't think jitter would help. As long as a segment has > 50% max
segment size "live" docs, it's forever ineligible for merging (outside
optimize or expungeDeletes commands). So the "zone" is anything over
50%.

Or I missed your point.

Erick

On Mon, Aug 28, 2017 at 12:50 PM, Walter Underwood
 wrote:
> If this happens in a precise zone, how about adding some random jitter to
> the threshold? That tends to get this kind of lock-up unstuck.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> On Aug 28, 2017, at 12:44 PM, Erick Erickson 
> wrote:
>
> And one more thought (not very well thought out).
>
> A parameter on TMP (or whatever) that did <3> something like:
>
> a parameter 
> a parameter 
> On startup TMP takes the current timestamp
>
> *> Every minute (or whatever) it checks the current timestamp and if
>  is in between the last check time and now, do <2>.
>
> set the last checked time to the value from * above.
>
>
> Taking the current timestamp would keep from kicking off the compaction
> on startup, so we wouldn't need to keep some stateful information
> across restarts and wouldn't go into a compact cycle on startup.
>
> Erick
>
> On Sun, Aug 27, 2017 at 11:31 AM, Erick Erickson
>  wrote:
>
> I've been thinking about this a little more. Since this is an outlier,
> I'm loath to change the core TMP merge selection process. Say the max
> segment size is 5G. You'd be doing an awful lot of I/O to merge a
> segment with 4.75G "live" docs with one with 0.25G. Plus that doesn't
> really allow users who issue the tempting "optimize" command to
> recover; that one huge segment can hang around for a _very_ long time,
> accumulating lots of deleted docs. Same with expungeDeletes.
>
> I can think of several approaches:
>
> 1> despite my comment above, a flag that says something like "if a
> segment has > X% deleted docs, merge it with a smaller segment anyway
> respecting the max segment size. I know, I know this will affect
> indexing throughput, do it anyway".
>
> 2> A special op (or perhaps a flag on expungeDeletes) that would
> behave like <1> but on-demand rather than part of standard merging.
>
> In both of these cases, if a segment had > X% deleted docs but the
> live doc size for that segment was > the max seg size, rewrite it into
> a single new segment removing deleted docs.
>
> 3> some way to do the above on a schedule. My notion is something like
> a maintenance window at 1:00 AM. You'd still have a live collection,
> but (presumably) a way to purge the day's accumulation of deleted
> documents during off hours.
>
> 4> ???
>
> I probably like <2> best so far; I don't see this condition in the
> wild very often. It usually occurs during heavy re-indexing operations
> and often after an optimize or expungeDeletes has happened. <1> could
> get horribly pathological if the threshold was 1% or something.
>
> WDYT?
>
>
> On Wed, Aug 9, 2017 at 2:40 PM, Erick Erickson 
> wrote:
>
> Thanks Mike:
>
> bq: Or are you saying that each segment's 20% of not-deleted docs is
> still greater than 1/2 of the max segment size, and so TMP considers
> them ineligible?
>
> Exactly.
>
> Hadn't seen the blog, thanks for that. Added to my list of things to refer
> to.
>
> The problem we're seeing is that "in the wild" there are cases where
> people can now get satisfactory performance from huge numbers of
> documents, as in close to 2B (there was a question on the user's list
> about that recently). So allowing up to 60% deleted documents is
> dangerous in that situation.
>
> And the situation is exacerbated by optimizing (I know, "don't do that").
>
> Ah, well, the joys of people using this open source thing and pushing
> its limits.
>
> Thanks again,
> Erick
>
> On Tue, Aug 8, 2017 at 3:49 PM, Michael McCandless
>  wrote:
>
> Hi Erick,
>
> Some questions/answers below:
>
> On Sun, Aug 6, 2017 at 8:22 PM, Erick Erickson 
> wrote:
>
>
> Particularly interested if Mr. McCandless has any opinions here.
>
> I admit it took some work, but I can create an index that never merges
> and is 80% deleted documents using TieredMergePolicy.
>
> I'm trying to understand how indexes "in the wild" can have > 30%
> deleted documents. I think the root issue here is that
> TieredMergePolicy doesn't consider for merging any segments > 50% of
> maxMergedSegmentMB of non-deleted documents.
>
> Let's say I have segments at the default 5G max. For the sake of
> argument, it takes exactly 5,000,000 identically-sized documents to
> fill the segment to exactly 5G.
>
> IIUC, as long as the segment has more than 2,500,000 documents in it
> it'll never be eligible for merging.
>
>
>
> That's right.
>
>
> The only way to force deleted
> docs to be purged is to expungeDeletes or optimize, neither of which
> is recommended.
>
>
>
> +1
>
> The condition 

Re: Is searching on docValues=true indexed=false fields trappy

2017-08-28 Thread Adrien Grand
Indeed this will be a linear scan if it is not intersected with a selective
query, which is quite trappy.

What do you mean by 'be in the JVM'?

Le lun. 28 août 2017 à 21:49, Erick Erickson  a
écrit :

> You can search on fields with DV=true and indexed=false. But IIUC this
> is a "table scan". Does it really make sense to support this?
>
> NOTE: Haven't checked the code, but even if we build an efficient
> structure, it would be in the JVM, correct?
>
>
>


Re: Pathological index condition

2017-08-28 Thread Walter Underwood
If this happens in a precise zone, how about adding some random jitter to the 
threshold? That tends to get this kind of lock-up unstuck.

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)


> On Aug 28, 2017, at 12:44 PM, Erick Erickson  wrote:
> 
> And one more thought (not very well thought out).
> 
> A parameter on TMP (or whatever) that did <3> something like:
>> a parameter 
>> a parameter 
>> On startup TMP takes the current timestamp
> *> Every minute (or whatever) it checks the current timestamp and if
>  is in between the last check time and now, do <2>.
>> set the last checked time to the value from * above.
> 
> Taking the current timestamp would keep from kicking off the compaction
> on startup, so we wouldn't need to keep some stateful information
> across restarts and wouldn't go into a compact cycle on startup.
> 
> Erick
> 
> On Sun, Aug 27, 2017 at 11:31 AM, Erick Erickson
>  wrote:
>> I've been thinking about this a little more. Since this is an outlier,
>> I'm loath to change the core TMP merge selection process. Say the max
>> segment size is 5G. You'd be doing an awful lot of I/O to merge a
>> segment with 4.75G "live" docs with one with 0.25G. Plus that doesn't
>> really allow users who issue the tempting "optimize" command to
>> recover; that one huge segment can hang around for a _very_ long time,
>> accumulating lots of deleted docs. Same with expungeDeletes.
>> 
>> I can think of several approaches:
>> 
>> 1> despite my comment above, a flag that says something like "if a
>> segment has > X% deleted docs, merge it with a smaller segment anyway
>> respecting the max segment size. I know, I know this will affect
>> indexing throughput, do it anyway".
>> 
>> 2> A special op (or perhaps a flag on expungeDeletes) that would
>> behave like <1> but on-demand rather than part of standard merging.
>> 
>> In both of these cases, if a segment had > X% deleted docs but the
>> live doc size for that segment was > the max seg size, rewrite it into
>> a single new segment removing deleted docs.
>> 
>> 3> some way to do the above on a schedule. My notion is something like
>> a maintenance window at 1:00 AM. You'd still have a live collection,
>> but (presumably) a way to purge the day's accumulation of deleted
>> documents during off hours.
>> 
>> 4> ???
>> 
>> I probably like <2> best so far; I don't see this condition in the
>> wild very often. It usually occurs during heavy re-indexing operations
>> and often after an optimize or expungeDeletes has happened. <1> could
>> get horribly pathological if the threshold was 1% or something.
>> 
>> WDYT?
>> 
>> 
>> On Wed, Aug 9, 2017 at 2:40 PM, Erick Erickson  
>> wrote:
>>> Thanks Mike:
>>> 
>>> bq: Or are you saying that each segment's 20% of not-deleted docs is
>>> still greater than 1/2 of the max segment size, and so TMP considers
>>> them ineligible?
>>> 
>>> Exactly.
>>> 
>>> Hadn't seen the blog, thanks for that. Added to my list of things to refer 
>>> to.
>>> 
>>> The problem we're seeing is that "in the wild" there are cases where
>>> people can now get satisfactory performance from huge numbers of
>>> documents, as in close to 2B (there was a question on the user's list
>>> about that recently). So allowing up to 60% deleted documents is
>>> dangerous in that situation.
>>> 
>>> And the situation is exacerbated by optimizing (I know, "don't do that").
>>> 
>>> Ah, well, the joys of people using this open source thing and pushing
>>> its limits.
>>> 
>>> Thanks again,
>>> Erick
>>> 
>>> On Tue, Aug 8, 2017 at 3:49 PM, Michael McCandless
>>>  wrote:
 Hi Erick,
 
 Some questions/answers below:
 
 On Sun, Aug 6, 2017 at 8:22 PM, Erick Erickson 
 wrote:
> 
> Particularly interested if Mr. McCandless has any opinions here.
> 
> I admit it took some work, but I can create an index that never merges
> and is 80% deleted documents using TieredMergePolicy.
> 
> I'm trying to understand how indexes "in the wild" can have > 30%
> deleted documents. I think the root issue here is that
> TieredMergePolicy doesn't consider for merging any segments > 50% of
> maxMergedSegmentMB of non-deleted documents.
> 
> Let's say I have segments at the default 5G max. For the sake of
> argument, it takes exactly 5,000,000 identically-sized documents to
> fill the segment to exactly 5G.
> 
> IIUC, as long as the segment has more than 2,500,000 documents in it
> it'll never be eligible for merging.
 
 
 That's right.
 
> 
> The only way to force deleted
> docs to be purged is to expungeDeletes or optimize, neither of which
> is recommended.
 
 
 +1
 
> The condition I created was highly artificial but illustrative:
> - I set my max segment size to 20M
> - 

Is searching on docValues=true indexed=false fields trappy

2017-08-28 Thread Erick Erickson
You can search on fields with DV=true and indexed=false. But IIUC this
is a "table scan". Does it really make sense to support this?

NOTE: Haven't checked the code, but even if we build an efficient
structure, it would be in the JVM, correct?




Re: Pathological index condition

2017-08-28 Thread Erick Erickson
And one more thought (not very well thought out).

A parameter on TMP (or whatever) that did <3> something like:
> a parameter 
> a parameter 
> On startup TMP takes the current timestamp
*> Every minute (or whatever) it checks the current timestamp and if
 is in between the last check time and now, do <2>.
> set the last checked time to the value from * above.

Taking the current timestamp would keep from kicking off the compaction
on startup, so we wouldn't need to keep some stateful information
across restarts and wouldn't go into a compact cycle on startup.

Erick
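A hypothetical sketch of the maintenance-window idea above: poll every minute, and fire a deletes-purging compaction when the configured time-of-day falls between the previous check and now. Starting from the startup timestamp avoids compacting on boot, with no state persisted across restarts. All names here are invented for illustration:

```python
import datetime

class CompactionScheduler:
    """Toy model of the periodic-check scheme described above."""

    def __init__(self, compact_at: datetime.time, start: datetime.datetime):
        self.compact_at = compact_at   # e.g. the 1:00 AM maintenance window
        self.last_check = start        # startup timestamp, nothing persisted

    def poll(self, now: datetime.datetime) -> bool:
        """Called every minute; True when the window was crossed."""
        crossed = False
        day = self.last_check.date()
        while day <= now.date():       # handle polls that span midnight
            candidate = datetime.datetime.combine(day, self.compact_at)
            if self.last_check < candidate <= now:
                crossed = True
                break
            day += datetime.timedelta(days=1)
        self.last_check = now          # advance regardless of outcome
        return crossed

sched = CompactionScheduler(datetime.time(1, 0),
                            start=datetime.datetime(2017, 8, 28, 23, 50))
print(sched.poll(datetime.datetime(2017, 8, 28, 23, 55)))  # False: not yet
print(sched.poll(datetime.datetime(2017, 8, 29, 1, 2)))    # True: 1 AM crossed
print(sched.poll(datetime.datetime(2017, 8, 29, 1, 3)))    # False: once per day
```

Note how starting `last_check` at the boot time means a restart shortly after the window simply skips that day's compaction, matching the "no compact cycle on startup" goal.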

On Sun, Aug 27, 2017 at 11:31 AM, Erick Erickson
 wrote:
> I've been thinking about this a little more. Since this is an outlier,
> I'm loathe to change the core TMP merge selection process. Say the max
> segment size if 5G. You'd be doing an awful lot of I/O to merge a
> segment with 4.75G "live" docs with one with 0.25G. Plus that doesn't
> really allow users who issue the tempting "optimize" command to
> recover; that one huge segment can hang around for a _very_ long time,
> accumulating lots of deleted docs. Same with expungeDeletes.
>
> I can think of several approaches:
>
> 1> despite my comment above, a flag that says something like "if a
> segment has > X% deleted docs, merge it with a smaller segment anyway
> respecting the max segment size. I know, I know this will affect
> indexing throughput, do it anyway".
>
> 2> A special op (or perhaps a flag on expungeDeletes) that would
> behave like <1> but on-demand rather than part of standard merging.
>
> In both of these cases, if a segment had > X% deleted docs but the
> live doc size for that segment was > the max seg size, rewrite it into
> a single new segment removing deleted docs.
>
> 3> some way to do the above on a schedule. My notion is something like
> a maintenance window at 1:00 AM. You'd still have a live collection,
> but (presumably) a way to purge the day's accumulation of deleted
> documents during off hours.
>
> 4> ???
>
> I probably like <2> best so far; I don't see this condition in the
> wild very often. It usually occurs during heavy re-indexing operations
> and often after an optimize or expungeDeletes has happened. <1> could
> get horribly pathological if the threshold was 1% or something.
>
> WDYT?
>
>
> On Wed, Aug 9, 2017 at 2:40 PM, Erick Erickson  
> wrote:
>> Thanks Mike:
>>
>> bq: Or are you saying that each segment's 20% of not-deleted docs is
>> still greater than 1/2 of the max segment size, and so TMP considers
>> them ineligible?
>>
>> Exactly.
>>
>> Hadn't seen the blog, thanks for that. Added to my list of things to refer 
>> to.
>>
>> The problem we're seeing is that "in the wild" there are cases where
>> people can now get satisfactory performance from huge numbers of
>> documents, as in close to 2B (there was a question on the user's list
>> about that recently). So allowing up to 60% deleted documents is
>> dangerous in that situation.
>>
>> And the situation is exacerbated by optimizing (I know, "don't do that").
>>
>> Ah, well, the joys of people using this open source thing and pushing
>> its limits.
>>
>> Thanks again,
>> Erick
>>
>> On Tue, Aug 8, 2017 at 3:49 PM, Michael McCandless
>>  wrote:
>>> Hi Erick,
>>>
>>> Some questions/answers below:
>>>
>>> On Sun, Aug 6, 2017 at 8:22 PM, Erick Erickson 
>>> wrote:

 Particularly interested if Mr. McCandless has any opinions here.

 I admit it took some work, but I can create an index that never merges
 and is 80% deleted documents using TieredMergePolicy.

 I'm trying to understand how indexes "in the wild" can have > 30%
 deleted documents. I think the root issue here is that
 TieredMergePolicy doesn't consider for merging any segments > 50% of
 maxMergedSegmentMB of non-deleted documents.

 Let's say I have segments at the default 5G max. For the sake of
 argument, it takes exactly 5,000,000 identically-sized documents to
 fill the segment to exactly 5G.

 IIUC, as long as the segment has more than 2,500,000 documents in it
 it'll never be eligible for merging.
>>>
>>>
>>> That's right.
>>>

 The only way to force deleted
 docs to be purged is to expungeDeletes or optimize, neither of which
 is recommended.
>>>
>>>
>>> +1
>>>
 The condition I created was highly artificial but illustrative:
 - I set my max segment size to 20M
 - Through experimentation I found that each segment would hold roughly
 160K synthetic docs.
 - I set my ramBuffer to 1G.
 - Then I'd index 500K docs, then delete 400K of them, and commit. This
 produces a single segment occupying (roughly) 80M of disk space, 15M
 or so of it "live" documents the rest deleted.
 - rinse, repeat with a disjoint set of doc IDs.

 The number of segments continues to grow forever, each one consisting
 of 80% 
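The arithmetic of that synthetic setup, spelled out against the 50% eligibility rule discussed earlier in the thread (numbers taken from the description above; the rule is a toy model, not Lucene code):

```python
# Erick's synthetic setup: max segment size 20M, index 500K docs,
# delete 400K of them, commit -- repeated with disjoint ID sets.
max_segment_mb = 20
docs_indexed, docs_deleted = 500_000, 400_000

live_docs = docs_indexed - docs_deleted   # 100_000 live docs per cycle
deleted_ratio = docs_deleted / docs_indexed

segment_mb, live_mb = 80, 15              # ~80M on disk, ~15M of it live

# 15M live > 10M (half of the 20M max), so TMP never naturally merges
# the segment away, even though it is 80% deleted documents.
eligible = live_mb <= max_segment_mb / 2
print(f"{segment_mb}M segment, {live_mb}M live, "
      f"{deleted_ratio:.0%} deleted, merge-eligible: {eligible}")
```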

[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144175#comment-16144175
 ] 

Erick Erickson commented on SOLR-11003:
---

Well, it was a nice theory, too bad it's not true. I added a loop in the test 
(on master, but I don't think that matters) where if the document counts don't 
match, I add one more doc to the source and go back around the 
WaitForTargetToSync loop again. 

At the initial failure I see counts of
target: 1901
source: 2000

After my new loop I see counts of
target: 1902
source: 2001

Clearly my new doc is being indexed on the source and sent to the target, so 
this is not a case of the docs reaching the target but somehow not being 
visible to the currently-open searcher.

So this is very unlikely related to SOLR-11034 and SOLR-11035, never mind.
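The diagnostic step in the comment above can be sketched like this. It is a hypothetical harness: `count_docs` and `index_doc` are stand-ins for the real test utilities, not Solr APIs, and the counts are simulated to match the numbers quoted in the comment:

```python
# Hypothetical sketch of the debugging loop described above. If one
# extra document indexed on the source shows up on the target
# (1901 -> 1902) while the original gap of 99 docs persists, then
# replication of new updates is working; the missing docs were lost
# earlier rather than being merely invisible to the open searcher.

def gap_persists_despite_replication(count_docs, index_doc):
    source_before, target_before = count_docs("source"), count_docs("target")
    index_doc("source")                     # index one more doc on the source
    source_after, target_after = count_docs("source"), count_docs("target")
    new_doc_replicated = target_after == target_before + 1
    gap_unchanged = (source_after - target_after) == (source_before - target_before)
    return new_doc_replicated and gap_unchanged

# Simulated counts matching the comment: source 2000 -> 2001, target 1901 -> 1902.
counts = {"source": 2000, "target": 1901}

def count_docs(name):
    return counts[name]

def index_doc(name):
    counts[name] += 1
    if name == "source":
        counts["target"] += 1  # replication of the new doc succeeds

print(gap_persists_despite_replication(count_docs, index_doc))  # True
```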

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format: we can index into the source collection and the 
> updates get forwarded to the passive one, but the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case would be: we keep indexing into ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, the updates 
> will be bootstrapped or forwarded from ClusterBCollectionB to ClusterACollectionA, 
> while we keep indexing on ClusterBCollectionB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6853 - Failure!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6853/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([A2286BB632ABBA1C:766D20EFD5FD09E7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12799 lines...]
  

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 329 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/329/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=681, name=jetty-launcher-100-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=679, name=jetty-launcher-100-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=681, name=jetty-launcher-100-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

Re: Release 7.0 process starts

2017-08-28 Thread Anshum Gupta
Though those failing tests are annoying, I would not recommend disabling them 
on the branch. We can manually ignore the failures while testing things out, 
though.

-Anshum



> On Aug 28, 2017, at 11:10 AM, Adrien Grand  wrote:
> 
> Those flaky Solr tests are annoying since people will also run into failures 
> when checking the RC. Should we disable these tests on the 7.0 branch so that 
> building and verifying this RC isn't annoying to everybody working on this 
> release?
> 
> On Mon, Aug 28, 2017 at 19:23, Anshum Gupta wrote:
> Thanks Adrien! It worked with a fresh clone, at least ant check-licenses 
> worked, so I’m assuming the RC creation would work too.
> I’m running that, and it might take a couple of hours for me to create one, 
> as a few SolrCloud tests are still a little flaky and they fail occasionally.
> 
> -Anshum
> 
> 
> 
>> On Aug 28, 2017, at 10:13 AM, Anshum Gupta wrote:
>> 
>> Adrien,
>> 
>> Yes, ant check-licenses fails with the same error, and so does ant validate 
>> (from the root dir). This is after running ant clean -f.
>> 
>> BUILD FAILED
>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error 
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following error 
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
>> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>> 
>> I didn’t realize that the dependency was upgraded, and what confuses me is 
>> that the file actually exists.
>> 
>> anshum$ ls analysis/icu/lib/icu4j-5*
>> icu4j-56.1.jar  icu4j-59.1.jar
>> 
>> It seems like it’s something that git clean, ant clean clean-jars etc. 
>> didn’t fix. This is really surprising but I’ll try checking out again 
>> and creating an RC (after checking for the dependencies).
>> I think ant should be responsible for cleaning this up, and not git so 
>> there’s something off there.
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Aug 28, 2017, at 8:51 AM, Adrien Grand wrote:
>>> 
>>> You mentioned you tried to run the script multiple times. Have you run git 
>>> clean at some point? Maybe this is due to a stale working copy?
>>> 
>>> On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
>>> Hi Anshum,
>>> 
>>> Does running ant check-licenses from the Lucene directory fail as well? The 
>>> error message that you are getting looks weird to me since Lucene 7.0 
>>> depends on ICU 59.1, not 56.1 since 
>>> https://issues.apache.org/jira/browse/LUCENE-7540.
>>> 
>>> On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:
>>> A quick question, in case someone has an idea around what’s going on. When 
>>> I run the following command:
>>> 
>>> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local 
>>> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
>>> 
>>> I end up with the following error:
>>> 
>>> BUILD FAILED
>>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error 
>>> occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following 
>>> error occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
>>> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>>> 
>>> Any idea as to what’s going on? This generally fails after the tests have 
>>> run, and the script has processed for about 45 minutes and it’s consistent 
>>> i.e. all the times when the tests pass, the process fails with this warning.
>>> 
>>> I can also confirm that this file exists at 
>>> lucene/analysis/icu/lib/icu4j-56.1.jar.
>>> 
>>> Has anyone else seen this when working on the release?
>>> 
>>> -Anshum
>>> 
>>> 
>>> 
 On Aug 23, 2017, at 4:21 AM, Andrzej Białecki wrote:
 
 
> On 23 Aug 2017, at 13:06, Uwe Schindler wrote:
> 
> Keep in mind that there is also branch_7_0.
 
 Right, but the changes related to these issues were committed to master 
 before branch_7_0 was created, and these specific issues are only about 
 back-porting to 6x.
 
> 
> Uww
> 
> On 23 August 2017 at 12:26:42 CEST, "Andrzej Białecki" wrote:
> 
>> On 23 Aug 2017, at 08:15, Anshum Gupta wrote:
>> 
>> I also found more issues when comparing 7x, with 6x this time. I’ll take 
>> a look at wether it’s just the CHANGES entries or have 

[jira] [Commented] (SOLR-11275) Adding diagrams for AutoAddReplica into Solr Ref Guide

2017-08-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144145#comment-16144145
 ] 

Cassandra Targett commented on SOLR-11275:
--

I started looking at this a couple of days ago, and adding asciidoctor-diagram 
wasn't as straightforward as I hoped it would be (I neglected to consider 
adding it in the context of building with Jekyll and Ant). I'll need a 
little more time to get back to this, but will do so soon.

> Adding diagrams for AutoAddReplica into Solr Ref Guide
> --
>
> Key: SOLR-11275
> URL: https://issues.apache.org/jira/browse/SOLR-11275
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Mano Kovacs
> Attachments: autoaddreplica.png, autoaddreplica.puml
>
>
> Pilot jira for adding PlantUML diagrams for documenting internals.






Re: Release 7.0 process starts

2017-08-28 Thread Adrien Grand
Those flaky Solr tests are annoying since people will also run into
failures when checking the RC. Should we disable these tests on the 7.0
branch so that building and verifying this RC isn't annoying to everybody
working on this release?

On Mon, Aug 28, 2017 at 19:23, Anshum Gupta wrote:

> Thanks Adrien! It worked with a fresh clone, at least ant check-licenses
> worked, so I’m assuming the RC creation would work too.
> I’m running that, and it might take a couple of hours for me to create
> one, as a few SolrCloud tests are still a little flaky and they fail
> occasionally.
>
> -Anshum
>
>
>
> On Aug 28, 2017, at 10:13 AM, Anshum Gupta  wrote:
>
> Adrien,
>
> Yes, ant check-licenses fails with the same error, and so does ant
> validate (from the root dir). This is after running ant clean -f.
>
> BUILD FAILED
> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
> occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
> error occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR
> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>
> I didn’t realize that the dependency was upgraded, and what confuses me is
> that the file actually exists.
>
> anshum$ ls analysis/icu/lib/icu4j-5*
> icu4j-56.1.jar  icu4j-59.1.jar
>
> It seems like it’s something that git clean, ant clean clean-jars etc.
> didn’t fix. This is really surprising but I’ll try checking out again
> and creating an RC (after checking for the dependencies).
> I think ant should be responsible for cleaning this up, and not git so
> there’s something off there.
>
> -Anshum
>
>
>
> On Aug 28, 2017, at 8:51 AM, Adrien Grand  wrote:
>
> You mentioned you tried to run the script multiple times. Have you run git
> clean at some point? Maybe this is due to a stale working copy?
>
On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
>
>> Hi Anshum,
>>
>> Does running ant check-licenses from the Lucene directory fail as well?
>> The error message that you are getting looks weird to me since Lucene 7.0
>> depends on ICU 59.1, not 56.1 since
>> https://issues.apache.org/jira/browse/LUCENE-7540.
>>
>> On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:
>>
>>> A quick question, in case someone has an idea around what’s going on.
>>> When I run the following command:
>>>
>>> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local
>>> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
>>>
>>> I end up with the following error:
>>>
>>> BUILD FAILED
>>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
>>> occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
>>> error occurred while executing this line:
>>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62:
>>> JAR resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>>>
>>> Any idea as to what’s going on? This generally fails after the tests
>>> have run, and the script has processed for about 45 minutes and it’s
>>> consistent i.e. all the times when the tests pass, the process fails with
>>> this warning.
>>>
>>> I can also confirm that this file exists at
>>> lucene/analysis/icu/lib/icu4j-56.1.jar.
>>>
>>> Has anyone else seen this when working on the release?
>>>
>>> -Anshum
>>>
>>>
>>>
>>> On Aug 23, 2017, at 4:21 AM, Andrzej Białecki <
>>> andrzej.biale...@lucidworks.com> wrote:
>>>
>>>
>>> On 23 Aug 2017, at 13:06, Uwe Schindler  wrote:
>>>
>>> Keep in mind that there is also branch_7_0.
>>>
>>>
>>> Right, but the changes related to these issues were committed to master
>>> before branch_7_0 was created, and these specific issues are only about
>>> back-porting to 6x.
>>>
>>>
>>> Uww
>>>
>>> On 23 August 2017 at 12:26:42 CEST, "Andrzej Białecki" <
>>> a...@getopt.org> wrote:


 On 23 Aug 2017, at 08:15, Anshum Gupta  wrote:

 I also found more issues when comparing 7x with 6x this time. I’ll
 take a look at whether it’s just the CHANGES entries or whether these have
 actually missed the branch. I assume it’s just the CHANGES, but want to be sure. If
 the committers involved can pitch in, I’d appreciate, else I’ll work on
 this for a bit right now and continue with this tomorrow morning.

 - SOLR-10477 (Ab)


 This is a partial back-port of relevant improvements from master to 6x,
 so there are no strictly corresponding commits on 7x/master.

 - SOLR-10631: Metric reporters leak on 6x. (Ab)


 This one has been fixed as part of other related issues in branches 7.x
 / master, so it only required a specific fix for 6x.

 - SOLR-1 (Ab)


 This has been committed first to 7x, then to 6x and it’s present in
 

[JENKINS] Lucene-Solr-NightlyTests-7.0 - Build # 35 - Still Unstable

2017-08-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.0/35/

9 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([37424D69F4F2ABF5]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates

Error Message:
Timeout while trying to assert number of documents @ 
http://127.0.0.1:41220/_uh/qh/source_collection_shard1_replica_n2/

Stack Trace:
java.lang.AssertionError: Timeout while trying to assert number of documents @ 
http://127.0.0.1:41220/_uh/qh/source_collection_shard1_replica_n2/
at 
__randomizedtesting.SeedInfo.seed([A4D131819495438D:77D8619FD106DF1A]:0)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.assertNumDocs(CdcrReplicationHandlerTest.java:256)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates(CdcrReplicationHandlerTest.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: Release 7.0 process starts

2017-08-28 Thread Anshum Gupta
Thanks Adrien! It worked with a fresh clone, at least ant check-licenses 
worked, so I’m assuming the RC creation would work too.
I’m running that, and it might take a couple of hours for me to create one, as 
a few SolrCloud tests are still a little flaky and they fail occasionally.

-Anshum



> On Aug 28, 2017, at 10:13 AM, Anshum Gupta  wrote:
> 
> Adrien,
> 
> Yes, ant check-licenses fails with the same error, and so does ant validate 
> (from the root dir). This is after running ant clean -f.
> 
> BUILD FAILED
> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error 
> occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following error 
> occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
> 
> I didn’t realize that the dependency was upgraded, and what confuses me is 
> that the file actually exists.
> 
> anshum$ ls analysis/icu/lib/icu4j-5*
> icu4j-56.1.jar  icu4j-59.1.jar
> 
> It seems like it’s something that git clean, ant clean clean-jars etc. didn’t 
> fix. This is really surprising but I’ll try checking out again and 
> creating an RC (after checking for the dependencies).
> I think ant should be responsible for cleaning this up, and not git so 
> there’s something off there.
> 
> -Anshum
> 
> 
> 
>> On Aug 28, 2017, at 8:51 AM, Adrien Grand wrote:
>> 
>> You mentioned you tried to run the script multiple times. Have you run git 
>> clean at some point? Maybe this is due to a stale working copy?
>> 
>> On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
>> Hi Anshum,
>> 
>> Does running ant check-licenses from the Lucene directory fail as well? The 
>> error message that you are getting looks weird to me since Lucene 7.0 
>> depends on ICU 59.1, not 56.1 since 
>> https://issues.apache.org/jira/browse/LUCENE-7540.
>> 
>> On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:
>> A quick question, in case someone has an idea around what’s going on. When I 
>> run the following command:
>> 
>> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local 
>> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
>> 
>> I end up with the following error:
>> 
>> BUILD FAILED
>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error 
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following error 
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
>> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>> 
>> Any idea as to what’s going on? This generally fails after the tests have 
>> run, and the script has processed for about 45 minutes and it’s consistent 
>> i.e. all the times when the tests pass, the process fails with this warning.
>> 
>> I can also confirm that this file exists at 
>> lucene/analysis/icu/lib/icu4j-56.1.jar.
>> 
>> Has anyone else seen this when working on the release?
>> 
>> -Anshum
>> 
>> 
>> 
>>> On Aug 23, 2017, at 4:21 AM, Andrzej Białecki wrote:
>>> 
>>> 
 On 23 Aug 2017, at 13:06, Uwe Schindler wrote:
 
 Keep in mind that there is also branch_7_0.
>>> 
>>> Right, but the changes related to these issues were committed to master 
>>> before branch_7_0 was created, and these specific issues are only about 
>>> back-porting to 6x.
>>> 
 
 Uww
 
 On 23 August 2017 at 12:26:42 CEST, "Andrzej Białecki" wrote:
 
> On 23 Aug 2017, at 08:15, Anshum Gupta wrote:
> 
> I also found more issues when comparing 7x with 6x this time. I’ll take 
> a look at whether it’s just the CHANGES entries or whether these have actually 
> missed the branch. I assume it’s just the CHANGES, but want to be sure. 
> If the committers involved can pitch in, I’d appreciate, else I’ll work 
> on this for a bit right now and continue with this tomorrow morning.
> 
> - SOLR-10477 (Ab)
 
 This is a partial back-port of relevant improvements from master to 6x, so 
 there are no strictly corresponding commits on 7x/master.
 
> - SOLR-10631: Metric reporters leak on 6x. (Ab)
 
 This one has been fixed as part of other related issues in branches 7.x / 
 master, so it only required a specific fix for 6x.
 
> - SOLR-1 (Ab)
> 
 
 This has been committed first to 7x, then to 6x and it’s present in 
 branch_6_6.

[jira] [Assigned] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-11209:
--

Assignee: Mark Miller

> Upgrade HttpClient to 4.5.3
> ---
>
> Key: SOLR-11209
> URL: https://issues.apache.org/jira/browse/SOLR-11209
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Assignee: Mark Miller
>Priority: Minor
>
> We have not upgraded HttpClient version for long time (since SOLR-6865 was 
> committed). It may be a good idea to upgrade to the latest stable version 
> (which is 4.5.3).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Release 7.0 process starts

2017-08-28 Thread Anshum Gupta
Adrien,

Yes, ant check-licenses fails with the same error, and so does ant validate 
(from the root dir). This is after running ant clean -f.

BUILD FAILED
/Users/anshum/workspace/lucene-solr/build.xml:117: The following error occurred 
while executing this line:
/Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following error 
occurred while executing this line:
/Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
resource does not exist: analysis/icu/lib/icu4j-56.1.jar

I didn’t realize that the dependency was upgraded, and what confuses me is that 
the file actually exists.

anshum$ ls analysis/icu/lib/icu4j-5*
icu4j-56.1.jar  icu4j-59.1.jar

It seems like it’s something that git clean, ant clean, clean-jars, etc. didn’t 
fix. This is really surprising, but I’ll try checking out again and creating 
an RC (after checking the dependencies).
I think ant, and not git, should be responsible for cleaning this up, so there’s 
something off there.
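The stale-jar state described above — two versions of icu4j side by side in the same lib directory, with the license check still picking up the old one — can be spotted mechanically. A rough sketch of such a check (a hypothetical helper, not part of the Lucene build or release scripts):

```python
from collections import defaultdict
from pathlib import Path
import re

def find_stale_jars(lib_dir):
    """Group jars in lib_dir by artifact name; any artifact present in more
    than one version is a candidate for 'git clean' / manual removal."""
    versions = defaultdict(list)
    for jar in Path(lib_dir).glob("*.jar"):
        # Split "icu4j-56.1.jar" into artifact "icu4j" and version "56.1".
        m = re.match(r"(.+?)-(\d[\d.]*)\.jar$", jar.name)
        if m:
            versions[m.group(1)].append(m.group(2))
    return {name: sorted(v) for name, v in versions.items() if len(v) > 1}
```

Run against lucene/analysis/icu/lib, a result such as {'icu4j': ['56.1', '59.1']} would flag exactly the situation above.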

-Anshum



> On Aug 28, 2017, at 8:51 AM, Adrien Grand  wrote:
> 
> You mentioned you tried to run the script multiple times. Have you run git 
> clean at some point? Maybe this is due to a stale working copy?
> 
> On Mon, Aug 28, 2017 at 08:53, Adrien Grand wrote:
> Hi Anshum,
> 
> Does running ant check-licenses from the Lucene directory fail as well? The 
> error message that you are getting looks weird to me since Lucene 7.0 depends 
> on ICU 59.1, not 56.1, as of https://issues.apache.org/jira/browse/LUCENE-7540.
> 
> On Fri, Aug 25, 2017 at 23:42, Anshum Gupta wrote:
> A quick question, in case someone has an idea around what’s going on. When I 
> run the following command:
> 
> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local 
> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
> 
> I end up with the following error:
> 
> BUILD FAILED
> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error 
> occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following error 
> occurred while executing this line:
> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR 
> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
> 
> Any idea as to what’s going on? This generally fails after the tests have 
> run, and the script has processed for about 45 minutes and it’s consistent 
> i.e. all the times when the tests pass, the process fails with this warning.
> 
> I can also confirm that this file exists at 
> lucene/analysis/icu/lib/icu4j-56.1.jar.
> 
> Has anyone else seen this when working on the release?
> 
> -Anshum
> 
> 
> 
>> On Aug 23, 2017, at 4:21 AM, Andrzej Białecki wrote:
>> 
>> 
>>> On 23 Aug 2017, at 13:06, Uwe Schindler wrote:
>>> 
>>> Keep in mind that there is also branch_7_0.
>> 
>> Right, but the changes related to these issues were committed to master 
>> before branch_7_0 was created, and these specific issues are only about 
>> back-porting to 6x.
>> 
>>> 
>>> Uwe
>>> 
>>> On 23 August 2017 12:26:42 CEST, "Andrzej Białecki" wrote:
>>> 
 On 23 Aug 2017, at 08:15, Anshum Gupta wrote:
 
 I also found more issues when comparing 7x with 6x this time. I’ll take a 
 look at whether it’s just the CHANGES entries or whether these have actually missed 
 the branch. I assume it’s just the CHANGES, but I want to be sure. If the 
 committers involved can pitch in, I’d appreciate it; otherwise I’ll work on this 
 for a bit right now and continue with it tomorrow morning.
 
 - SOLR-10477 (Ab)
>>> 
>>> This is a partial back-port of relevant improvements from master to 6x, so 
>>> there are no strictly corresponding commits on 7x/master.
>>> 
 - SOLR-10631: Metric reporters leak on 6x. (Ab)
>>> 
>>> This one has been fixed as part of other related issues in branches 7.x / 
>>> master, so it only required a specific fix for 6x.
>>> 
 - SOLR-1 (Ab)
 
>>> 
>>> This has been committed first to 7x, then to 6x and it’s present in 
>>> branch_6_6.
>>> 
>>> 
>>> ---
>>> Best regards,
>>> 
>>> Andrzej Bialecki
>>> 
>>> 
>>> --
>>> Uwe Schindler
>>> Achterdiek 19, 28357 Bremen
>>> https://www.thetaphi.de 
> 



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 141 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/141/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

Error Message:
Tests must be run with INFO level logging otherwise LogUpdateProcessor isn't 
used and can't be tested.

Stack Trace:
java.lang.AssertionError: Tests must be run with INFO level logging otherwise 
LogUpdateProcessor isn't used and can't be tested.
at 
__randomizedtesting.SeedInfo.seed([EF9CF31DF7C7AAA3:9E780DCB812C8D6F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping(UpdateRequestProcessorFactoryTest.java:101)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12214 lines...]
   

[jira] [Resolved] (LUCENE-7943) Plane.findArcDistancePoints() sometimes throws assertion failures even when plane explicitly normalized

2017-08-28 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7943.
-
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)
   6.7

> Plane.findArcDistancePoints() sometimes throws assertion failures even when 
> plane explicitly normalized
> ---
>
> Key: LUCENE-7943
> URL: https://issues.apache.org/jira/browse/LUCENE-7943
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
> Fix For: 6.7, master (8.0), 7.1
>
>
> The following assertion sometimes fails even when the plane has been 
> explicitly normalized:
> {code}
> assert Math.abs(x*x + y*y + z*z - 1.0) < MINIMUM_RESOLUTION_SQUARED : 
> "Plane needs to be normalized";
> {code}
> I can find nothing wrong with the assertion check, but the numerical accuracy 
> is 1e-16, which just isn't high enough to support MINIMUM_RESOLUTION_SQUARED 
> accuracy (1e-24).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7943) Plane.findArcDistancePoints() sometimes throws assertion failures even when plane explicitly normalized

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143969#comment-16143969
 ] 

ASF subversion and git services commented on LUCENE-7943:
-

Commit 3343b5c50369ec9d4ed30db3647f569abe4001a9 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3343b5c ]

LUCENE-7943: Disable an overly-aggressive assertion.


> Plane.findArcDistancePoints() sometimes throws assertion failures even when 
> plane explicitly normalized
> ---
>
> Key: LUCENE-7943
> URL: https://issues.apache.org/jira/browse/LUCENE-7943
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> The following assertion sometimes fails even when the plane has been 
> explicitly normalized:
> {code}
> assert Math.abs(x*x + y*y + z*z - 1.0) < MINIMUM_RESOLUTION_SQUARED : 
> "Plane needs to be normalized";
> {code}
> I can find nothing wrong with the assertion check, but the numerical accuracy 
> is 1e-16, which just isn't high enough to support MINIMUM_RESOLUTION_SQUARED 
> accuracy (1e-24).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7943) Plane.findArcDistancePoints() sometimes throws assertion failures even when plane explicitly normalized

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143967#comment-16143967
 ] 

ASF subversion and git services commented on LUCENE-7943:
-

Commit 268789584ea83b8e66099045fa81c07a16178da6 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2687895 ]

LUCENE-7943: Disable an overly-aggressive assertion.


> Plane.findArcDistancePoints() sometimes throws assertion failures even when 
> plane explicitly normalized
> ---
>
> Key: LUCENE-7943
> URL: https://issues.apache.org/jira/browse/LUCENE-7943
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> The following assertion sometimes fails even when the plane has been 
> explicitly normalized:
> {code}
> assert Math.abs(x*x + y*y + z*z - 1.0) < MINIMUM_RESOLUTION_SQUARED : 
> "Plane needs to be normalized";
> {code}
> I can find nothing wrong with the assertion check, but the numerical accuracy 
> is 1e-16, which just isn't high enough to support MINIMUM_RESOLUTION_SQUARED 
> accuracy (1e-24).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7943) Plane.findArcDistancePoints() sometimes throws assertion failures even when plane explicitly normalized

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143966#comment-16143966
 ] 

ASF subversion and git services commented on LUCENE-7943:
-

Commit 200beab09ee68903e8511b2329f2cd54cf4de00a in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=200beab ]

LUCENE-7943: Disable an overly-aggressive assertion.


> Plane.findArcDistancePoints() sometimes throws assertion failures even when 
> plane explicitly normalized
> ---
>
> Key: LUCENE-7943
> URL: https://issues.apache.org/jira/browse/LUCENE-7943
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Karl Wright
>Assignee: Karl Wright
>
> The following assertion sometimes fails even when the plane has been 
> explicitly normalized:
> {code}
> assert Math.abs(x*x + y*y + z*z - 1.0) < MINIMUM_RESOLUTION_SQUARED : 
> "Plane needs to be normalized";
> {code}
> I can find nothing wrong with the assertion check, but the numerical accuracy 
> is 1e-16, which just isn't high enough to support MINIMUM_RESOLUTION_SQUARED 
> accuracy (1e-24).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7943) Plane.findArcDistancePoints() sometimes throws assertion failures even when plane explicitly normalized

2017-08-28 Thread Karl Wright (JIRA)
Karl Wright created LUCENE-7943:
---

 Summary: Plane.findArcDistancePoints() sometimes throws assertion 
failures even when plane explicitly normalized
 Key: LUCENE-7943
 URL: https://issues.apache.org/jira/browse/LUCENE-7943
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Karl Wright
Assignee: Karl Wright


The following assertion sometimes fails even when the plane has been explicitly 
normalized:

{code}
assert Math.abs(x*x + y*y + z*z - 1.0) < MINIMUM_RESOLUTION_SQUARED : 
"Plane needs to be normalized";
{code}

I can find nothing wrong with the assertion check, but the numerical accuracy 
is 1e-16, which just isn't high enough to support MINIMUM_RESOLUTION_SQUARED 
accuracy (1e-24).
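The precision argument is easy to reproduce outside Lucene: the residual |x²+y²+z²−1| after normalizing with doubles lands near machine epsilon (~2.2e-16), orders of magnitude above a 1e-24 threshold. A Python sketch — the constant name mirrors the assertion above, and the test vectors are arbitrary:

```python
import math

MINIMUM_RESOLUTION_SQUARED = 1e-24  # the tolerance the failing assertion demands

def normalize(x, y, z):
    """Normalize a 3D vector using double-precision arithmetic."""
    norm = math.sqrt(x * x + y * y + z * z)
    return x / norm, y / norm, z / norm

vectors = [(1.0, 2.0, 3.0), (0.1234567, -0.7654321, 0.5555555), (1e-3, 1.0, -1e3)]
residuals = [abs(sum(c * c for c in normalize(*v)) - 1.0) for v in vectors]

# Residuals sit at the ~1e-16 machine-epsilon scale: tiny, but still far
# larger than the 1e-24 tolerance, so the assertion can fire even for a
# correctly normalized plane.
max_residual = max(residuals)
print(max_residual)
```

This is exactly why disabling (or loosening) the assertion, rather than re-normalizing harder, is the available fix.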




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11294) BasicAuthIntegrationTest fails a lot with No registered leader message

2017-08-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143958#comment-16143958
 ] 

Varun Thacker commented on SOLR-11294:
--

{code}
 [junit4]   2> 8224 INFO  (qtp121821744-33) [n:127.0.0.1:57133_solr] 
o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. failed 
permission {
 [junit4]   2>   "name":"collection-admin-edit",
 [junit4]   2>   "role":"admin",
 [junit4]   2>   "index":3}
{code}

This looks like a failed request? At least it should be marked as a WARN? Again, 
not related to the actual failure; posting comments while going through the logs.

> BasicAuthIntegrationTest fails a lot with No registered leader message
> --
>
> Key: SOLR-11294
> URL: https://issues.apache.org/jira/browse/SOLR-11294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 6.6.1
>
> Attachments: TestBasicAuthIntegration_Fail.log
>
>
> I can see 20+ failures over the last 7 days with Jenkins enabled on 
> branch_6_6 for BasicAuthIntegrationTest
> {code}
> Error Message:
> Error from server at http://127.0.0.1:61124/solr/authCollection: No 
> registered leader was found after waiting for 4000ms , collection: 
> authCollection slice: shard2
> {code}
> Attaching the seed and logs for 1 such test run that failed on my machine
> {code}
> ant test  -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth 
> -Dtests.seed=82AFFEAD74267467 -Dtests.slow=true -Dtests.locale=hu 
> -Dtests.timezone=Etc/Universal -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's failed 3/4 times I attempted to build the RC for 6.6.1 today so I am 
> marking this as a blocker currently



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11278) CdcrBootstrapTest failing in branch_6_6

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143949#comment-16143949
 ] 

Amrit Sarkar commented on SOLR-11278:
-

Posting SOLR-11003 discussion here:

bq. All the tests in this class fail where we stop CDCR, index docs in the source, 
and then turn CDCR on again and expect a BOOTSTRAP to happen. If I debug in an IDE, 
all tests pass (as the steps slow down), suggesting the time to wait for the target 
to sync is too low. But increasing it even to 5 minutes, instead of the default 2 
minutes, doesn't work. Increasing the interval of the explicit commit issued while 
waiting from 1 to 3 seconds doesn't work either.
Let me know if there are other tests related to CDCR that are failing too.

bq. Though for the tests I mentioned above that are constantly failing, Solr does a 
core reload every second while waiting for the target to sync. It can very well be a 
variant of SOLR-11034 and/or SOLR-11035, as we can see an occasional NPE at 
IndexFetcher.java:753.
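The wait-for-sync loop being tuned here (raise the timeout, issue commits more often) is a plain poll-with-deadline pattern. A minimal sketch, with hypothetical callables standing in for the actual test helpers:

```python
import time

def wait_for_sync(get_target_doc_count, expected, issue_commit,
                  timeout_s=120, commit_interval_s=1.0):
    """Poll until the target cluster reports the expected doc count.

    Issues a commit each round (as the CDCR test does while waiting) and
    gives up with TimeoutError after timeout_s seconds.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        issue_commit()
        if get_target_doc_count() == expected:
            return True
        time.sleep(commit_interval_s)
    raise TimeoutError(
        f"target did not reach {expected} docs within {timeout_s}s")
```

If the failures persist even with a 5-minute deadline, as reported above, the problem is likely not in this loop but in the bootstrap itself.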



> CdcrBootstrapTest failing in branch_6_6
> ---
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11294) BasicAuthIntegrationTest fails a lot with No registered leader message

2017-08-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143946#comment-16143946
 ] 

Varun Thacker commented on SOLR-11294:
--

{code}[junit4]   2> 2612 INFO  (qtp1166217625-42) [n:127.0.0.1:57135_solr] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 
transient cores
{code}

Looking at this log entry, it indicates that it's allocating 2B entries, but the 
cache actually caps out at 1000:

{code}
// Now don't allow ridiculous allocations here, if the size is > 1,000, we'll just deal with
// adding cores as they're opened. This blows up with the marker value of -1.
transientCores = new LinkedHashMap<>(Math.min(cacheSize, 1000), 0.75f, true);
{code}
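The quoted Java builds an access-ordered LinkedHashMap whose initial capacity hint is capped at 1,000, which is why "Allocating transient cache for 2147483647 transient cores" is only a misleading log line, not a real 2B allocation. A rough Python analogue of the capping and LRU access order (a sketch, not Solr's actual cache class):

```python
from collections import OrderedDict

class TransientCache:
    """LRU-style cache whose capacity hint is capped at 1000, mirroring
    Math.min(cacheSize, 1000) in the quoted Java snippet."""

    def __init__(self, cache_size):
        # Integer.MAX_VALUE (or the -1 marker) would blow up a real
        # pre-allocation; cap the hint instead.
        self.capacity_hint = min(cache_size, 1000)
        self._cores = OrderedDict()

    def get(self, name):
        core = self._cores.get(name)
        if core is not None:
            # Access order, like LinkedHashMap(..., 0.75f, true).
            self._cores.move_to_end(name)
        return core

    def put(self, name, core):
        self._cores[name] = core
        self._cores.move_to_end(name)
```

Logging capacity_hint instead of the raw requested size would make the message honest.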

We should improve the log message.

Anyway, this is not related to the actual failure.

> BasicAuthIntegrationTest fails a lot with No registered leader message
> --
>
> Key: SOLR-11294
> URL: https://issues.apache.org/jira/browse/SOLR-11294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 6.6.1
>
> Attachments: TestBasicAuthIntegration_Fail.log
>
>
> I can see 20+ failures over the last 7 days with Jenkins enabled on 
> branch_6_6 for BasicAuthIntegrationTest
> {code}
> Error Message:
> Error from server at http://127.0.0.1:61124/solr/authCollection: No 
> registered leader was found after waiting for 4000ms , collection: 
> authCollection slice: shard2
> {code}
> Attaching the seed and logs for 1 such test run that failed on my machine
> {code}
> ant test  -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth 
> -Dtests.seed=82AFFEAD74267467 -Dtests.slow=true -Dtests.locale=hu 
> -Dtests.timezone=Etc/Universal -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's failed 3/4 times I attempted to build the RC for 6.6.1 today so I am 
> marking this as a blocker currently



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11278) CdcrBootstrapTest failing in branch_6_6

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143926#comment-16143926
 ] 

Amrit Sarkar edited comment on SOLR-11278 at 8/28/17 4:05 PM:
--

{code} 
 [beaster] Tests with failures [seed: 8D740119BA9589F1]:
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithSourceCluster
{code}

Safe to say, all the three tests are NOT passable at all the seeds.


was (Author: sarkaramr...@gmail.com):
{code} 
 [beaster] Tests with failures [seed: 8D740119BA9589F1]:
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithSourceCluster
{code}

Safe to say, all the three tests are passable at all the seeds.

> CdcrBootstrapTest failing in branch_6_6
> ---
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11294) BasicAuthIntegrationTest fails a lot with No registered leader message

2017-08-28 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11294:
-
Attachment: TestBasicAuthIntegration_Fail.log

> BasicAuthIntegrationTest fails a lot with No registered leader message
> --
>
> Key: SOLR-11294
> URL: https://issues.apache.org/jira/browse/SOLR-11294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Blocker
> Fix For: 6.6.1
>
> Attachments: TestBasicAuthIntegration_Fail.log
>
>
> I can see 20+ failures over the last 7 days with Jenkins enabled on 
> branch_6_6 for BasicAuthIntegrationTest
> {code}
> Error Message:
> Error from server at http://127.0.0.1:61124/solr/authCollection: No 
> registered leader was found after waiting for 4000ms , collection: 
> authCollection slice: shard2
> {code}
> Attaching the seed and logs for 1 such test run that failed on my machine
> {code}
> ant test  -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth 
> -Dtests.seed=82AFFEAD74267467 -Dtests.slow=true -Dtests.locale=hu 
> -Dtests.timezone=Etc/Universal -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
> {code}
> It's failed 3/4 times I attempted to build the RC for 6.6.1 today so I am 
> marking this as a blocker currently



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11294) BasicAuthIntegrationTest fails a lot with No registered leader message

2017-08-28 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-11294:


 Summary: BasicAuthIntegrationTest fails a lot with No registered 
leader message
 Key: SOLR-11294
 URL: https://issues.apache.org/jira/browse/SOLR-11294
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker
Priority: Blocker
 Fix For: 6.6.1


I can see 20+ failures over the last 7 days with Jenkins enabled on branch_6_6 
for BasicAuthIntegrationTest

{code}
Error Message:
Error from server at http://127.0.0.1:61124/solr/authCollection: No registered 
leader was found after waiting for 4000ms , collection: authCollection slice: 
shard2
{code}


Attaching the seed and logs for 1 such test run that failed on my machine

{code}
ant test  -Dtestcase=BasicAuthIntegrationTest -Dtests.method=testBasicAuth 
-Dtests.seed=82AFFEAD74267467 -Dtests.slow=true -Dtests.locale=hu 
-Dtests.timezone=Etc/Universal -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{code}

It's failed 3/4 times I attempted to build the RC for 6.6.1 today so I am 
marking this as a blocker currently



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143932#comment-16143932
 ] 

Amrit Sarkar edited comment on SOLR-11003 at 8/28/17 4:00 PM:
--

[~erickerickson] Are you talking about this? _SOLR-11278: CdcrBootstrapTest 
failing in branch_6_6_. I am trying to understand what's wrong with it, and 
have narrowed down to: 

All the tests in this class fail where *we stop CDCR, index docs in the source, and 
then turn CDCR on again and expect a BOOTSTRAP to happen*. If I debug in an IDE, 
all tests pass (as the steps slow down), suggesting the _time 
to wait for the target to sync_ is too low. But increasing it even to 5 minutes, 
instead of the default 2 minutes, doesn't work. Increasing the interval of the 
explicit commit issued while waiting from 1 to 3 seconds doesn't work either.

Let me know if there are other tests related to CDCR that are failing too.


was (Author: sarkaramr...@gmail.com):
[~erickerickson] Are you talking about this? _SOLR-11278: CdcrBootstrapTest 
failing in branch_6_6_. I am trying to understand what's wrong with it, and 
have narrowed down to: 

All the tests in this class fails where *we stop CDCR, index docs in source and 
then turns on CDCR again and expect BOOTSTRAP to happen*. If I debug on IDE, 
all tests passes successfully (as the steps slows down), suggesting the _time 
to wait for target to sync_ is low. But increasing it to 5 minutes even, 
instead of default 2 minutes, doesn't work.

Let me know if there are other tests which are failing too related to CDCR.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case would be to we keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11281) SolrSlf4jReporterTest fails on jenkins

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143934#comment-16143934
 ] 

ASF subversion and git services commented on SOLR-11281:


Commit 40f999b08e8dc8b515d83c0a56b3e96d84547f5d in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=40f999b ]

SOLR-11281: Remove the diagnostic additions and apply a patch from Jason 
Gerlowski.


> SolrSlf4jReporterTest fails on jenkins
> --
>
> Key: SOLR-11281
> URL: https://issues.apache.org/jira/browse/SOLR-11281
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Minor
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-11281.patch
>
>
> This test fails frequently on jenkins with a failed assertion:
> {code}
> FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter
> Error Message:
> Stack Trace:
> java.lang.AssertionError
>   at 
> __randomizedtesting.SeedInfo.seed([7B977D6F04FCA50C:247350586FF03649]:0)
>   at org.junit.Assert.fail(Assert.java:92)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at org.junit.Assert.assertTrue(Assert.java:54)
>   at 
> org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:84)
> {code}
> A better failure message is needed first, then the test needs a fix to be 
> more robust.






[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143935#comment-16143935
 ] 

Amrit Sarkar commented on SOLR-11003:
-

The constantly failing tests I mentioned above do a *Solr Core reload every 
second while waiting for the target to sync*. This can very well be a variant 
of SOLR-11034 and/or SOLR-11035, as we see an occasional NPE at 
IndexFetcher.java:753.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The current version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates 
> get forwarded to the passive one, but the reverse direction is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real time: 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The ideal use-case: we keep indexing into ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, updates 
> are bootstrapped or forwarded from ClusterBCollectionB to ClusterACollectionA, 
> and we keep indexing on ClusterBCollectionB.






[JENKINS-EA] Lucene-Solr-6.6-Linux (64bit/jdk-9-ea+181) - Build # 107 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/107/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 
--illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Error from server at http://127.0.0.1:44207/solr/authCollection: No registered 
leader was found after waiting for 4000ms , collection: authCollection slice: 
shard2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:44207/solr/authCollection: No registered leader 
was found after waiting for 4000ms , collection: authCollection slice: shard2
at 
__randomizedtesting.SeedInfo.seed([190B2A84EEFF99E2:A5655C964AAC1A98]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:612)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:447)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:388)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1383)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1134)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1073)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:194)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143932#comment-16143932
 ] 

Amrit Sarkar commented on SOLR-11003:
-

[~erickerickson] Are you talking about this? _SOLR-11278: CdcrBootstrapTest 
failing in branch_6_6_. I am trying to understand what's wrong with it, and 
have narrowed it down to this:

All the tests in this class fail where *we stop CDCR, index docs into the 
source, then turn CDCR on again and expect a BOOTSTRAP to happen*. If I debug 
in the IDE, all tests pass (since the steps slow down), suggesting the _time 
to wait for the target to sync_ is too low. But increasing it even to 5 
minutes, instead of the default 2 minutes, doesn't help.

Let me know if there are other CDCR-related tests that are failing too.
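For reference, the wait-for-sync step under suspicion boils down to a poll 
loop like the following hedged sketch (hypothetical helper names in Python, 
not the actual Java test code):

```python
import time

# Hedged sketch (hypothetical helper, not the actual test code): poll the
# target collection's document count until it matches the expected source
# count, or give up once the timeout elapses.
def wait_for_target_sync(get_target_count, expected,
                         timeout_s=120.0, interval_s=1.0):
    deadline = time.monotonic() + timeout_s
    while True:
        if get_target_count() == expected:
            return True  # target caught up with the source
        if time.monotonic() >= deadline:
            return False  # timed out: "Document mismatch on target after sync"
        time.sleep(interval_s)
```

Raising timeout_s corresponds to raising the test's wait from the default 2 
minutes; as noted above, that alone does not make the bootstrap tests pass.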

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The current version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates 
> get forwarded to the passive one, but the reverse direction is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real time: 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The ideal use-case: we keep indexing into ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, updates 
> are bootstrapped or forwarded from ClusterBCollectionB to ClusterACollectionA, 
> and we keep indexing on ClusterBCollectionB.






[jira] [Commented] (SOLR-11278) CdcrBootstrapTest failing in branch_6_6

2017-08-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143931#comment-16143931
 ] 

Erick Erickson commented on SOLR-11278:
---

My bet, and this would be for verification purposes only:

If on failure we added a single document to the source and tried again, we 
wouldn't fail, which would implicate SOLR-11034 and SOLR-11035. Hmmm, let me 
give that a whirl. Those two JIRAs are holding up several other JIRAs...

> CdcrBootstrapTest failing in branch_6_6
> ---
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Updated] (SOLR-11278) CdcrBootstrapTest failing in branch_6_6

2017-08-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11278:

Summary: CdcrBootstrapTest failing in branch_6_6  (was: 
CdcrBootstrapTest.testBootstrapWithSourceCluster failing in branch_6_6)

> CdcrBootstrapTest failing in branch_6_6
> ---
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






Re: Release 7.0 process starts

2017-08-28 Thread Adrien Grand
You mentioned you tried to run the script multiple times. Have you run git
clean at some point? Maybe this is due to a stale working copy?

Le lun. 28 août 2017 à 08:53, Adrien Grand  a écrit :

> Hi Anshum,
>
> Does running ant check-licenses from the Lucene directory fail as well?
> The error message that you are getting looks weird to me since Lucene 7.0
> depends on ICU 59.1, not 56.1 since
> https://issues.apache.org/jira/browse/LUCENE-7540.
>
> Le ven. 25 août 2017 à 23:42, Anshum Gupta  a écrit :
>
>> A quick question, in case someone has an idea around what’s going on.
>> When I run the following command:
>>
>> python3 -u dev-tools/scripts/buildAndPushRelease.py --push-local
>> /Users/anshum/solr/release/7.0.0/rc0 --rc-num 1 --sign 
>>
>> I end up with the following error:
>>
>> BUILD FAILED
>> /Users/anshum/workspace/lucene-solr/build.xml:117: The following error
>> occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/build.xml:90: The following
>> error occurred while executing this line:
>> /Users/anshum/workspace/lucene-solr/lucene/tools/custom-tasks.xml:62: JAR
>> resource does not exist: analysis/icu/lib/icu4j-56.1.jar
>>
>> Any idea as to what’s going on? This generally fails after the tests have
>> run and the script has been going for about 45 minutes, and it’s consistent,
>> i.e. every time the tests pass, the process fails with this error.
>>
>> I can also confirm that this file exists at
>> lucene/analysis/icu/lib/icu4j-56.1.jar .
>>
>> Has anyone else seen this when working on the release?
>>
>> -Anshum
>>
>>
>>
>> On Aug 23, 2017, at 4:21 AM, Andrzej Białecki <
>> andrzej.biale...@lucidworks.com> wrote:
>>
>>
>> On 23 Aug 2017, at 13:06, Uwe Schindler  wrote:
>>
>> Keep in mind that there is also branch_7_0.
>>
>>
>> Right, but the changes related to these issues were committed to master
>> before branch_7_0 was created, and these specific issues are only about
>> back-porting to 6x.
>>
>>
>> Uww
>>
>> Am 23. August 2017 12:26:42 MESZ schrieb "Andrzej Białecki" <
>> a...@getopt.org>:
>>>
>>>
>>> On 23 Aug 2017, at 08:15, Anshum Gupta  wrote:
>>>
>>> I also found more issues when comparing 7x, with 6x this time. I’ll take
>>> a look at wether it’s just the CHANGES entries or have these actually
>>> missed the branch. I assume it’s just the CHANGES, but want to be sure. If
>>> the committers involved can pitch in, I’d appreciate, else I’ll work on
>>> this for a bit right now and continue with this tomorrow morning.
>>>
>>> - SOLR-10477 (Ab)
>>>
>>>
>>> This is a partial back-port of relevant improvements from master to 6x,
>>> so there are no strictly corresponding commits on 7x/master.
>>>
>>> - SOLR-10631: Metric reporters leak on 6x. (Ab)
>>>
>>>
>>> This one has been fixed as part of other related issues in branches 7.x
>>> / master, so it only required a specific fix for 6x.
>>>
>>> - SOLR-1 (Ab)
>>>
>>>
>>> This has been committed first to 7x, then to 6x and it’s present in
>>> branch_6_6.
>>>
>>>
>>> ---
>>> Best regards,
>>>
>>> Andrzej Bialecki
>>>
>>>
>> --
>> Uwe Schindler
>> Achterdiek 19, 28357 Bremen
>> https://www.thetaphi.de
>>
>>
>>
>>


[jira] [Commented] (SOLR-11278) CdcrBootstrapTest.testBootstrapWithSourceCluster failing in branch_6_6

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143926#comment-16143926
 ] 

Amrit Sarkar commented on SOLR-11278:
-

{code} 
 [beaster] Tests with failures [seed: 8D740119BA9589F1]:
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testConvertClusterToCdcrAndBootstrap
  [beaster]   - 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithSourceCluster
{code}

Safe to say, all three tests are passable across all seeds.

> CdcrBootstrapTest.testBootstrapWithSourceCluster failing in branch_6_6
> --
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143924#comment-16143924
 ] 

Erick Erickson commented on SOLR-11003:
---

[~sarkaramr...@gmail.com] There has been a regularly failing CDCR test for a 
while; do you have any insight into what's going on there (since you're in 
the code...)?

I'll try beasting that test with and without this patch, just for yucks, to 
see if it has any effect.

Or is this yet another variant of SOLR-11034 and/or SOLR-11035?

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The current version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates 
> get forwarded to the passive one, but the reverse direction is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real time: 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The ideal use-case: we keep indexing into ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, updates 
> are bootstrapped or forwarded from ClusterBCollectionB to ClusterACollectionA, 
> and we keep indexing on ClusterBCollectionB.






[jira] [Resolved] (LUCENE-4960) Require minimum ivy version

2017-08-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-4960.

       Resolution: Fixed
         Assignee: Steve Rowe
    Fix Version/s: master (8.0), 7.1

> Require minimum ivy version
> ---
>
> Key: LUCENE-4960
> URL: https://issues.apache.org/jira/browse/LUCENE-4960
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.2.1
>Reporter: Shawn Heisey
>Assignee: Steve Rowe
>Priority: Minor
> Fix For: master (8.0), 7.1
>
> Attachments: LUCENE-4960.patch
>
>
> Someone on solr-user ran into a problem while trying to run 'ant idea' so 
> they could work on Solr in their IDE.  [~steve_rowe] indicated that this is 
> probably due to IVY-1194, requiring an ivy jar upgrade.
> The build system should check for a minimum ivy version, just like it does 
> with ant.  The absolute minimum we require appears to be 2.2.0, but do we 
> want to make it 2.3.0 due to IVY-1388?
> I'm not sure how to go about checking the ivy version.  Checking the ant 
> version is easy because it's ant itself that does the checking.
> There might be other component versions that should be checked too.






[jira] [Commented] (LUCENE-4960) Require minimum ivy version

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143918#comment-16143918
 ] 

ASF subversion and git services commented on LUCENE-4960:
-

Commit 7a9870e3f94215cf4167e6ccd75c011b7e50d114 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a9870e ]

LUCENE-4960: Fail the build in the presence of Ivy jar(s) with unsupported 
versions.


> Require minimum ivy version
> ---
>
> Key: LUCENE-4960
> URL: https://issues.apache.org/jira/browse/LUCENE-4960
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.2.1
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: LUCENE-4960.patch
>
>
> Someone on solr-user ran into a problem while trying to run 'ant idea' so 
> they could work on Solr in their IDE.  [~steve_rowe] indicated that this is 
> probably due to IVY-1194, requiring an ivy jar upgrade.
> The build system should check for a minimum ivy version, just like it does 
> with ant.  The absolute minimum we require appears to be 2.2.0, but do we 
> want to make it 2.3.0 due to IVY-1388?
> I'm not sure how to go about checking the ivy version.  Checking the ant 
> version is easy because it's ant itself that does the checking.
> There might be other component versions that should be checked too.






[jira] [Commented] (LUCENE-4960) Require minimum ivy version

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143919#comment-16143919
 ] 

ASF subversion and git services commented on LUCENE-4960:
-

Commit f5c2e10222d9014c434ee42411b8857926fb3c23 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f5c2e10 ]

LUCENE-4960: Fail the build in the presence of Ivy jar(s) with unsupported 
versions.


> Require minimum ivy version
> ---
>
> Key: LUCENE-4960
> URL: https://issues.apache.org/jira/browse/LUCENE-4960
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.2.1
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: LUCENE-4960.patch
>
>
> Someone on solr-user ran into a problem while trying to run 'ant idea' so 
> they could work on Solr in their IDE.  [~steve_rowe] indicated that this is 
> probably due to IVY-1194, requiring an ivy jar upgrade.
> The build system should check for a minimum ivy version, just like it does 
> with ant.  The absolute minimum we require appears to be 2.2.0, but do we 
> want to make it 2.3.0 due to IVY-1388?
> I'm not sure how to go about checking the ivy version.  Checking the ant 
> version is easy because it's ant itself that does the checking.
> There might be other component versions that should be checked too.






[jira] [Comment Edited] (LUCENE-4960) Require minimum ivy version

2017-08-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143914#comment-16143914
 ] 

Steve Rowe edited comment on LUCENE-4960 at 8/28/17 3:40 PM:
-

Patch causing the build to fail when a regex matches disallowed Ivy jars: those 
with versions {{2.0.\*}}, {{2.1.\*}}, and {{2.2.\*}}.

Committing shortly.


was (Author: steve_rowe):
Patch causing the build to fail when a regex matches disallowed Ivy jars: those 
with versions 2.0.*, 2.1.*, and 2.2.*.

Committing shortly.

> Require minimum ivy version
> ---
>
> Key: LUCENE-4960
> URL: https://issues.apache.org/jira/browse/LUCENE-4960
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.2.1
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: LUCENE-4960.patch
>
>
> Someone on solr-user ran into a problem while trying to run 'ant idea' so 
> they could work on Solr in their IDE.  [~steve_rowe] indicated that this is 
> probably due to IVY-1194, requiring an ivy jar upgrade.
> The build system should check for a minimum ivy version, just like it does 
> with ant.  The absolute minimum we require appears to be 2.2.0, but do we 
> want to make it 2.3.0 due to IVY-1388?
> I'm not sure how to go about checking the ivy version.  Checking the ant 
> version is easy because it's ant itself that does the checking.
> There might be other component versions that should be checked too.






[jira] [Updated] (LUCENE-4960) Require minimum ivy version

2017-08-28 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-4960:
---
Attachment: LUCENE-4960.patch

Patch causing the build to fail when a regex matches disallowed Ivy jars: those 
with versions 2.0.*, 2.1.*, and 2.2.*.

Committing shortly.
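As an illustration only, the version check described above amounts to matching 
jar filenames against a pattern like this hedged Python sketch (the actual 
patch implements the check in the Ant build, and its exact regex may differ):

```python
import re

# Hedged sketch (not the actual build patch): flag Ivy jars from the
# disallowed version families 2.0.*, 2.1.* and 2.2.*.
DISALLOWED_IVY = re.compile(r"ivy-2\.[0-2](\.\d+)?\.jar$")

def is_disallowed(jar_path):
    """Return True if the jar filename looks like a disallowed Ivy version."""
    return DISALLOWED_IVY.search(jar_path) is not None
```

With a pattern like this, ivy-2.2.0.jar would be flagged while ivy-2.3.0 and 
later pass, matching the minimum-version requirement discussed in the issue.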

> Require minimum ivy version
> ---
>
> Key: LUCENE-4960
> URL: https://issues.apache.org/jira/browse/LUCENE-4960
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Affects Versions: 4.2.1
>Reporter: Shawn Heisey
>Priority: Minor
> Attachments: LUCENE-4960.patch
>
>
> Someone on solr-user ran into a problem while trying to run 'ant idea' so 
> they could work on Solr in their IDE.  [~steve_rowe] indicated that this is 
> probably due to IVY-1194, requiring an ivy jar upgrade.
> The build system should check for a minimum ivy version, just like it does 
> with ant.  The absolute minimum we require appears to be 2.2.0, but do we 
> want to make it 2.3.0 due to IVY-1388?
> I'm not sure how to go about checking the ivy version.  Checking the ant 
> version is easy because it's ant itself that does the checking.
> There might be other component versions that should be checked too.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4152 - Still unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4152/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter

Error Message:
count1=6, count2=6 - no 'solr.node' logs in: {numFound=0,start=0,docs=[]}

Stack Trace:
java.lang.AssertionError: count1=6, count2=6 - no 'solr.node' logs in: 
{numFound=0,start=0,docs=[]}
at 
__randomizedtesting.SeedInfo.seed([D561C3510019443A:8A85EE666B15D77F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.update.processor.UpdateRequestProcessorFactoryTest.testUpdateDistribChainSkipping

Error Message:
Tests must be run with INFO level logging otherwise LogUpdateProcessor isn't 
used and can't be tested.

Stack Trace:

[jira] [Commented] (SOLR-11250) Add new LTR model which loads the model definition from the external resource

2017-08-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143855#comment-16143855
 ] 

Shalin Shekhar Mangar commented on SOLR-11250:
--

Please use SolrResourceLoader.openResource methods to access files so that Solr 
can check and disallow accessing files outside the instance directory by 
default.
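The concern here is path traversal. A minimal sketch of the kind of containment check `openResource` enforces (illustrative only, not Solr's actual implementation):

```java
import java.nio.file.Path;

public class InstanceDirCheck {
    // Illustrative sketch of the containment check SolrResourceLoader.openResource
    // performs: resolve the requested resource against the instance dir and
    // reject anything that normalizes to a path outside it.
    static boolean insideInstanceDir(Path instanceDir, String resource) {
        Path base = instanceDir.normalize().toAbsolutePath();
        Path resolved = base.resolve(resource).normalize();
        return resolved.startsWith(base);
    }

    public static void main(String[] args) {
        Path instance = Path.of("/var/solr/mycore");
        // A resource inside the instance dir is allowed...
        System.out.println(insideInstanceDir(instance, "models/myModel.json"));
        // ...while one that escapes it via ".." is rejected.
        System.out.println(insideInstanceDir(instance, "../../etc/passwd"));
    }
}
```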

> Add new LTR model which loads the model definition from the external resource
> -
>
> Key: SOLR-11250
> URL: https://issues.apache.org/jira/browse/SOLR-11250
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-11250_master.patch, SOLR-11250_master_v2.patch, 
> SOLR-11250_master_v3.patch, SOLR-11250.patch
>
>
> We add a new model which contains only the location of the external model and 
> loads it during initialization.
> This way, large models which are difficult to upload to ZooKeeper become 
> usable.
> The new model works as a wrapper around existing models, and delegates APIs 
> to them.
> We add two classes by this patch:
> * {{ExternalModel}} : a base class for models with external resources.
> * {{URIExternalModel}} : an implementation of {{ExternalModel}} which loads 
> the external model from specified URI (ex. file:, http:, etc.).
> For example, if you have a model on the local disk 
> "file:///var/models/myModel.json", the definition of {{URIExternalModel}} 
> will be like the following.
> {code}
> {
>   "class" : "org.apache.solr.ltr.model.URIExternalModel",
>   "name" : "myURIExternalModel",
>   "features" : [],
>   "params" : {
>     "uri" : "file:///var/models/myModel.json"
>   }
> }
> {code}
> If you use LTR with {{model=myURIExternalModel}}, the model of 
> {{myModel.json}} will be used for scoring documents.
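The wrapper's core job at init time is fetching the model definition from the configured URI and handing the JSON to the existing model-parsing code. That step can be sketched with plain JDK calls (illustrative only; the class and method names here are not the patch's actual API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class UriModelFetch {
    // Fetch the model definition JSON from any URI the JDK can open
    // (file:, http:, etc.), mirroring what URIExternalModel does at init.
    static String fetchModelJson(URI uri) throws IOException {
        try (InputStream in = uri.toURL().openStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate "file:///var/models/myModel.json" with a temp file.
        Path model = Files.createTempFile("myModel", ".json");
        Files.writeString(model, "{\"name\":\"myModel\"}");
        System.out.println(fetchModelJson(model.toUri()));
    }
}
```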



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+181) - Build # 20385 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20385/
Java: 32bit/jdk-9-ea+181 -server -XX:+UseParallelGC --illegal-access=deny

1 tests failed.
FAILED:  org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter

Error Message:
count1=6, count2=6 - no 'solr.node' logs in: {numFound=0,start=0,docs=[]}

Stack Trace:
java.lang.AssertionError: count1=6, count2=6 - no 'solr.node' logs in: 
{numFound=0,start=0,docs=[]}
at 
__randomizedtesting.SeedInfo.seed([1E44960D9D7E1E56:41A0BB3AF6728D13]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.metrics.reporters.SolrSlf4jReporterTest.testReporter(SolrSlf4jReporterTest.java:90)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12833 lines...]
   [junit4] Suite: org.apache.solr.metrics.reporters.SolrSlf4jReporterTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-11244) Query DSL for Solr

2017-08-28 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143848#comment-16143848
 ] 

Shalin Shekhar Mangar commented on SOLR-11244:
--

Ah, right. I missed that. Thanks!

> Query DSL for Solr
> --
>
> Key: SOLR-11244
> URL: https://issues.apache.org/jira/browse/SOLR-11244
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-11244.patch, SOLR-11244.patch, SOLR-11244.patch, 
> Solr Query DSL - examples.html
>
>
> It will be great if Solr has a powerful query DSL. This ticket is an 
> extension of [http://yonik.com/solr-json-request-api/].
> Here are several examples of Query DSL
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
>   "query" : {
>     "lucene" : {
>       "df" : "content",
>       "query" : "solr lucene"
>     }
>   }
> }'
> {code}
> the above example can be rewritten as (because lucene is the default qparser)
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
>   "query" : "content:(solr lucene)"
> }'
> {code}
> more complex example:
> {code}
> curl -XGET http://localhost:8983/solr/query -d '
> {
>   "query" : {
>     "boost" : {
>       "query" : {
>         "lucene" : {
>           "q.op" : "AND",
>           "df" : "cat_s",
>           "query" : "A"
>         }
>       },
>       "b" : "log(popularity)"
>     }
>   }
> }'
> {code}
> I call it a JSON Query Object (JQO), and it is defined as:
> - It can be a valid query string for Lucene query parser, for example : 
> "title:solr"
> - It can be a valid local parameters string, for example : "{!dismax 
> qf=myfield}solr rocks"
> - It can be a json object with structure like this 
> {code}
> {
>   "query-parser-name" : {
>     "param1" : "value1",
>     "param2" : "value2",
>     "query" : ,
>     "another-param" : 
>   }
> }
> {code}
> Therefore the above dismax query can be rewritten as follows (note that the 
> query argument from the local parameters is put as the value of the 
> {{query}} field):
> {code}
> {
>   "dismax" : {
>     "qf" : "myfield",
>     "query" : "solr rocks"
>   }
> }
> {code}
> I will attach an HTML file containing more examples of the Query DSL.
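Putting the pieces together, a complete request body using the dismax JQO from this description would look like the following (an illustrative fragment assembled from the examples above, posted to the same endpoint as the earlier curl examples):

```json
{
  "query" : {
    "dismax" : {
      "qf" : "myfield",
      "query" : "solr rocks"
    }
  }
}
```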






[jira] [Commented] (SOLR-11293) HttpPartitionTest fails often

2017-08-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143801#comment-16143801
 ] 

Noble Paul commented on SOLR-11293:
---

Diagnosis: a TLOG-type replica loses updates and they don't get fetched in 
real-time gets. This may cause data loss for TLOG replica types.

> HttpPartitionTest fails often
> -
>
> Key: SOLR-11293
> URL: https://issues.apache.org/jira/browse/SOLR-11293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4140/testReport/org.apache.solr.cloud/HttpPartitionTest/test/
> {code}
> Error Message
> Doc with id=1 not found in http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: 
> Path not found: /id; rsp={doc=null}
> Stacktrace
> java.lang.AssertionError: Doc with id=1 not found in 
> http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: Path not found: /id; 
> rsp={doc=null}
>   at 
> __randomizedtesting.SeedInfo.seed([ACF841744A332569:24AC7EAEE4CF4891]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}






[jira] [Commented] (SOLR-11293) HttpPartitionTest fails often

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143795#comment-16143795
 ] 

ASF subversion and git services commented on SOLR-11293:


Commit d86bc63e7041ced644fd609e922d6e8c0f2e in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d86bc63 ]

SOLR-11293: Awaits fix


> HttpPartitionTest fails often
> -
>
> Key: SOLR-11293
> URL: https://issues.apache.org/jira/browse/SOLR-11293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4140/testReport/org.apache.solr.cloud/HttpPartitionTest/test/
> {code}
> Error Message
> Doc with id=1 not found in http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: 
> Path not found: /id; rsp={doc=null}
> Stacktrace
> java.lang.AssertionError: Doc with id=1 not found in 
> http://127.0.0.1:60897/b/xj/collMinRf_1x3 due to: Path not found: /id; 
> rsp={doc=null}
>   at 
> __randomizedtesting.SeedInfo.seed([ACF841744A332569:24AC7EAEE4CF4891]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
>   at 
> org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> {code}






[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Attachment: SOLR-11003.patch

Fixed closing of CloudSolrClients to make the patch pass 100 rounds of beasting.

[~varunthacker], this is ready to ship.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates get 
> forwarded to the passive one, but the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case: we keep indexing into ClusterACollectionA, which forwards 
> the updates to ClusterBCollectionB. If ClusterACollectionA goes down, we 
> point the indexer and searcher applications to ClusterBCollectionB. Once 
> ClusterACollectionA is up, depending on the update count, updates will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, 
> and we keep indexing on ClusterBCollectionB.






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 141 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/141/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Doc with id=1 not found in http://127.0.0.1:56374/_hot/ig/collMinRf_1x3 due to: 
Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in 
http://127.0.0.1:56374/_hot/ig/collMinRf_1x3 due to: Path not found: /id; 
rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([F2D95E61B405E12F:7A8D61BB1AF98CD7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
at 
org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:558)
at 
org.apache.solr.cloud.HttpPartitionTest.testMinRf(HttpPartitionTest.java:249)
at 
org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Attachment: SOLR-11003.patch

Fixed closing of CloudSolrClients to make the patch pass 100 rounds of beasting.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates get 
> forwarded to the passive one, but the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case: we keep indexing into ClusterACollectionA, which forwards 
> the updates to ClusterBCollectionB. If ClusterACollectionA goes down, we 
> point the indexer and searcher applications to ClusterBCollectionB. Once 
> ClusterACollectionA is up, depending on the update count, updates will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, 
> and we keep indexing on ClusterBCollectionB.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 328 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/328/
Java: 64bit/jdk-9-ea+181 -XX:+UseCompressedOops -XX:+UseG1GC 
--illegal-access=deny

2 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream

Error Message:
Error from server at https://127.0.0.1:42947/solr/workQueue_shard2_replica_n3: 
Expected mime type application/octet-stream but got text/html.   
 
Error 404. HTTP ERROR: 404. Problem 
accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not 
find: /solr/workQueue_shard2_replica_n3/update. Powered by Jetty:// 
9.3.20.v20170531 (http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:42947/solr/workQueue_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 


HTTP ERROR: 404
Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason:
Can not find: /solr/workQueue_shard2_replica_n3/update
Powered by Jetty:// 9.3.20.v20170531 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([F9513F517F9102A4:DB91BEAA5CFB28B4]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:6822)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Attachment: SOLR-11003.patch

Patch uploaded with everything fixed, working in {{master}} and others.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is 
> active-passive: we can index into the source collection and the updates get 
> forwarded to the passive one, but the reverse is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case: we keep indexing into ClusterACollectionA, which forwards 
> the updates to ClusterBCollectionB. If ClusterACollectionA goes down, we 
> point the indexer and searcher applications to ClusterBCollectionB. Once 
> ClusterACollectionA is up, depending on the update count, updates will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, 
> and we keep indexing on ClusterBCollectionB.






[jira] [Commented] (SOLR-11278) CdcrBootstrapTest.testBootstrapWithSourceCluster failing in branch_6_6

2017-08-28 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143712#comment-16143712
 ] 

Varun Thacker commented on SOLR-11278:
--

Here's another fail on branch_6_6 : 
https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/40/

> CdcrBootstrapTest.testBootstrapWithSourceCluster failing in branch_6_6
> --
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 6.6.1
>Reporter: Amrit Sarkar
> Attachments: test_results
>
>
> I ran beast for 10 rounds:
> ant beast -Dtestcase=CdcrBootstrapTest -Dtests.multiplier=2 -Dtests.slow=true 
> -Dtests.locale=vi -Dtests.timezone=Asia/Yekaterinburg -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII -Dbeast.iters=10
> and seeing following failure:
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[JENKINS-EA] Lucene-Solr-6.6-Windows (64bit/jdk-9-ea+181) - Build # 40 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Windows/40/
Java: 64bit/jdk-9-ea+181 -XX:-UseCompressedOops -XX:+UseParallelGC 
--illegal-access=deny

3 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1100>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1100>
at 
__randomizedtesting.SeedInfo.seed([9A9809F299BDAE9A:4EDD42AB7EEB1D61]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143670#comment-16143670
 ] 

Amrit Sarkar commented on SOLR-11003:
-

The patch is failing on {{master}} and previous branches, e.g. {{branch_6_5}}:

{code}
[junit4]   2> Caused by: java.util.concurrent.ExecutionException: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:55565/solr/cdcr-cluster1_shard1_replica1: 
Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)
   [junit4]   2>at 
java.util.concurrent.FutureTask.report(FutureTask.java:122)
   [junit4]   2>at 
java.util.concurrent.FutureTask.get(FutureTask.java:192)
   [junit4]   2>at 
org.apache.solr.handler.CdcrRequestHandler.handleCollectionCheckpointAction(CdcrRequestHandler.java:414)
   [junit4]   2>... 34 more 
{code}

bq. Invalid shift value (64) in prefixCoded bytes (is encoded value really an 
INT?)

Need to fix this for {{CollectionCheckpoint}}.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into the source collection and the 
> updates get forwarded to the passive one; the reverse direction is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time: 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case: we keep indexing in ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, updates 
> will be bootstrapped or forwarded to it from ClusterBCollectionB, and we keep 
> indexing on ClusterBCollectionB.






[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk1.8.0_144) - Build # 270 - Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/270/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:45079/solr/test_col_shard1_replica_n2: Failed synchronous 
update on shard StdNode: 
http://127.0.0.1:33249/solr/test_col_shard1_replica_n1/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@c78368a

Stack Trace:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error from 
server at http://127.0.0.1:45079/solr/test_col_shard1_replica_n2: Failed 
synchronous update on shard StdNode: 
http://127.0.0.1:33249/solr/test_col_shard1_replica_n1/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@c78368a
at 
__randomizedtesting.SeedInfo.seed([35D6F4F4474CCF00:3C296B2CD11F511]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:283)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:195)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 144 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/144/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1009>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1009>
at 
__randomizedtesting.SeedInfo.seed([2EF71A277FB54C1F:FAB2517E98E3FFE4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  

[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-08-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143630#comment-16143630
 ] 

Amrit Sarkar commented on SOLR-11003:
-

The patch is failing on {{branch_6_6}} :

{code}
   [junit4]   2> Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55117/solr/cdcr-cluster1: Error while 
requesting shard's checkpoints
{code}
{code}
   [junit4]   2> Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:55117/solr/cdcr-cluster1_shard1_replica1: 
Invalid shift value (64) in prefixCoded bytes (is encoded value really an INT?)
{code}

I will check the other branches to investigate the matter.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into the source collection and the 
> updates get forwarded to the passive one; the reverse direction is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are trying to get a design ready to index into both collections and have 
> the updates reflected across the collections in real-time: 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case: we keep indexing in ClusterACollectionA, which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes 
> down, we point the indexer and searcher applications to ClusterBCollectionB. 
> Once ClusterACollectionA is back up, depending on the update count, updates 
> will be bootstrapped or forwarded to it from ClusterBCollectionB, and we keep 
> indexing on ClusterBCollectionB.






[jira] [Commented] (SOLR-11250) Add new LTR model which loads the model definition from the external resource

2017-08-28 Thread Yuki Yano (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143604#comment-16143604
 ] 

Yuki Yano commented on SOLR-11250:
--

[~cpoerschke]
I attached a new patch (v3) which refactors the code based on your first and 
third pieces of advice (the second one is not addressed yet). Details are as 
follows:

1. Changed {{ExternalModel}} to {{WrapperModel}}.
2. Added {{ModelParser}} for parsing model configurations in {{WrapperModel}}.

I only implemented {{ModelParser}} for the JSON format because I see some 
difficulties with the other formats:

* XML : because the content of "params" is not fixed, we can't distinguish 
structures like List and Map from the XML text alone.
* YAML : we would need a YAML library (e.g., snakeyaml), but I don't know the 
policy on adding new libraries in the Solr community...

Well, I think supporting only JSON is enough for now. If we need new formats, 
we can add them simply by extending {{ModelParser}}. What do you think?
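As a rough illustration of the JSON-only parsing direction, here is a minimal sketch of reading such a wrapper-model definition; the field names mirror the URIExternalModel example in this issue's description and are assumptions, not the patch's actual {{ModelParser}} code:

```python
import json

# Hypothetical wrapper-model definition in JSON; the structure follows the
# URIExternalModel example from the issue description and is an assumption.
definition = """
{
  "class": "org.apache.solr.ltr.model.URIExternalModel",
  "name": "myURIExternalModel",
  "features": [],
  "params": { "uri": "file:///var/models/myModel.json" }
}
"""

model = json.loads(definition)
# JSON recovers nested structures (lists, maps) unambiguously,
# which is harder with schema-less XML as noted above.
print(model["class"], model["params"]["uri"])
```

This also illustrates why JSON avoids the XML ambiguity: the parser distinguishes a list ("features") from a map ("params") directly from the syntax.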

> Add new LTR model which loads the model definition from the external resource
> -
>
> Key: SOLR-11250
> URL: https://issues.apache.org/jira/browse/SOLR-11250
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-11250_master.patch, SOLR-11250_master_v2.patch, 
> SOLR-11250_master_v3.patch, SOLR-11250.patch
>
>
> We add a new model which contains only the location of the external model and 
> loads it during initialization.
> By this procedure, large models which are difficult to upload to ZooKeeper 
> become available.
> The new model works as a wrapper of existing models, and delegates APIs to 
> them.
> We add two classes in this patch:
> * {{ExternalModel}} : a base class for models with external resources.
> * {{URIExternalModel}} : an implementation of {{ExternalModel}} which loads 
> the external model from the specified URI (e.g. file:, http:, etc.).
> For example, if you have a model on the local disk at 
> "file:///var/models/myModel.json", the definition of {{URIExternalModel}} 
> will look like the following.
> {code}
> {
>   "class" : "org.apache.solr.ltr.model.URIExternalModel",
>   "name" : "myURIExternalModel",
>   "features" : [],
>   "params" : {
> "uri" : "file:///var/models/myModel.json"
>   }
> }
> {code}
> If you use LTR with {{model=myURIExternalModel}}, the model from 
> {{myModel.json}} will be used for scoring documents.
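For illustration, a minimal sketch of how such a model might be invoked at query time through Solr's LTR rerank query parser; the collection name, query term, and reRankDocs value below are illustrative assumptions, not part of the patch:

```python
from urllib.parse import urlencode

# Build a Solr select URL that reranks the top documents with the
# externally loaded LTR model. Collection name ("techproducts"),
# query, and reRankDocs are hypothetical.
params = {
    "q": "ipod",
    "rq": "{!ltr model=myURIExternalModel reRankDocs=100}",
    "fl": "id,score",
}
url = "http://localhost:8983/solr/techproducts/select?" + urlencode(params)
print(url)
```

The rerank parser only rescores the top reRankDocs results of the base query, so the external model's cost is bounded regardless of the total hit count.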






[jira] [Commented] (LUCENE-7941) GeoDegeneratePoints return intersects when located in edge shape

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143584#comment-16143584
 ] 

ASF subversion and git services commented on LUCENE-7941:
-

Commit 9c450c8c2f3e87f142d3b3be337c33097152d9a7 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c450c8 ]

LUCENE-7941: Test for GeoDegeneratePoints relationships, committed on behalf of 
Ignacio Vera.


> GeoDegeneratePoints return intersects when located in edge shape 
> -
>
> Key: LUCENE-7941
> URL: https://issues.apache.org/jira/browse/LUCENE-7941
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: LUCENE-7941-test.patch, LUCENE-7941-test.patch
>
>
>  If the degenerate GeoPoint lies on the boundary of a shape, the 
> relationships between the objects are not symmetrical:
> The bounding box "thinks" it contains the degenerate point.
> The degenerate point "thinks" it intersects the shape.






[jira] [Commented] (LUCENE-7941) GeoDegeneratePoints return intersects when located in edge shape

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143586#comment-16143586
 ] 

ASF subversion and git services commented on LUCENE-7941:
-

Commit 23ae00eaa11a5265d0284a7e31a2b63530bc2e47 in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=23ae00e ]

LUCENE-7941: Test for GeoDegeneratePoints relationships, committed on behalf of 
Ignacio Vera.


> GeoDegeneratePoints return intersects when located in edge shape 
> -
>
> Key: LUCENE-7941
> URL: https://issues.apache.org/jira/browse/LUCENE-7941
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: LUCENE-7941-test.patch, LUCENE-7941-test.patch
>
>
>  If the degenerate GeoPoint lies on the boundary of a shape, the 
> relationships between the objects are not symmetrical:
> The bounding box "thinks" it contains the degenerate point.
> The degenerate point "thinks" it intersects the shape.






[jira] [Commented] (LUCENE-7941) GeoDegeneratePoints return intersects when located in edge shape

2017-08-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16143583#comment-16143583
 ] 

ASF subversion and git services commented on LUCENE-7941:
-

Commit 72818637f28a962843faa113e0fd6d1de8b25869 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7281863 ]

LUCENE-7941: Test for GeoDegeneratePoints relationships, committed on behalf of 
Ignacio Vera.


> GeoDegeneratePoints return intersects when located in edge shape 
> -
>
> Key: LUCENE-7941
> URL: https://issues.apache.org/jira/browse/LUCENE-7941
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: LUCENE-7941-test.patch, LUCENE-7941-test.patch
>
>
>  If the degenerate GeoPoint lies on the boundary of a shape, the 
> relationships between the objects are not symmetrical:
> The bounding box "thinks" it contains the degenerate point.
> The degenerate point "thinks" it intersects the shape.






[JENKINS-EA] Lucene-Solr-6.6-Linux (32bit/jdk-9-ea+181) - Build # 106 - Still Unstable!

2017-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.6-Linux/106/
Java: 32bit/jdk-9-ea+181 -server -XX:+UseSerialGC --illegal-access=deny

2 tests failed.
FAILED:  org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv

Error Message:
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:41251/solr/test_col: Failed synchronous update on shard 
StdNode: http://127.0.0.1:36727/solr/test_col_shard1_replica1/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@1cf683e

Stack Trace:
java.util.concurrent.ExecutionException: java.lang.RuntimeException: Error from 
server at http://127.0.0.1:41251/solr/test_col: Failed synchronous update on 
shard StdNode: http://127.0.0.1:36727/solr/test_col_shard1_replica1/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@1cf683e
at 
__randomizedtesting.SeedInfo.seed([108ECBC86A134AFF:269AA98EE04E70EE]:0)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:281)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:193)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (LUCENE-7941) GeoDegeneratePoints return intersects when located in edge shape

2017-08-28 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-7941:
-
Attachment: LUCENE-7941-test.patch

Thanks for the explanation; the code behaves as it should. For some reason I 
thought it should only return true if both intersections were on the bounds. 
I attach the test with the current values. 


> GeoDegeneratePoints return intersects when located in edge shape 
> -
>
> Key: LUCENE-7941
> URL: https://issues.apache.org/jira/browse/LUCENE-7941
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ignacio Vera
>Assignee: Karl Wright
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: LUCENE-7941-test.patch, LUCENE-7941-test.patch
>
>
>  If the degenerate GeoPoint lies on the boundary of a shape, the 
> relationships between the objects are not symmetrical:
> The bounding box "thinks" it contains the degenerate point.
> The degenerate point "thinks" it intersects the shape.





