[jira] [Commented] (LUCENE-8320) WindowFS#move should consider hard-link when transferring ownership

2018-05-17 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480200#comment-16480200
 ] 

Nhat Nguyen commented on LUCENE-8320:
-

/cc [~simonw]

> WindowFS#move should consider hard-link when transferring ownership
> ---
>
> Key: LUCENE-8320
> URL: https://issues.apache.org/jira/browse/LUCENE-8320
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: test-hardlink.patch
>
>
> The attached test trips an assertion in `WindowFS#onClose`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8320) WindowFS#move should consider hard-link when transferring ownership

2018-05-17 Thread Nhat Nguyen (JIRA)
Nhat Nguyen created LUCENE-8320:
---

 Summary: WindowFS#move should consider hard-link when transferring 
ownership
 Key: LUCENE-8320
 URL: https://issues.apache.org/jira/browse/LUCENE-8320
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 7.4, master (8.0)
Reporter: Nhat Nguyen
 Attachments: test-hardlink.patch

The attached test trips an assertion in `WindowFS#onClose`.
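The core of the problem can be illustrated with plain `java.nio.file`, independent of Lucene's mock filesystem. This is a stand-alone sketch, not Lucene's `WindowsFS` (the class name `HardLinkMoveDemo` is invented): moving one name of a hard-linked file only moves a directory entry, so any bookkeeping that tracks open handles per path has to account for the remaining links when transferring ownership during a move.

```java
import java.nio.file.*;

public class HardLinkMoveDemo {
    // Creates file "a", hard link "b", then moves "a" to "c".
    // Returns true if the content is still reachable through both "b" and "c",
    // i.e. the move did not invalidate the other link to the same inode.
    static boolean contentSurvivesMove() throws Exception {
        Path dir = Files.createTempDirectory("hardlink-demo");
        Path a = dir.resolve("a"), b = dir.resolve("b"), c = dir.resolve("c");
        Files.write(a, "payload".getBytes());
        Files.createLink(b, a);   // b and a now name the same underlying file
        Files.move(a, c);         // only the directory entry "a" moves
        return new String(Files.readAllBytes(b)).equals("payload")
            && new String(Files.readAllBytes(c)).equals("payload");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(contentSurvivesMove());
    }
}
```

Because the moved path is just one of several names for the file, per-path handle accounting (as a Windows-semantics emulation must do) cannot simply drop or transfer ownership on move without checking for other links.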






[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480172#comment-16480172
 ] 

Mikhail Khludnev commented on SOLR-9480:


Great answer. Thanks, [~hossman]!

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[JENKINS] Lucene-Solr-repro - Build # 650 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/650/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1544/consoleText

[repro] Revision: 3fe612bed2080af0b3dd47ece7067ae56794fc82

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestMixedDocValuesUpdates 
-Dtests.method=testTonsOfUpdates -Dtests.seed=4753723B75C963B3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de -Dtests.timezone=America/Argentina/Ushuaia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestBinaryDocValuesUpdates 
-Dtests.method=testTonsOfUpdates -Dtests.seed=4753723B75C963B3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fr-CA -Dtests.timezone=America/Thule -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=D66C2ADD4867CC79 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-CN -Dtests.timezone=America/Argentina/Ushuaia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=D66C2ADD4867CC79 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-US -Dtests.timezone=America/St_Vincent -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
9cbaf327e8c0240b948f5eaaa333baf3a4282be9
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 3fe612bed2080af0b3dd47ece7067ae56794fc82

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SearchRateTriggerTest
[repro]   IndexSizeTriggerTest
[repro]lucene/core
[repro]   TestMixedDocValuesUpdates
[repro]   TestBinaryDocValuesUpdates
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SearchRateTriggerTest|*.IndexSizeTriggerTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=D66C2ADD4867CC79 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-CN -Dtests.timezone=America/Argentina/Ushuaia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 7560 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 92 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestMixedDocValuesUpdates|*.TestBinaryDocValuesUpdates" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=4753723B75C963B3 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de -Dtests.timezone=America/Argentina/Ushuaia 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 400 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro]   2/5 failed: org.apache.lucene.index.TestMixedDocValuesUpdates
[repro]   3/5 failed: org.apache.lucene.index.TestBinaryDocValuesUpdates
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 

[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480095#comment-16480095
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 96f6c65e43445bbb0b77604c8c6550ca654b5de8 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96f6c65 ]

SOLR-12028: Remove BadApple for TestCloudRecovery


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call
> - AwaitsFix annotations are used for tests that, for some reason, the problem 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking down, dependency on another JIRA etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that run with 
> BadApples disabled that fail require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues, this will be an ongoing technical debt 
> cleanup effort.
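The skip-unless-enabled mechanics behind this policy can be sketched without the Lucene test framework. The annotation and check below are simplified stand-ins (the real `@BadApple` annotation lives in the Lucene test framework and is honored by the test runner via the `tests.badapples` property; `BadAppleDemo` and its members are invented names):

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class BadAppleDemo {
    // Minimal stand-in for the test framework's @BadApple; the real one
    // carries a bugUrl pointing at the tracking JIRA.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface BadApple { String bugUrl(); }

    static class Suite {
        @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
        public void testFlaky() {}
        public void testStable() {}
    }

    // Mirrors the policy: BadApple tests run only when the switch is on,
    // e.g. Jenkins usually runs with it off, developers default to on.
    static boolean shouldRun(Method m, boolean badApplesEnabled) {
        return badApplesEnabled || !m.isAnnotationPresent(BadApple.class);
    }

    public static void main(String[] args) throws Exception {
        Method flaky = Suite.class.getMethod("testFlaky");
        Method stable = Suite.class.getMethod("testStable");
        System.out.println(shouldRun(flaky, false) + " " + shouldRun(stable, false));
    }
}
```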






[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480094#comment-16480094
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 4a9a8397e458a5805c55fe494ba4b6de18233f90 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4a9a839 ]

SOLR-12028: Remove BadApple for TestCloudRecovery


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call
> - AwaitsFix annotations are used for tests that, for some reason, the problem 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking down, dependency on another JIRA etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically Jenkins jobs will be run with BadApples enabled so BadApple 
> tests won't be lost and reports can be generated. Tests that run with 
> BadApples disabled that fail require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues, this will be an ongoing technical debt 
> cleanup effort.






[jira] [Commented] (LUCENE-8292) Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480093#comment-16480093
 ] 

David Smiley commented on LUCENE-8292:
--

Great example!  I was playing around with TestExitableDirectoryReader and 
there's definitely a loss of passing the term state.  I set a breakpoint here 
[https://github.com/apache/lucene-solr/blob/master/lucene/core/src/java/org/apache/lucene/search/TermQuery.java#L136]
 then ran in a debugger (after increasing the test timeout and removing the 
Ignore annotation), then stepped into the default implementation of 
seekExact(term,state) for TestTermsEnum, which doesn't delegate.  I manually 
added delegation of this method there.  Then _again_ ran into the default 
implementation for ExitableDirectoryReader's ExitableTermsEnum.  In this one 
little adventure, I hit this thing twice.

_There's certainly a bug here_.  Either FilterTermsEnum should delegate 
everything, or the two subclasses of TermsEnum mentioned above ought to 
delegate these methods, but I bet there are more out there if we look closer.  I 
appreciate that modifying the delegation policy of FilterTermsEnum is not a 
decision to be taken lightly and would probably not happen until a major release.
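The delegation pitfall described above is easy to reproduce in miniature. The sketch below uses invented names (`TermsEnumLike`, `CachedEnum`, etc.), not Lucene's actual classes: a wrapper that forwards only the primitive seek method silently falls back to the default implementation of the state-accepting overload, losing the wrapped enum's fast path, which is the shape of the FilterTermsEnum issue.

```java
public class DelegationDemo {
    interface TermsEnumLike {
        boolean seekExact(String term);                       // "expensive" primitive
        default boolean seekExactWithState(String term, Object state) {
            return seekExact(term);                           // default ignores the state hint
        }
    }

    // Stand-in for an enum that uses a cached term state to skip work.
    static class CachedEnum implements TermsEnumLike {
        public boolean seekExact(String t) { return false; }                    // slow path
        public boolean seekExactWithState(String t, Object s) { return true; }  // fast path
    }

    // Buggy filter: delegates seekExact but NOT seekExactWithState, so callers
    // fall into the interface default and the wrapped fast path is lost.
    static class BuggyFilter implements TermsEnumLike {
        final TermsEnumLike in;
        BuggyFilter(TermsEnumLike in) { this.in = in; }
        public boolean seekExact(String t) { return in.seekExact(t); }
    }

    // Fixed filter: delegates every seek method, like the proposed patch does.
    static class FixedFilter extends BuggyFilter {
        FixedFilter(TermsEnumLike in) { super(in); }
        public boolean seekExactWithState(String t, Object s) {
            return in.seekExactWithState(t, s);
        }
    }

    public static void main(String[] args) {
        TermsEnumLike cached = new CachedEnum();
        System.out.println(new BuggyFilter(cached).seekExactWithState("x", new Object())); // false
        System.out.println(new FixedFilter(cached).seekExactWithState("x", new Object())); // true
    }
}
```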

> Fix FilterLeafReader.FilterTermsEnum to delegate all seekExact methods
> --
>
> Key: LUCENE-8292
> URL: https://issues.apache.org/jira/browse/LUCENE-8292
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.2.1
>Reporter: Bruno Roustant
>Priority: Major
> Fix For: trunk
>
> Attachments: 
> 0001-Fix-FilterLeafReader.FilterTermsEnum-to-delegate-see.patch, 
> LUCENE-8292.patch
>
>
> FilterLeafReader#FilterTermsEnum wraps another TermsEnum and delegates many 
> methods.
> It misses some seekExact() methods, so the delegate's overrides of these 
> methods never take effect (even though the TermsEnum API allows such 
> overrides).
> The fix is straightforward: simply override these seekExact() methods and 
> delegate.






[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480089#comment-16480089
 ] 

Yonik Seeley commented on SOLR-12338:
-

{quote}This is a very costly/risky logic to handle reordered updates
{quote}
Indeed.  As an aside, my vote for the long term continues to be: "don't reorder 
updates between leader and replica" :)

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to 
> replay them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1026 - Still Failing

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1026/

No tests ran.

Build Log:
[...truncated 24174 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2207 links (1763 relative) to 3080 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml


[JENKINS] Lucene-Solr-repro - Build # 649 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/649/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/219/consoleText

[repro] Revision: 6b4daf8590b02e6304530c11042a733d127afd95

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=B325FC8EE42BA064 -Dtests.multiplier=2 
-Dtests.locale=en-SG -Dtests.timezone=US/East-Indiana -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=B325FC8EE42BA064 
-Dtests.multiplier=2 -Dtests.locale=sr-Latn -Dtests.timezone=Europe/Andorra 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=B325FC8EE42BA064 -Dtests.multiplier=2 
-Dtests.locale=sr-Latn -Dtests.timezone=Europe/Andorra -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
9cbaf327e8c0240b948f5eaaa333baf3a4282be9
[repro] git fetch
[repro] git checkout 6b4daf8590b02e6304530c11042a733d127afd95

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   SearchRateTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.SearchRateTriggerTest" 
-Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=B325FC8EE42BA064 -Dtests.multiplier=2 -Dtests.locale=sr-Latn 
-Dtests.timezone=Europe/Andorra -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 12462 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=B325FC8EE42BA064 -Dtests.multiplier=2 -Dtests.locale=sr-Latn 
-Dtests.timezone=Europe/Andorra -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 4354 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 9cbaf327e8c0240b948f5eaaa333baf3a4282be9

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480078#comment-16480078
 ] 

Cao Manh Dat commented on SOLR-12338:
-

[~ysee...@gmail.com] The need to order things comes from how we currently handle 
reordered in-place updates. Currently, if a replica receives in-place update u2, 
which points to in-place update u1 that has not arrived yet, the replica will 
fetch the full document from the leader. This is very costly/risky logic for 
handling reordered updates (i.e.: what if there is no leader to ask for the full 
document?). Luckily for us, reorder is not a common case right now, but if 
we replay updates in a parallel, unordered way, the above case can happen much 
more frequently. Therefore, in my opinion, it should be avoided. 
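One common way to get parallelism while avoiding reorders within a document can be sketched as follows. This is an illustrative sketch, not the approach of the attached patch, and all names (`ParallelReplayDemo`, `replay`) are invented: route every update to a worker chosen by hashing the document id, so updates to the same id replay sequentially in their tlog order while distinct ids proceed in parallel.

```java
import java.util.*;
import java.util.concurrent.*;

public class ParallelReplayDemo {
    // Replays (id, version) updates on nThreads single-threaded lanes,
    // routing each id to a fixed lane so updates to the same document
    // are applied in their original (tlog) order.
    static Map<String, Integer> replay(List<String[]> updates, int nThreads) throws Exception {
        ExecutorService[] lanes = new ExecutorService[nThreads];
        for (int i = 0; i < nThreads; i++) lanes[i] = Executors.newSingleThreadExecutor();
        Map<String, Integer> applied = new ConcurrentHashMap<>();
        List<Future<?>> pending = new ArrayList<>();
        for (String[] u : updates) {
            String id = u[0];
            int version = Integer.parseInt(u[1]);
            int lane = Math.floorMod(id.hashCode(), nThreads);  // same id -> same lane
            pending.add(lanes[lane].submit(() -> applied.put(id, version)));
        }
        for (Future<?> f : pending) f.get();                    // wait for all updates
        for (ExecutorService l : lanes) l.shutdown();
        return applied;
    }

    public static void main(String[] args) throws Exception {
        List<String[]> tlog = Arrays.asList(
            new String[]{"doc1", "1"}, new String[]{"doc2", "1"},
            new String[]{"doc1", "2"}, new String[]{"doc3", "1"});
        // doc1's two updates land on the same lane, so it ends at version 2.
        System.out.println(replay(tlog, 4));
    }
}
```

Note that this only preserves per-id order; cross-document invariants (such as in-place updates that reference an earlier update, as described above) would still need extra handling.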

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to 
> replay them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 






[jira] [Commented] (SOLR-12369) Create core sometimes failed because of ZkController.waitForShardId

2018-05-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480075#comment-16480075
 ] 

Cao Manh Dat commented on SOLR-12369:
-

One thing I don't understand here is why the core can pass the 
{{ZkController.checkStateInZk}} check.

> Create core sometimes failed because of ZkController.waitForShardId
> ---
>
> Key: SOLR-12369
> URL: https://issues.apache.org/jira/browse/SOLR-12369
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: stdout
>
>
> When I beast tests, I sometimes see failures where a core cannot be created. 
> It turns out that {{ZkController.waitForShardId}} failed because the node's 
> clusterstate was never updated. 






[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 61 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/61/

11 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:46031/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:37179/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:46031/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:37179/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([8448C61E6428EE4D:2E8515ECD3FB3B9D]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Created] (SOLR-12374) Add SolrCore.withSearcher(lambda accepting SolrIndexSearcher)

2018-05-17 Thread David Smiley (JIRA)
David Smiley created SOLR-12374:
---

 Summary: Add SolrCore.withSearcher(lambda accepting 
SolrIndexSearcher)
 Key: SOLR-12374
 URL: https://issues.apache.org/jira/browse/SOLR-12374
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley
Assignee: David Smiley


I propose adding the following to SolrCore:
{code:java}
  /**
   * Executes the lambda with the {@link SolrIndexSearcher}.  This is more
   * convenient than using {@link #getSearcher()} since there is no
   * ref-counting business to worry about.
   * Example:
   * <pre>
   *   IndexReader reader = h.getCore().withSearcher(SolrIndexSearcher::getIndexReader);
   * </pre>
   */
  @SuppressWarnings("unchecked")
  public <R> R withSearcher(Function<SolrIndexSearcher, R> lambda) {
    final RefCounted<SolrIndexSearcher> refCounted = getSearcher();
    try {
      return lambda.apply(refCounted.get());
    } finally {
      refCounted.decref();
    }
  }
{code}
This is a nice, tight convenience method that avoids the clumsy RefCounted API, 
which is easy to misuse accidentally – see 
https://issues.apache.org/jira/browse/SOLR-11616?focusedCommentId=16477719=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16477719

I guess my only (small) concern is that, because it's easy to keep the lambda 
short (see the one-liner example above), the object you return that you're 
interested in (say an IndexReader) could potentially become invalid if the 
SolrIndexSearcher closes.  But I think/hope that's normally impossible, given 
when getSearcher() is used?  I could at least add a warning to the docs.
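For illustration, the ref-counting discipline the proposed method encapsulates can be sketched with toy stand-ins (RefCounted and the String "searcher" here are simplified placeholders invented for this sketch, not Solr's actual org.apache.solr.util.RefCounted or SolrIndexSearcher):

```java
import java.util.function.Function;

// Toy stand-in for Solr's RefCounted, for illustration only.
class RefCounted<T> {
  private final T resource;
  int refs = 1;
  RefCounted(T resource) { this.resource = resource; }
  T get() { return resource; }
  void decref() { refs--; }
}

public class WithSearcherSketch {
  static final RefCounted<String> searcherRef = new RefCounted<>("searcher");

  static RefCounted<String> getSearcher() { searcherRef.refs++; return searcherRef; }

  // The proposed convenience: the caller never touches the ref-counting.
  static <R> R withSearcher(Function<String, R> lambda) {
    final RefCounted<String> refCounted = getSearcher();
    try {
      return lambda.apply(refCounted.get());
    } finally {
      refCounted.decref();  // released even if the lambda throws
    }
  }

  public static void main(String[] args) {
    int before = searcherRef.refs;
    System.out.println(withSearcher(String::toUpperCase)); // SEARCHER
    // No leaked reference after the call:
    System.out.println(searcherRef.refs == before);        // true
  }
}
```

The try/finally is exactly the boilerplate callers get wrong with the raw RefCounted API; hiding it in one place is the point of the helper.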



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12366) Avoid SlowAtomicReader.getLiveDocs -- it's slow

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16480048#comment-16480048
 ] 

David Smiley commented on SOLR-12366:
-

When I tested locally, the failures did not reproduce.

> Avoid SlowAtomicReader.getLiveDocs -- it's slow
> ---
>
> Key: SOLR-12366
> URL: https://issues.apache.org/jira/browse/SOLR-12366
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12366.patch, SOLR-12366.patch
>
>
> SlowAtomicReader is of course slow, and it's getLiveDocs (based on MultiBits) 
> is slow as it uses a binary search for each lookup.  There are various places 
> in Solr that use SolrIndexSearcher.getSlowAtomicReader and then get the 
> liveDocs.  Most of these places ought to work with SolrIndexSearcher's 
> getLiveDocs method.






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 596 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/596/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

9 tests failed.
FAILED:  
org.apache.lucene.store.TestSimpleFSDirectory.testCreateOutputWithPendingDeletes

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_F22ABB177BC7695A-001\tempDir-006\file.txt

Stack Trace:
java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_F22ABB177BC7695A-001\tempDir-006\file.txt
at 
__randomizedtesting.SeedInfo.seed([F22ABB177BC7695A:DB8FB808E8E4D606]:0)
at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at org.apache.lucene.mockfile.WindowsFS.getKey(WindowsFS.java:55)
at org.apache.lucene.mockfile.WindowsFS.onClose(WindowsFS.java:77)
at 
org.apache.lucene.mockfile.HandleTrackingFS$5.close(HandleTrackingFS.java:249)
at 
org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.close(SimpleFSDirectory.java:119)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:88)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.store.TestSimpleFSDirectory.testCreateOutputWithPendingDeletes(TestSimpleFSDirectory.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)

[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-17 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479953#comment-16479953
 ] 

Lucene/Solr QA commented on SOLR-12361:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}192m 58s{color} 
| {color:red} core in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  6m 41s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}210m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.search.join.TestCloudNestedDocsSort |
|   | solr.cloud.MoveReplicaHDFSTest |
|   | solr.common.util.TestJavaBinCodec |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12361 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923887/SOLR-12361.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 7bb3e5c |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/97/artifact/out/patch-unit-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/97/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/97/testReport/ |
| modules | C: solr/core solr/solrj solr/test-framework U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/97/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.






[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479952#comment-16479952
 ] 

Hoss Man commented on SOLR-9480:


Updated patch with all nocommits resolved and new ref-guide content on the 
relatedness() aggregate function and using them to build SKGs.

I think this is pretty much good to go.

{quote}can you give a clue what are {{$fore,$back}} ?
{quote}
I'm not sure if i understand your question... are you asking about the syntax, 
or about the general concepts of foreground/background query as used in the 
relatedness function scores?

Syntactically they are regular query param {{$variable}} references passed as 
function arguments ... the sample request in the comment you replied to defined 
them as {{fore=body:%22harry+potter%22&back=\*:*}} ...but they can also just be 
passed in as string literals.

In general, the {{relatedness()}} function takes 2 parameters that define a 
"foreground query" and a "background query", which are then used to compute a 
heuristic score indicating what sort of statistical correlation there is 
between the query for each facet bucket and the foreground set, relative to the 
background set.

There's a more self contained example in the ref-guide edits included in the 
latest patch...
{noformat}
.Sample Documents
[source,bash,subs="verbatim,callouts"]

curl -sS -X POST 'http://localhost:8983/solr/gettingstarted/update?commit=true' 
-d '[
{"id":"01",age:15,"state":"AZ","hobbies":["soccer","painting","cycling"]},
{"id":"02",age:22,"state":"AZ","hobbies":["swimming","darts","cycling"]},
{"id":"03",age:27,"state":"AZ","hobbies":["swimming","frisbee","painting"]},
{"id":"04",age:33,"state":"AZ","hobbies":["darts"]},
{"id":"05",age:42,"state":"AZ","hobbies":["swimming","golf","painting"]},
{"id":"06",age:54,"state":"AZ","hobbies":["swimming","golf"]},
{"id":"07",age:67,"state":"AZ","hobbies":["golf","painting"]},
{"id":"08",age:71,"state":"AZ","hobbies":["painting"]},
{"id":"09",age:14,"state":"CO","hobbies":["soccer","frisbee","skiing","swimming","skating"]},
{"id":"10",age:23,"state":"CO","hobbies":["skiing","darts","cycling","swimming"]},
{"id":"11",age:26,"state":"CO","hobbies":["skiing","golf"]},
{"id":"12",age:35,"state":"CO","hobbies":["golf","frisbee","painting","skiing"]},
{"id":"13",age:47,"state":"CO","hobbies":["skiing","darts","painting","skating"]},
{"id":"14",age:51,"state":"CO","hobbies":["skiing","golf"]},
{"id":"15",age:64,"state":"CO","hobbies":["skating","cycling"]},
{"id":"16",age:73,"state":"CO","hobbies":["painting"]},
]'


.Example Query
[source,bash,subs="verbatim,callouts"]

curl -sS -X POST http://localhost:8983/solr/gettingstarted/query -d 
'rows=0&q=*:*
&back=*:*  # <1>
&fore=age:[35 TO *]# <2>
&json.facet={
  hobby : {
type : terms,
field : hobbies,
limit : 5,
sort : { r1: desc },   # <3>
facet : {
  r1 : "relatedness($fore,$back)", # <4>
  location : {
type : terms,
field : state,
limit : 2,
sort : { r2: desc },   # <3>
facet : {
  r2 : "relatedness($fore,$back)"  # <4>
}
  }
}
  }
}'

<1> Use the entire collection as our "Background Set"
<2> Use a query for "age >= 35" to define our (initial) "Foreground Set"
<3> For both the top level `hobbies` facet & the sub-facet on `state` we will 
be sorting on the `relatedness(...)` values
<4> In both calls to the `relatedness(...)` function, we use 
<> to refer to the previously defined `fore` and `back` queries. 

.The Facet Response
[source,javascript,subs="verbatim,callouts"]

"facets":{
  "count":16,
  "hobby":{
"buckets":[{
"val":"golf",
"count":6,// <1>
"r1":{
  "relatedness":0.01225,
  "foreground_popularity":0.3125, // <2>
  "background_popularity":0.375}, // <3>
"location":{
  "buckets":[{
  "val":"az",
  "count":3,
  "r2":{
"relatedness":0.00496,// <4>
"foreground_popularity":0.1875,   // <6>
"background_popularity":0.5}},// <7>
{
  "val":"co",
  "count":3,
  "r2":{
"relatedness":-0.00496,   // <5>
"foreground_popularity":0.125,
"background_popularity":0.5}}]}},
  {
"val":"painting",
"count":8,// <1>
"r1":{
  "relatedness":0.01097,
  "foreground_popularity":0.375,
  "background_popularity":0.5},
"location":{
  "buckets":[{
...

<1> Even though `hobbies:golf` has a lower total facet `count` than 
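The foreground_popularity / background_popularity values in the response above can be checked by hand from the sample documents. A toy sketch (the relatedness score itself comes from a statistical heuristic over these counts, whose exact formula isn't reproduced in this comment; the class name is invented for illustration):

```java
public class PopularitySketch {
  // foreground_popularity = |bucket AND foreground| / |collection|
  // background_popularity = |bucket AND background| / |collection|
  static double popularity(int intersectionCount, int collectionSize) {
    return (double) intersectionCount / collectionSize;
  }

  public static void main(String[] args) {
    int collection  = 16; // all sample docs; back=*:* so background = whole collection
    int golfAndFore = 5;  // docs 05,06,07,12,14: hobby "golf" AND age >= 35
    int golfTotal   = 6;  // docs 05,06,07,11,12,14: hobby "golf"
    System.out.println(popularity(golfAndFore, collection)); // 0.3125
    System.out.println(popularity(golfTotal, collection));   // 0.375
  }
}
```

These match the `foreground_popularity: 0.3125` and `background_popularity: 0.375` shown for the `golf` bucket in the facet response.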

[jira] [Updated] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-17 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9480:
---
Attachment: SOLR-9480.patch

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2018-05-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479920#comment-16479920
 ] 

Shawn Heisey commented on SOLR-6733:


This comment is really long, and I've made a lot of edits before posting it.  
Hopefully it's all coherent!

You both make good points.  The overall intent would be to make things *easier* 
to configure, not harder.  Mostly by having all the configuration for the 
server centralized, with reasonable defaults for anything that's not mentioned. 
 Starting up with an entirely missing config would be a highly desired 
secondary goal.  Users would not be able to configure Jetty settings that are 
not explicitly handled by our code.  I think that's a plus for stability, but I 
acknowledge that flexibility does take a hit.  If embedded Jetty has a facility 
for reading and processing configs similar to the jetty.xml used by the full 
server, that could be an answer.

There will be plenty of opportunity for bikeshedding about the precise location 
and format (json, xml, properties, etc) of the new config file(s).  I have some 
ideas for a starting point, discussed below.

[~janhoy], your comments do bring up a notion that I've mentioned elsewhere: 
Always using ZK and eliminating the cloud mode distinction.  After some thought 
I decided to offer this solution instead, which I think has a lot of the same 
code simplification without the additional administrative overhead for users 
who don't want cloud:

In some cases the primary config file might only contain the TCP port, or it 
could be missing/empty.  Other typical settings for that config file would be 
things that may differ from node to node, and might include heap size, network 
interfaces to listen on, SSL config, and possibly a few more.  For cloud mode, 
ZK information (equivalents to zkRun and/or zkHost) would be required, and it 
might have hostname information for use when registering in live_nodes.  
Everything else (mostly handled by solr.in.sh, jetty config files, and solr.xml 
currently) would be loaded (if found) from a conf directory or a secondary 
config file that could exist in the filesystem or in ZK.  Leaning more towards 
a secondary config file, but if all properly named files in conf (or maybe 
conf.d) were loaded, it could be a way for a user to split their config up into 
logical pieces.  I'm torn on whether to support per-node secondary configs in 
ZK, but leaning away from it.

The startup could also look for an optional properties file and load that.  For 
backwards compatibility, environment variables could be checked and used when 
an explicit configuration doesn't exist.

For flexibility in what a user can do, I think that all possible settings 
should be honored whether they are in the primary config or the secondary, with 
the exception of things that only make sense in the primary config file, such 
as ZK settings and java options like heap size.  If somebody puts the same 
setting in different files (and it's not a setting where multiple mentions make 
sense), I think the one encountered first should take precedence, and a warning 
should be logged for any further occurrences.
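The first-wins precedence with a logged warning could look roughly like this (a hypothetical sketch under the assumptions above, not existing Solr code; the class name and setting keys are invented):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Merge settings from multiple config sources, in priority order:
// the first occurrence of a key wins, later duplicates are warned about.
public class ConfigMergeSketch {
  static Map<String, String> merge(List<Map<String, String>> sources) {
    Map<String, String> merged = new LinkedHashMap<>();
    for (Map<String, String> source : sources) {
      for (Map.Entry<String, String> e : source.entrySet()) {
        if (merged.containsKey(e.getKey())) {
          System.err.println("WARN: duplicate setting '" + e.getKey() + "' ignored");
        } else {
          merged.put(e.getKey(), e.getValue());
        }
      }
    }
    return merged;
  }

  public static void main(String[] args) {
    Map<String, String> primary = Map.of("port", "8983");
    Map<String, String> secondary = Map.of("port", "9999", "heap", "2g");
    Map<String, String> result = merge(List.of(primary, secondary));
    System.out.println(result.get("port")); // 8983 -- first occurrence wins
  }
}
```

Settings that legitimately allow multiple mentions would need a different merge rule; this sketch only covers the single-valued case discussed above.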

Cores and their configs would be handled much as they are now, with 
improvements handled in other issues.

For SolrCloud, SSL config could be loaded from ZK instead of a config file.  
Mostly I imagine that being useful for users who create one certificate good 
for all nodes, but an idea that I'm leaning away from is per-node SSL configs 
in ZK.

For service installations, we can create /etc/solr/servicename, where the 
primary config for the service would live.  If the secondary config is not in 
ZK, it would also live there.


> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>Priority: Major
>
> Umbrella issue.
> Solr should be a standalone application, where the main method is provided by 
> Solr source code.
> Here are the major tasks I envision, if we choose to embed Jetty:
>  * Create org.apache.solr.start.Main (and possibly other classes in the same 
> package), to be placed in solr-start.jar.  The Main class will contain the 
> main method that starts the embedded Jetty and Solr.  I do not know how to 
> adjust the build system to do this successfully.
>  * Handle central configurations in code -- TCP port, SSL, and things like 
> web.xml.
>  * For each of these steps, clean up any test fallout.
>  * Handle cloud-related configurations in code -- port, hostname, protocol, 
> etc.  Use the same information as the central configurations.
>  * Consider whether things like authentication need changes.
>  * Handle any remaining container 

[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2018-05-17 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479922#comment-16479922
 ] 

Shawn Heisey commented on SOLR-6733:


An idea for a safety valve when running cloud mode: If the hostname is not 
specified in the config AND a special parameter is not set, starting SolrCloud 
should fail, displaying a message with the detected hostname/address and a note 
detailing the need to define either the hostname or the special parameter.  
This would ensure that unworkable hostnames like 127.1.0.1 will only end up in 
zookeeper with explicit user action.  A possible name for the parameter: 
useDetectedHost


> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>Priority: Major
>
> Umbrella issue.
> Solr should be a standalone application, where the main method is provided by 
> Solr source code.
> Here are the major tasks I envision, if we choose to embed Jetty:
>  * Create org.apache.solr.start.Main (and possibly other classes in the same 
> package), to be placed in solr-start.jar.  The Main class will contain the 
> main method that starts the embedded Jetty and Solr.  I do not know how to 
> adjust the build system to do this successfully.
>  * Handle central configurations in code -- TCP port, SSL, and things like 
> web.xml.
>  * For each of these steps, clean up any test fallout.
>  * Handle cloud-related configurations in code -- port, hostname, protocol, 
> etc.  Use the same information as the central configurations.
>  * Consider whether things like authentication need changes.
>  * Handle any remaining container configurations.
> I am currently imagining this work happening in a new branch and ultimately 
> being applied only to master, not the stable branch.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7321 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7321/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseSerialGC

20 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([7F7DAB4DC907FC6B:46F3120DE6F83595]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Comment Edited] (SOLR-9685) tag a query in JSON syntax

2018-05-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479895#comment-16479895
 ] 

Yonik Seeley edited comment on SOLR-9685 at 5/17/18 11:47 PM:
--

Thanks Mikhail, I like the syntax!

bq. omitting leading # is illegal and causes exception

Not really... valid queries are also of the form { "query_type" : "query_val" }, 
so the "#" disambiguates between a query type and a tag.  For example, "join" 
would cause a join query to be parsed, while "#join" would mean a tag.

bq. Leading # is kept in the tag name.

That will cause confusion for people switching between multiple styles of 
tagging.  We already have established uses of tags without hashes in them:
{code}
fq={!tag=color}item_color:blue
{code}
Also, we already have a way to add multiple tags via comma separation:
{code}
fq={!tag=color,item_description}item_color:blue
{code}
So the JSON equivalent should be: 
{code}
{ "#color,item_description" : "item_color:blue" }
{code}
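For illustration only (not part of any patch here): if the proposed syntax lands, a full JSON Request API body could combine the multi-tag key with the existing {{excludeTags}} facet-domain option. Field and facet names below are made up:
{code}
{
  "query": "*:*",
  "filter": [
    { "#color,item_description": "item_color:blue" }
  ],
  "facet": {
    "colors": {
      "type": "terms",
      "field": "item_color",
      "domain": { "excludeTags": "color" }
    }
  }
}
{code}
The terms facet would then be computed as if the color filter were not applied, mirroring today's {{!tag=...}} / {{excludeTags}} workflow.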



was (Author: ysee...@gmail.com):
Thanks Mikhail, I like the syntax!

.bq omitting leading # is illegal and causes exception

Not really... valid queries are also of the form { "query_type" : "query_val }, 
so the "#" disambiguates between a query type and a tag.  For example, "join" 
would cause a join query to be parsed, while "#join" would mean a tag.

.bq Leading # is kept in the tag name.

That will cause confusion for people switching between multiple styles of 
tagging.  We already have established uses of tags without hashes in them:
{code}
fq={!tag=color}item_color:blue
{code}
Also, we already have a way to add multiple tags via comma separation:
{code}
fq={!tag=color,item_description}item_color:blue
{code}
So the JSON equivalent should be: 
{code}
{ "#color,item_description" : "item_color:blue" }
{code}


> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9685) tag a query in JSON syntax

2018-05-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479895#comment-16479895
 ] 

Yonik Seeley commented on SOLR-9685:


Thanks Mikhail, I like the syntax!

bq. omitting leading # is illegal and causes exception

Not really... valid queries are also of the form { "query_type" : "query_val" }, 
so the "#" disambiguates between a query type and a tag.  For example, "join" 
would cause a join query to be parsed, while "#join" would mean a tag.

bq. Leading # is kept in the tag name.

That will cause confusion for people switching between multiple styles of 
tagging.  We already have established uses of tags without hashes in them:
{code}
fq={!tag=color}item_color:blue
{code}
Also, we already have a way to add multiple tags via comma separation:
{code}
fq={!tag=color,item_description}item_color:blue
{code}
So the JSON equivalent should be: 
{code}
{ "#color,item_description" : "item_color:blue" }
{code}


> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" }
> {code}






[jira] [Commented] (SOLR-12373) DocBasedVersionConstraintsProcessor doesn't work when schema has required fields

2018-05-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479862#comment-16479862
 ] 

Tomás Fernández Löbbe commented on SOLR-12373:
--

Created CR [https://reviews.apache.org/r/67203/] in case someone wants to review

> DocBasedVersionConstraintsProcessor doesn't work when schema has required 
> fields
> 
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.
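The fix can be pictured with a small sketch (illustrative only, not the actual patch; the field names and defaults are hypothetical, and real Solr derives defaults from the schema): the tombstone keeps only the unique key and the version, and each required field is padded with a default so the add passes schema validation.

```python
# Hypothetical required fields and their defaults; in Solr these would
# come from the schema, not be hard-coded.
REQUIRED_DEFAULTS = {"title": "", "popularity": 0}

def build_tombstone(doc_id, version, required_defaults):
    # A tombstone normally carries only the unique key and the version field.
    doc = {"id": doc_id, "_version_": version}
    # Pad required fields with defaults so schema validation accepts the add.
    for field, default in required_defaults.items():
        doc.setdefault(field, default)
    return doc

tombstone = build_tombstone("doc-1", 42, REQUIRED_DEFAULTS)
```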






Review Request 67203: SOLR-12373: DocBasedVersionConstraintsProcessor doesn't work when schema has required fields

2018-05-17 Thread Tomás Fernández Löbbe

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/67203/
---

Review request for lucene.


Repository: lucene-solr


Description
---

When the schema has required fields, tombstones will include a default value for 
each such field.


Diffs
-

  
solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessor.java
 5bc60ec5f8 
  
solr/core/src/java/org/apache/solr/update/processor/DocBasedVersionConstraintsProcessorFactory.java
 ff4d78a81e 
  
solr/core/src/test-files/solr/collection1/conf/schema-externalversionconstraint-required.xml
 PRE-CREATION 
  solr/core/src/test/org/apache/solr/update/TestDocBasedVersionConstraints.java 
20d64cf0d7 
  solr/test-framework/src/java/org/apache/solr/SolrTestCaseJ4.java 9fec7e64d6 


Diff: https://reviews.apache.org/r/67203/diff/1/


Testing
---


Thanks,

Tomás Fernández Löbbe



[jira] [Updated] (SOLR-12373) DocBasedVersionConstraintsProcessor doesn't work when schema has required fields

2018-05-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12373:
-
Attachment: SOLR-12373.patch

> DocBasedVersionConstraintsProcessor doesn't work when schema has required 
> fields
> 
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Created] (SOLR-12373) DocBasedVersionConstraintsProcessor doesn't work when schema has required fields

2018-05-17 Thread JIRA
Tomás Fernández Löbbe created SOLR-12373:


 Summary: DocBasedVersionConstraintsProcessor doesn't work when 
schema has required fields
 Key: SOLR-12373
 URL: https://issues.apache.org/jira/browse/SOLR-12373
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe
Assignee: Tomás Fernández Löbbe


DocBasedVersionConstraintsProcessor creates tombstones when processing a delete 
by id. Those tombstones only have id (or whatever the unique key name is) and 
version field(s), however, if the schema defines some required fields, adding 
the tombstone will fail.






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 630 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/630/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

9 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([9355289020EA302F:AADB91D00F15F9D1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[JENKINS] Lucene-Solr-repro - Build # 648 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/648/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/220/consoleText

[repro] Revision: 6b4daf8590b02e6304530c11042a733d127afd95

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexFetchWithMasterUrl -Dtests.seed=22B4683D20069F6E 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=nl -Dtests.timezone=Europe/Sofia -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=22B4683D20069F6E -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=fi -Dtests.timezone=PLT -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsUnloadDistributedZkTest 
-Dtests.method=test -Dtests.seed=22B4683D20069F6E -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-TW -Dtests.timezone=Australia/West -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7bb3e5c2482c7b73ed2dd26ff4be4613e7f44872
[repro] git fetch
[repro] git checkout 6b4daf8590b02e6304530c11042a733d127afd95

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   HdfsUnloadDistributedZkTest
[repro]   TestReplicationHandler
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.IndexSizeTriggerTest|*.HdfsUnloadDistributedZkTest|*.TestReplicationHandler"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=22B4683D20069F6E -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=fi -Dtests.timezone=PLT -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 22137 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 7bb3e5c2482c7b73ed2dd26ff4be4613e7f44872

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479764#comment-16479764
 ] 

David Smiley commented on SOLR-11865:
-

BTW, random comment: it seems inconsistent that 
MapElevationProvider.buildElevationMap will throw an exception if the elevation 
is duplicated, yet earlier we merge elevations.  Is there a rhyme/reason to the 
disparity?  It could be merged here again, as is done earlier.

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Priority: Minor
>  Labels: QueryComponent
> Fix For: master (8.0)
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch
>
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.






[jira] [Commented] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479751#comment-16479751
 ] 

David Smiley commented on SOLR-11865:
-

Bruno, can you please use a GitHub PR (referencing this issue in the title so 
that it auto-links) to push your commits/patches?  It's way easier to do back & 
forth code review using GitHub.  I have a new patch but I'd rather apply it to 
a feature-branch/PR where you (and anyone else) can see the deltas more easily.

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Priority: Minor
>  Labels: QueryComponent
> Fix For: master (8.0)
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch
>
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.






[jira] [Commented] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter out of the box with dynamic field mapping

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479760#comment-16479760
 ] 

David Smiley commented on SOLR-11358:
-

(sigh), I guess that ship has sailed

> Support DelimitedTermFrequencyTokenFilter out of the box with dynamic field 
> mapping
> ---
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.






[jira] [Commented] (SOLR-12243) Edismax missing phrase queries when phrases contain multiterm synonyms

2018-05-17 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479714#comment-16479714
 ] 

Alessandro Benedetti commented on SOLR-12243:
-

Thanks [~elyograg], I read about that issue and there's plenty of work going on 
in that direction...
Is there anything we can do on our side to speed up the review and contribution 
of this one?

> Edismax missing phrase queries when phrases contain multiterm synonyms
> --
>
> Key: SOLR-12243
> URL: https://issues.apache.org/jira/browse/SOLR-12243
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.1
> Environment: RHEL, MacOS X
> Do not believe this is environment-specific.
>Reporter: Elizabeth Haubert
>Priority: Major
> Attachments: SOLR-12243.patch, SOLR-12243.patch, SOLR-12243.patch, 
> SOLR-12243.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> synonyms.txt:
> allergic, hypersensitive
> aspirin, acetylsalicylic acid
> dog, canine, canis familiris, k 9
> rat, rattus
> request handler:
> 
>  
> 
>  edismax
>   0.4
>  title^100
>  title~20^5000
>  title~11
>  title~22^1000
>  text
>  
>  3-1 6-3 930%
>  *:*
>  25
> 
>  
> Phrase queries (pf, pf2, pf3) containing "dog" or "aspirin"  against the 
> above list will not be generated.
> "allergic reaction dog" will generate pf2: "allergic reaction", but not 
> pf:"allergic reaction dog", pf2: "reaction dog", or pf3: "allergic reaction 
> dog"
> "aspirin dose in rats" will generate pf3: "dose ? rats" but not pf2: "aspirin 
> dose" or pf3:"aspirin dose ?"
>  
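The expected behaviour can be sketched with plain word shingles (a deliberately simplified model of how edismax builds its pf2/pf3 phrase clauses; it ignores analysis entirely, which is exactly where the multi-term synonym expansion breaks things):

```python
def phrase_shingles(tokens, n):
    # Contiguous word n-grams: the shape of pf2 (n=2) / pf3 (n=3) clauses.
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "allergic reaction dog".split()
pf2 = phrase_shingles(tokens, 2)  # ['allergic reaction', 'reaction dog']
pf3 = phrase_shingles(tokens, 3)  # ['allergic reaction dog']
```

Per the report, the clauses containing "dog" are the ones that go missing once the multi-term synonyms (e.g. "canis familiris") expand the token stream.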






[jira] [Commented] (SOLR-12238) Synonym Query Style Boost By Payload

2018-05-17 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479708#comment-16479708
 ] 

Alessandro Benedetti commented on SOLR-12238:
-

Is there anything I can do on my side to speed up the review and contribution?
I'm absolutely happy to improve the patch if necessary, but I haven't received 
any code review yet!

> Synonym Query Style Boost By Payload
> 
>
> Key: SOLR-12238
> URL: https://issues.apache.org/jira/browse/SOLR-12238
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.2
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: SOLR-12238.patch, SOLR-12238.patch, SOLR-12238.patch, 
> SOLR-12238.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This improvement is built on top of the Synonym Query Style feature and 
> brings the possibility of boosting synonym queries using the associated 
> payload.
> It introduces two new modalities for the Synonym Query Style :
> PICK_BEST_BOOST_BY_PAYLOAD -> build a Disjunction query with the clauses 
> boosted by payload
> AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD -> build a Boolean query with the clauses 
> boosted by payload
> These new synonym query styles assume payloads are available, so they must 
> be used in conjunction with a token filter able to produce payloads.
> A synonym.txt example could be:
> # Synonyms used by Payload Boost
> tiger => tiger|1.0, Big_Cat|0.8, Shere_Khan|0.9
> leopard => leopard, Big_Cat|0.8, Bagheera|0.9
> lion => lion|1.0, panthera leo|0.99, Simba|0.8
> snow_leopard => panthera uncia|0.99, snow leopard|1.0
> A simple token filter to populate the payloads from such a synonym.txt is:
> <filter class="solr.DelimitedPayloadTokenFilterFactory" delimiter="|"/>
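A toy scoring model of the two proposed modes (illustrative only; real Lucene scoring is far more involved): each synonym clause's score is multiplied by its payload boost, then either the best boosted clause wins or the boosted clauses are summed.

```python
def pick_best_boost_by_payload(clause_scores, payloads):
    # PICK_BEST_BOOST_BY_PAYLOAD: disjunction, the best boosted clause wins.
    return max(score * payloads[term] for term, score in clause_scores.items())

def as_distinct_terms_boost_by_payload(clause_scores, payloads):
    # AS_DISTINCT_TERMS_BOOST_BY_PAYLOAD: boolean query, boosted clauses add up.
    return sum(score * payloads[term] for term, score in clause_scores.items())

payloads = {"tiger": 1.0, "Big_Cat": 0.8, "Shere_Khan": 0.9}    # from synonyms.txt
raw_scores = {"tiger": 2.0, "Big_Cat": 2.0, "Shere_Khan": 2.0}  # pretend raw scores

best = pick_best_boost_by_payload(raw_scores, payloads)              # 2.0
summed = as_distinct_terms_boost_by_payload(raw_scores, payloads)    # ~5.4
```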






[jira] [Commented] (SOLR-12299) More Like This Params Refactor

2018-05-17 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479707#comment-16479707
 ] 

Alessandro Benedetti commented on SOLR-12299:
-

I am quite keen to move the More Like This refactor forward; can I help in any 
way to proceed with a review?
Is anyone out there who could help?
I have already split up the big refactor to make it more review-friendly, and 
I'm happy to do whatever it takes to push this forward (I would like to proceed 
with further More Like This developments, but first I want the refactor to be 
in place).

> More Like This Params Refactor
> --
>
> Key: SOLR-12299
> URL: https://issues.apache.org/jira/browse/SOLR-12299
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: SOLR-12299.patch, SOLR-12299.patch, SOLR-12299.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> More Like This can be refactored to improve the code readability, test 
> coverage and maintenance.
> Scope of this Jira issue is to start the More Like This refactor from the 
> More Like This Params.
> This Jira will not improve the current More Like This but just keep the same 
> functionality with a refactored code.
> Other Jira issues will follow improving the overall code readability, test 
> coverage and maintenance.






[jira] [Commented] (SOLR-12304) Interesting Terms parameter is ignored by MLT Component

2018-05-17 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479704#comment-16479704
 ] 

Alessandro Benedetti commented on SOLR-12304:
-

Is there anything I can do on my side to proceed with the review and merge of 
this patch?

Should I open a new Jira issue to deprecate the redundant More Like This 
components and leave the query parser as the officially supported approach?

> Interesting Terms parameter is ignored by MLT Component
> ---
>
> Key: SOLR-12304
> URL: https://issues.apache.org/jira/browse/SOLR-12304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 7.2
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: SOLR-12304.patch, SOLR-12304.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the More Like This component just ignores the mlt.InterestingTerms 
> parameter ( which is usable by the MoreLikeThisHandler).
> Scope of this issue is to fix the bug and add related tests ( which will 
> succeed after the fix )
> *N.B.* MoreLikeThisComponent and MoreLikeThisHandler are very coupled and the 
> tests for the MoreLikeThisHandler are intersecting the MoreLikeThisComponent 
> ones .
>  It is out of scope for this issue any consideration or refactor of that.
>  Other issues will follow.
> *N.B.* out of scope for this issue is the distributed case, which is much 
> more complicated and requires much deeper investigations
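For comparison, a request against the MoreLikeThisHandler, where the parameter is honored (the core, handler path, and field names follow the stock techproducts example and are illustrative only):
{code}
http://localhost:8983/solr/techproducts/mlt?q=id:SP2514N&mlt.fl=manu,cat&mlt.mintf=1&mlt.interestingTerms=details
{code}
mlt.interestingTerms accepts none, list, or details; the fix should make the MLT component honor the same values.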






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22030 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22030/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([6848B4AD625A05F1:B83822FFB9576DC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 14541 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
   [junit4]   2> 

[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-05-17 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479687#comment-16479687
 ] 

Lucene/Solr QA commented on SOLR-11277:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 33s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11277 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923873/SOLR-11277.01.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 99c4adf |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/96/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/96/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/96/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11277.01.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, max_size_auto_commit.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 
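
The size-based trigger described above can be sketched in a few lines. This is an illustrative sketch only, not the attached patch: a hypothetical policy object that, alongside the existing maxDocs/maxTime triggers, checks the tlog's on-disk size against a configured byte threshold.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hedged sketch (not the attached SOLR-11277 patch): request a hard commit
// once the transaction log on disk crosses a configured byte threshold,
// complementing the existing maxDocs/maxTime auto-commit triggers.
class TlogSizeCommitPolicy {
    private final long maxSizeBytes;

    TlogSizeCommitPolicy(long maxSizeBytes) {
        this.maxSizeBytes = maxSizeBytes;
    }

    // True when the tlog file exists and has outgrown the threshold.
    boolean shouldCommit(Path tlogFile) {
        try {
            return Files.exists(tlogFile) && Files.size(tlogFile) >= maxSizeBytes;
        } catch (IOException e) {
            return false; // treat an unreadable tlog as "no commit needed"
        }
    }
}
```

A caller would invoke `shouldCommit` on the same schedule it already uses to evaluate the maxDocs/maxTime conditions, and fire a hard commit when it returns true.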






[JENKINS] Lucene-Solr-Tests-7.x - Build # 610 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/610/

2 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected 1x2 collections null Live Nodes: [127.0.0.1:34397_solr, 
127.0.0.1:38555_solr, 127.0.0.1:39241_solr, 127.0.0.1:42973_solr] Last 
available state: null

Stack Trace:
java.lang.AssertionError: Expected 1x2 collections
null
Live Nodes: [127.0.0.1:34397_solr, 127.0.0.1:38555_solr, 127.0.0.1:39241_solr, 
127.0.0.1:42973_solr]
Last available state: null
at 
__randomizedtesting.SeedInfo.seed([11A29456F299A760:7BB4F5869A6BEDAA]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:235)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479675#comment-16479675
 ] 

Yonik Seeley commented on SOLR-12338:
-

I haven't been following this issue, but the need to order things caught my 
eye, primarily because we have a bunch of logic already that handles reordered 
updates.  I guess the issue is that buffered updates may not have a version (if 
they haven't been through a leader?)  If that's the case, perhaps an easier 
path would be to assign a version and then let the existing reorder logic do 
its thing.  I don't have the full picture here, so it's just some input to 
consider.

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to replay them 
> in parallel. This will significantly reduce the recovery time of replicas in 
> a high-load indexing environment.






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1874 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1874/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.NodeLostTriggerIntegrationTest

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:55437 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:55437 within 3 ms
at __randomizedtesting.SeedInfo.seed([20ACFA2EDC5F7C6B]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:183)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:120)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:115)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:102)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.<init>(MiniSolrCloudCluster.java:233)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:198)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.cloud.autoscaling.NodeLostTriggerIntegrationTest.setupCluster(NodeLostTriggerIntegrationTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:55437 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:232)
at 
org.apache.solr.common.cloud.SolrZkClient.<init>(SolrZkClient.java:175)
... 31 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.NodeLostTriggerIntegrationTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.NodeLostTriggerIntegrationTest: 1) 
Thread[id=604, name=Thread-69, state=WAITING, 
group=TGRP-NodeLostTriggerIntegrationTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Thread.join(Thread.java:1252) at 
java.lang.Thread.join(Thread.java:1326) at 
org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:313)
 at 
org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:313)
 at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:496)
2) Thread[id=608, name=ProcessThread(sid:0 cport:55437):, state=WAITING, 
group=TGRP-NodeLostTriggerIntegrationTest] at 
sun.misc.Unsafe.park(Native 

[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Description: 
Hello again,

We use 6.6.3 and I was trying to update my security.json (in solr home, 
non-zookeeper) using:
{code:java}
curl -u myuser:mypass -H 'Content-type:application/json' -d 
'{"set-user-role":{"dummy":"dummy"}}' 
http://localhost:8080/solr/admin/authorization
{code}
The first time this is called, the security.json is written AND reloaded in 
memory correctly. The output json then contains at the end:
{code:java}
"":{"v":0}
{code}
However, subsequent calls using the same command, no matter which users are 
specified, always output the same meta version, 0.

The result is that the security.json file is correctly updated, but the 
RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
never take effect.

The version never increases, so this condition in 
org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always returns 
early and the in-memory plugin reload is skipped:
{code:java}
if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
  return;
}
{code}
The core of the issue is somewhere in 
org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
{code:java}
  SecurityConfig securityConfig = getSecurityConfig(true);
  Map<String, Object> data = securityConfig.getData();
  Map<String, Object> latestConf = (Map<String, Object>) data.get(key);
  if (latestConf == null) {
    throw new SolrException(SERVER_ERROR, "No configuration present for " + key);
  }
  List<CommandOperation> commandsCopy = CommandOperation.clone(ops);
  Map<String, Object> out = configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4), commandsCopy);
  if (out == null) {
    List<Map> errs = CommandOperation.captureErrors(commandsCopy);
    if (!errs.isEmpty()) {
      rsp.add(CommandOperation.ERR_MSGS, errs);
      return;
    }
    log.debug("No edits made");
    return;
  } else {
    if (!Objects.equals(latestConf.get("class"), out.get("class"))) {
      throw new SolrException(SERVER_ERROR, "class cannot be modified");
    }
    Map<String, Object> meta = getMapValue(out, "");
    meta.put("v", securityConfig.getVersion() + 1); //encode the expected zkversion
    data.put(key, out);

    if (persistConf(securityConfig)) {
      securityConfEdited();
      return;
    }
  }
{code}
In my case, getSecurityConfig(true) delegates to 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)

But the instance variable SecurityConfig.version is never set to anything other 
than -1; it is not read back from security.json (in other words, from the data 
map), so
{code:java}
meta.put("v", securityConfig.getVersion()+1);//encode the expected zkversion
{code}
always puts a value of 0 for the version, leading to the aforementioned memory 
reload skip.

There does not seem to be any code calling SecurityConfig.setVersion, nor any 
SecurityConfig method that updates the version variable.

The only code that does call it is in SecurityConfHandlerZk for ZooKeeper, but 
we are not using ZooKeeper.

Ultimately, I can't seem to use the set-user-role command because of this. I 
hope this is just a duplicate. Thanks
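
Given the analysis above, one hypothetical direction for a fix is to read the stored {"":{"v":N}} meta entry back out of the parsed security.json map instead of leaving the version at -1, so that CoreContainer's version comparison detects a change. The sketch below is illustrative only; the class and method names are not actual Solr code.

```java
import java.util.Map;

// Hypothetical fix sketch (not actual Solr code): extract the meta version
// from a parsed security.json map, so SecurityConfHandlerLocal could set it
// on the SecurityConfig it returns instead of leaving it at -1.
class SecurityVersionSketch {

    // Returns the "v" value of the "" meta entry, or -1 if absent.
    static int readVersion(Map<String, Object> data) {
        Object meta = data.get("");
        if (meta instanceof Map) {
            Object v = ((Map<?, ?>) meta).get("v");
            if (v instanceof Number) {
                return ((Number) v).intValue();
            }
        }
        return -1;
    }
}
```

With the version read back this way, the `meta.put("v", version + 1)` step in doEdit would actually increment on each edit, and the reload-skip condition in CoreContainer would stop matching.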


[JENKINS] Lucene-Solr-repro - Build # 647 - Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/647/

[...truncated 36 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2531/consoleText

[repro] Revision: 0c3628920afdc27bbaf1c057bf6519319ea78e51

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=9BB942761B40B0A2 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ja-JP -Dtests.timezone=PRC -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=NoFacetTest -Dtests.method=meanTest 
-Dtests.seed=AE133F7198F12532 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ms -Dtests.timezone=Pacific/Galapagos -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=NoFacetTest -Dtests.method=medianTest 
-Dtests.seed=AE133F7198F12532 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ms -Dtests.timezone=Pacific/Galapagos -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
99c4adfb6ab83614874904cf366f32a110cb6ee0
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 0c3628920afdc27bbaf1c057bf6519319ea78e51

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/contrib/analytics
[repro]   NoFacetTest
[repro]solr/solrj
[repro]   TestCollectionStateWatchers
[repro] ant compile-test

[...truncated 2579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.NoFacetTest" -Dtests.showOutput=onerror  
-Dtests.seed=AE133F7198F12532 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ms -Dtests.timezone=Pacific/Galapagos -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 138 lines...]
[repro] ant compile-test

[...truncated 447 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCollectionStateWatchers" -Dtests.showOutput=onerror  
-Dtests.seed=9BB942761B40B0A2 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ja-JP -Dtests.timezone=PRC -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2342 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.analytics.NoFacetTest
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout 99c4adfb6ab83614874904cf366f32a110cb6ee0

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-Tests-master - Build # 2532 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2532/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
{main(facet=true={!stats%3Dst3}pivot_x_s,pivot_x_s,pivot_i1=3=6=2=true=index=0=0),extra(rows=0=id:[*+TO+381]=true={!key%3Dsk1+tag%3Dst1,st2}pivot_dt={!key%3Dsk2+tag%3Dst2,st3}pivot_i={!key%3Dsk3+tag%3Dst3,st4}pivot_z_s&_test_min=2&_test_miss=true&_test_sort=index)}
 ==> pivot_x_s,pivot_x_s,pivot_i1: 
{params(rows=0),defaults({main({main(rows=0=id:[*+TO+381]=true={!key%3Dsk1+tag%3Dst1,st2}pivot_dt={!key%3Dsk2+tag%3Dst2,st3}pivot_i={!key%3Dsk3+tag%3Dst3,st4}pivot_z_s&_test_min=2&_test_miss=true&_test_sort=index),extra(fq={!term+f%3Dpivot_x_s}g)}),extra(fq={!term+f%3Dpivot_x_s}l)})}
 expected:<2> but was:<3>

Stack Trace:
java.lang.AssertionError: 
{main(facet=true={!stats%3Dst3}pivot_x_s,pivot_x_s,pivot_i1=3=6=2=true=index=0=0),extra(rows=0=id:[*+TO+381]=true={!key%3Dsk1+tag%3Dst1,st2}pivot_dt={!key%3Dsk2+tag%3Dst2,st3}pivot_i={!key%3Dsk3+tag%3Dst3,st4}pivot_z_s&_test_min=2&_test_miss=true&_test_sort=index)}
 ==> pivot_x_s,pivot_x_s,pivot_i1: 
{params(rows=0),defaults({main({main(rows=0=id:[*+TO+381]=true={!key%3Dsk1+tag%3Dst1,st2}pivot_dt={!key%3Dsk2+tag%3Dst2,st3}pivot_i={!key%3Dsk3+tag%3Dst3,st4}pivot_z_s&_test_min=2&_test_miss=true&_test_sort=index),extra(fq={!term+f%3Dpivot_x_s}g)}),extra(fq={!term+f%3Dpivot_x_s}l)})}
 expected:<2> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([D144A70E42EC6D6D:591098D4EC100095]:0)
at 
org.apache.solr.cloud.TestCloudPivotFacet.assertPivotCountsAreCorrect(TestCloudPivotFacet.java:291)
at 
org.apache.solr.cloud.TestCloudPivotFacet.test(TestCloudPivotFacet.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 649 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/649/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/38)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1526584363627304850", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1526584363628357350",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":6,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":6}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":6,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":6}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1526584363628142500",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/38)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10003_solr",
  

[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log.trace messages

2018-05-17 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479563#comment-16479563
 ] 

Christine Poerschke commented on LUCENE-7788:
-

bq. 3> Since we're going through the review in the first place we can 
regularize the names of the loggers to whatever we want. It looks like "log" is 
the least number of changes so it wins by default. WDYT about adding a 
precommit check for that too?

+1 to regularizing logger names. SOLR-12372 gives it a go, starting with (part 
of) {{solr/contrib}}, and looking at the patch made me wonder how conversion to 
parameterised logging would best work with (a) long log 
messages, e.g.
{code}
 log
 .warn(
 "Could not instantiate Lucene stemmer for Arabic, clustering 
quality "
 + "of Arabic content may be degraded. For best quality 
clusters, "
 + "make sure Lucene's Arabic analyzer JAR is in the 
classpath",
 e);
{code}
and (b) exceptions e.g.
{code}
 log.warn("Could not instantiate snowball stemmer"
 + " for language: " + language.name()
 + ". Quality of clustering may be degraded.", e);
{code}
scenarios?
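For what it's worth, the SLF4J API already allows both at once: {} placeholders for runtime values plus a trailing Throwable that is attached as the exception. A sketch of how the two calls above might convert. The Log class here is only a stand-in so the example runs on its own; a real logger would come from LoggerFactory, and the placeholder substitution below merely imitates the framework's behaviour:

```java
// Sketch only: the two unparameterised log.warn calls quoted above, rewritten
// in SLF4J-style parameterised form. The Log class is a tiny stand-in for a
// real SLF4J logger (which substitutes {} placeholders and treats a trailing
// Throwable as the attached exception).
public class ParameterisedLoggingSketch {

    static final class Log {
        // Substitute each {} with the next argument; a trailing Throwable is
        // attached as the exception rather than formatted into the message.
        String warn(String msg, Object... args) {
            int n = args.length;
            Throwable t = null;
            if (n > 0 && args[n - 1] instanceof Throwable) {
                t = (Throwable) args[--n];
            }
            StringBuilder out = new StringBuilder("WARN ");
            int arg = 0;
            for (int i = 0; i < msg.length(); i++) {
                if (arg < n && msg.charAt(i) == '{'
                        && i + 1 < msg.length() && msg.charAt(i + 1) == '}') {
                    out.append(args[arg++]);
                    i++; // skip the '}'
                } else {
                    out.append(msg.charAt(i));
                }
            }
            if (t != null) {
                out.append(" // ").append(t);
            }
            String line = out.toString();
            System.out.println(line);
            return line;
        }
    }

    public static void main(String[] args) {
        Log log = new Log();
        Exception e = new IllegalStateException("no stemmer");
        String language = "ARABIC"; // stands in for language.name()

        // (a) long message: adjacent string literals are concatenated at
        // compile time, so a long parameterised message costs nothing at runtime
        log.warn("Could not instantiate Lucene stemmer for Arabic, clustering quality "
            + "of Arabic content may be degraded. For best quality clusters, "
            + "make sure Lucene's Arabic analyzer JAR is in the classpath", e);

        // (b) runtime values move into {} placeholders; the exception stays
        // as the final argument
        log.warn("Could not instantiate snowball stemmer for language: {}."
            + " Quality of clustering may be degraded.", language, e);
    }
}
```

In other words, case (a) needs no placeholders at all (literal concatenation is compile-time), and case (b) only needs the runtime value moved into a {} with the exception kept last.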

> fail precommit on unparameterised log.trace messages
> 
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch
>
>
> SOLR-10415 would be removing existing unparameterised log.trace message use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log.trace messages

2018-05-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479557#comment-16479557
 ] 

Erick Erickson commented on LUCENE-7788:


bq. ... would we need to differentiate log.debug("Foo.bar() called");...

If (and only if) it would be easy IMO. I don't think it's too onerous that, 
assuming supporting your example turns into a rat-hole, we'd require re-wording 
the message. Something like this, say 
{code}
log.debug("Method Foo.bar called");
{code}

And how far we go down various cases vs. doing a bit of rewording gets weird 
pretty quickly.
{code}
log.debug("method\"Foo.bar()\" ");
{code}

is the same as your example, just with quotes around Foo.bar(). So it would still 
be legal, but any simple check that just looked for method calls outside pairs 
of double quotes would mistakenly fail it.

Personally I don't feel the need to support everything anyone wants to put in; 
what we enforce via precommit just becomes the norm. So we'll make a best 
effort to accommodate things like this example, but if it takes more than a few 
minutes, not bother. If someone feels strongly enough about it to put the work 
into supporting it, they're perfectly free to do so ;)
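For illustration, here is a minimal quote-aware check of the kind being discussed: it tracks string-literal state (including escaped quotes) and flags an unquoted "()" pair. This is a sketch only, not the actual precommit rule; a check that did not handle the backslash escapes would indeed mistakenly fail the quoted example:

```java
// Sketch of a naive precommit-style check: flag a log statement if an empty
// argument list "()" appears outside any double-quoted string literal.
// Escaped quotes (\") inside literals are skipped so they do not toggle the
// in-string state.
public class NaiveLogCallCheck {

    // Return true if the line contains "()" outside all string literals.
    static boolean looksLikeUnquotedCall(String line) {
        boolean inString = false;
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == '\\' && inString) { i++; continue; } // skip escaped char
            if (c == '"') { inString = !inString; continue; }
            if (!inString && c == '(' && i + 1 < line.length()
                    && line.charAt(i + 1) == ')') {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // call only inside quotes -> passes
        System.out.println(looksLikeUnquotedCall("log.debug(\"method \\\"Foo.bar()\\\" \");"));
        // call concatenated outside quotes -> flagged
        System.out.println(looksLikeUnquotedCall("log.debug(\"called \" + Foo.bar());"));
    }
}
```

Anything smarter than this (nested quotes, multi-line statements, comments) is where the rat-hole starts.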


> fail precommit on unparameterised log.trace messages
> 
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch
>
>
> SOLR-10415 would be removing existing unparameterised log.trace message use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.






[jira] [Updated] (SOLR-12372) LuceneCarrot2(Stemmer|Tokenizer)Factory logger rename

2018-05-17 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12372:
---
Attachment: SOLR-12372.patch

> LuceneCarrot2(Stemmer|Tokenizer)Factory logger rename 
> --
>
> Key: SOLR-12372
> URL: https://issues.apache.org/jira/browse/SOLR-12372
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12372.patch
>
>
> Rename private static variable from {{logger}} to the more/most commonly used 
> {{log}}.






[jira] [Comment Edited] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479536#comment-16479536
 ] 

Pascal Proulx edited comment on SOLR-12371 at 5/17/18 6:48 PM:
---

For reference, here is what the zookeeper handler does: 
org.apache.solr.handler.admin.SecurityConfHandlerZk.getSecurityConfig(boolean)
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    ZkStateReader.ConfigData configDataFromZk = 
cores.getZkController().getZkStateReader().getSecurityProps(getFresh);
    return configDataFromZk == null ?
    new SecurityConfig() :
    new 
SecurityConfig().setData(configDataFromZk.data).setVersion(configDataFromZk.version);
  }
{code}
So presumably 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
 is missing a call to setVersion after calling setData:
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    if (Files.exists(securityJsonPath)) {
  try (InputStream securityJsonIs = Files.newInputStream(securityJsonPath)) 
{
    return new SecurityConfig().setData(securityJsonIs);
  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed 
opening existing security.json file: " + securityJsonPath, e);
  }
    }
    return new SecurityConfig();
  }
{code}
 (or SecurityConfig could encapsulate the initialization of version from data, 
but I have no idea if that can be generalized there)
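A minimal, self-contained sketch of that direction: derive the version from the {{""}} meta map ({{"v": N}}) that doEdit persists, instead of leaving it at the default -1. The SecurityConfig stub below only mimics the real Solr class, readMetaVersion is a hypothetical helper, and the real file nests one meta map per config section, which the sketch flattens for brevity:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggested fix: after parsing security.json, recover the
// persisted version from the "" meta entry ({"":{"v":N}}) written by doEdit,
// instead of leaving SecurityConfig.version at its default of -1.
public class LocalVersionSketch {

    static final class SecurityConfig {
        private Map<String, Object> data = new HashMap<>();
        private int version = -1; // default in the real class

        SecurityConfig setData(Map<String, Object> data) { this.data = data; return this; }
        SecurityConfig setVersion(int version) { this.version = version; return this; }
        int getVersion() { return version; }
    }

    // Hypothetical helper: read the version out of the "" meta map, -1 if absent.
    static int readMetaVersion(Map<String, Object> data) {
        Object meta = data.get("");
        if (meta instanceof Map) {
            Object v = ((Map<?, ?>) meta).get("v");
            if (v instanceof Number) return ((Number) v).intValue();
        }
        return -1;
    }

    public static void main(String[] args) {
        // Simulate the parsed contents of a security.json written once by
        // doEdit, so it carries {"":{"v":0}}.
        Map<String, Object> parsed = new HashMap<>();
        Map<String, Object> meta = new HashMap<>();
        meta.put("v", 0);
        parsed.put("", meta);

        // Current SecurityConfHandlerLocal behaviour (no setVersion call):
        SecurityConfig broken = new SecurityConfig().setData(parsed);
        System.out.println(broken.getVersion() + 1); // "v" written back: always 0

        // With the suggested setVersion call:
        SecurityConfig fixed = new SecurityConfig().setData(parsed)
            .setVersion(readMetaVersion(parsed));
        System.out.println(fixed.getVersion() + 1); // "v" advances to 1
    }
}
```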

 


was (Author: pplx):
For reference, here is what the zookeeper handler does: 
org.apache.solr.handler.admin.SecurityConfHandlerZk.getSecurityConfig(boolean)
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    ZkStateReader.ConfigData configDataFromZk = 
cores.getZkController().getZkStateReader().getSecurityProps(getFresh);
    return configDataFromZk == null ?
    new SecurityConfig() :
    new 
SecurityConfig().setData(configDataFromZk.data).setVersion(configDataFromZk.version);
  }
{code}
So presumably 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
 is missing a call to setVersion after calling setData:
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    if (Files.exists(securityJsonPath)) {
  try (InputStream securityJsonIs = Files.newInputStream(securityJsonPath)) 
{
    return new SecurityConfig().setData(securityJsonIs);
  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed 
opening existing security.json file: " + securityJsonPath, e);
  }
    }
    return new SecurityConfig();
  }
{code}
 (or SecurityConfig should encapsulate the initialization of version from data)

 

> SecurityConfHandlerLocal fails to read back security.json meta version 
> (SecurityConfig.getVersion() always -1), never increased
> ---
>
> Key: SOLR-12371
> URL: https://issues.apache.org/jira/browse/SOLR-12371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API, security
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello again,
> We use 6.6.3 and I was trying to update my security.json (in solr home, 
> non-zookeeper) using:
> {code:java}
> curl -u myuser:mypass -H 'Content-type:application/json' -d 
> '{"set-user-role":{"dummy":"dummy"}}' 
> http://localhost:8080/solr/admin/authorization
> {code}
> The first time this is called, the security.json is written AND reloaded in 
> memory correctly. The output json then contains at the end:
> {code:java}
> "":{"v":0}
> {code}
> However, subsequent calls using the same command, no matter the users 
> specified, always output the same meta version, 0.
> The result is that the security.json file is correctly updated, but the 
> RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
> never take effect.
> The version never increases, so this condition in 
> org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always 
> returns and memory plugin reload is skipped:
> {code:java}
> if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
>   return;
> }
> {code}
> The core of the issue is somewhere in 
> org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
> {code:java}
>   SecurityConfig securityConfig = getSecurityConfig(true);
>   Map data = securityConfig.getData();
>   Map latestConf = (Map) data.get(key);
>   if (latestConf == null) {
>     throw new SolrException(SERVER_ERROR, "No 

[jira] [Created] (SOLR-12372) LuceneCarrot2(Stemmer|Tokenizer)Factory logger rename

2018-05-17 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12372:
--

 Summary: LuceneCarrot2(Stemmer|Tokenizer)Factory logger rename 
 Key: SOLR-12372
 URL: https://issues.apache.org/jira/browse/SOLR-12372
 Project: Solr
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke


Rename private static variable from {{logger}} to the more/most commonly used 
{{log}}.






[jira] [Comment Edited] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479536#comment-16479536
 ] 

Pascal Proulx edited comment on SOLR-12371 at 5/17/18 6:46 PM:
---

For reference, here is what the zookeeper handler does: 
org.apache.solr.handler.admin.SecurityConfHandlerZk.getSecurityConfig(boolean)
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    ZkStateReader.ConfigData configDataFromZk = 
cores.getZkController().getZkStateReader().getSecurityProps(getFresh);
    return configDataFromZk == null ?
    new SecurityConfig() :
    new 
SecurityConfig().setData(configDataFromZk.data).setVersion(configDataFromZk.version);
  }
{code}
So presumably 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
 is missing a call to setVersion after calling setData:
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    if (Files.exists(securityJsonPath)) {
  try (InputStream securityJsonIs = Files.newInputStream(securityJsonPath)) 
{
    return new SecurityConfig().setData(securityJsonIs);
  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed 
opening existing security.json file: " + securityJsonPath, e);
  }
    }
    return new SecurityConfig();
  }
{code}
 (or SecurityConfig should encapsulate the initialization of version from data)

 


was (Author: pplx):
For reference, here is what the zookeeper handler does: 
org.apache.solr.handler.admin.SecurityConfHandlerZk.getSecurityConfig(boolean)
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    ZkStateReader.ConfigData configDataFromZk = 
cores.getZkController().getZkStateReader().getSecurityProps(getFresh);
    return configDataFromZk == null ?
    new SecurityConfig() :
    new 
SecurityConfig().setData(configDataFromZk.data).setVersion(configDataFromZk.version);
  }
{code}
So presumably 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
 is missing a call to setVersion after calling setData:
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    if (Files.exists(securityJsonPath)) {
  try (InputStream securityJsonIs = Files.newInputStream(securityJsonPath)) 
{
    return new SecurityConfig().setData(securityJsonIs);
  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed 
opening existing security.json file: " + securityJsonPath, e);
  }
    }
    return new SecurityConfig();
  }
{code}
 

 

> SecurityConfHandlerLocal fails to read back security.json meta version 
> (SecurityConfig.getVersion() always -1), never increased
> ---
>
> Key: SOLR-12371
> URL: https://issues.apache.org/jira/browse/SOLR-12371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API, security
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello again,
> We use 6.6.3 and I was trying to update my security.json (in solr home, 
> non-zookeeper) using:
> {code:java}
> curl -u myuser:mypass -H 'Content-type:application/json' -d 
> '{"set-user-role":{"dummy":"dummy"}}' 
> http://localhost:8080/solr/admin/authorization
> {code}
> The first time this is called, the security.json is written AND reloaded in 
> memory correctly. The output json then contains at the end:
> {code:java}
> "":{"v":0}
> {code}
> However, subsequent calls using the same command, no matter the users 
> specified, always output the same meta version, 0.
> The result is that the security.json file is correctly updated, but the 
> RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
> never take effect.
> The version never increases, so this condition in 
> org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always 
> returns and memory plugin reload is skipped:
> {code:java}
> if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
>   return;
> }
> {code}
> The core of the issue is somewhere in 
> org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
> {code:java}
>   SecurityConfig securityConfig = getSecurityConfig(true);
>   Map data = securityConfig.getData();
>   Map latestConf = (Map) data.get(key);
>   if (latestConf == null) {
>     throw new SolrException(SERVER_ERROR, "No configuration present for " 
> + key);
>   }
>   List commandsCopy = CommandOperation.clone(ops);
>   Map

[jira] [Commented] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479536#comment-16479536
 ] 

Pascal Proulx commented on SOLR-12371:
--

For reference, here is what the zookeeper handler does: 
org.apache.solr.handler.admin.SecurityConfHandlerZk.getSecurityConfig(boolean)
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    ZkStateReader.ConfigData configDataFromZk = 
cores.getZkController().getZkStateReader().getSecurityProps(getFresh);
    return configDataFromZk == null ?
    new SecurityConfig() :
    new 
SecurityConfig().setData(configDataFromZk.data).setVersion(configDataFromZk.version);
  }
{code}
So presumably 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
 is missing a call to setVersion after calling setData:
{code:java}
  @Override
  public SecurityConfig getSecurityConfig(boolean getFresh) {
    if (Files.exists(securityJsonPath)) {
  try (InputStream securityJsonIs = Files.newInputStream(securityJsonPath)) 
{
    return new SecurityConfig().setData(securityJsonIs);
  } catch (Exception e) {
    throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, "Failed 
opening existing security.json file: " + securityJsonPath, e);
  }
    }
    return new SecurityConfig();
  }
{code}
 

 

> SecurityConfHandlerLocal fails to read back security.json meta version 
> (SecurityConfig.getVersion() always -1), never increased
> ---
>
> Key: SOLR-12371
> URL: https://issues.apache.org/jira/browse/SOLR-12371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API, security
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello again,
> We use 6.6.3 and I was trying to update my security.json (in solr home, 
> non-zookeeper) using:
> {code:java}
> curl -u myuser:mypass -H 'Content-type:application/json' -d 
> '{"set-user-role":{"dummy":"dummy"}}' 
> http://localhost:8080/solr/admin/authorization
> {code}
> The first time this is called, the security.json is written AND reloaded in 
> memory correctly. The output json then contains at the end:
> {code:java}
> "":{"v":0}
> {code}
> However, subsequent calls using the same command, no matter the users 
> specified, always output the same meta version, 0.
> The result is that the security.json file is correctly updated, but the 
> RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
> never take effect.
> The version never increases, so this condition in 
> org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always 
> returns and memory plugin reload is skipped:
> {code:java}
> if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
>   return;
> }
> {code}
> The core of the issue is somewhere in 
> org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
> {code:java}
>   SecurityConfig securityConfig = getSecurityConfig(true);
>   Map data = securityConfig.getData();
>   Map latestConf = (Map) data.get(key);
>   if (latestConf == null) {
>     throw new SolrException(SERVER_ERROR, "No configuration present for " 
> + key);
>   }
>   List commandsCopy = CommandOperation.clone(ops);
>   Map out = 
> configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4) , commandsCopy);
>   if (out == null) {
>     List errs = CommandOperation.captureErrors(commandsCopy);
>     if (!errs.isEmpty()) {
>   rsp.add(CommandOperation.ERR_MSGS, errs);
>   return;
>     }
>     log.debug("No edits made");
>     return;
>   } else {
>     if(!Objects.equals(latestConf.get("class") , out.get("class"))){
>   throw new SolrException(SERVER_ERROR, "class cannot be modified");
>     }
>     Map meta = getMapValue(out, "");
>     meta.put("v", securityConfig.getVersion()+1);//encode the expected 
> zkversion
>     data.put(key, out);
>     
>     if(persistConf(securityConfig)) {
>   securityConfEdited();
>   return;
>     }
>   }
> {code}
> In my case, getSecurityConfig(true) delegates to 
> org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
> But the instance variable SecurityConfig.version is never set to anything 
> other than -1; it is not read back from security.json (in other words, from 
> the data map), such that
> {code:java}
> meta.put("v", securityConfig.getVersion()+1);//encode the expected zkversion
> {code}
> always puts a value of 0 for the version, leading to the 

[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Description: 
Hello again,

We use 6.6.3 and I was trying to update my security.json (in solr home, 
non-zookeeper) using:
{code:java}
curl -u myuser:mypass -H 'Content-type:application/json' -d 
'{"set-user-role":{"dummy":"dummy"}}' 
http://localhost:8080/solr/admin/authorization
{code}
The first time this is called, the security.json is written AND reloaded in 
memory correctly. The output json then contains at the end:
{code:java}
"":{"v":0}
{code}
However, subsequent calls using the same command, no matter the users specified, 
always output the same meta version, 0.

The result is that the security.json file is correctly updated, but the 
RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
never take effect.

The version never increases, so this condition in 
org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always returns 
and memory plugin reload is skipped:
{code:java}
if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
  return;
}
{code}
The core of the issue is somewhere in 
org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
{code:java}
  SecurityConfig securityConfig = getSecurityConfig(true);
  Map data = securityConfig.getData();
  Map latestConf = (Map) data.get(key);
  if (latestConf == null) {
    throw new SolrException(SERVER_ERROR, "No configuration present for " + 
key);
  }
  List commandsCopy = CommandOperation.clone(ops);
  Map out = 
configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4) , commandsCopy);
  if (out == null) {
    List errs = CommandOperation.captureErrors(commandsCopy);
    if (!errs.isEmpty()) {
  rsp.add(CommandOperation.ERR_MSGS, errs);
  return;
    }
    log.debug("No edits made");
    return;
  } else {
    if(!Objects.equals(latestConf.get("class") , out.get("class"))){
  throw new SolrException(SERVER_ERROR, "class cannot be modified");
    }
    Map meta = getMapValue(out, "");
    meta.put("v", securityConfig.getVersion()+1);//encode the expected 
zkversion
    data.put(key, out);
    
    if(persistConf(securityConfig)) {
  securityConfEdited();
  return;
    }
  }
{code}
In my case, getSecurityConfig(true) delegates to 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)

But the instance variable SecurityConfig.version is never set to anything other 
than -1; it is not read back from security.json (in other words, from the data 
map), such that
{code:java}
meta.put("v", securityConfig.getVersion()+1);//encode the expected zkversion
{code}
always puts a value of 0 for the version, leading to the aforementioned memory 
reload skip.

There does not seem to be any code calling SecurityConfig.setVersion anywhere 
or any of SecurityConfig's methods updating the version variable.

The only code that does call it is in the SecurityConfHandlerZk for zookeeper, 
but we are not using zookeeper.

Ultimately, I can't seem to use the set-user command because of this. I hope 
this is just a duplicate. Thanks
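The reported behaviour follows directly from the two quoted snippets: with getVersion() pinned at -1, every edit encodes v = -1 + 1 = 0, so the reload check always sees old == new after the first load. A minimal arithmetic sketch (not Solr code; the helper names are invented for illustration):

```java
// Sketch of why the in-memory plugin is never reloaded: with
// SecurityConfig.getVersion() pinned at -1, every edit writes v = -1 + 1 = 0,
// so CoreContainer's old.getZnodeVersion() == readVersion(...) comparison
// always matches and initializeAuthorizationPlugin returns early.
public class ReloadSkipSketch {

    // Mirrors meta.put("v", securityConfig.getVersion() + 1)
    static int writtenVersion(int securityConfigVersion) {
        return securityConfigVersion + 1;
    }

    // Mirrors: if (old != null && old.getZnodeVersion() == readVersion(conf)) return;
    static boolean reloadSkipped(int loadedVersion, int readVersion) {
        return loadedVersion == readVersion;
    }

    public static void main(String[] args) {
        int pinned = -1; // SecurityConfig.version, never refreshed from disk
        for (int edit = 1; edit <= 3; edit++) {
            int v = writtenVersion(pinned); // always 0
            System.out.println("edit " + edit + ": wrote v=" + v
                + ", reload skipped=" + reloadSkipped(0, v));
            // the fix would refresh 'pinned' from the re-read file here
        }
    }
}
```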

  was:
Hello again,

We use 6.6.3 and I was trying to update my security.json (in solr home, 
non-zookeeper) using:
{code:java}
curl -u myuser:mypass -H 'Content-type:application/json' -d 
'{"set-user-role":{"dummy":"dummy"}}' 
http://localhost:8080/solr/admin/authorization
{code}
The first time this is called, the security.json is written AND reloaded in 
memory correctly. The output json then contains at the end:
{code:java}
"":{"v":0}
{code}
However, subsequent calls using the same command, no matter the users specified, 
always output the same meta version, 0.

The result is that the security.json file is correctly updated, but the 
RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
never take effect.

The version never increases, so this condition in 
org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always returns 
and reload is skipped:
{code:java}
if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
  return;
}
{code}
The core of the issue is somewhere in 
org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
{code:java}
  SecurityConfig securityConfig = getSecurityConfig(true);
  Map data = securityConfig.getData();
  Map latestConf = (Map) data.get(key);
  if (latestConf == null) {
    throw new SolrException(SERVER_ERROR, "No configuration present for " + 
key);
  }
  List commandsCopy = CommandOperation.clone(ops);
  Map out = 
configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4) , 

[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to read back (increase) security.json meta version (SecurityConfig.getVersion() always -1)

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Summary: SecurityConfHandlerLocal fails to read back (increase) 
security.json meta version (SecurityConfig.getVersion() always -1)  (was: 
SecurityConfHandlerLocal fails to read back (increase) security.json meta 
version (SecurityConfig.getVersion() always -1), non-zookeeper)

> SecurityConfHandlerLocal fails to read back (increase) security.json meta 
> version (SecurityConfig.getVersion() always -1)
> -
>
> Key: SOLR-12371
> URL: https://issues.apache.org/jira/browse/SOLR-12371
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API, security
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Priority: Major
>
> Hello again,
> We use 6.6.3 and I was trying to update my security.json (in solr home, 
> non-zookeeper) using:
> {code:java}
> curl -u myuser:mypass -H 'Content-type:application/json' -d 
> '{"set-user-role":{"dummy":"dummy"}}' 
> http://localhost:8080/solr/admin/authorization
> {code}
> The first time this is called, the security.json is written AND reloaded in 
> memory correctly. The output json then contains at the end:
> {code:java}
> "":{"v":0}
> {code}
> However, subsequent calls using the same command, no matter the users 
> specified, always output the same meta version, 0.
> The result is that the security.json file is correctly updated, but the 
> RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
> never take effect.
> The version never increases, so this condition in 
> org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always 
> returns and reload is skipped:
> {code:java}
> if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
>   return;
> }
> {code}
> The core of the issue is somewhere in 
> org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
> {code:java}
>   SecurityConfig securityConfig = getSecurityConfig(true);
>   Map data = securityConfig.getData();
>   Map latestConf = (Map) data.get(key);
>   if (latestConf == null) {
>     throw new SolrException(SERVER_ERROR, "No configuration present for " 
> + key);
>   }
>   List commandsCopy = CommandOperation.clone(ops);
>   Map out = 
> configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4) , commandsCopy);
>   if (out == null) {
>     List errs = CommandOperation.captureErrors(commandsCopy);
>     if (!errs.isEmpty()) {
>   rsp.add(CommandOperation.ERR_MSGS, errs);
>   return;
>     }
>     log.debug("No edits made");
>     return;
>   } else {
>     if(!Objects.equals(latestConf.get("class") , out.get("class"))){
>   throw new SolrException(SERVER_ERROR, "class cannot be modified");
>     }
>     Map meta = getMapValue(out, "");
>     meta.put("v", securityConfig.getVersion()+1);//encode the expected 
> zkversion
>     data.put(key, out);
>     
>     if(persistConf(securityConfig)) {
>   securityConfEdited();
>   return;
>     }
>   }
> {code}
> In my case, getSecurityConfig(true) delegates to 
> org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)
> But the instance variable SecurityConfig.version is never set to anything 
> other than -1 (it is not read back from security.json, in other words from the 
> data map), such that
> {code:java}
> meta.put("v", securityConfig.getVersion()+1);//encode the expected zkversion
> {code}
> always puts a value of 0 for the version, leading to the aforementioned 
> memory reload skip.
> There does not seem to be any code calling SecurityConfig.setVersion anywhere 
> or any of SecurityConfig's methods updating the version variable.
> The only code that does call it is in the SecurityConfHandlerZk for 
> zookeeper, but we are not using zookeeper.
> Ultimately, I can't seem to use the set-user command because of this. I hope 
> this is just a duplicate. Thanks



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to read back security.json meta version (SecurityConfig.getVersion() always -1), never increased

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Summary: SecurityConfHandlerLocal fails to read back security.json meta 
version (SecurityConfig.getVersion() always -1), never increased  (was: 
SecurityConfHandlerLocal fails to read back (increase) security.json meta 
version (SecurityConfig.getVersion() always -1))




[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to increase security.json meta version (SecurityConfig.getVersion() always -1), non-zookeeper

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Summary: SecurityConfHandlerLocal fails to increase security.json meta 
version (SecurityConfig.getVersion() always -1), non-zookeeper  (was: 
SecurityConfHandler.doEdit fails to increase security.json meta version 
(SecurityConfig.getVersion() always -1), non-zookeeper)




[jira] [Updated] (SOLR-12371) SecurityConfHandlerLocal fails to read back (increase) security.json meta version (SecurityConfig.getVersion() always -1), non-zookeeper

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Summary: SecurityConfHandlerLocal fails to read back (increase) 
security.json meta version (SecurityConfig.getVersion() always -1), 
non-zookeeper  (was: SecurityConfHandlerLocal fails to increase security.json 
meta version (SecurityConfig.getVersion() always -1), non-zookeeper)




[GitHub] lucene-solr pull request #375: LUCENE-8287: Ensure that empty regex completi...

2018-05-17 Thread jtibshirani
Github user jtibshirani closed the pull request at:

https://github.com/apache/lucene-solr/pull/375


---




[jira] [Updated] (SOLR-12371) SecurityConfHandler.doEdit fails to increase security.json meta version (SecurityConfig.getVersion() always -1), non-zookeeper

2018-05-17 Thread Pascal Proulx (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pascal Proulx updated SOLR-12371:
-
Summary: SecurityConfHandler.doEdit fails to increase security.json meta 
version (SecurityConfig.getVersion() always -1), non-zookeeper  (was: 
SecurityConfHandler.doEdit fails to increase security.json meta version 
(SecurityConfig.getVersion() always -1))




[jira] [Created] (SOLR-12371) SecurityConfHandler.doEdit fails to increase security.json meta version (SecurityConfig.getVersion() always -1)

2018-05-17 Thread Pascal Proulx (JIRA)
Pascal Proulx created SOLR-12371:


 Summary: SecurityConfHandler.doEdit fails to increase 
security.json meta version (SecurityConfig.getVersion() always -1)
 Key: SOLR-12371
 URL: https://issues.apache.org/jira/browse/SOLR-12371
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: JSON Request API, security
Affects Versions: 6.6.3
Reporter: Pascal Proulx


Hello again,

We use 6.6.3 and I was trying to update my security.json (in solr home, 
non-zookeeper) using:
{code:java}
curl -u myuser:mypass -H 'Content-type:application/json' -d 
'{"set-user-role":{"dummy":"dummy"}}' 
http://localhost:8080/solr/admin/authorization
{code}
The first time this is called, the security.json is written AND reloaded in 
memory correctly. The output json then contains at the end:
{code:java}
"":{"v":0}
{code}
However, subsequent calls using the same command, no matter the users specified, 
always output the same meta version, 0.

The result is that the security.json file is correctly updated, but the 
RuleBasedAuthorizationPlugin is never reloaded in memory, so the new settings 
never take effect.

The version never increases, so this condition in 
org.apache.solr.core.CoreContainer.initializeAuthorizationPlugin always returns 
and reload is skipped:
{code:java}
if (old != null && old.getZnodeVersion() == readVersion(authorizationConf)) {
  return;
}
{code}
The core of the issue is somewhere in 
org.apache.solr.handler.admin.SecurityConfHandler.doEdit:
{code:java}
SecurityConfig securityConfig = getSecurityConfig(true);
Map<String, Object> data = securityConfig.getData();
Map<String, Object> latestConf = (Map<String, Object>) data.get(key);
if (latestConf == null) {
  throw new SolrException(SERVER_ERROR, "No configuration present for " + key);
}
List<CommandOperation> commandsCopy = CommandOperation.clone(ops);
Map<String, Object> out = configEditablePlugin.edit(Utils.getDeepCopy(latestConf, 4), commandsCopy);
if (out == null) {
  List<Map> errs = CommandOperation.captureErrors(commandsCopy);
  if (!errs.isEmpty()) {
    rsp.add(CommandOperation.ERR_MSGS, errs);
    return;
  }
  log.debug("No edits made");
  return;
} else {
  if (!Objects.equals(latestConf.get("class"), out.get("class"))) {
    throw new SolrException(SERVER_ERROR, "class cannot be modified");
  }
  Map<String, Object> meta = getMapValue(out, "");
  meta.put("v", securityConfig.getVersion() + 1); // encode the expected zkversion
  data.put(key, out);

  if (persistConf(securityConfig)) {
    securityConfEdited();
    return;
  }
}
{code}
In my case, getSecurityConfig(true) delegates to 
org.apache.solr.handler.admin.SecurityConfHandlerLocal.getSecurityConfig(boolean)

But the instance variable SecurityConfig.version is never set to anything other 
than -1 (it is not read back from security.json, i.e. the data map), such that
{code:java}
meta.put("v", securityConfig.getVersion()+1);//encode the expected zkversion
{code}
always puts a value of 0 for the version, leading to the aforementioned memory 
reload skip.

There does not seem to be any code calling SecurityConfig.setVersion anywhere 
or any of SecurityConfig's methods updating the version variable.

The only code that does call it is in the SecurityConfHandlerZk for zookeeper, 
but we are not using zookeeper.

Ultimately, I can't seem to use the set-user command because of this. I hope 
this is just a duplicate. Thanks
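The stuck-at-zero behaviour described above can be reproduced in isolation. The sketch below is a minimal model of the reported logic, not the actual Solr classes (all names here are hypothetical): {{buggyNextVersion}} mirrors a version field that is never read back from the persisted data, so it stays -1 and {{getVersion()+1}} is always 0, while {{fixedNextVersion}} shows one possible fix, parsing the previous "v" out of the persisted meta map before incrementing.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal model of the reported bug; these are NOT the real Solr classes.
class VersionIncrementSketch {

    // Mirrors the buggy path: SecurityConfig.version defaults to -1 and is
    // never set from the persisted security.json data, so the encoded
    // version is always (-1) + 1 = 0, regardless of what was persisted.
    static int buggyNextVersion(Map<String, Object> persisted) {
        int version = -1; // never updated from 'persisted'
        return version + 1;
    }

    // One possible fix (an assumption, not the actual Solr patch): read the
    // previous "v" back out of the persisted meta map before incrementing.
    @SuppressWarnings("unchecked")
    static int fixedNextVersion(Map<String, Object> persisted) {
        Map<String, Object> meta = (Map<String, Object>) persisted.get("");
        int version = (meta == null || meta.get("v") == null)
                ? -1 : ((Number) meta.get("v")).intValue();
        return version + 1;
    }

    public static void main(String[] args) {
        Map<String, Object> persisted = new HashMap<>();
        Map<String, Object> meta = new HashMap<>();
        meta.put("v", 3); // pretend security.json already carries version 3
        persisted.put("", meta);
        System.out.println(buggyNextVersion(persisted)); // prints 0
        System.out.println(fixedNextVersion(persisted)); // prints 4
    }
}
```

With the buggy path, CoreContainer's equality check against the persisted version always matches 0, so the plugin reload is skipped exactly as described.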






[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log.trace messages

2018-05-17 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479503#comment-16479503
 ] 

Christine Poerschke commented on LUCENE-7788:
-

Hi [~erickerickson] - thanks for the questions!

bq. 1> WDYT about failing all logging messages that aren't parameterised? Is 
there any reason any logging message should not be parameterised?

Interesting point, would we need to differentiate {{log.debug("Foo.bar() 
called");}} as legitimately(\?) unparameterised from
{code}
log.debug("Foo.bar(param='"+param+"') called");
{code}
as wrongly unparameterised since it should be
{code}
log.debug("Foo.bar(param='{}') called", param);
{code}
instead? Or would the expectation be that the first unparameterised logging is 
actually discouraged and instead it should include a parameter e.g. to 
differentiate different {{Foo}} object instances?

bq. 2> Let's say we fix up one directory (solr/core for example). Can we turn on 
the precommit check on a per-directory basis?

I like the idea of incrementally changing things and 'locking in' changes made. 
For javadocs, 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.0/lucene/build.xml#L156-L199
 (Lucene) and 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.0/solr/build.xml#L678
 (Solr) have such per-directory differentiation; I don't know what it would take 
for other precommit checks to do something similar.


> fail precommit on unparameterised log.trace messages
> 
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch
>
>
> SOLR-10415 would be removing existing unparameterised log.trace messages use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.
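The parameterised-vs-concatenated distinction debated in the comment above can be illustrated with a toy logger. This stub is our own and is NOT SLF4J (SLF4J still evaluates the arguments themselves but defers building the message string); it only shows the core point: with a {} placeholder the message string is assembled after the level check, whereas a concatenated message is always built at the call site.

```java
// Toy logger stub illustrating the parameterised-logging point from the
// thread. Hypothetical names; this is not the SLF4J API.
class ToyLogger {
    private final boolean debugEnabled;
    int messagesBuilt = 0; // counts how often a message string was assembled

    ToyLogger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }

    // Parameterised style: the template is only expanded after the level check.
    void debug(String template, Object arg) {
        if (!debugEnabled) return;
        messagesBuilt++;
        System.out.println(template.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        ToyLogger off = new ToyLogger(false);
        // With debug disabled, the template is never expanded here, whereas
        // "Foo.bar(param='" + param + "') called" would have been concatenated
        // at the call site regardless of the level.
        off.debug("Foo.bar(param='{}') called", "x");
        System.out.println(off.messagesBuilt); // prints 0
    }
}
```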






[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2018-05-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479481#comment-16479481
 ] 

Jan Høydahl commented on SOLR-6733:
---

It can be a good thing too - if we had all jetty config in Solr Java code then 
we’d probably have a standard way of configuring CORS already, since there 
would be no workaround, like we have with SSL :)

Moving to embedded jetty we’d of course need to do a thorough review of what 
config options we need to make configurable in Solr’s config and what 
extensions to include. I’d love for more jetty-level config to live in 
Zookeeper too, such as SSL config and CORS config instead of having it all in 
solr.in.sh

> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>Priority: Major
>
> Umbrella issue.
> Solr should be a standalone application, where the main method is provided by 
> Solr source code.
> Here are the major tasks I envision, if we choose to embed Jetty:
>  * Create org.apache.solr.start.Main (and possibly other classes in the same 
> package), to be placed in solr-start.jar.  The Main class will contain the 
> main method that starts the embedded Jetty and Solr.  I do not know how to 
> adjust the build system to do this successfully.
>  * Handle central configurations in code -- TCP port, SSL, and things like 
> web.xml.
>  * For each of these steps, clean up any test fallout.
>  * Handle cloud-related configurations in code -- port, hostname, protocol, 
> etc.  Use the same information as the central configurations.
>  * Consider whether things like authentication need changes.
>  * Handle any remaining container configurations.
> I am currently imagining this work happening in a new branch and ultimately 
> being applied only to master, not the stable branch.






[jira] [Commented] (SOLR-11752) add gzip to jetty

2018-05-17 Thread Matthew Sporleder (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479469#comment-16479469
 ] 

Matthew Sporleder commented on SOLR-11752:
--

so what's next for this issue? 

> add gzip to jetty
> -
>
> Key: SOLR-11752
> URL: https://issues.apache.org/jira/browse/SOLR-11752
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: master (8.0)
>Reporter: Matthew Sporleder
>Priority: Trivial
>  Labels: jetty
> Attachments: SOLR-11752.patch, SOLR-11752.patch
>
>
> with a little bit of typing I am able to add gzip to my solr's jetty, which 
> is a big help for WAN access and completely out-of-band to solr, *and* only 
> happens if the client requests it, so I think it is a good default.
> I will just inline my code to this ticket:
> {code:java}
> #server/etc/jetty-gzip.xml
> #just download it from here: 
> http://grepcode.com/file/repo1.maven.org/maven2/org.eclipse.jetty/jetty-server/9.3.0.v20150612/etc/jetty-gzip.xml?av=f
> {code}
> {code:java}
> #server/modules/gzip.mod
> [depend]
> server
> [xml]
> etc/jetty-gzip.xml
> {code}
> This is where you might want to add an option, but the result should look 
> like this:
> {code:java}
> #bin/solr
> else
>   SOLR_JETTY_CONFIG+=("--module=http,gzip")
> fi
> {code}
> I can now do this:
> {code:java}
> curl -vvv --compressed localhost:8983/solr/ > /dev/null
> {code}
> With:
> {code:java}
> < Content-Encoding: gzip
> < Content-Length: 2890
> {code}
> Without:
> {code:java}
> < Content-Length: 13349
> {code}
> —
> A regular query:
>  With:
> {code:java}
> < Content-Encoding: gzip
> < Content-Length: 2876
> {code}
> Without:
> {code:java}
> < Content-Length: 17761
> {code}






[jira] [Commented] (SOLR-12365) Rename Config.java to XmlConfigFile.java to clarify it's use

2018-05-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-12365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479435#comment-16479435
 ] 

Jan Høydahl commented on SOLR-12365:


+1

> Rename Config.java to XmlConfigFile.java to clarify it's use
> 
>
> Key: SOLR-12365
> URL: https://issues.apache.org/jira/browse/SOLR-12365
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>
> Seeing "Config"; I was confused what sort of config it was.  Turns out it's a 
> wrapper around an XML document providing some convenience methods around it. 
> It ought to have class javadocs too.  XmlConfigFile would be a clearer name 
> IMO.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4639 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4639/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([534FD6E37F683114:F694539D79A4EE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-05-17 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479402#comment-16479402
 ] 

Anshum Gupta commented on SOLR-11277:
-

{quote}More of a performance implication, but probably not significant compared 
to the cost of a commit.
{quote}
True. I think it should be ok, but if we have anything reported, we can go back 
and make it better.

 
{quote}bq. docsSinceCommit will also be incorrectly zeroed, but given its use, 
it shouldn't be a big deal if it can be off by a few.
{quote}
Yes, I thought about that and considering the use here, we don't really need to 
be accurate. 
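
A sketch of how such a tlog-size-based hard commit might look in solrconfig.xml (the {{maxSize}} element name and unit suffix are assumptions drawn from this patch's description, not confirmed committed syntax):

```xml
<!-- updateHandler excerpt: hard commit triggered by tlog size on disk -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- assumed: hard commit once the current tlog exceeds ~512 MB -->
    <maxSize>512m</maxSize>
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
```

The existing maxDocs/maxTime triggers would continue to apply alongside it, whichever fires first.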

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11277.01.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, max_size_auto_commit.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter out of the box with dynamic field mapping

2018-05-17 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-11358:

Summary: Support DelimitedTermFrequencyTokenFilter out of the box with 
dynamic field mapping  (was: Support DelimitedTermFrequencyTokenFilter-using 
fields with payload() function)

> Support DelimitedTermFrequencyTokenFilter out of the box with dynamic field 
> mapping
> ---
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works with values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.






[jira] [Commented] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter-using fields with payload() function

2018-05-17 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479349#comment-16479349
 ] 

Erik Hatcher commented on SOLR-11358:
-

{quote}Let's not bloat it?
{quote}
The default schema has dynamic field mapping for every language Solr supports 
and a bunch of other dynamic fields, including the payload float/string/int ones.  
 Surely this one is ok to add too?

> Support DelimitedTermFrequencyTokenFilter-using fields with payload() function
> --
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works with values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.






[jira] [Commented] (SOLR-9685) tag a query in JSON syntax

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479317#comment-16479317
 ] 

David Smiley commented on SOLR-9685:


I'm +1 to the proposed syntax.  But, a disclaimer: I don't regularly use the 
JSON syntax, and I have not reviewed the patch.

I like insisting on the "#" which will make these stand out both at declaration 
and use.  I like that it appears to be fairly compact.  Perhaps internally the 
pound may or may not be retained but it doesn't matter I suppose.
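
For context, tags declared this way are typically consumed elsewhere in the same request, e.g. excluded from a facet domain for multi-select faceting. A sketch using the existing {{\{!tag=...\}}} form (not the proposed syntax; field and facet names are illustrative):

```json
{
  "query": "*:*",
  "filter": ["{!tag=COLOR}color:blue"],
  "facet": {
    "colors": {
      "type": "terms",
      "field": "color",
      "domain": { "excludeTags": "COLOR" }
    }
  }
}
```

Whatever JSON declaration syntax is chosen would need to feed the same tag names into consumers like {{excludeTags}}.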

> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" }
> {code}






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1920 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1920/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_A569C2CBB7547EDE-001/init-core-data-001/tlog/tlog.003,
 tlog size: 1302

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_A569C2CBB7547EDE-001/init-core-data-001/tlog/tlog.003,
 tlog size: 1302
at 
__randomizedtesting.SeedInfo.seed([A569C2CBB7547EDE:B5272734CCFA472F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:200)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1544 - Still unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1544/

4 tests failed.
FAILED:  org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates

Error Message:
_1_q.fnm

Stack Trace:
java.io.FileNotFoundException: _1_q.fnm
at 
__randomizedtesting.SeedInfo.seed([4753723B75C963B3:3F76AC3097E94C51]:0)
at org.apache.lucene.store.RAMDirectory.openInput(RAMDirectory.java:243)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:121)
at 
org.apache.lucene.store.RawDirectoryWrapper.openChecksumInput(RawDirectoryWrapper.java:41)
at 
org.apache.lucene.codecs.lucene60.Lucene60FieldInfosFormat.read(Lucene60FieldInfosFormat.java:113)
at 
org.apache.lucene.index.SegmentReader.initFieldInfos(SegmentReader.java:190)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:93)
at 
org.apache.lucene.index.ReadersAndUpdates.writeFieldUpdates(ReadersAndUpdates.java:559)
at 
org.apache.lucene.index.IndexWriter.writeSomeDocValuesUpdates(IndexWriter.java:606)
at 
org.apache.lucene.index.FrozenBufferedUpdates.apply(FrozenBufferedUpdates.java:299)
at 
org.apache.lucene.index.IndexWriter.lambda$publishFrozenUpdates$3(IndexWriter.java:2575)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5047)
at 
org.apache.lucene.index.IndexWriter.updateDocValues(IndexWriter.java:1758)
at 
org.apache.lucene.index.TestBinaryDocValuesUpdates.testTonsOfUpdates(TestBinaryDocValuesUpdates.java:1323)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter-using fields with payload() function

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479250#comment-16479250
 ] 

David Smiley commented on SOLR-11358:
-

bq. Coming back to this, and double-checking the test cases and implementation, 
I question whether this is really useful, to have `payload()` return the same 
value that `termfreq()` would. 

I agree with that; freq != payload!

I'm not too keen on seeing advanced stuff getting added to the _default_ schema. 
 Let's not bloat it?

> Support DelimitedTermFrequencyTokenFilter-using fields with payload() function
> --
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works with values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.






[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479242#comment-16479242
 ] 

David Smiley commented on SOLR-6733:


I think this comes down to the distinction of what do we want to "officially 
support", vs. allow something to be possible for those that know what they are 
doing (and forgo support).  And not producing a war does not mean we need to 
hide jetty configs; there's plenty of middle ground between the early days of 
'war only' and a total black box.  Perhaps the current jetty config in plain 
sight makes it too easy to tempt people to modify it; I dunno.  For me, I have zero 
motivation to make things _harder_ to configure, and this would be more 
annoying to users.  It would annoy me.  I've seen stackoverflow tips on, for 
example, adding CORS to Solr this way, and some additional things that I forget 
as well but have found useful.  I've modified Solr's jetty configs before, and 
it'd be nice to continue to do so with the same ease.

As a side note, I think it would be helpful to rename server/lib to 
server/jetty-lib so it's less confusing/tempting to put libs there when 
that's almost always the wrong thing to do.  Someone recently made this mistake 
and I helped them.  I suppose we'd need an internal/jetty lib dir whether we 
have jetty config files or not, so this may not actually relate to this issue.

> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>Priority: Major
>
> Umbrella issue.
> Solr should be a standalone application, where the main method is provided by 
> Solr source code.
> Here are the major tasks I envision, if we choose to embed Jetty:
>  * Create org.apache.solr.start.Main (and possibly other classes in the same 
> package), to be placed in solr-start.jar.  The Main class will contain the 
> main method that starts the embedded Jetty and Solr.  I do not know how to 
> adjust the build system to do this successfully.
>  * Handle central configurations in code -- TCP port, SSL, and things like 
> web.xml.
>  * For each of these steps, clean up any test fallout.
>  * Handle cloud-related configurations in code -- port, hostname, protocol, 
> etc.  Use the same information as the central configurations.
>  * Consider whether things like authentication need changes.
>  * Handle any remaining container configurations.
> I am currently imagining this work happening in a new branch and ultimately 
> being applied only to master, not the stable branch.






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 595 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/595/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

37 tests failed.
FAILED:  
org.apache.lucene.store.TestSimpleFSDirectory.testRenameWithPendingDeletes

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_98038394CC6ED50-001\tempDir-009\source.txt
 -> 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_98038394CC6ED50-001\tempDir-009\target.txt

Stack Trace:
java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_98038394CC6ED50-001\tempDir-009\source.txt
 -> 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestSimpleFSDirectory_98038394CC6ED50-001\tempDir-009\target.txt
at 
__randomizedtesting.SeedInfo.seed([98038394CC6ED50:BF618D45A850B14A]:0)
at 
java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89)
at 
java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:298)
at 
java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:288)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.move(FilterFileSystemProvider.java:147)
at org.apache.lucene.mockfile.WindowsFS.move(WindowsFS.java:129)
at java.base/java.nio.file.Files.move(Files.java:1413)
at org.apache.lucene.store.FSDirectory.rename(FSDirectory.java:303)
at 
org.apache.lucene.store.TestSimpleFSDirectory.testRenameWithPendingDeletes(TestSimpleFSDirectory.java:54)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[JENKINS] Lucene-Solr-repro - Build # 645 - Still Unstable

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/645/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/57/consoleText

[repro] Revision: 0c3628920afdc27bbaf1c057bf6519319ea78e51

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testDeleteNode -Dtests.seed=49E93EC5A57C649A 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=hr-HR -Dtests.timezone=Brazil/West -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=NodeAddedTriggerTest 
-Dtests.method=testRestoreState -Dtests.seed=49E93EC5A57C649A 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-ES -Dtests.timezone=Europe/Ulyanovsk -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=AutoScalingHandlerTest 
-Dtests.method=testReadApi -Dtests.seed=49E93EC5A57C649A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-JO 
-Dtests.timezone=Antarctica/Davis -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=49E93EC5A57C649A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-BO 
-Dtests.timezone=US/Mountain -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testBasic -Dtests.seed=49E93EC5A57C649A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-BO 
-Dtests.timezone=US/Mountain -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=AutoscalingHistoryHandlerTest 
-Dtests.method=testHistory -Dtests.seed=49E93EC5A57C649A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-PR 
-Dtests.timezone=America/Curacao -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
88f8718f1bfe0e5aeddc6f960cc74513a89c0610
[repro] git fetch
[repro] git checkout 0c3628920afdc27bbaf1c057bf6519319ea78e51

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro]   AutoScalingHandlerTest
[repro]   SearchRateTriggerIntegrationTest
[repro]   AutoscalingHistoryHandlerTest
[repro]   NodeAddedTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 
-Dtests.class="*.TestLargeCluster|*.AutoScalingHandlerTest|*.SearchRateTriggerIntegrationTest|*.AutoscalingHistoryHandlerTest|*.NodeAddedTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=49E93EC5A57C649A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-BO 
-Dtests.timezone=US/Mountain -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 29433 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro]   2/5 failed: 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout 88f8718f1bfe0e5aeddc6f960cc74513a89c0610

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter-using fields with payload() function

2018-05-17 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479196#comment-16479196
 ] 

Erik Hatcher commented on SOLR-11358:
-

Coming back to this, and double-checking the test cases and implementation, I 
question whether this is really useful, to have `payload()` return the same 
value that `termfreq()` would.   

At least let's add:

[dynamic field and field type XML definition stripped by the mail archiver]

to the default managed-schema.
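
The XML above was eaten by the archiver, so purely for illustration, a hedged sketch of what such a managed-schema addition might look like (field/type names and attributes are assumptions, not the snippet Erik posted):

```xml
<!-- Hypothetical sketch; names are illustrative, not from this thread -->
<dynamicField name="*_dtf" type="delimited_term_freq" indexed="true" stored="true"/>
<fieldType name="delimited_term_freq" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- parses tokens like "apple|3" into term "apple" with termFreq=3 -->
    <filter class="solr.DelimitedTermFrequencyTokenFilterFactory"/>
  </analyzer>
</fieldType>
```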

I could see it being handy if you're testing the difference between *_dpi and 
*_dtf performance, toggling back and forth, and want it to be transparent; but 
these delimited-tf fields aren't currently going to work with the payload 
scoring queries as if they were truly payloaded.

Thoughts?   
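Since the thread keeps referring to the delimited encoding, here is a tiny illustrative sketch (plain Java, not Lucene's API; the '|' delimiter and all names are assumptions) of how a "term|freq" token splits into term text and term frequency:

```java
// Illustrative sketch only (not Lucene's API): how a delimited term-frequency
// token such as "solr|3" splits into term text and frequency, mirroring the
// encoding DelimitedTermFrequencyTokenFilter consumes (delimiter assumed '|').
public class DelimitedTf {
    static String term(String token) {
        int i = token.lastIndexOf('|');
        return i < 0 ? token : token.substring(0, i);
    }

    static int freq(String token) {
        int i = token.lastIndexOf('|');
        // No delimiter means the token keeps the usual frequency of 1.
        return i < 0 ? 1 : Integer.parseInt(token.substring(i + 1));
    }

    public static void main(String[] args) {
        System.out.println(term("solr|3") + " has termfreq " + freq("solr|3"));
    }
}
```

With this encoding the index-time frequency is fixed in the stored token, which is why `payload()` returning it would match `termfreq()` rather than a per-position payload.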

 

> Support DelimitedTermFrequencyTokenFilter-using fields with payload() function
> --
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works with values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11358) Support DelimitedTermFrequencyTokenFilter-using fields with payload() function

2018-05-17 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479196#comment-16479196
 ] 

Erik Hatcher edited comment on SOLR-11358 at 5/17/18 3:18 PM:
--

Coming back to this, and double-checking the test cases and implementation, I 
question whether this is really useful: having `payload()` return the same 
value that `termfreq()` would.   

At least let's add the corresponding field types to the default managed-schema.

I could see it being handy if you're testing the difference between *_dpi and 
*_dtf performance, toggling back and forth, and want it to be transparent; but 
these delimited-tf fields aren't currently going to work with the payload 
scoring queries as if they were truly payloaded.

Thoughts?   

 


was (Author: ehatcher):
Coming back to this, and double-checking the test cases and implementation, I 
question whether this is really useful: having `payload()` return the same 
value that `termfreq()` would.   

At least let's add the corresponding field types to the default managed-schema.

I could see it being handy if you're testing the difference between *_dpi and 
*_dtf performance, toggling back and forth, and want it to be transparent; but 
these delimited-tf fields aren't currently going to work with the payload 
scoring queries as if they were truly payloaded.

Thoughts?   

 

> Support DelimitedTermFrequencyTokenFilter-using fields with payload() function
> --
>
> Key: SOLR-11358
> URL: https://issues.apache.org/jira/browse/SOLR-11358
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Major
> Attachments: SOLR-11358.patch
>
>
> payload() works with values encoded with DelimitedPayloadTokenFilter.   payload() 
> can be modified to return the term frequency instead, when the field uses 
> DelimitedTermFrequencyTokenFilter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479197#comment-16479197
 ] 

David Smiley commented on SOLR-12338:
-

Maybe you can propose {{SetBlockingQueue}} (or whatever name we settle on) to 
Guava?  Even if it's not ultimately accepted, there might be some great 
feedback and/or pointers to something similar that proves useful; this stuff 
is hard, so the more eyes the better.

I like that you've avoided hash collisions altogether by not doing hashes!  Use 
of ConcurrentHashMap makes sense to me for such an approach.  
However it appears we have some complexity to deal with since keys need to be 
added and removed on demand, safely, which seems to be quite tricky.

* I think the "hash" variable should not be called this to avoid confusion as 
there is no hashing.  Maybe just "id" or "lockId"
* Do we still need the Random stuff?
* Maybe rename your "SetBlockingQueue" to "SetSemaphore" or probably better 
"SetLock" as it does not hold anything (Queues hold stuff)
* Can "Semaphore sizeLock" be renamed to "sizeSemaphore" or "sizePermits" as it 
does not extend Lock?
* Can the "closed" state be removed from SetBlockingQueue altogether?  It's not 
clear it actually needs to be "closed".  It seems wrong; other concurrent 
mechanisms don't have this notion (no Queue, Lock, or Semaphore does, etc.)  
FWIW I stripped this from the class and the test passed.
* Perhaps its better to acquire() the size permit first in add() instead of 
last to prevent lots of producing threads inserting keys into a map only to 
eventually wait.  Although it might add annoying try-finally to add() to ensure 
we put the permit back if there's an exception after (e.g. interrupt).  Heck; 
maybe that's an issue no matter what the sequence is.
* Can the value side of the ConcurrentHashMap be a Lock (I guess ReentrantLock 
impl)?  It seems like the most direct concept we want; Semaphore is more than a 
Lock as it tracks permits that we don't need here?
* The hot while loop of map.putIfAbsent seems fishy to me.  Even if it may be 
rare in practice, I wonder if we can do something simpler?  You may get lucky 
with map.compute\* methods on ConcurrentHashMap which execute the lambda 
atomically.  Though I don't know if it's bad to block if we try to acquire a 
lock within there.  I see remove() removes the value of the Map but perhaps if 
the value were a mechanism that tracked that there's a producer pending, then 
we should not remove the value from the lock?  If we did this, then maybe that 
would simplify add()?  I'm not sure.

Perhaps a simpler approach would involve a Set of weakly referenced 
objects, and thus we don't need to worry about removal.  In such a design add() 
would need to return a reference to the member of the set, and that object 
would have a "release()" method when done.  I'm not sure if in practice these 
might be GC'ed fast enough if they end up being usually very temporary?  Shrug.
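For concreteness, here is a minimal hypothetical sketch of the map-of-locks shape being reviewed (not the patch's actual code; the class and method names are made up, and the tricky removal step discussed above is only flagged in a comment):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the ConcurrentHashMap-of-locks idea (illustrative
// names, not the patch's API). Acquisition is straightforward; safe removal
// of idle entries is the hard part: naive removal races with a thread that
// has already fetched the lock, which is what the review comments are about.
public class KeyedLock {
    private final ConcurrentHashMap<Object, ReentrantLock> locks =
        new ConcurrentHashMap<>();

    public void lock(Object key) {
        // computeIfAbsent is atomic, so threads racing on the same key
        // always observe the same ReentrantLock instance.
        locks.computeIfAbsent(key, k -> new ReentrantLock()).lock();
    }

    public void unlock(Object key) {
        locks.get(key).unlock();
        // NOTE: without removal the map grows with the number of distinct
        // keys ever seen; removing here safely needs something like a
        // pending-waiter count, which is the tricky part discussed above.
    }

    // Number of keys currently tracked (for illustration/tests only).
    public int size() {
        return locks.size();
    }
}
```

The sketch avoids hash collisions entirely (one lock per live key) at the cost of the removal problem above.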

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to 
> replay them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479166#comment-16479166
 ] 

David Smiley commented on SOLR-12361:
-

Oh yeah; the multi-valued question makes this solution path more complicated 
because it needs to duplicate similar logic that exists for plain field values. 
 add(key,Object) is ugly.  Ugh.  Could you please try the approach of adding 
SolrInputDocuments as if they were field values?

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 22028 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22028/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"68458a51063c3T37uqtv7cvsjb0dyfvp6f5aphr",   
"source":"node_added_trigger",   "eventTime":1834366121567171,   
"eventType":"NODEADDED",   "properties":{ "eventTimes":[   
1834366121567171,   1834366121570076,   1834366121570651,   
1834366121571116], "nodeNames":[   "127.0.0.1:39881_solr",   
"127.0.0.1:44337_solr",   "127.0.0.1:44751_solr",   
"127.0.0.1:38285_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"68458a51063c3T37uqtv7cvsjb0dyfvp6f5aphr",
  "source":"node_added_trigger",
  "eventTime":1834366121567171,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[
  1834366121567171,
  1834366121570076,
  1834366121570651,
  1834366121571116],
"nodeNames":[
  "127.0.0.1:39881_solr",
  "127.0.0.1:44337_solr",
  "127.0.0.1:44751_solr",
  "127.0.0.1:38285_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([B6F283E92F120591:785C277AD72B7D87]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-4455) Stored value of "NOW" differs between replicas

2018-05-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479154#comment-16479154
 ] 

Erick Erickson commented on SOLR-4455:
--

[~hossman] Is this still current? I happened to run across it while searching 
for something else.


> Stored value of "NOW" differs between replicas
> --
>
> Key: SOLR-4455
> URL: https://issues.apache.org/jira/browse/SOLR-4455
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.1
>Reporter: Colin Bartolome
>Assignee: Hoss Man
>Priority: Minor
> Attachments: SOLR-4455.patch
>
>
> I have a field in {{schema.xml}} defined like this:
> {code:xml}
>  default="NOW" />
> {code}
> When I perform a query that's load-balanced across the servers in my cloud, 
> the value stored in that field differs slightly between each replica for the 
> same returned document.
> I haven't seen this field differ by more than a tenth of a second and I'm not 
> running queries against it, but I can picture a situation where somebody has 
> one replica returning 23:59:59.990 and another returning 00:00:00.010 and a 
> query starts behaving oddly.
> It seems like the leader should evaluate what "NOW" means and the replicas 
> should copy that value.
> {panel:title=Possible Workaround}
> A possible workaround for this issue is to use the 
> TimestampUpdateProcessorFactory in your update processor chain prior to the 
> DistributedUpdateProcessor instead of relying on the using "NOW" as a default 
> value for date fields.
> This will cause the timestamp field of each document to be filled in with a 
> value before the documents are forwarded to any shards (or written to the 
> transaction log) 
> https://lucene.apache.org/solr/4_1_0/solr-core/org/apache/solr/update/processor/TimestampUpdateProcessorFactory.html
> {panel}
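For reference, the workaround above corresponds to an update chain roughly like this (a sketch based on the linked javadoc; the chain name and timestamp field name are placeholders):

```xml
<updateRequestProcessorChain name="add-timestamp">
  <!-- Fill in the timestamp before distribution, so the same
       value is forwarded to every replica. -->
  <processor class="solr.TimestampUpdateProcessorFactory">
    <str name="fieldName">timestamp</str>
  </processor>
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```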



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 219 - Still Failing

2018-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/219/

No tests ran.

Build Log:
[...truncated 24218 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2194 links (1751 relative) to 2949 anchors in 228 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[...repeated ivy-configure/resolve output truncated...]


[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479131#comment-16479131
 ] 

Cao Manh Dat commented on SOLR-12338:
-

bq. BTW I've twice gotten confused in this issue conversation when you referred 
to things I didn't know existed before because it was unclear if I simply 
didn't know about it or if you were adding/introducing some new mechanism. It 
would be helpful to me if you try to clarify that new things are new things, 
e.g. "(added in this patch)" or "added a new ..." or some-such.
Yeah, sorry about that, I was just too lazy with the details.

bq. It's super tempting to simply use Striped as it's difficult to write & 
review concurrent control structures such as this. I have a bunch of pending 
commentary/review for your SetBlockingQueue but are you choosing to not use it 
because the numThreads * 1000 is too much internal memory/waste?
I think the current {{SetBlockingQueue}} is quite effective and compact. Can 
you share your comments/review of {{SetBlockingQueue}}?


> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to 
> replay them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7320 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7320/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

19 tests failed.
FAILED:  
org.apache.lucene.store.TestSimpleFSDirectory.testCreateOutputWithPendingDeletes

Error Message:
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_976D3D73E4F3D4F1-001\tempDir-006\file.txt

Stack Trace:
java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestSimpleFSDirectory_976D3D73E4F3D4F1-001\tempDir-006\file.txt
at 
__randomizedtesting.SeedInfo.seed([976D3D73E4F3D4F1:BEC83E6C77D06BAD]:0)
at 
java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89)
at 
java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
at 
java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108)
at 
java.base/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:53)
at 
java.base/sun.nio.fs.WindowsFileAttributeViews$Basic.readAttributes(WindowsFileAttributeViews.java:38)
at org.apache.lucene.mockfile.WindowsFS.getKey(WindowsFS.java:55)
at org.apache.lucene.mockfile.WindowsFS.onClose(WindowsFS.java:77)
at 
org.apache.lucene.mockfile.HandleTrackingFS$5.close(HandleTrackingFS.java:249)
at 
org.apache.lucene.store.SimpleFSDirectory$SimpleFSIndexInput.close(SimpleFSDirectory.java:119)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:88)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:76)
at 
org.apache.lucene.store.TestSimpleFSDirectory.testCreateOutputWithPendingDeletes(TestSimpleFSDirectory.java:78)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12338) Replay buffering tlog in parallel

2018-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479083#comment-16479083
 ] 

David Smiley commented on SOLR-12338:
-

{quote}Upload a patch that makes a change from using an array of lock into a 
{{SetBlockingQueue}}.
{quote}
BTW I've twice gotten confused in this issue conversation when you referred to 
things I didn't know existed before because it was unclear if I simply didn't 
know about it or if you were adding/introducing some new mechanism.  It would 
be helpful to me if you try to clarify that new things are new things, e.g. 
"(added in this patch)" or "added a new ..." or some-such.

It's super tempting to simply use Striped as it's difficult to write & review 
concurrent control structures such as this.  I have a bunch of pending 
commentary/review for your SetBlockingQueue but are you choosing to not use it 
because the numThreads * 1000 is too much internal memory/waste?
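For contrast with the map-based approach, here is a minimal sketch of the fixed "array of locks" striping that Guava's Striped implements (illustrative only, not Guava's code): memory is bounded by the stripe count, but distinct ids can collide on the same stripe and serialize needlessly.

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of striped locking: a fixed pool of locks indexed by
// the key's (spread) hash. Bounded memory, but unrelated keys sharing a
// stripe will contend with each other.
public class StripedLock {
    private final ReentrantLock[] stripes;

    public StripedLock(int numStripes) {
        stripes = new ReentrantLock[numStripes];
        for (int i = 0; i < numStripes; i++) {
            stripes[i] = new ReentrantLock();
        }
    }

    public ReentrantLock get(Object key) {
        // Spread the hash bits, then map into the stripe array.
        int h = key.hashCode();
        h ^= h >>> 16;
        return stripes[(h & 0x7fffffff) % stripes.length];
    }
}
```

This is the memory/contention trade-off mentioned above: stripe count replaces per-key bookkeeping.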

> Replay buffering tlog in parallel
> -
>
> Key: SOLR-12338
> URL: https://issues.apache.org/jira/browse/SOLR-12338
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12338.patch, SOLR-12338.patch, SOLR-12338.patch
>
>
> Since updates with different ids are independent, it is safe to 
> replay them in parallel. This will significantly reduce the recovery time of 
> replicas in high-load indexing environments. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-05-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16479071#comment-16479071
 ] 

Yonik Seeley commented on SOLR-11277:
-

bq. is this what you'd suggested? 
Yes, that should handle the observed NPE.

Another thing I noticed:
It seems like under heavy indexing, many different threads will detect tlog 
sizes greater than the limit (and continue to until the part of the commit that 
rolls over the tlog happens).  All of those threads will call 
_scheduleCommitWithin(1ms) which will all call getDelay on the pending commit 
task to see if it needs to do it sooner.  More of a performance implication, 
but probably not significant compared to the cost of a commit.  docsSinceCommit 
will also be incorrectly zeroed, but given its use, it shouldn't be a big deal 
if it can be off by a few.
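One hedged sketch of how the "many threads all schedule the same commit" window described above could be narrowed with a compareAndSet gate (illustrative names only; this is not what the Solr patch does):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: only the first indexing thread to observe the tlog
// over the size limit schedules a commit; the flag stays set until the
// commit rolls the tlog over. Names are illustrative, not Solr's fields.
public class CommitGate {
    private final AtomicBoolean commitScheduled = new AtomicBoolean(false);

    /** Returns true for the single thread that should schedule the commit. */
    public boolean tryScheduleOnSizeExceeded(long tlogBytes, long maxBytes) {
        return tlogBytes > maxBytes
            && commitScheduled.compareAndSet(false, true);
    }

    /** Called once the commit has rolled the tlog over. */
    public void onCommit() {
        commitScheduled.set(false);
    }
}
```

With such a gate, the other threads that detect the oversized tlog fall through without calling the scheduler at all.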


> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11277.01.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, SOLR-11277.patch, 
> SOLR-11277.patch, max_size_auto_commit.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8317) TestStressNRT fails with missing document

2018-05-17 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8317.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> TestStressNRT fails with missing document
> -
>
> Key: LUCENE-8317
> URL: https://issues.apache.org/jira/browse/LUCENE-8317
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8317.patch, LUCENE-8317.patch, LUCENE-8317.patch, 
> LUCENE-8317.patch, LUCENE-8317.patch
>
>
> {noformat}
> 11:39:01[junit4] Suite: org.apache.lucene.index.TestStressNRT
> 11:39:01[junit4]   1> READER1: FAILED: unexpected exception
> 11:39:01[junit4]   1> java.lang.AssertionError: No documents or 
> tombstones found for id 49, expected at least 66 
> reader=StandardDirectoryReader(segments_g:325:nrt 
> _2i(7.4.0):c114/106:delGen=1 _2h(7.4.0):c76/75:delGen=1 
> _2j(7.4.0):c32/28:delGen=1 _2k(7.4.0):c1 _2l(7.4.0):C38/23:delGen=1 
> _2n(7.4.0):C6/4:delGen=1 _2m(7.4.0):C23/16:delGen=1)
> 11:39:01[junit4]   1> at org.junit.Assert.fail(Assert.java:93)
> 11:39:01[junit4]   1> at 
> org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:353)
> 11:39:01[junit4]   2> May 16, 2018 7:39:01 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
> 11:39:01[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[READER1,5,TGRP-TestStressNRT]
> 11:39:01[junit4]   2> java.lang.RuntimeException: 
> java.lang.AssertionError: No documents or tombstones found for id 49, 
> expected at least 66 reader=StandardDirectoryReader(segments_g:325:nrt 
> _2i(7.4.0):c114/106:delGen=1 _2h(7.4.0):c76/75:delGen=1 
> _2j(7.4.0):c32/28:delGen=1 _2k(7.4.0):c1 _2l(7.4.0):C38/23:delGen=1 
> _2n(7.4.0):C6/4:delGen=1 _2m(7.4.0):C23/16:delGen=1)
> 11:39:01[junit4]   2> at 
> __randomizedtesting.SeedInfo.seed([B7A02DE785EE2387]:0)
> 11:39:01[junit4]   2> at 
> org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:382)
> 11:39:01[junit4]   2> Caused by: java.lang.AssertionError: No documents 
> or tombstones found for id 49, expected at least 66 
> reader=StandardDirectoryReader(segments_g:325:nrt 
> _2i(7.4.0):c114/106:delGen=1 _2h(7.4.0):c76/75:delGen=1 
> _2j(7.4.0):c32/28:delGen=1 _2k(7.4.0):c1 _2l(7.4.0):C38/23:delGen=1 
> _2n(7.4.0):C6/4:delGen=1 _2m(7.4.0):C23/16:delGen=1)
> 11:39:01[junit4]   2> at org.junit.Assert.fail(Assert.java:93)
> 11:39:01[junit4]   2> at 
> org.apache.lucene.index.TestStressNRT$2.run(TestStressNRT.java:353)
> 11:39:01[junit4]   2> 
> 11:39:01[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestStressNRT -Dtests.method=test -Dtests.seed=B7A02DE785EE2387 
> -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-AU 
> -Dtests.timezone=Asia/Taipei -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
> 11:39:01[junit4] ERROR   0.61s J1 | TestStressNRT.test <<<
> 11:39:01[junit4]> Throwable #1: java.lang.RuntimeException: 
> MockDirectoryWrapper: cannot close: there are still 7 open files: {_2h.cfs=1, 
> _2n.fdt=1, _2k.cfs=1, _2l.fdt=1, _2j.cfs=1, _2i.cfs=1, _2m.fdt=1}
> 11:39:01[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
> 11:39:01[junit4]> at 
> org.apache.lucene.index.TestStressNRT.test(TestStressNRT.java:403)
> 11:39:01[junit4]> at java.lang.Thread.run(Thread.java:748)
> 11:39:01[junit4]> Caused by: java.lang.RuntimeException: unclosed 
> IndexInput: _2l.fdt
> 11:39:01[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:732)
> 11:39:01[junit4]> at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:776)
> 11:39:01[junit4]> at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:150)
> 11:39:01[junit4]> at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:121)
> 11:39:01[junit4]> at 
> org.apache.lucene.codecs.lucene50.Lucene50StoredFieldsFormat.fieldsReader(Lucene50StoredFieldsFormat.java:173)
> 11:39:01[junit4]> at 
> org.apache.lucene.codecs.asserting.AssertingStoredFieldsFormat.fieldsReader(AssertingStoredFieldsFormat.java:43)
> 11:39:01[junit4]> at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:126)
> 11:39:01[junit4]> at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:78)
> 11:39:01[junit4]> at 
> 

[jira] [Commented] (LUCENE-8317) TestStressNRT fails with missing document

2018-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478965#comment-16478965
 ] 

ASF subversion and git services commented on LUCENE-8317:
-

Commit 922fd26859cd1e288c8e9ed0d1f22bf75306de90 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=922fd26 ]

LUCENE-8317: Prevent concurrent deletes from being applied during full flush

Future deletes could potentially be exposed to flushes/commits/refreshes if the
amount of RAM used by deletes is greater than half of the IW RAM buffer.


> TestStressNRT fails with missing document
> -
>
> Key: LUCENE-8317
> URL: https://issues.apache.org/jira/browse/LUCENE-8317
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8317.patch, LUCENE-8317.patch, LUCENE-8317.patch, 
> LUCENE-8317.patch, LUCENE-8317.patch
>

[jira] [Commented] (LUCENE-8317) TestStressNRT fails with missing document

2018-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478963#comment-16478963
 ] 

ASF subversion and git services commented on LUCENE-8317:
-

Commit 88f8718f1bfe0e5aeddc6f960cc74513a89c0610 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=88f8718 ]

LUCENE-8317: Prevent concurrent deletes from being applied during full flush

Future deletes could potentially be exposed to flushes/commits/refreshes if the
amount of RAM used by deletes is greater than half of the IW RAM buffer.


> TestStressNRT fails with missing document
> -
>
> Key: LUCENE-8317
> URL: https://issues.apache.org/jira/browse/LUCENE-8317
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8317.patch, LUCENE-8317.patch, LUCENE-8317.patch, 
> LUCENE-8317.patch, LUCENE-8317.patch
>

[jira] [Commented] (LUCENE-8317) TestStressNRT fails with missing document

2018-05-17 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478955#comment-16478955
 ] 

Michael McCandless commented on LUCENE-8317:


+1 phew.

> TestStressNRT fails with missing document
> -
>
> Key: LUCENE-8317
> URL: https://issues.apache.org/jira/browse/LUCENE-8317
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
>Priority: Major
> Attachments: LUCENE-8317.patch, LUCENE-8317.patch, LUCENE-8317.patch, 
> LUCENE-8317.patch, LUCENE-8317.patch
>
> 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+5) - Build # 22027 - Still Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22027/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([9ED9F8C40FBACA00:66076D2C77D61B53]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:916)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion(SegmentsInfoRequestHandlerTest.java:68)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:841)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=2=count(//lst[@name='segments']/lst/str[@name='version'][.='8.0.0'])
xml response was: 


[jira] [Updated] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-17 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12358:
-
Component/s: AutoScaling

> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Priority: Critical
>
> For the following policy
> {code:java}
> {"replica": "<3", "node": "#ANY", "collection": "collection"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:530)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)\n\tat
>  
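The "Comparison method violates its general contract!" failure above is TimSort detecting an inconsistent Comparator. A minimal, self-contained Java illustration of how such a contract violation arises (an overflow-prone subtraction comparator; this is illustrative only, not Solr's actual node-sorting code):

```java
import java.util.Comparator;

public class BrokenComparatorDemo {
    // Subtraction-based comparison overflows for extreme int values,
    // so the ordering it defines is not transitive.
    static final Comparator<Integer> BROKEN = (a, b) -> a - b;

    public static void main(String[] args) {
        int minVsOne  = BROKEN.compare(Integer.MIN_VALUE, 1); // overflows: claims MIN_VALUE > 1
        int minVsZero = BROKEN.compare(Integer.MIN_VALUE, 0); // correct:   claims MIN_VALUE < 0
        int oneVsZero = BROKEN.compare(1, 0);                 // correct:   claims 1 > 0
        // MIN_VALUE > 1 and 1 > 0, yet MIN_VALUE < 0: transitivity is broken,
        // which is exactly what lets java.util.TimSort throw
        // "Comparison method violates its general contract!" during sorting.
        System.out.println(minVsOne > 0 && oneVsZero > 0 && minVsZero < 0); // prints "true"
    }
}
```

Using `Integer.compare(a, b)` instead of `a - b` avoids the overflow and satisfies the contract.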

[jira] [Updated] (SOLR-12370) NullPointerException on MoreLikeThisComponent

2018-05-17 Thread Gilles Bodart (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gilles Bodart updated SOLR-12370:
-
Priority: Blocker  (was: Critical)

> NullPointerException on MoreLikeThisComponent
> -
>
> Key: SOLR-12370
> URL: https://issues.apache.org/jira/browse/SOLR-12370
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 7.3.1
>Reporter: Gilles Bodart
>Priority: Blocker
>
> I'm trying to use the MoreLikeThis component under a suggest call, but I 
> receive an NPE every time (here's the stack trace):
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.MoreLikeThisComponent.process(MoreLikeThisComponent.java:127)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> ...{code}
> and here's the config of my requestHandlers:
> {code:java}
> 
> 
> true
> 10
> default
> true
> default
> wordbreak
> true
> true
> 10
> true
> true
> 5
> 5
> 10
> 5
> true
> _text_
> on
> content description title
> true
> html
> b
> /b
> 
> 
> suggest
> spellcheck
> mlt
> highlight
> 
> 
> 
> {code}
> I also tried with 
> {code:java}
> on{code}
> When I call
> {code:java}
> /mlt?df=_text_=pann=_text_
> {code}
>  it works fine but with
> {code:java}
> /suggest?df=_text_=pann=_text_
> {code}
> I got the npe
>  
>  






[jira] [Updated] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist

2018-05-17 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated SOLR-12368:
---
Attachment: SOLR-12368.patch

> in-place DV updates should no longer have to jump through hoops if field does 
> not yet exist
> ---
>
> Key: SOLR-12368
> URL: https://issues.apache.org/jira/browse/SOLR-12368
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12368.patch
>
>
> When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the 
> edge cases that had to be dealt with was the limitation imposed by 
> IndexWriter that docValues could only be updated if they already existed - if 
> a shard did not yet have a document w/a value in the field where the update 
> was attempted, we would get an error.
> LUCENE-8316 seems to have removed this error, which I believe means we can 
> simplify & speed up some of the checks in Solr, and support this situation as 
> well, rather than falling back on a full "read stored fields & reindex" atomic 
> update.






[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist

2018-05-17 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478919#comment-16478919
 ] 

Simon Willnauer commented on SOLR-12368:


I attached my changes if anybody wants to pick it up.

> in-place DV updates should no longer have to jump through hoops if field does 
> not yet exist
> ---
>
> Key: SOLR-12368
> URL: https://issues.apache.org/jira/browse/SOLR-12368
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12368.patch
>
>
> When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the 
> edge cases that had to be dealt with was the limitation imposed by 
> IndexWriter that docValues could only be updated if they already existed - if 
> a shard did not yet have a document with a value in the field where the update 
> was attempted, we would get an error.
> LUCENE-8316 seems to have removed this error, which I believe means we can 
> simplify & speed up some of the checks in Solr, and support this situation as 
> well, rather than falling back on a full "read stored fields & reindex" atomic 
> update.






[jira] [Created] (SOLR-12370) NullPointerException on MoreLikeThisComponent

2018-05-17 Thread Gilles Bodart (JIRA)
Gilles Bodart created SOLR-12370:


 Summary: NullPointerException on MoreLikeThisComponent
 Key: SOLR-12370
 URL: https://issues.apache.org/jira/browse/SOLR-12370
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: MoreLikeThis
Affects Versions: 7.3.1
Reporter: Gilles Bodart


I'm trying to use the MoreLikeThis component under a suggest call, but I 
receive an NPE every time (here's the stack trace):


{code:java}
java.lang.NullPointerException
at 
org.apache.solr.handler.component.MoreLikeThisComponent.process(MoreLikeThisComponent.java:127)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
...{code}

and here's the config of my requestHandlers:
{code:java}


true
10
default
true
default
wordbreak
true
true
10
true
true
5
5
10
5

true
_text_

on
content description title
true
html
b
/b


suggest
spellcheck
mlt
highlight




{code}
I also tried with 
{code:java}
on{code}

When I call
{code:java}
/mlt?df=_text_=pann=_text_
{code}
 it works fine but with
{code:java}
/suggest?df=_text_=pann=_text_
{code}
I get the NPE.

 

 






[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist

2018-05-17 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478912#comment-16478912
 ] 

Simon Willnauer commented on SOLR-12368:


I tried to do this but I can't get the Solr tests to pass. I spent an entire 
day on it, but the overhead is too big for me here, sorry. I would have loved 
to get this out too. Sorry, folks.

> in-place DV updates should no longer have to jump through hoops if field does 
> not yet exist
> ---
>
> Key: SOLR-12368
> URL: https://issues.apache.org/jira/browse/SOLR-12368
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the 
> edge cases that had to be dealt with was the limitation imposed by 
> IndexWriter that docValues could only be updated if they already existed - if 
> a shard did not yet have a document with a value in the field where the update 
> was attempted, we would get an error.
> LUCENE-8316 seems to have removed this error, which I believe means we can 
> simplify & speed up some of the checks in Solr, and support this situation as 
> well, rather than falling back on a full "read stored fields & reindex" atomic 
> update.






[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist

2018-05-17 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16478911#comment-16478911
 ] 

Adrien Grand commented on SOLR-12368:
-

Would be nice to be able to remove {{IndexWriter.getFieldNames}} as well, which 
was added in LUCENE-7659 only for this workaround.

> in-place DV updates should no longer have to jump through hoops if field does 
> not yet exist
> ---
>
> Key: SOLR-12368
> URL: https://issues.apache.org/jira/browse/SOLR-12368
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the 
> edge cases that had to be dealt with was the limitation imposed by 
> IndexWriter that docValues could only be updated if they already existed - if 
> a shard did not yet have a document with a value in the field where the update 
> was attempted, we would get an error.
> LUCENE-8316 seems to have removed this error, which I believe means we can 
> simplify & speed up some of the checks in Solr, and support this situation as 
> well, rather than falling back on a full "read stored fields & reindex" atomic 
> update.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1918 - Unstable!

2018-05-17 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1918/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/61)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10010_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10009_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1526631096577005400", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10010_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10009_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1526631096586879350",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10009_solr",   
"base_url":"http://127.0.0.1:10009/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10010_solr",   
"base_url":"http://127.0.0.1:10010/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1526631096586751500",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10010_solr",   
"base_url":"http://127.0.0.1:10010/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10009_solr",   
"base_url":"http://127.0.0.1:10009/solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/61)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10010_solr",
 
