[jira] [Commented] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-03 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499821#comment-16499821
 ] 

Simon Willnauer commented on LUCENE-8341:
-

New patch incorporating _FieldInfos#getSoftDeletesField_. I will push this later 
today.

>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft deleted but
> not hard deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since the
> number of documents can now be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any particular field to be used for soft deletes, and
> the statistic is maintained per segment.
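A minimal sketch of reading these statistics from a commit point, assuming the getSoftDelCount accessor this patch introduces (getDelCount is the existing hard-delete count):

{code:java}
// Read per-segment delete statistics from the latest commit point,
// without opening a full-blown IndexReader.
import java.nio.file.Paths;
import org.apache.lucene.index.SegmentCommitInfo;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SoftDeleteStats {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      for (SegmentCommitInfo info : infos) {
        // getSoftDelCount() is the new per-segment statistic; getDelCount()
        // is the existing hard-delete count.
        System.out.println(info.info.name
            + " softDeletes=" + info.getSoftDelCount()
            + " hardDeletes=" + info.getDelCount());
      }
    }
  }
}
{code}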



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-03 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8341:

Attachment: LUCENE-8341.patch

>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft deleted but
> not hard deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since the
> number of documents can now be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any particular field to be used for soft deletes, and
> the statistic is maintained per segment.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12361) Change _childDocuments to Map

2018-06-03 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499819#comment-16499819
 ] 

Cao Manh Dat commented on SOLR-12361:
-

[~dsmiley] In {{fastEstimate(map)}} and {{fastEstimate(collection)}} we only 
count an element if it is a primitive object (we do not go any further there).
{quote}line 115 could be changed to loop on the SolrInputFields instead of the 
field names to avoid doing an internal hashmap lookup on all field names.
{quote}
+1 for this one.

With the change in the pull request, we can change {{fastEstimate}} to {{estimate}} 
and merge all the duplications.
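A rough sketch of that suggestion, iterating the SolrInputField values directly rather than looking up each field name (hypothetical helper names, not the actual patch):

{code:java}
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.SolrInputField;

class DocSizeEstimator {
  static long estimate(SolrInputDocument doc) {
    long size = 0;
    // SolrInputDocument is a Map<String, SolrInputField>, so values() yields the
    // fields directly and avoids a hashmap lookup per field name.
    for (SolrInputField field : doc.values()) {
      size += estimate(field);
    }
    return size;
  }

  // stand-in for the per-field estimator in the patch
  static long estimate(SolrInputField field) {
    Object value = field.getValue();
    return value instanceof CharSequence ? ((CharSequence) value).length() : 8;
  }
}
{code}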

> Change _childDocuments to Map
> -
>
> Key: SOLR-12361
> URL: https://issues.apache.org/jira/browse/SOLR-12361
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12361.patch, SOLR-12361.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> During the discussion on SOLR-12298, there was a proposal to change 
> _childDocuments in SolrDocumentBase to a Map, to incorporate the relationship 
> between the parent and its child documents.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8335) Do not allow changing soft-deletes field

2018-06-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499812#comment-16499812
 ] 

ASF subversion and git services commented on LUCENE-8335:
-

Commit e7a0a12926c399758a4021715a7419e22e59dab6 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e7a0a12 ]

LUCENE-8335: Enforce soft-deletes field up-front

Soft deletes field must be marked as such once it's introduced
and can't be changed after the fact.

Co-authored-by: Nhat Nguyen 


> Do not allow changing soft-deletes field
> 
>
> Key: LUCENE-8335
> URL: https://issues.apache.org/jira/browse/LUCENE-8335
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Minor
> Attachments: LUCENE-8335.patch, LUCENE-8335.patch, LUCENE-8335.patch
>
>
> Today we do not enforce that an index uses a single soft-deletes field. A user 
> can create an index with one soft-deletes field, then open an IW with another 
> field or add an index with a different soft-deletes field. This should not be 
> allowed, and the error should be reported to users as soon as possible.
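A minimal sketch of the scenario being disallowed, assuming the IndexWriterConfig#setSoftDeletesField and softUpdateDocument APIs from the recent 7.x releases:

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class SoftDeletesFieldCheck {
  public static void main(String[] args) throws Exception {
    try (Directory dir = new RAMDirectory()) {
      IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer())
          .setSoftDeletesField("__soft_deletes");
      try (IndexWriter writer = new IndexWriter(dir, cfg)) {
        Document doc = new Document();
        doc.add(new StringField("id", "1", Field.Store.NO));
        writer.addDocument(doc);
        // mark the document as soft deleted via the configured field
        writer.softUpdateDocument(new Term("id", "1"), doc,
            new NumericDocValuesField("__soft_deletes", 1));
        writer.commit();
      }

      // Re-opening the same index with a different soft-deletes field is the
      // case this change rejects up-front (expected to throw an exception).
      IndexWriterConfig other = new IndexWriterConfig(new StandardAnalyzer())
          .setSoftDeletesField("some_other_field");
      new IndexWriter(dir, other).close();
    }
  }
}
{code}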



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8335) Do not allow changing soft-deletes field

2018-06-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499811#comment-16499811
 ] 

ASF subversion and git services commented on LUCENE-8335:
-

Commit 67b6593e7adbce76532e285cef42118e6cc3448f in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=67b6593 ]

LUCENE-8335: Enforce soft-deletes field up-front

Soft deletes field must be marked as such once it's introduced
and can't be changed after the fact.

Co-authored-by: Nhat Nguyen 


> Do not allow changing soft-deletes field
> 
>
> Key: LUCENE-8335
> URL: https://issues.apache.org/jira/browse/LUCENE-8335
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Minor
> Attachments: LUCENE-8335.patch, LUCENE-8335.patch, LUCENE-8335.patch
>
>
> Today we do not enforce that an index uses a single soft-deletes field. A user 
> can create an index with one soft-deletes field, then open an IW with another 
> field or add an index with a different soft-deletes field. This should not be 
> allowed, and the error should be reported to users as soon as possible.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192638234
  
--- Diff: solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -952,12 +941,15 @@ public void split(SplitIndexCommand cmd) throws IOException {
    *
    * @param cmd - cmd apply to IndexWriter
    * @param writer - IndexWriter to use
-   * @param updateTerm - used if this cmd results in calling {@link IndexWriter#updateDocument}
    */
-  private void updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer, Term updateTerm) throws IOException {
+  private Term updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer) throws IOException {
     assert null != cmd;
     final SchemaField uniqueKeyField = cmd.req.getSchema().getUniqueKeyField();
     final String uniqueKeyFieldName = null == uniqueKeyField ? null : uniqueKeyField.getName();
+    List docs = cmd.computeFinalFlattenedSolrDocs();
--- End diff --

Yeah, but that means we wouldn't have a way to figure out whether the update is 
a block, which is a parameter that needs to be passed to getIdTerm.
I'm not as familiar with the internals; would it always be false during an 
in-place update?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12209) add Paging Streaming Expression

2018-06-03 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499799#comment-16499799
 ] 

mosh edited comment on SOLR-12209 at 6/4/18 6:32 AM:
-

I have just uploaded a new patch in which the tests were moved to 
StreamDecoratorTest.


was (Author: moshebla):
I just uploaded a new patch in which the tests were moved to 
StreamDecoratorTest.

> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12209.patch, SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream class, 
> to add an offset to the stream. Currently this can only be done in code by 
> reading the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> {{this will offset the returned stream by 100 documents}}
>  
> [~joel.bernstein] what do you think?
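For context, a rough sketch of the manual offsetting described above ("reading the stream until the desired offset is reached"), using the SolrJ streaming API; a hedged sketch, not the proposed decorator itself:

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.TupleStream;

public class ManualPaging {
  /** Skips the first {@code start} tuples, then collects up to {@code rows}. */
  public static List<Tuple> page(TupleStream stream, int start, int rows) throws IOException {
    List<Tuple> pageTuples = new ArrayList<>();
    stream.open();
    try {
      // discard tuples until the desired offset is reached
      for (int i = 0; i < start; i++) {
        if (stream.read().EOF) {
          return pageTuples; // stream exhausted before the offset
        }
      }
      for (int i = 0; i < rows; i++) {
        Tuple tuple = stream.read();
        if (tuple.EOF) {
          break;
        }
        pageTuples.add(tuple);
      }
    } finally {
      stream.close();
    }
    return pageTuples;
  }
}
{code}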



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 633 - Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/633/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@5f34a90,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@7e1a3942]
 expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@5f34a90,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@7e1a3942]
 expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([B235FFDC6C682C40:D7F6A9ABCECB8443]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded(TestComputePlanAction.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.ca

[jira] [Updated] (SOLR-12444) Updating a cluster policy fails

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12444:
-
Environment: (was: If I create a policy like this
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Then I can never update this policy subsequently.

To reproduce, try changing the policy to 
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<3","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
The policy will never change. The workaround is to clear the policy by sending 
an empty array and then re-creating it.)

> Updating a cluster policy fails
> ---
>
> Key: SOLR-12444
> URL: https://issues.apache.org/jira/browse/SOLR-12444
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12444) Updating a cluster policy fails

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12444:
-
Description: 
If I create a policy like this
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Then I can never update this policy subsequently.

To reproduce, try changing the policy to 
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<3","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
The policy will never change. The workaround is to clear the policy by sending 
an empty array and then re-creating it.

> Updating a cluster policy fails
> ---
>
> Key: SOLR-12444
> URL: https://issues.apache.org/jira/browse/SOLR-12444
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> If I create a policy like this
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<4","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> Then I can never update this policy subsequently.
> To reproduce, try changing the policy to 
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<3","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> The policy will never change. The workaround is to clear the policy by 
> sending an empty array and then re-creating it.
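The workaround described above, sketched out: clear the policy with an empty array, then re-post the new one.

{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "set-cluster-policy": []
}' http://localhost:8983/solr/admin/autoscaling

curl -X POST -H 'Content-type:application/json' --data-binary '{
  "set-cluster-policy": [
    {"cores": "<3", "node": "#ANY"}
  ]
}' http://localhost:8983/solr/admin/autoscaling
{code}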



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12444) Updating a cluster policy fails

2018-06-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12444:


 Summary: Updating a cluster policy fails
 Key: SOLR-12444
 URL: https://issues.apache.org/jira/browse/SOLR-12444
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
 Environment: If I create a policy like this
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Then I can never update this policy subsequently.

To reproduce, try changing the policy to 
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<3","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
The policy will never change. The workaround is to clear the policy by sending 
an empty array and then re-creating it.
Reporter: Varun Thacker






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12443) Add a default policy set for solr test framework

2018-06-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12443:


 Summary: Add a default policy set for solr test framework
 Key: SOLR-12443
 URL: https://issues.apache.org/jira/browse/SOLR-12443
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
 Environment: The goal here is to increase test coverage for the policy 
framework.

We can add policies that we know will never get violated - e.g., do not allow 
more than 100 cores on any node.

But in doing so we will ensure that the policy framework is always being 
executed. I'd imagine bugs like SOLR-12358 would have been caught earlier 
this way.
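For example, a never-violated cluster policy of that sort could look roughly like this (a sketch only, not from the issue):

{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "set-cluster-policy": [
    {"cores": "<100", "node": "#ANY"}
  ]
}' http://localhost:8983/solr/admin/autoscaling
{code}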

 
Reporter: Varun Thacker






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12209) add Paging Streaming Expression

2018-06-03 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12209:

Attachment: (was: 0001-added-skip-and-limit-stream-decorators.patch)

> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream class, 
> to add an offset to the stream. Currently this can only be done in code by 
> reading the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> {{this will offset the returned stream by 100 documents}}
>  
> [~joel.bernstein] what do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12209) add Paging Streaming Expression

2018-06-03 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499751#comment-16499751
 ] 

mosh commented on SOLR-12209:
-

I accidentally added the tests to StreamingTest; I'll move them to 
StreamDecoratorTest.
Thanks.

> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3
>Reporter: mosh
>Priority: Major
> Attachments: 0001-added-skip-and-limit-stream-decorators.patch, 
> SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream class, 
> to add an offset to the stream. Currently this can only be done in code by 
> reading the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> {{this will offset the returned stream by 100 documents}}
>  
> [~joel.bernstein] what do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.h

2018-06-03 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499741#comment-16499741
 ] 

Mark Miller commented on SOLR-12439:


Yeah, I looked at 5, but it was still in beta. I saw a separate async 
project but did not look into whether 5 was incorporating that. For a variety 
of reasons, the Jetty HttpClient fits our needs very well, and these features 
have been out maturing for some time. I've had a much nicer experience with it 
than with Apache HttpClient in the past.

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.h
> 
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-9922.

   Resolution: Fixed
Fix Version/s: 7.4

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply 
> those updates to the index; we rely on log replay to apply them later. But at 
> the same time there are some updates that are also written to the current tlog 
> and applied to the index.
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will keep our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12442) Improve error message when a policy is violated while creating a collection

2018-06-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499739#comment-16499739
 ] 

Varun Thacker commented on SOLR-12442:
--

Looks like the full error from the overseer is either not propagated to the 
client, or the client isn't checking the underlying "caused by".

> Improve error message when a policy is violated while creating a collection
> ---
>
> Key: SOLR-12442
> URL: https://issues.apache.org/jira/browse/SOLR-12442
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> On master when I have this policy setup
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<4","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> Now if I try to create a collection where the policy will get violated , I'll 
> get the following message in as a response 
>  
> {code:java}
> {
>   "responseHeader":{
> "status":500,
> "QTime":13},
>   "Operation create caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Error getting replica locations",
>   "exception":{
> "msg":"Error getting replica locations",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Error getting replica locations",
> "trace":"org.apache.solr.common.SolrException: Error getting replica 
> locations\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:269)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:241)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.e

[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk1.8.0_172) - Build # 45 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/45/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testSearchRate

Error Message:
java.util.concurrent.ExecutionException: java.io.IOException: 
java.lang.NullPointerException

Stack Trace:
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([828C65886443FEE9:DFC47B01AB8558A6]:0)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager.request(SimCloudManager.java:621)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager$1.request(SimCloudManager.java:210)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.setupTest(TestLargeCluster.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.ExecutionException: java.io.IOException: 
ja

[jira] [Comment Edited] (SOLR-12069) Default operator parameter q.op ignored.

2018-06-03 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499474#comment-16499474
 ] 

Munendra S N edited comment on SOLR-12069 at 6/4/18 4:37 AM:
-

[~cstrobl]
I am facing the same issue.

From Solr 7, splitting on whitespace has been disabled by default (sow=false); 
in Solr 6, sow=true was the default.

So I tried passing sow=true in the request, and the results were as expected, i.e., 
in this case the number of matching products is 0 and *parsedquery_toString* is 
*+(+inStock:T +inStock:F)*.

Looks like this issue is due to the change in the sow default behavior:
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/parser/QueryParser.java#L560
 - the way the multiTerm query is handled

Also here:
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/parser/SolrQueryParserBase.java#L1014
 - should the operator be picked from the params? (I didn't understand why it is always a 
SHOULD clause)
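For reference, this is the reproduction request from the description with sow=true added:

{code}
http "http://localhost:8983/solr/techproducts/select?q=inStock:(true false)&q.op=AND&sow=true"
{code}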


was (Author: munendrasn):
[~cstrobl]
I am facing the same issue.

From Solr 7, splitting on whitespace has been disabled by default (sow=false); 
in Solr 6, sow=true was the default.

So I tried passing sow=true in the request, and the results were as expected, i.e., 
in this case the number of matching products is 0 and *parsedquery_toString* is 
*+(+inStock:T +inStock:F)*.

Looks like this issue is due to the change in the sow default behavior:
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/parser/QueryParser.java#L560
 - the way the multiTerm query is handled


> Default operator parameter q.op ignored.
> 
>
> Key: SOLR-12069
> URL: https://issues.apache.org/jira/browse/SOLR-12069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.2
>Reporter: Christoph Strobl
>Priority: Critical
>
> The {{q.op}} parameter as described in the [7.2 reference for the Standard 
> Query 
> Parser|http://lucene.apache.org/solr/guide/7_2/the-standard-query-parser.html#standard-query-parser-parameters]
>  does not get used when executing a query.
> Reproduce via _techproducts_ example.
>  # {{./bin/solr -e techproducts}}
>  # {{./bin/post -c techproducts example/exampledocs/*.xml}}
>  # {{http "http://localhost:8983/solr/techproducts/select?q=inStock:(true 
> false)&q.op=AND"}}
> The search above is expected to return {{0}} results but instead matches all 
> {{21}} entries.
> The response header lists the {{q.op}} parameter 
> {code:javascript}
> {
>   "responseHeader": {
> "QTime": 3,
> "params": {
>   "debugQuery": "on",
>   "q": "inStock:(true false)",
>   "q.op": "AND"
> },
> "status": 0
>   },
>   // ...
> }
> {code}
> The debug output is as follows:
> {code:javascript}
> debug": {
>   "QParser": "LuceneQParser",
>   "explain": {
>   "100-435805": "\n1.5869651 = sum of:\n  1.5869651 = weight(inStock:F in 
> 31) [SchemaSimilarity], result of:\n1.5869651 = score(doc=31,freq=1.0 = 
> termFreq=1.0\n), product of:\n  1.5869651 = idf, computed as log(1 + 
> (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n4.0 = docFreq\n  
>   21.0 = docCount\n  1.0 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1) from:\n1.0 = termFreq=1.0\n1.2 = parameter k1\n   
>  0.0 = parameter b (norms omitted for field)\n",
>   "6H500F0": "\n0.22884157 = sum of:\n  0.22884157 = weight(inStock:T in 
> 2) [SchemaSimilarity], result of:\n0.22884157 = score(doc=2,freq=1.0 = 
> termFreq=1.0\n), product of:\n  0.22884157 = idf, computed as log(1 + 
> (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n17.0 = docFreq\n 
>21.0 = docCount\n  1.0 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1) from:\n1.0 = termFreq=1.0\n1.2 = parameter k1\n   
>  0.0 = parameter b (norms omitted for field)\n",
>   "EN7800GTX/2DHTV/256M": "\n1.5869651 = sum of:\n  1.5869651 = 
> weight(inStock:F in 30) [SchemaSimilarity], result of:\n1.5869651 = 
> score(doc=30,freq=1.0 = termFreq=1.0\n), product of:\n  1.5869651 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n 
>4.0 = docFreq\n21.0 = docCount\n  1.0 = tfNorm, computed as 
> (freq * (k1 + 1)) / (freq + k1) from:\n1.0 = termFreq=1.0\n
> 1.2 = parameter k1\n0.0 = parameter b (norms omitted for field)\n",
>   "F8V7067-APL-KIT": "\n1.5869651 = sum of:\n  1.5869651 = 
> weight(inStock:F in 3) [SchemaSimilarity], result of:\n1.5869651 = 
> score(doc=3,freq=1.0 = termFreq=1.0\n), product of:\n  1.5869651 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n 
>4.0 = docFreq\n 

[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499730#comment-16499730
 ] 

ASF subversion and git services commented on SOLR-9922:
---

Commit 2a6f4862f96456c2aee4dcd3a51a5eb52ff8a0ad in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2a6f486 ]

SOLR-9922: Write buffering updates to another tlog


> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply 
> those updates to the index; we rely on log replay to apply them later. But at 
> the same time there are some updates that are also written to the current tlog 
> and applied to the index.
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will keep our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499729#comment-16499729
 ] 

ASF subversion and git services commented on SOLR-9922:
---

Commit ab316bbc91c273b13c851a38ad5d14ef64ab3eec in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab316bb ]

SOLR-9922: Write buffering updates to another tlog


> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply 
> those updates to the index; we rely on log replay to apply them later. But at 
> the same time there are some updates that are also written to the current tlog 
> and applied to the index.
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will keep our code base simpler. It also makes replica recovery for 
> SOLR-9835 easier, because after peersync succeeds we can copy the new updates 
> from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12424) Search trigger does not accept handler

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12424:
-
Component/s: AutoScaling

> Search trigger does not accept handler
> --
>
> Key: SOLR-12424
> URL: https://issues.apache.org/jira/browse/SOLR-12424
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> This ref guide page ( 
> [http://lucene.apache.org/solr/guide/7_3/solrcloud-autoscaling-triggers.html#search-rate-trigger]
>  ) provides an example JSON for a search trigger.
> If I post that on master I get the following error
> {code:java}
> ERROR - 2018-05-30 02:29:54.837; [   ] 
> org.apache.solr.handler.RequestHandlerBase; 
> org.apache.solr.api.ApiBag$ExceptionWithErrObject: Error in command payload, 
> errors: [{set-trigger={name=search_rate_trigger, event=searchRate, 
> collection=test, handler=/select, rate=100.0, waitFor=1m, enabled=true, 
> actions=[{name=compute_plan, class=solr.ComputePlanAction}, 
> {name=execute_plan, class=solr.ExecutePlanAction}]}, errorMessages=[Error 
> validating trigger config search_rate_trigger: 
> TriggerValidationException{name=search_rate_trigger, 
> details='{handler=unknown property}'}]}], 
>  at 
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.processOps(AutoScalingHandler.java:210)
>  at 
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:148)
>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
>  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
>  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)
>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
>  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> ...{code}
> From the JSON payload if I remove the "handler" key it works. 
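For reference, the same trigger config with the rejected handler property removed, reconstructed from the error output above (a sketch, not verified against the ref guide):

{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "set-trigger": {
    "name": "search_rate_trigger",
    "event": "searchRate",
    "collection": "test",
    "rate": 100.0,
    "waitFor": "1m",
    "enabled": true,
    "actions": [
      {"name": "compute_plan", "class": "solr.ComputePlanAction"},
      {"name": "execute_plan", "class": "solr.ExecutePlanAction"}
    ]
  }
}' http://localhost:8983/solr/admin/autoscaling
{code}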



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12442) Improve error message when a policy is violated while creating a collection

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12442:
-
Component/s: AutoScaling

> Improve error message when a policy is violated while creating a collection
> ---
>
> Key: SOLR-12442
> URL: https://issues.apache.org/jira/browse/SOLR-12442
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Varun Thacker
>Priority: Major
>
> On master when I have this policy setup
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<4","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> Now if I try to create a collection where the policy will get violated , I'll 
> get the following message in as a response 
>  
> {code:java}
> {
>   "responseHeader":{
> "status":500,
> "QTime":13},
>   "Operation create caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  Error getting replica locations",
>   "exception":{
> "msg":"Error getting replica locations",
> "rspCode":500},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Error getting replica locations",
> "trace":"org.apache.solr.common.SolrException: Error getting replica 
> locations\n\tat 
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:269)\n\tat
>  
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:241)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
>  
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
>  
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
>  org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
>

[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192623213
  
--- Diff: solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -952,12 +941,15 @@ public void split(SplitIndexCommand cmd) throws IOException {
    *
    * @param cmd - cmd apply to IndexWriter
    * @param writer - IndexWriter to use
-   * @param updateTerm - used if this cmd results in calling {@link IndexWriter#updateDocument}
    */
-  private void updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer, Term updateTerm) throws IOException {
+  private Term updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer) throws IOException {
     assert null != cmd;
     final SchemaField uniqueKeyField = cmd.req.getSchema().getUniqueKeyField();
     final String uniqueKeyFieldName = null == uniqueKeyField ? null : uniqueKeyField.getName();
+    List docs = cmd.computeFinalFlattenedSolrDocs();
--- End diff --

I don't think these lines you've added should be placed here, as it's 
computing final flattened docs when they may not even be needed -- which is 
when cmd.isInPlaceUpdate.  You could move this entirely to updateDocument()?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192623693
  
--- Diff: solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -972,27 +964,22 @@ private void updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter writer, Term
       log.debug("updateDocValues({})", cmd);
       writer.updateDocValues(updateTerm, fieldsToUpdate.toArray(new Field[fieldsToUpdate.size()]));
     } else {
-      updateDocument(cmd, writer, updateTerm);
+      updateDocument(cmd, docs, writer, updateTerm, isBlock);
--- End diff --

IMO if updateDocument is a method then the code above underneath 
cmd.isInPlaceUpdate should be a method as well -- `updateDocValues`, to be 
balanced. Or inline updateDocument.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192622147
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java ---
@@ -270,4 +275,8 @@ public String toString() {
   public boolean isInPlaceUpdate() {
     return (prevVersion >= 0);
   }
+
+  public boolean hasUpdateTerm() {
--- End diff --

ehh; this method isn't helpful as it's just a simple field value (not 
something expensive to figure out)


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192622725
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -333,25 +331,18 @@ private void allowDuplicateUpdate(AddUpdateCommand 
cmd) throws IOException {
   }
 
   private void doNormalUpdate(AddUpdateCommand cmd) throws IOException {
-Term updateTerm;
-Term idTerm = getIdTerm(cmd);
-boolean del = false;
-if (cmd.updateTerm == null) {
-  updateTerm = idTerm;
-} else {
-  // this is only used by the dedup update processor
-  del = true;
-  updateTerm = cmd.updateTerm;
-}
+boolean del = cmd.hasUpdateTerm();
 
 RefCounted iw = solrCoreState.getIndexWriter(core);
 try {
   IndexWriter writer = iw.get();
 
-  updateDocOrDocValues(cmd, writer, updateTerm);
+  Term idTerm = updateDocOrDocValues(cmd, writer);
+  Term updateTerm = !del ? idTerm : cmd.updateTerm;
--- End diff --

You can move this declaration closer to where you will actually use it, 
which is inside `if (del)`.  That in turn suggests it will in fact be 
cmd.updateTerm and not idTerm; right?
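
A minimal sketch of that narrower scoping (names taken from the diff; the 
elided deletion logic is unchanged):

{code:java}
Term idTerm = updateDocOrDocValues(cmd, writer);
if (del) {
  // only needed on this branch, and here it is always the dedup processor's term
  Term updateTerm = cmd.updateTerm;
  // ... existing "delete older duplicates indexed under updateTerm" logic ...
}
{code}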


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192623800
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -952,12 +941,15 @@ public void split(SplitIndexCommand cmd) throws 
IOException {
*
* @param cmd - cmd apply to IndexWriter
* @param writer - IndexWriter to use
-   * @param updateTerm - used if this cmd results in calling {@link 
IndexWriter#updateDocument}
*/
-  private void updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter 
writer, Term updateTerm) throws IOException {
+  private Term updateDocOrDocValues(AddUpdateCommand cmd, IndexWriter 
writer) throws IOException {
 assert null != cmd;
 final SchemaField uniqueKeyField = 
cmd.req.getSchema().getUniqueKeyField();
 final String uniqueKeyFieldName = null == uniqueKeyField ? null : 
uniqueKeyField.getName();
+List docs = cmd.computeFinalFlattenedSolrDocs();
+boolean isBlock = docs.size() > 1;
+Term idTerm = getIdTerm(cmd, isBlock);
+Term updateTerm = cmd.hasUpdateTerm() ? cmd.updateTerm: idTerm;
 
 if (cmd.isInPlaceUpdate()) {
--- End diff --

IMO since updateDocument is a method then the code here underneath 
cmd.isInPlaceUpdate should be a method as well -- `updateDocValues` to be 
balanced.  Or inline updateDocument.   (I know you didn't touch this)
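
For what it's worth, a hedged sketch of the balanced shape being suggested; the 
extracted updateDocValues helper is illustrative only and not part of the patch:

{code:java}
if (cmd.isInPlaceUpdate()) {
  updateDocValues(cmd, writer, updateTerm);                // extracted helper, mirrors updateDocument
} else {
  updateDocument(cmd, docs, writer, updateTerm, isBlock);  // existing method from the patch
}
{code}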


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192622171
  
--- Diff: solr/core/src/java/org/apache/solr/update/AddUpdateCommand.java 
---
@@ -66,6 +67,10 @@ public AddUpdateCommand(SolrQueryRequest req) {
  super(req);
}
 
+  public static Iterable 
toDocumentsIter(Collection docs, IndexSchema schema) {
--- End diff --

Yes I like this better here


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-06-03 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499723#comment-16499723
 ] 

Varun Thacker commented on SOLR-12358:
--

I tried out the example mentioned here against master, and I don't get an 
error. But when I violate the policy while creating the collection, the error 
message isn't very helpful. I filed SOLR-12442 for it.

> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, SOLR-12358.patch, SOLR-12358.patch, 
> SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.ecl

[jira] [Created] (SOLR-12442) Improve error message when a policy is violated while creating a collection

2018-06-03 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12442:


 Summary: Improve error message when a policy is violated while 
creating a collection
 Key: SOLR-12442
 URL: https://issues.apache.org/jira/browse/SOLR-12442
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: On master when I have this policy setup
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Now if I try to create a collection where the policy will get violated, I'll 
get the following message as a response 

 
{code:java}
{
  "responseHeader":{
"status":500,
"QTime":13},
  "Operation create caused 
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
 Error getting replica locations",
  "exception":{
"msg":"Error getting replica locations",
"rspCode":500},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Error getting replica locations",
"trace":"org.apache.solr.common.SolrException: Error getting replica 
locations\n\tat 
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:269)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:241)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n\tat
 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760)\

[jira] [Updated] (SOLR-12442) Improve error message when a policy is violated while creating a collection

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12442:
-
Description: 
On master when I have this policy setup
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Now if I try to create a collection where the policy will get violated, I'll 
get the following message as a response 

 
{code:java}
{
  "responseHeader":{
"status":500,
"QTime":13},
  "Operation create caused 
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
 Error getting replica locations",
  "exception":{
"msg":"Error getting replica locations",
"rspCode":500},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Error getting replica locations",
"trace":"org.apache.solr.common.SolrException: Error getting replica 
locations\n\tat 
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:269)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:241)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n\tat
 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678)\n\tat
 java.lang.Thread.run(Thread.java:745)\n",
"code":500}}{code}
Which doesn't tell the user much about the error

[jira] [Updated] (SOLR-12442) Improve error message when a policy is violated while creating a collection

2018-06-03 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12442:
-
Environment: (was: On master when I have this policy setup
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
"set-cluster-policy": [
{"cores": "<4","node": "#ANY"}
]
}' http://localhost:8983/solr/admin/autoscaling{code}
Now if I try to create a collection where the policy will get violated, I'll 
get the following message as a response 

 
{code:java}
{
  "responseHeader":{
"status":500,
"QTime":13},
  "Operation create caused 
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
 Error getting replica locations",
  "exception":{
"msg":"Error getting replica locations",
"rspCode":500},
  "error":{
"metadata":[
  "error-class","org.apache.solr.common.SolrException",
  "root-error-class","org.apache.solr.common.SolrException"],
"msg":"Error getting replica locations",
"trace":"org.apache.solr.common.SolrException: Error getting replica 
locations\n\tat 
org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:269)\n\tat
 
org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:241)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:324)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:531)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)\n\tat 
org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)\n\tat 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)\n\tat
 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)\n\tat
 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:760)\n\tat
 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:678)\n\tat
 java.lang.Thread.run(Thread.java:745)\n",
"code":500}}{code}
Which doesn't tell the user much about 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1907 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1907/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

16 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([3B4BAF9DD2C6A33E:68F2ED2D30D736C4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([

[JENKINS] Lucene-Solr-repro - Build # 752 - Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/752/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/234/consoleText

[repro] Revision: 88400a14716f163715eac82a35be90e6e3718208

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestCloudRecovery 
-Dtests.seed=A4FFE3FB0788B66D -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-RS -Dtests.timezone=SystemV/YST9YDT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=A4FFE3FB0788B66D 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-PR -Dtests.timezone=America/Aruba -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DistribJoinFromCollectionTest 
-Dtests.method=testScore -Dtests.seed=A4FFE3FB0788B66D -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=hi-IN -Dtests.timezone=Atlantic/Stanley -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestComputePlanAction 
-Dtests.method=testNodeAdded -Dtests.seed=A4FFE3FB0788B66D -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=is-IS -Dtests.timezone=America/Port_of_Spain 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
3dc4fa199c175ed6351f66bac5c23c73b4e3f89a
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 88400a14716f163715eac82a35be90e6e3718208

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestCloudRecovery
[repro]   DistribJoinFromCollectionTest
[repro]   IndexSizeTriggerTest
[repro]   TestComputePlanAction
[repro] ant compile-test

[...truncated 3317 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestCloudRecovery|*.DistribJoinFromCollectionTest|*.IndexSizeTriggerTest|*.TestComputePlanAction"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=A4FFE3FB0788B66D -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sr-RS -Dtests.timezone=SystemV/YST9YDT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 5933 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DistribJoinFromCollectionTest
[repro]   0/5 failed: org.apache.solr.cloud.TestCloudRecovery
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1558 - Still Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1558/

4 tests failed.
FAILED:  org.apache.solr.cloud.DistributedQueueTest.testPeekElements

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([945DB15E289F10C:F46B613432B0A511]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.DistributedQueueTest.testPeekElements(DistributedQueueTest.java:265)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([945DB15E289F10C:8111E4CF4C759CF4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apach

[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499676#comment-16499676
 ] 

Cao Manh Dat commented on SOLR-9922:


Tests seem happy, I will do some review before committing the patch.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply those 
> updates to the index. Then we rely on log replay to apply those updates to the 
> index. But at the same time some updates are also written to the current tlog and 
> applied to the index. 
> For example, during peersync, if new updates come to a replica we will end up 
> with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> By doing this, it will keep our code base simpler. It also makes replica 
> recovery for SOLR-9835 easier. Because after peersync succeeds we can copy new 
> updates from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
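
A toy, self-contained model of that final step (tlog entries modeled as plain 
strings; this is not Solr's UpdateLog/TransactionLog API, only an illustration 
of appending the buffered updates after peersync succeeds):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class BufferTlogAppendSketch {
  public static void main(String[] args) {
    // updates applied during peersync, already in the current tlog
    List<String> tlog = new ArrayList<>(Arrays.asList("old1", "old2", "old3"));
    // buffered updates written to the temporary tlog while peersync was running
    List<String> bufferTlog = Arrays.asList("new1", "new2", "new3");
    // after peersync succeeds, append the buffered updates to the current tlog
    tlog.addAll(bufferTlog);
    System.out.println(tlog); // [old1, old2, old3, new1, new2, new3]
  }
}
{code}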



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 234 - Still Failing

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/234/

No tests ran.

Build Log:
[...truncated 24220 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2201 links (1756 relative) to 2974 anchors in 229 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

r

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 629 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/629/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@3e47b224,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@64ad9984]
 expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@3e47b224,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@64ad9984]
 expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([BAEF5DD64896651F:DF2C0BA1EA35CD1C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded(TestComputePlanAction.java:313)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSui

[jira] [Commented] (SOLR-12209) add Paging Streaming Expression

2018-06-03 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499594#comment-16499594
 ] 

Joel Bernstein commented on SOLR-12209:
---

The implementations look good. We'll need to add the Stream Expression tests. 
These would go in StreamDecoratorTest. I have no problem adding the tests and 
committing. If you want to add the tests that's fine too, let me know which you 
prefer.

You'll also need to add the functions to Lang.java and TestLang for the 
Streaming Expressions to be recognized by the /stream handler.
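
Registration in Lang.java is one withFunctionName call per function; a hedged 
sketch, assuming the decorators from the attached patch end up named skip/limit 
with matching TupleStream classes (the final names may differ):

{code:java}
// in Lang.java, alongside the existing withFunctionName registrations
streamFactory
    .withFunctionName("skip", SkipStream.class)    // hypothetical class name from the patch
    .withFunctionName("limit", LimitStream.class); // hypothetical class name from the patch
{code}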

> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3
>Reporter: mosh
>Priority: Major
> Attachments: 0001-added-skip-and-limit-stream-decorators.patch, 
> SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream class, 
> to add an offset to the stream. Currently this can only be done in code by 
> reading the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> {{this will offset the returned stream by 100 documents}}
>  
> [~joel.bernstein] what do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7354 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7354/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=34465000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=34465000
at 
__randomizedtesting.SeedInfo.seed([71C011736F403E4D:49AC6256FB909C0B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=12095000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=12095000
at 
__randomizedtest

[GitHub] lucene-solr pull request #392: LUCENE-8345 - add wrapper class constructors ...

2018-06-03 Thread michaelbraun
GitHub user michaelbraun opened a pull request:

https://github.com/apache/lucene-solr/pull/392

LUCENE-8345 - add wrapper class constructors to forbiddenapis



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/michaelbraun/lucene-solr 
remove-constructor-wrapper-classes

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #392


commit fb6574100e493ed9162265133f2cbe967746c166
Author: Michael Braun 
Date:   2018-06-03T19:40:50Z

LUCENE-8345 - add wrapper class constructors to forbiddenapis




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans

2018-06-03 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499542#comment-16499542
 ] 

Yonik Seeley commented on SOLR-5211:


If we check for \_root\_ in the index, everything could stay backward compatible (and 
avoid the need for a schema update + reindex).

If parent-child docs are being used, then updates could use two update terms (one 
for id and one for \_root\_).
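
A minimal SolrJ sketch of that idea, assuming the parent's \_root\_ value equals its id 
(the collection name, URL and ids here are placeholders, nothing decided on this issue):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class ReindexParentBlock {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      String parentId = "parent-1";

      // New version of the parent, this time without children.
      SolrInputDocument parent = new SolrInputDocument();
      parent.addField("id", parentId);

      UpdateRequest req = new UpdateRequest();
      // Delete the whole old block via _root_ (the second update term) ...
      req.deleteByQuery("_root_:" + parentId);
      // ... then add the new childless parent under its id (the first update term).
      req.add(parent);
      req.process(client, "collection1");
      client.commit("collection1");
    }
  }
}
{code}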

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-5211.patch, SOLR-5211.patch
>
>
> If I have a parent with children in the index, I can send an update omitting the 
> children; as a result the old children become orphaned. 
> I suppose the separate \_root_ field makes much trouble. I propose to extend the 
> notion of uniqueKey and let it span across blocks, which makes updates 
> unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12441) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12441:

Description: 
As discussed in [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], 
there ought to be an URP to add metadata fields to childDocuments in order to 
allow a transformer to rebuild the original document hierarchy.

{quote}I propose we add the following fields:
 # __nestParent__
 # _nestLevel_
 # __nestPath__

__nestParent__: This field will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

_nestLevel_: This field will store the level of the specified field in the 
document, using an int value. This field can be used for the parentFilter, 
eliminating the need to provide a parentFilter, which will be set by default as 
"_level_:queriedFieldLevel".

__nestPath__: This field will contain the full path, separated by a specific 
reserved char, e.g. '.'
 for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name in different levels of 
the document, filtering using the level key if needed.
{quote}
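
A small sketch of the kind of input these fields would describe, using SolrJ nested 
documents; the metadata field names are taken from the proposal above and the values 
shown in the comments are only illustrative:

{code:java}
import org.apache.solr.common.SolrInputDocument;

public class NestedDocExample {
  public static SolrInputDocument buildBlock() {
    // grandparent -> parent -> child, as the URP would see the block on the way in
    SolrInputDocument first = new SolrInputDocument();
    first.addField("id", "1");

    SolrInputDocument second = new SolrInputDocument();
    second.addField("id", "1.1");

    SolrInputDocument third = new SolrInputDocument();
    third.addField("id", "1.1.1");

    second.addChildDocument(third);
    first.addChildDocument(second);

    // The proposed URP would then tag each document with metadata such as:
    //   _nestParent_ = id of the enclosing document (e.g. "1.1" for doc "1.1.1")
    //   _nestLevel_  = depth in the block (0, 1, 2, ...)
    //   _nestPath_   = the full path of nested fields, e.g. "first.second.third"
    return first;
  }
}
{code}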

  was:
As discussed in [SOLR_12298|https://issues.apache.org/jira/browse/SOLR-12298], 
there ought to be an URP to add metadata fields to childDocuments in order to 
allow a transformer to rebuild the original document hierarchy.

{quote}I propose we add the following fields:
 # __nestParent__
 # _nestLevel_
 # __nestPath__

__nestParent__: This field wild will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

_nestLevel_: This field will store the level of the specified field in the 
document, using an int value. This field can be used for the parentFilter, 
eliminating the need to provide a parentFilter, which will be set by default as 
"_level_:queriedFieldLevel".

_nestLevel_: This field will contain the full path, separated by a specific 
reserved char e.g., '.'
 for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name in different levels of 
the document, filtering using the level key if needed.
{quote}


> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 662 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/662/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

9 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([D64F8AD5393D8F1:34EA41ED7C6C110F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
should have fired an event

Stack Trace:
java.lang.AssertionError: should have fired an event
at 
__randomizedtesting.SeedInfo.seed([D64F8AD5393D8F1:6EAFCE2FCA5CAB

[jira] [Commented] (LUCENE-8335) Do not allow changing soft-deletes field

2018-06-03 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499529#comment-16499529
 ] 

Lucene/Solr QA commented on LUCENE-8335:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
51s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
34s{color} | {color:green} codecs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 24m 
36s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
12s{color} | {color:green} highlighter in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} memory in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
27s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 41s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 29s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.TestDistributedSearch |
|   | solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8335 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926255/LUCENE-8335.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 3dc4fa1 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/25/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/25/testReport/ |
| modules | C: lucene/codecs lucene/core lucene/highlighter lucene/memory 
lucene/test-framework solr/core U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/25/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Do not allow changing soft-deletes field
> 
>
> Key: LUCENE-8335
> URL: https://issues.apache.org/jira/browse/LUCENE-8335
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Assignee: Simon Willnauer
>Priority: Minor
> Attachments: LUCENE-8335.patch, LUCENE-8335.patch, LUCENE-8335.patch
>
>
> Today we do not enforce that an index uses a single soft-deletes field. A user 
> can create an index with one soft-deletes field, then open an IW with another 
> field or add an index with a different soft-deletes field. This should not be 
> allowed, and the error should be reported to users as soon as possible.
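
A minimal sketch of the misuse described above, assuming an index already written with 
soft-deletes field "soft_del" (the directory path and field names are placeholders); 
with this change the second writer would be rejected instead of silently mixing fields:

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class SoftDeletesFieldMismatch {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get("/tmp/index"))) {
      // First writer records "soft_del" as the soft-deletes field.
      IndexWriterConfig cfg1 = new IndexWriterConfig(new StandardAnalyzer())
          .setSoftDeletesField("soft_del");
      new IndexWriter(dir, cfg1).close();

      // Re-opening with a different soft-deletes field is the case that
      // should now fail fast rather than be silently accepted.
      IndexWriterConfig cfg2 = new IndexWriterConfig(new StandardAnalyzer())
          .setSoftDeletesField("other_del");
      new IndexWriter(dir, cfg2).close();
    }
  }
}
{code}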



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8345) Add wrapper class constructors to forbiddenapis

2018-06-03 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499526#comment-16499526
 ] 

Uwe Schindler commented on LUCENE-8345:
---

+1

> Add wrapper class constructors to forbiddenapis
> ---
>
> Key: LUCENE-8345
> URL: https://issues.apache.org/jira/browse/LUCENE-8345
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Minor
>
> Wrapper classes for the Java primitives (Boolean, Byte, Short, Character, 
> Integer, Long, Float, Double) have constructors which will always create new 
> objects. These constructors are officially deprecated as of Java 9 and it is 
> recommended to use the public static methods since these can reuse the same 
> underlying objects. In 99% of cases we should be doing this, so these 
> constructors should be added to forbiddenapis and code corrected to use 
> autoboxing or call the static methods (.valueOf, .parse*) explicitly. 
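
As a quick illustration of the change being asked for above (plain JDK calls, nothing 
project-specific):

{code:java}
public class WrapperCtorExample {
  public static void main(String[] args) {
    Integer a = new Integer(42);        // deprecated since Java 9; always allocates a new object
    Integer b = Integer.valueOf(42);    // preferred: may reuse a cached instance
    Integer c = 42;                     // autoboxing compiles to Integer.valueOf(42)
    long parsed = Long.parseLong("42"); // parse* methods for converting strings to primitives

    System.out.println(a + " " + b + " " + c + " " + parsed);
  }
}
{code}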



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2554 - Still Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2554/

8 tests failed.
FAILED:  
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithSNB

Error Message:
expected:<7> but was:<6>

Stack Trace:
java.lang.AssertionError: expected:<7> but was:<6>
at 
__randomizedtesting.SeedInfo.seed([2D876800A0E19369:E41B6CB31FE5F51A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.checkCM(ConfusionMatrixGeneratorTest.java:110)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest.testGetConfusionMatrixWithSNB(ConfusionMatrixGeneratorTest.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.graph.GraphTest

Error Message:
Solr servers failed to register with ZK. Current count: 1; Expected count: 2

Stack Trace:
java.lang.IllegalStateException: Solr servers failed to register with ZK. 
Current count: 1; Expected count: 2
at __randomizedtesting.SeedInfo.se

[jira] [Updated] (LUCENE-8345) Add wrapper class constructors to forbiddenapis

2018-06-03 Thread Michael Braun (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Braun updated LUCENE-8345:
--
Environment: (was: Wrapper classes for the Java primitives (Boolean, 
Byte, Short, Character, Integer, Long, Float, Double) have constructors which 
will always create new objects. These constructors are officially deprecated as 
of Java 9 and it is recommended to use the public static methods since these 
can reuse the same underlying objects. In 99% of cases we should be doing this, 
so these constructors should be added to forbiddenapis and code corrected to 
use autoboxing or call the static methods (.valueOf, .parse*) explicitly. )
Description: Wrapper classes for the Java primitives (Boolean, Byte, Short, 
Character, Integer, Long, Float, Double) have constructors which will always 
create new objects. These constructors are officially deprecated as of Java 9 
and it is recommended to use the public static methods since these can reuse 
the same underlying objects. In 99% of cases we should be doing this, so these 
constructors should be added to forbiddenapis and code corrected to use 
autoboxing or call the static methods (.valueOf, .parse*) explicitly. 

> Add wrapper class constructors to forbiddenapis
> ---
>
> Key: LUCENE-8345
> URL: https://issues.apache.org/jira/browse/LUCENE-8345
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael Braun
>Priority: Minor
>
> Wrapper classes for the Java primitives (Boolean, Byte, Short, Character, 
> Integer, Long, Float, Double) have constructors which will always create new 
> objects. These constructors are officially deprecated as of Java 9 and it is 
> recommended to use the public static methods since these can reuse the same 
> underlying objects. In 99% of cases we should be doing this, so these 
> constructors should be added to forbiddenapis and code corrected to use 
> autoboxing or call the static methods (.valueOf, .parse*) explicitly. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8345) Add wrapper class constructors to forbiddenapis

2018-06-03 Thread Michael Braun (JIRA)
Michael Braun created LUCENE-8345:
-

 Summary: Add wrapper class constructors to forbiddenapis
 Key: LUCENE-8345
 URL: https://issues.apache.org/jira/browse/LUCENE-8345
 Project: Lucene - Core
  Issue Type: Improvement
 Environment: Wrapper classes for the Java primitives (Boolean, Byte, 
Short, Character, Integer, Long, Float, Double) have constructors which will 
always create new objects. These constructors are officially deprecated as of 
Java 9 and it is recommended to use the public static methods since these can 
reuse the same underlying objects. In 99% of cases we should be doing this, so 
these constructors should be added to forbiddenapis and code corrected to use 
autoboxing or call the static methods (.valueOf, .parse*) explicitly. 
Reporter: Michael Braun






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1906 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1906/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringAbortWithThreadsOnlyOnce

Error Message:
MockDirectoryWrapper: cannot close: there are still 23 open files: {_m.cfs=1, 
_7.cfs=1, _c.cfs=1, _h.cfs=1, _6.cfs=1, _d.cfs=1, _i.cfs=1, _0.cfs=1, _5.cfs=1, 
_j.cfs=1, _9.cfs=1, _e.cfs=1, _a.cfs=1, _n.cfs=1, _b.cfs=1, _g.cfs=1, _c.nvd=1, 
_3.cfs=1, _4.cfs=1, _k.cfs=1, _f.cfs=1, _8.cfs=1, _c.cfe=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
23 open files: {_m.cfs=1, _7.cfs=1, _c.cfs=1, _h.cfs=1, _6.cfs=1, _d.cfs=1, 
_i.cfs=1, _0.cfs=1, _5.cfs=1, _j.cfs=1, _9.cfs=1, _e.cfs=1, _a.cfs=1, _n.cfs=1, 
_b.cfs=1, _g.cfs=1, _c.nvd=1, _3.cfs=1, _4.cfs=1, _k.cfs=1, _f.cfs=1, _8.cfs=1, 
_c.cfe=1}
at 
__randomizedtesting.SeedInfo.seed([113D581A3E105A8A:466B7FFB9BA11FD4]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
at 
org.apache.lucene.index.TestIndexWriterWithThreads._testMultipleThreadsFailure(TestIndexWriterWithThreads.java:341)
at 
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringAbortWithThreadsOnlyOnce(TestIndexWriterWithThreads.java:464)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.r

[JENKINS] Lucene-Solr-repro - Build # 749 - Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/749/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/70/consoleText

[repro] Revision: 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[repro] Repro line:  ant test  -Dtestcase=AutoScalingHandlerTest 
-Dtests.method=testReadApi -Dtests.seed=4F6E0A37FED735A7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr-HR 
-Dtests.timezone=America/Caracas -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=4F6E0A37FED735A7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-MT 
-Dtests.timezone=Europe/Isle_of_Man -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=4F6E0A37FED735A7 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=el-CY -Dtests.timezone=Europe/Helsinki -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPISolrJTest 
-Dtests.method=testCreateCollWithDefaultClusterProperties 
-Dtests.seed=4F6E0A37FED735A7 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ga-IE -Dtests.timezone=Etc/GMT-2 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestComputePlanAction 
-Dtests.method=testNodeAdded -Dtests.seed=4F6E0A37FED735A7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=zh 
-Dtests.timezone=America/Anchorage -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
3dc4fa199c175ed6351f66bac5c23c73b4e3f89a
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AutoScalingHandlerTest
[repro]   TestLargeCluster
[repro]   CollectionsAPISolrJTest
[repro]   TestComputePlanAction
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3299 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 
-Dtests.class="*.AutoScalingHandlerTest|*.TestLargeCluster|*.CollectionsAPISolrJTest|*.TestComputePlanAction|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror  -Dtests.seed=4F6E0A37FED735A7 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr-HR 
-Dtests.timezone=America/Caracas -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 18909 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.CollectionsAPISolrJTest
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.AutoScalingHandlerTest
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1039 - Still Failing

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1039/

No tests ran.

Build Log:
[...truncated 24176 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2214 links (1768 relative) to 3105 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-leve

[jira] [Commented] (SOLR-12069) Default operator parameter q.op ignored.

2018-06-03 Thread Munendra S N (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499474#comment-16499474
 ] 

Munendra S N commented on SOLR-12069:
-

[~cstrobl]
I'm facing the same issue.

From Solr 7 on, split on whitespace has been disabled by default (sow=false).
In Solr 6, sow=true was the default.

So I tried passing sow=true in the request, and the results were as expected, i.e. 
in this case the number of products is 0 and *parsedquery_toString* is 
*+(+inStock:T +inStock:F)*

Looks like this issue is due to the change in the sow default behavior
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/parser/QueryParser.java#L560
 - the way the multiTerm query is handled.
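
For reference, a small SolrJ sketch of the workaround described above (the collection 
name is the techproducts example; the parameters are just the ones from this issue):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class QOpSowExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("inStock:(true false)");
      q.set("q.op", "AND");   // default operator
      q.set("sow", "true");   // restore the pre-7 split-on-whitespace behavior
      QueryResponse rsp = client.query("techproducts", q);
      // With sow=true the query parses to +(+inStock:T +inStock:F) and matches 0 docs.
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}
{code}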


> Default operator parameter q.op ignored.
> 
>
> Key: SOLR-12069
> URL: https://issues.apache.org/jira/browse/SOLR-12069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.2
>Reporter: Christoph Strobl
>Priority: Critical
>
> The {{q.op}} parameter as described in the [7.2 reference for the Standard 
> Query 
> Parser|http://lucene.apache.org/solr/guide/7_2/the-standard-query-parser.html#standard-query-parser-parameters]
>  does not get used when executing a query.
> Reproduce via _techproducts_ example.
>  # {{./bin/solr -e techproducts}}
>  # {{./bin/post -c techproducts example/exampledocs/*.xml}}
>  # {{http "http://localhost:8983/solr/techproducts/select?q=inStock:(true 
> false)&q.op=AND"}}
> The search above is expected to return {{0}} results but instead matches all 
> {{21}} entries.
> The response header lists the {{q.op}} parameter 
> {code:javascript}
> {
>   "responseHeader": {
> "QTime": 3,
> "params": {
>   "debugQuery": "on",
>   "q": "inStock:(true false)",
>   "q.op": "AND"
> },
> "status": 0
>   },
>   // ...
> }
> {code}
> The debug output is as follows:
> {code:javascript}
> debug": {
>   "QParser": "LuceneQParser",
>   "explain": {
>   "100-435805": "\n1.5869651 = sum of:\n  1.5869651 = weight(inStock:F in 
> 31) [SchemaSimilarity], result of:\n1.5869651 = score(doc=31,freq=1.0 = 
> termFreq=1.0\n), product of:\n  1.5869651 = idf, computed as log(1 + 
> (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n4.0 = docFreq\n  
>   21.0 = docCount\n  1.0 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1) from:\n1.0 = termFreq=1.0\n1.2 = parameter k1\n   
>  0.0 = parameter b (norms omitted for field)\n",
>   "6H500F0": "\n0.22884157 = sum of:\n  0.22884157 = weight(inStock:T in 
> 2) [SchemaSimilarity], result of:\n0.22884157 = score(doc=2,freq=1.0 = 
> termFreq=1.0\n), product of:\n  0.22884157 = idf, computed as log(1 + 
> (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n17.0 = docFreq\n 
>21.0 = docCount\n  1.0 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1) from:\n1.0 = termFreq=1.0\n1.2 = parameter k1\n   
>  0.0 = parameter b (norms omitted for field)\n",
>   "EN7800GTX/2DHTV/256M": "\n1.5869651 = sum of:\n  1.5869651 = 
> weight(inStock:F in 30) [SchemaSimilarity], result of:\n1.5869651 = 
> score(doc=30,freq=1.0 = termFreq=1.0\n), product of:\n  1.5869651 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n 
>4.0 = docFreq\n21.0 = docCount\n  1.0 = tfNorm, computed as 
> (freq * (k1 + 1)) / (freq + k1) from:\n1.0 = termFreq=1.0\n
> 1.2 = parameter k1\n0.0 = parameter b (norms omitted for field)\n",
>   "F8V7067-APL-KIT": "\n1.5869651 = sum of:\n  1.5869651 = 
> weight(inStock:F in 3) [SchemaSimilarity], result of:\n1.5869651 = 
> score(doc=3,freq=1.0 = termFreq=1.0\n), product of:\n  1.5869651 = idf, 
> computed as log(1 + (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n 
>4.0 = docFreq\n21.0 = docCount\n  1.0 = tfNorm, computed as 
> (freq * (k1 + 1)) / (freq + k1) from:\n1.0 = termFreq=1.0\n
> 1.2 = parameter k1\n0.0 = parameter b (norms omitted for field)\n",
>   "GB18030TEST": "\n0.22884157 = sum of:\n  0.22884157 = weight(inStock:T 
> in 0) [SchemaSimilarity], result of:\n0.22884157 = score(doc=0,freq=1.0 = 
> termFreq=1.0\n), product of:\n  0.22884157 = idf, computed as log(1 + 
> (docCount - docFreq + 0.5) / (docFreq + 0.5)) from:\n17.0 = docFreq\n 
>21.0 = docCount\n  1.0 = tfNorm, computed as (freq * (k1 + 1)) / 
> (freq + k1) from:\n1.0 = termFreq=1.0\n1.2 = parameter k1\n   
>  0.0 = parameter b (norms omitted for field)\n",
>   "IW-02": "\n1.5869651 = sum of:\n  1.5869651 = weight(inStock:F in 4) 
> [SchemaSimilarity]

[jira] [Commented] (SOLR-5211) updating parent as childless makes old children orphans

2018-06-03 Thread Dr Oleg Savrasov (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499465#comment-16499465
 ] 

Dr Oleg Savrasov commented on SOLR-5211:


The patch has been reworked to support checking for _root_ in the schema. The new 
logic is covered with tests. DirectUpdateHandler2 has been updated to delete the 
children together with the parent.

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-5211.patch, SOLR-5211.patch
>
>
> If I have a parent with children in the index, I can send an update omitting the 
> children; as a result the old children become orphaned. 
> I suppose the separate \_root_ field makes much trouble. I propose to extend the 
> notion of uniqueKey and let it span across blocks, which makes updates 
> unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5211) updating parent as childless makes old children orphans

2018-06-03 Thread Dr Oleg Savrasov (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dr Oleg Savrasov updated SOLR-5211:
---
Attachment: SOLR-5211.patch

> updating parent as childless makes old children orphans
> ---
>
> Key: SOLR-5211
> URL: https://issues.apache.org/jira/browse/SOLR-5211
> Project: Solr
>  Issue Type: Sub-task
>  Components: update
>Affects Versions: 4.5, 6.0
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-5211.patch, SOLR-5211.patch
>
>
> If I have a parent with children in the index, I can send an update omitting the 
> children; as a result the old children become orphaned. 
> I suppose the separate \_root_ field makes much trouble. I propose to extend the 
> notion of uniqueKey and let it span across blocks, which makes updates 
> unambiguous.  
> WDYT? Would you like to see a test that proves this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12440) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh resolved SOLR-12440.
-
Resolution: Duplicate

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12440
> URL: https://issues.apache.org/jira/browse/SOLR-12440
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: As stated in SOLR-12298, there ought to be an URP to add 
> metadata fields to solr child documents, so the full original document 
> hierarchy can be rebuilt using a transformer.
> {quote}1. __nestParent__
>  2. __nestLevel__
>  3. __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> __nestLevel__: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
>Reporter: mosh
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12440) Add deeply nested documents URP

2018-06-03 Thread Cassandra Targett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499439#comment-16499439
 ] 

Cassandra Targett commented on SOLR-12440:
--

Is this a (possibly accidental) duplicate of SOLR-12441?

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12440
> URL: https://issues.apache.org/jira/browse/SOLR-12440
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: As stated in SOLR-12298, there ought to be an URP to add 
> metadata fields to solr child documents, so the full original document 
> hierarchy can be rebuilt using a transformer.
> {quote}1. __nestParent__
>  2. __nestLevel__
>  3. __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> __nestLevel__: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
>Reporter: mosh
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22168 - Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22168/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([E8BCAE9551663379]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:256)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:404)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14727 lines...]
   [junit4] Suite: org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.admin.AutoscalingHistoryHandlerTest_E8BCAE9551663379-001/init-core-data-001
   [junit4]   2> 2682700 INFO  
(SUITE-AutoscalingHistoryHandlerTest-seed#[E8BCAE9551663379]-worker) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 2 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.handler.admin.AutoscalingHistoryHandlerTest_E8BCAE9551663379-001/tempDir-001
   [junit4]   2> 2682700 INFO  
(SUITE-AutoscalingHistoryHandlerTest-seed#[E8BCAE9551663379]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2682712 INFO  (Thread-5499) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2682713 INFO  (Thread-5499) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2682724 ERROR (Thread-5499) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 2682812 INFO  
(SUITE-AutoscalingHistoryHandlerTest-seed#[E8BCAE9551663379]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:38757
   [junit4]   2> 2682841 INFO  (zkConnectionManagerCallback-10895-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 2682844 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 

[jira] [Commented] (SOLR-12209) add Paging Streaming Expression

2018-06-03 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499434#comment-16499434
 ] 

mosh commented on SOLR-12209:
-

Is there anything else that needs to be done to get this committed?

> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3
>Reporter: mosh
>Priority: Major
> Attachments: 0001-added-skip-and-limit-stream-decorators.patch, 
> SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream class, to 
> add an offset to the stream. Currently this can only be done in code by reading 
> the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> {{This will offset the returned stream by 100 documents.}}
>  
> [~joel.bernstein] what do you think?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 74 - Still Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/74/

10 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollWithDefaultClusterProperties

Error Message:
Error from server at https://127.0.0.1:42696/solr: numShards is a required 
param (when using CompositeId router).

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42696/solr: numShards is a required param 
(when using CompositeId router).
at 
__randomizedtesting.SeedInfo.seed([4EE512EF99B075EC:FD3179C76794B748]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollWithDefaultClusterProperties(CollectionsAPISolrJTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 628 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/628/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

18 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([6CA4BC4C9CA6BA59:552A050CB35973A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([6CA4BC4C9CA6BA59:3F

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2044 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2044/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([41DD8F727D37647E:2216B9F0E4F81753]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:187)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10) - Build # 7353 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7353/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.OutputWriterTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_290667DA746EE9CD-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_290667DA746EE9CD-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_290667DA746EE9CD-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.OutputWriterTest_290667DA746EE9CD-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([290667DA746EE9CD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180603214257877, index.20180603214311144, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180603214257877, 
index.20180603214311144, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([290667DA746EE9CD:F2AD671C7146807E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakCo

[jira] [Updated] (SOLR-12441) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12441:

Description: 
As discussed in [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], 
there ought to be a URP that adds metadata fields to childDocuments in order to 
allow a transformer to rebuild the original document hierarchy.

{quote}I propose we add the following fields:
 # __nestParent__
 # _nestLevel_
 # __nestPath__

__nestParent__: This field will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

_nestLevel_: This field will store the nesting level of the specified field in 
the document, using an int value. It can be used as the parentFilter, 
eliminating the need to provide a parentFilter explicitly; the default will be 
"_level_:queriedFieldLevel".

__nestPath__: This field will contain the full path, separated by a specific 
reserved char, e.g. '.', for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name at different levels of 
the document, filtering by the level key if needed.
{quote}
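
A rough, hypothetical sketch of what such an URP could look like is below. It 
walks the child-document tree and stamps each child with the proposed metadata 
fields. The field names and the id-based path are purely illustrative (the 
proposal builds the path from field names); none of this comes from an actual 
patch:

{code:java}
import java.io.IOException;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

public class NestedMetadataURPSketch extends UpdateRequestProcessor {

  public NestedMetadataURPSketch(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    annotate(cmd.getSolrInputDocument(), 0, "");
    super.processAdd(cmd);
  }

  // Recursively stamp each child with its parent's id, its nesting level
  // and a dotted path built from the ids of its ancestors.
  private void annotate(SolrInputDocument doc, int level, String path) {
    List<SolrInputDocument> children = doc.getChildDocuments();
    if (children == null) {
      return;
    }
    Object parentId = doc.getFieldValue("id");
    for (SolrInputDocument child : children) {
      Object childId = child.getFieldValue("id");
      String childPath = path.isEmpty() ? String.valueOf(childId) : path + "." + childId;
      child.setField("_nestParent_", parentId);
      child.setField("_nestLevel_", level + 1);
      child.setField("_nestPath_", childPath);
      annotate(child, level + 1, childPath);
    }
  }
}
{code}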

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be a URP that adds metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the nesting level of the specified field in 
> the document, using an int value. It can be used as the parentFilter, 
> eliminating the need to provide a parentFilter explicitly; the default will be 
> "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.', for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name at different levels of 
> the document, filtering by the level key if needed.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12441) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mosh updated SOLR-12441:

Environment: (was: As discussed in 
[SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
be a URP that adds metadata fields to childDocuments in order to allow a 
transformer to rebuild the original document hierarchy.

{quote}I propose we add the following fields:
 # __nestParent__
 # _nestLevel_
 # __nestPath__

__nestParent__: This field will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

_nestLevel_: This field will store the nesting level of the specified field in 
the document, using an int value. It can be used as the parentFilter, 
eliminating the need to provide a parentFilter explicitly; the default will be 
"_level_:queriedFieldLevel".

__nestPath__: This field will contain the full path, separated by a specific 
reserved char, e.g. '.', for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name at different levels of 
the document, filtering by the level key if needed.
{quote})

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 234 - Still Unstable

2018-06-03 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/234/

4 tests failed.
FAILED:  org.apache.solr.cloud.DistribJoinFromCollectionTest.testScore

Error Message:
Error from server at https://127.0.0.1:35052/solr/to_2x2: SolrCloud join: 
Collection 'from_1x4Alias' not found!

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:35052/solr/to_2x2: SolrCloud join: Collection 
'from_1x4Alias' not found!
at 
__randomizedtesting.SeedInfo.seed([A4FFE3FB0788B66D:E867ACCB4FAD6276]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.testJoins(DistribJoinFromCollectionTest.java:173)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.testScore(DistribJoinFromCollectionTest.java:115)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtes

[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.h

2018-06-03 Thread Oleg Kalnichevski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499350#comment-16499350
 ] 

Oleg Kalnichevski commented on SOLR-12439:
--

Just for the record, so does Apache HttpClient 5.0.

Oleg 

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.h
> 
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 661 - Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/661/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@73fc78d4,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@3bd110f9]
 expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@73fc78d4,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@3bd110f9]
 expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([6978F91CB0691C76:CBBAF6B12CAB475]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded(TestComputePlanAction.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSu

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2043 - Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2043/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/43)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1528017183087572850", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10002_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1528017183088082650",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1528017183087982750",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10002_solr",   
"base_url":"http://127.0.0.1:10002/solr";,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/43)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10002_solr",
 

[jira] [Commented] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16499337#comment-16499337
 ] 

Cao Manh Dat commented on SOLR-9922:


Newest patch after doing some cleanup and fixing bugs found while beasting the tests.

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply 
> those updates to the index. Then we rely on log replay to apply them to the 
> index. But at the same time some other updates are also written to the 
> current tlog and applied to the index.
> For example, during peersync, if new updates come to the replica we will end 
> up with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will keep our code base simpler. It also makes replica recovery 
> for SOLR-9835 easier, because after peersync succeeds we can copy the new 
> updates from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3
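
A trivial sketch, purely to illustrate the ordering above (an append-only list 
stands in for the tlog here; this is not Solr's TransactionLog API):

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TlogMergeSketch {
  public static void main(String[] args) {
    List<String> tlog = new ArrayList<>(Arrays.asList("old1", "old2", "old3"));
    List<String> bufferTlog = Arrays.asList("new1", "new2", "new3");

    // After peersync succeeds, the buffered updates are appended to the
    // main tlog in order.
    tlog.addAll(bufferTlog);

    System.out.println(tlog); // [old1, old2, old3, new1, new2, new3]
  }
}
{code}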



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9922) Write buffering updates to another tlog

2018-06-03 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-9922:
---
Attachment: SOLR-9922.patch

> Write buffering updates to another tlog
> ---
>
> Key: SOLR-9922
> URL: https://issues.apache.org/jira/browse/SOLR-9922
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch, 
> SOLR-9922.patch, SOLR-9922.patch, SOLR-9922.patch
>
>
> Currently, we write buffering updates to the current tlog and do not apply 
> those updates to the index. Then we rely on log replay to apply them to the 
> index. But at the same time some other updates are also written to the 
> current tlog and applied to the index.
> For example, during peersync, if new updates come to the replica we will end 
> up with this tlog
> tlog : old1, new1, new2, old2, new3, old3
> old updates belong to peersync, and these updates are applied to the index.
> new updates belong to buffering updates, and these updates are not applied to 
> the index.
> But writing all the updates to the same current tlog makes the code base very 
> complex. We should write buffering updates to another tlog file.
> Doing this will keep our code base simpler. It also makes replica recovery 
> for SOLR-9835 easier, because after peersync succeeds we can copy the new 
> updates from the temporary file to the current tlog, for example
> tlog : old1, old2, old3
> temporary tlog : new1, new2, new3
> -->
> tlog : old1, old2, old3, new1, new2, new3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1905 - Still Unstable!

2018-06-03 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1905/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded

Error Message:
ComputePlanAction should have computed exactly 1 operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@33727735,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@430f62b5]
 expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: ComputePlanAction should have computed exactly 1 
operation, but was: 
[org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@33727735,
 
org.apache.solr.client.solrj.request.CollectionAdminRequest$MoveReplica@430f62b5]
 expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([4402E322B9FAE65C:21C1B5551B594E5F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction.testNodeAdded(TestComputePlanAction.java:313)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIg

[GitHub] lucene-solr pull request #385: WIP: SOLR-12361

2018-06-03 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/385#discussion_r192579604
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/DirectUpdateHandler2.java ---
@@ -417,7 +417,8 @@ private void addAndDelete(AddUpdateCommand cmd, 
List deletesAfter
   }
 
   private Term getIdTerm(AddUpdateCommand cmd) {
--- End diff --

After looking at the code, it seems like updateDocOrDocValues has to return 
the idTerm so the caller can use it; the updateTerm can then be figured out by 
the caller as either cmd.updateTerm or the idTerm.
I will push a commit soon; hopefully the code can do the talking.
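
A self-contained stand-in for the shape being described (the Cmd class and the 
method below are illustrative only; they are not the actual DirectUpdateHandler2 
code or the pending commit):

{code:java}
import org.apache.lucene.index.Term;

public class UpdateTermSketch {

  // Minimal stand-in for AddUpdateCommand, just enough for the illustration.
  static class Cmd {
    Term updateTerm;            // may be null
    String indexedId = "doc-1";
  }

  // Hypothetical: the update method returns the id term it resolved.
  static Term updateDocOrDocValues(Cmd cmd) {
    Term idTerm = new Term("id", cmd.indexedId);
    // ... the actual index update would happen here, keyed on idTerm ...
    return idTerm;
  }

  public static void main(String[] args) {
    Cmd cmd = new Cmd();
    Term idTerm = updateDocOrDocValues(cmd);
    // The caller derives the term to use for any follow-up operations.
    Term updateTerm = (cmd.updateTerm != null) ? cmd.updateTerm : idTerm;
    System.out.println(updateTerm); // id:doc-1
  }
}
{code}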


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12441) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)
mosh created SOLR-12441:
---

 Summary: Add deeply nested documents URP
 Key: SOLR-12441
 URL: https://issues.apache.org/jira/browse/SOLR-12441
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: As discussed in 
[SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
be a URP that adds metadata fields to childDocuments in order to allow a 
transformer to rebuild the original document hierarchy.

{quote}I propose we add the following fields:
 # __nestParent__
 # _nestLevel_
 # __nestPath__

__nestParent__: This field will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

_nestLevel_: This field will store the nesting level of the specified field in 
the document, using an int value. It can be used as the parentFilter, 
eliminating the need to provide a parentFilter explicitly; the default will be 
"_level_:queriedFieldLevel".

__nestPath__: This field will contain the full path, separated by a specific 
reserved char, e.g. '.', for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name at different levels of 
the document, filtering by the level key if needed.
{quote}
Reporter: mosh






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12440) Add deeply nested documents URP

2018-06-03 Thread mosh (JIRA)
mosh created SOLR-12440:
---

 Summary: Add deeply nested documents URP
 Key: SOLR-12440
 URL: https://issues.apache.org/jira/browse/SOLR-12440
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: As stated in SOLR-12298, there ought to be a URP to add 
metadata fields to Solr child documents, so the full original document 
hierarchy can be rebuilt using a transformer.
{quote}1. __nestParent__
 2. __nestLevel__
 3. __nestPath__

__nestParent__: This field will store the document's parent docId, to be 
used for building the whole hierarchy, using a new document transformer, as 
suggested by Jan on the mailing list.

__nestLevel__: This field will store the level of the specified field in the 
document, using an int value. This field can be used for the parentFilter, 
eliminating the need to provide a parentFilter, which will be set by default as 
"_level_:queriedFieldLevel".

__nestPath__: This field will contain the full path, separated by a specific 
reserved char e.g., '.'
 for example: "first.second.third".
 This will enable users to search for a specific path, or provide a regular 
expression to search for fields sharing the same name in different levels of 
the document, filtering using the level key if needed.
{quote}
Reporter: mosh






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org