[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 89 - Still Unstable

2018-06-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/89/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
We think that split was successful but sub-shard states were not updated even 
after 2 minutes.

Stack Trace:
java.lang.AssertionError: We think that split was successful but sub-shard 
states were not updated even after 2 minutes.
at 
__randomizedtesting.SeedInfo.seed([45EB8D19C41F0CEA:CECC5EC88519A76E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 690 - Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/690/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testExistOldBufferLog

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([4527A15DE01C0C98]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.search.TestRecovery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([4527A15DE01C0C98]:0)


FAILED:  org.apache.solr.search.TestRecovery.testExistOldBufferLog

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([4527A15DE01C0C98:1B77BC086ED39C11]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at 
org.apache.solr.search.TestRecovery.testExistOldBufferLog(TestRecovery.java:1071)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)

[jira] [Commented] (SOLR-12517) Support range values for replica in autoscaling policies

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524516#comment-16524516
 ] 

ASF subversion and git services commented on SOLR-12517:


Commit e2ac4ab4799322c573a9ada771b2c42ea1eb0b82 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2ac4ab ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Support range values for replica in autoscaling policies
> 
>
> Key: SOLR-12517
> URL: https://issues.apache.org/jira/browse/SOLR-12517
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code}
> {"replica" : "3 - 5", "shard" :"#EACH", "node" : "#ANY"}
> {code}
> means a node may have 3, 4, or 5 replicas of a shard. 
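For illustration, the inclusive-range semantics described above can be sketched as follows (a hypothetical Python sketch, not Solr's implementation; the helper names are invented):

```python
def parse_replica_range(value):
    """Parse a range string such as '3 - 5' into an inclusive (lo, hi) pair."""
    lo, hi = (int(part.strip()) for part in value.split("-"))
    return lo, hi

def satisfies(replica_count, range_value):
    """A node satisfies the rule when its replica count for a shard
    falls inside the inclusive range."""
    lo, hi = parse_replica_range(range_value)
    return lo <= replica_count <= hi

# With "3 - 5": counts of 3, 4, or 5 satisfy the rule; 2 or 6 violate it.
```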



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12511) Support non integer values for replica in autoscaling rules

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524515#comment-16524515
 ] 

ASF subversion and git services commented on SOLR-12511:


Commit e2ac4ab4799322c573a9ada771b2c42ea1eb0b82 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2ac4ab ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Support non integer values for replica in autoscaling rules
> ---
>
> Key: SOLR-12511
> URL: https://issues.apache.org/jira/browse/SOLR-12511
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> This means the user can configure a decimal value for 'replica', for 
> example:
> {code:json}
> {"replica": 1.638, "node":"#ANY"}
> {code}
> This means a few things. The number of replicas on a node can be either 2 or 
> 1. This also means that violations are calculated as follows:
>  * If the replica count is 1 or 2 there are no violations 
>  * If the replica count is 3, there is a violation and the delta is 
> *{{3-1.638 = 1.362}}*
>  * If the replica count is 0, there is a violation and the delta is *{{1.638 
> - 0 = 1.638}}*
>  * This also means that the node with zero replicas has a *more serious* 
> violation, and the system would try to rectify that first before it addresses 
> the node with 3 replicas
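The violation arithmetic in the bullets above can be sketched like this (an illustrative Python model with invented names, not Solr's actual code):

```python
import math

def violation_delta(replica_count, target):
    """Counts equal to floor(target) or ceil(target) are acceptable;
    anything outside that band is a violation whose severity is the
    distance to the decimal target."""
    lo, hi = math.floor(target), math.ceil(target)
    if lo <= replica_count <= hi:
        return 0.0                       # no violation
    if replica_count > hi:
        return replica_count - target    # e.g. 3 - 1.638 = 1.362
    return target - replica_count        # e.g. 1.638 - 0 = 1.638
```

Under this model the zero-replica node's delta (1.638) exceeds the three-replica node's delta (1.362), matching the ordering described above.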



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524514#comment-16524514
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit e2ac4ab4799322c573a9ada771b2c42ea1eb0b82 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2ac4ab ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> Assume we have a collection {{"A"}} with 2 shards and {{replicationFactor=3}}.
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> which means, *_for each shard_*, keep less than 1.02 replicas in the east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means, _*for each collection*_, keep less than 2.04 replicas in the 
> east availability zone
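The computed-value arithmetic in both examples can be checked with a small sketch (a hypothetical helper, not part of Solr):

```python
def computed_replica_limit(pct, replication_factor, num_shards, per_shard):
    """Sketch of the computed-value arithmetic described above (assumed,
    not Solr's code). With "shard": "#EACH" the percentage applies to one
    shard's replicas; without it, to all replicas in the collection."""
    total = replication_factor if per_shard else replication_factor * num_shards
    return total * pct / 100.0

# Collection "A": 2 shards, replicationFactor=3
# example 1 ("shard": "#EACH"):  3 * 34 / 100     = 1.02
# example 2 (no shard clause):   3 * 2 * 34 / 100 = 2.04
```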



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12517) Support range values for replica in autoscaling policies

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524501#comment-16524501
 ] 

ASF subversion and git services commented on SOLR-12517:


Commit a929003f5b2792dedef6563203a86b99ac54e5df in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a929003 ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Support range values for replica in autoscaling policies
> 
>
> Key: SOLR-12517
> URL: https://issues.apache.org/jira/browse/SOLR-12517
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code}
> {"replica" : "3 - 5", "shard" :"#EACH", "node" : "#ANY"}
> {code}
> means a node may have 3, 4, or 5 replicas of a shard. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12511) Support non integer values for replica in autoscaling rules

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524500#comment-16524500
 ] 

ASF subversion and git services commented on SOLR-12511:


Commit a929003f5b2792dedef6563203a86b99ac54e5df in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a929003 ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Support non integer values for replica in autoscaling rules
> ---
>
> Key: SOLR-12511
> URL: https://issues.apache.org/jira/browse/SOLR-12511
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> This means the user can configure a decimal value for 'replica', for 
> example:
> {code:json}
> {"replica": 1.638, "node":"#ANY"}
> {code}
> This means a few things. The number of replicas on a node can be either 2 or 
> 1. This also means that violations are calculated as follows:
>  * If the replica count is 1 or 2 there are no violations 
>  * If the replica count is 3, there is a violation and the delta is 
> *{{3-1.638 = 1.362}}*
>  * If the replica count is 0, there is a violation and the delta is *{{1.638 
> - 0 = 1.638}}*
>  * This also means that the node with zero replicas has a *more serious* 
> violation, and the system would try to rectify that first before it addresses 
> the node with 3 replicas



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524499#comment-16524499
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit a929003f5b2792dedef6563203a86b99ac54e5df in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a929003 ]

SOLR-11985: ref guide

SOLR-12511: ref guide

SOLR-12517: ref guide


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> Assume we have a collection {{"A"}} with 2 shards and {{replicationFactor=3}}.
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> which means, *_for each shard_*, keep less than 1.02 replicas in the east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means, _*for each collection*_, keep less than 2.04 replicas in the 
> east availability zone



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12522) Support a runtime function `#ALL` for 'replica' in autoscaling policies

2018-06-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12522:
--
Summary: Support a runtime function `#ALL` for 'replica' in autoscaling 
policies  (was: Support a runtime function `#ALL` in replica in autoscaling 
policies)

> Support a runtime function `#ALL` for 'replica' in autoscaling policies
> ---
>
> Key: SOLR-12522
> URL: https://issues.apache.org/jira/browse/SOLR-12522
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> Today we have to use a convoluted rule to place all TLOG replicas on nodes 
> with SSD disks:
> {code}
> { "replica": 0,  "diskType" : "!ssd",  "type" : "TLOG" }
> {code}
> Ideally it would be better expressed as:
> {code}
> { "replica": "#ALL",  "diskType" : "ssd",  "type" : "TLOG" }
> {code}
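The equivalence between the two rule forms can be illustrated with a small sketch (hypothetical Python with an invented data shape, not Solr's placement engine):

```python
def tlog_placement_ok(replicas):
    """Check that every TLOG replica sits on an ssd node, i.e. the effect
    of {"replica": "#ALL", "diskType": "ssd", "type": "TLOG"}.
    `replicas` is a list of (type, disk_type) pairs (illustrative shape)."""
    tlog_disks = [disk for rtype, disk in replicas if rtype == "TLOG"]
    # "#ALL" form: every TLOG replica on ssd ...
    all_on_ssd = all(disk == "ssd" for disk in tlog_disks)
    # ... which is equivalent to the 'replica: 0 on "!ssd"' form:
    # zero TLOG replicas on non-ssd disks.
    none_on_other = sum(1 for disk in tlog_disks if disk != "ssd") == 0
    assert all_on_ssd == none_on_other
    return all_on_ssd
```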



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12522) Support a runtime function `#ALL` in replica in autoscaling policies

2018-06-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12522:
--
Summary: Support a runtime function `#ALL` in replica in autoscaling 
policies  (was: Support a runtime function `#ALL` in replicas)

> Support a runtime function `#ALL` in replica in autoscaling policies
> 
>
> Key: SOLR-12522
> URL: https://issues.apache.org/jira/browse/SOLR-12522
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> Today we have to use a convoluted rule to place all TLOG replicas on nodes 
> with SSD disks:
> {code}
> { "replica": 0,  "diskType" : "!ssd",  "type" : "TLOG" }
> {code}
> Ideally it would be better expressed as:
> {code}
> { "replica": "#ALL",  "diskType" : "ssd",  "type" : "TLOG" }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12522) Support a runtime function `#ALL` in replicas

2018-06-26 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12522:
-

 Summary: Support a runtime function `#ALL` in replicas
 Key: SOLR-12522
 URL: https://issues.apache.org/jira/browse/SOLR-12522
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Reporter: Noble Paul


Today we have to use a convoluted rule to place all TLOG replicas on nodes 
with SSD disks:
{code}
{ "replica": 0,  "diskType" : "!ssd",  "type" : "TLOG" }
{code}

Ideally it would be better expressed as:
{code}
{ "replica": "#ALL",  "diskType" : "ssd",  "type" : "TLOG" }
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524478#comment-16524478
 ] 

David Smiley commented on SOLR-12441:
-

Great idea Jan!
As an implementation detail, we'd no longer have 
{{org.apache.solr.handler.loader.JsonLoader.SingleThreadedJsonLoader#isChildDoc}}
 simply look for the presence of the ID field to distinguish an "extended field 
value" syntax from a child doc, but that's okay.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. It can serve as the parentFilter, eliminating 
> the need to provide one explicitly; it will be set by default to 
> "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
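The proposed fields can be illustrated with a small sketch of the annotation walk (hypothetical Python, not the actual URP; the key names and the '_childDocuments_' structure are assumptions for illustration):

```python
def annotate_nest_fields(doc, parent_id=None, level=0, parent_path=""):
    """Walk a nested document and attach the parent docId, nesting level,
    and dotted path described above. Child documents are assumed to live
    under a '_childDocuments_' key."""
    path = f"{parent_path}.{doc['id']}" if parent_path else doc["id"]
    doc["_nestParent_"] = parent_id
    doc["_nestLevel_"] = level
    doc["_nestPath_"] = path
    for child in doc.get("_childDocuments_", []):
        annotate_nest_fields(child, doc["id"], level + 1, path)
    return doc
```

For a three-level document this yields paths like "first.second.third", which a transformer could use to rebuild the hierarchy or filter by level.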



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2575 - Unstable

2018-06-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2575/

3 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState

Error Message:
Collection not found: deleteFromClusterState_false

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: 
deleteFromClusterState_false
at 
__randomizedtesting.SeedInfo.seed([C388414818860493:2D11EA2527B87F24]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:187)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22327 - Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22327/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=21429, 
name=cdcr-replicator-8132-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=21429, name=cdcr-replicator-8132-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([3472601869342D8C]:0)
at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14322 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 1967407 INFO  
(SUITE-CdcrBidirectionalTest-seed#[3472601869342D8C]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_3472601869342D8C-001/init-core-data-001
   [junit4]   2> 1967407 WARN  
(SUITE-CdcrBidirectionalTest-seed#[3472601869342D8C]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4
   [junit4]   2> 1967408 INFO  
(SUITE-CdcrBidirectionalTest-seed#[3472601869342D8C]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 1967409 INFO  
(SUITE-CdcrBidirectionalTest-seed#[3472601869342D8C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 1967410 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[3472601869342D8C]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 1967411 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[3472601869342D8C]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_3472601869342D8C-001/cdcr-cluster2-001
   [junit4]   2> 1967411 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[3472601869342D8C]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1967411 INFO  (Thread-6296) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1967411 INFO  (Thread-6296) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1967412 ERROR (Thread-6296) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1967511 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[3472601869342D8C]) [] 
o.a.s.c.ZkTestServer start zk server on port:37289
   [junit4]   2> 1967513 INFO  (zkConnectionManagerCallback-5069-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1967515 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 1967516 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1967516 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1967516 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 1967516 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@100dca6{/solr,null,AVAILABLE}
   [junit4]   2> 1967517 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@14a3dbd{SSL,[ssl, 
http/1.1]}{127.0.0.1:42285}
   [junit4]   2> 1967517 INFO  (jetty-launcher-5066-thread-1) [] 
o.e.j.s.Server Started @1967553ms
   [junit4]   2> 1967517 INFO  (jetty-launcher-5066-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=42285}
   [junit4]   2> 1967517 ERROR 

[jira] [Commented] (LUCENE-8369) Remove the spatial module as it is obsolete

2018-06-26 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524271#comment-16524271
 ] 

Lucene/Solr QA commented on LUCENE-8369:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 3s{color} | {color:red} spatial in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 3s{color} | {color:red} spatial in the patch failed. {color} |
| {color:red}-1{color} | {color:red} Release audit (RAT) {color} | {color:red} 0m 3s{color} | {color:red} spatial in the patch failed. {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 28s{color} | {color:green} Release audit (RAT) rat-sources passed {color} |
| {color:red}-1{color} | {color:red} Check forbidden APIs {color} | {color:red} 0m 3s{color} | {color:red} spatial in the patch failed. {color} |
| {color:red}-1{color} | {color:red} Validate source patterns {color} | {color:red} 0m 3s{color} | {color:red} spatial in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 4s{color} | {color:red} spatial in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 11s{color} | {color:green} spatial-extras in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 4m 46s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8369 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12929126/LUCENE-8369.patch |
| Optional Tests | ratsources validatesourcepatterns compile javac unit checkforbiddenapis |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 1023b83 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| compile | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-compile-lucene_spatial.txt |
| javac | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-compile-lucene_spatial.txt |
| Release audit (RAT) | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-compile-lucene_spatial.txt |
| Check forbidden APIs | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-compile-lucene_spatial.txt |
| Validate source patterns | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-compile-lucene_spatial.txt |
| unit | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/artifact/out/patch-unit-lucene_spatial.txt |
| Test Results | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/testReport/ |
| modules | C: lucene lucene/spatial lucene/spatial-extras U: lucene |
| Console output | https://builds.apache.org/job/PreCommit-LUCENE-Build/40/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty, containing only a 
> couple of utilities (GeoRelationUtils and MortonEncoder) that aren't used by 
> anything in the entire codebase.  Perhaps it should have been removed earlier, 
> in LUCENE-7664, which removed GeoPointField, essentially the reason the module 
> existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1572 - Failure

2018-06-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1572/

9 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterDelete.testUpdatesOnDiskFull

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([CB5429D89CDE2378]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterDelete

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([CB5429D89CDE2378]:0)


FAILED:  org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard

Error Message:
Error from server at https://127.0.0.1:32890/solr: Error creating counter node 
in Zookeeper for collection:solrj_implicit

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:32890/solr: Error creating counter node in 
Zookeeper for collection:solrj_implicit
at __randomizedtesting.SeedInfo.seed([7623FF8601E5210D:B37D4B9A8710EBF0]:0)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteShard(CollectionsAPISolrJTest.java:220)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524248#comment-16524248
 ] 

Jan Høydahl commented on SOLR-12441:


Have you thought about the possibility of making it optional to supply an 
{{id}} for the children? I mean, if the URP detects that child docs lack the 
uniqueKey field, could it not construct a guaranteed-unique id such as 
{{_ROOT_ + "/" + _NEST_PATH_ + "/" + child_num}}? That way the burden on the 
user constructing the document is lighter: they need only define a root ID 
manually, unless they want to provide the child IDs themselves...
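The suggested id scheme can be sketched in a few lines. This is purely illustrative: the helper name and sample values are hypothetical, and no such logic exists in Solr today.

```java
public class ChildIdDemo {
    // Hypothetical: build a guaranteed-unique child id from the root document's
    // id, the child's nest path, and the child's position among its siblings.
    static String childId(String rootId, String nestPath, int childNum) {
        return rootId + "/" + nestPath + "/" + childNum;
    }

    public static void main(String[] args) {
        // e.g. the first child under a "comments.replies" path of document "doc42"
        System.out.println(childId("doc42", "comments.replies", 0)); // prints doc42/comments.replies/0
    }
}
```

Because the root id and the nest path are unique per hierarchy position, appending a per-sibling counter is enough to make the whole id unique.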

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be a URP that adds metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # __nestLevel__
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be used 
> for building the whole hierarchy with a new document transformer, as suggested 
> by Jan on the mailing list.
> __nestLevel__: This field will store the nesting level of the specified field 
> in the document as an int value. It can be used to derive the parentFilter, 
> eliminating the need to provide one explicitly; it would default to 
> "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.', for example: "first.second.third".
>  This will enable users to search for a specific path, or to provide a regular 
> expression to search for fields sharing the same name at different levels of 
> the document, filtering by the level key if needed.
> {quote}
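To make the proposal concrete, here is a small standalone sketch of the metadata such a URP would attach to one child document. The field names follow the quote above; the helper, the exact value types, and the sample data are all hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NestMetadataDemo {
    // Hypothetical: build the dot-separated nest path from a chain of
    // child-document field names, e.g. "first.second.third".
    static String nestPath(String... fieldNames) {
        return String.join(".", fieldNames);
    }

    public static void main(String[] args) {
        // Metadata a URP might attach to a child at depth 2 of its root document.
        Map<String, Object> meta = new LinkedHashMap<>();
        meta.put("_nestParent_", "root-doc-id");                      // docId of the immediate parent
        meta.put("_nestLevel_", 2);                                   // nesting depth of this child
        meta.put("_nestPath_", nestPath("first", "second", "third")); // full field path
        System.out.println(meta);
    }
}
```

A transformer could then rebuild the hierarchy by grouping children under their `_nestParent_` value and ordering them by path.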






[jira] [Commented] (SOLR-11665) Improve SplitShardCmd error handling

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524169#comment-16524169
 ] 

ASF subversion and git services commented on SOLR-11665:


Commit c0853200f20e3dd874e025418f6919fd913c5523 in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c085320 ]

SOLR-11665: Improve error handling of shard splitting. Fix splitting of mixed 
replica types.


> Improve SplitShardCmd error handling
> 
>
> Key: SOLR-11665
> URL: https://issues.apache.org/jira/browse/SOLR-11665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Cao Manh Dat
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11665.patch, SOLR-11665.test.patch
>
>
> I see some problems when doing a shard split while there are no nodes 
> available for creating replicas (due to the policy framework):
> - The patch contains a test in which sub-shards stay in the CONSTRUCTION state 
> forever.
> - The shard which gets split stays in the inactive state forever, and 
> sub-shards are not created.






[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2018-06-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524144#comment-16524144
 ] 

David Smiley commented on SOLR-12357:
-

If we do SOLR-12521 first, then MaintainRoutedAliasCmd won't need to know a 
doc's reference date at all?  It would only be told to explicitly delete a 
given collection (assuming it exists and is last), or to create a collection 
after the lead collection, with the lead collection as a given input.

> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously, on demand when a 
> document is coming in.  But this can add delays, as inbound documents are 
> held up while a collection is created.  And there may be a problem, such as a 
> lack of resources (e.g. ample SolrCloud nodes with space) as defined by the 
> policy framework.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection creation needs a time window configuration 
> parameter, perhaps named something like "preemptiveCreateWindowMs".  If a 
> document's timestamp is within this time window _from the end time of the 
> head/lead collection_ then the next collection can be created pre-emptively.  
> If no data is being sent to the TRA, no collections will be auto-created, nor 
> will it happen if older data is being added.  It may be convenient to 
> effectively limit this setting to the _smaller_ of this value and the TRA 
> interval window, which I think is a fine limitation.
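The window test described in the issue could look roughly like this. Only the parameter name "preemptiveCreateWindowMs" comes from the description; the method, class, and sample timestamps are illustrative assumptions, not Solr code.

```java
import java.time.Duration;
import java.time.Instant;

public class PreemptiveCreateDemo {
    // Hypothetical check: pre-emptively create the next TRA collection when the
    // document's timestamp falls inside the window immediately preceding the
    // end time of the current head/lead collection.
    static boolean shouldPreemptivelyCreate(Instant docTime, Instant leadEnd, Duration window) {
        Instant windowStart = leadEnd.minus(window);
        return docTime.isBefore(leadEnd) && !docTime.isBefore(windowStart);
    }

    public static void main(String[] args) {
        Instant leadEnd = Instant.parse("2018-07-01T00:00:00Z");
        Duration window = Duration.ofHours(1); // e.g. preemptiveCreateWindowMs = 3600000

        // Inside the final hour of the lead collection: create the next one now.
        System.out.println(shouldPreemptivelyCreate(
                Instant.parse("2018-06-30T23:30:00Z"), leadEnd, window)); // true
        // Older data, well before the window: no pre-emptive creation.
        System.out.println(shouldPreemptivelyCreate(
                Instant.parse("2018-06-30T10:00:00Z"), leadEnd, window)); // false
    }
}
```

Clamping the window to the TRA interval, as the description suggests, would simply cap `window` before the comparison.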






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 656 - Still Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/656/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testExistOldBufferLog

Error Message:


Stack Trace:
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([980E37160FB7CE65:C65E2A4381785EEC]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at org.apache.solr.search.TestRecovery.testExistOldBufferLog(TestRecovery.java:1071)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 15084 lines...]
   [junit4] Suite: org.apache.solr.search.TestRecovery
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.search.TestRecovery_980E37160FB7CE65-001\init-core-data-001
   

[jira] [Updated] (SOLR-12521) TRA: evaluate autoDeleteAge independently of when collections are created

2018-06-26 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12521:

Issue Type: Sub-task  (was: Task)
Parent: SOLR-11299

> TRA: evaluate autoDeleteAge independently of when collections are created
> -
>
> Key: SOLR-12521
> URL: https://issues.apache.org/jira/browse/SOLR-12521
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> Today, aging out the oldest collection due to exceeding autoDeleteAge occurs 
> immediately before new collections in the Time Routed Alias are created.  
> It's performed by MaintainRoutedAliasCmd.  While this is fine, it would be 
> better if this were evaluated at additional circumstances -- basically 
> whenever we get new documents.  This would make the TRA more responsive to 
> dynamic changes in this metadata and it would allow more effective use of 
> finer granularity of autoDeleteAge than the interval size.  Furthermore, once 
> we create new TRA collections preemptively (SOLR-12357), we'll probably want 
> this even more since otherwise the oldest collection will tend to stick 
> around longer.  SOLR-12357 could probably share some logic here since both 
> will involve preemptive action (action that does not delay routing the 
> current document) by the TimeRoutedAliasUpdateProcessor, and both need to 
> deal with not overloading the overseer with effectively redundant requests.






[jira] [Created] (SOLR-12521) TRA: evaluate autoDeleteAge independently of when collections are created

2018-06-26 Thread David Smiley (JIRA)
David Smiley created SOLR-12521:
---

 Summary: TRA: evaluate autoDeleteAge independently of when 
collections are created
 Key: SOLR-12521
 URL: https://issues.apache.org/jira/browse/SOLR-12521
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: David Smiley


Today, aging out the oldest collection due to exceeding autoDeleteAge occurs 
immediately before new collections in the Time Routed Alias are created.  It's 
performed by MaintainRoutedAliasCmd.  While this is fine, it would be better if 
this were evaluated at additional circumstances -- basically whenever we get 
new documents.  This would make the TRA more responsive to dynamic changes in 
this metadata and it would allow more effective use of finer granularity of 
autoDeleteAge than the interval size.  Furthermore, once we create new TRA 
collections preemptively (SOLR-12357), we'll probably want this even more since 
otherwise the oldest collection will tend to stick around longer.  SOLR-12357 
could probably share some logic here since both will involve preemptive action 
(action that does not delay routing the current document) by the 
TimeRoutedAliasUpdateProcessor, and both need to deal with not overloading the 
overseer with effectively redundant requests.






[GitHub] lucene-solr pull request #411: Debugging PriorityQueue.java

2018-06-26 Thread rsaavedraf
GitHub user rsaavedraf opened a pull request:

https://github.com/apache/lucene-solr/pull/411

Debugging PriorityQueue.java

The change I'm proposing eliminates the problem by properly validating maxSize 
itself, before even computing heapSize = maxSize + 1.

In the constructor, when maxSize == Integer.MAX_VALUE, heapSize = maxSize + 1 
overflows to a negative value, exactly -2147483648 (i.e. Integer.MIN_VALUE). 
This is quite a bug, because the if statement checking whether heapSize is 
larger than ArrayUtil.MAX_ARRAY_LENGTH then evaluates to false, so the 
IllegalArgumentException is never thrown. Yet the code in PriorityQueue.java 
fails immediately afterwards on reaching "new Object[heapSize]", because 
heapSize is negative, causing a NegativeArraySizeException.

We saw this problem in our software, which was using Solr 6.6.2.
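The overflow and the proposed style of guard can be sketched outside Lucene like this. MAX_ARRAY_LENGTH below is a stand-in for ArrayUtil.MAX_ARRAY_LENGTH (whose actual value is JVM-dependent), and this is an illustration of the idea, not the literal patch.

```java
public class PriorityQueueGuard {
    // Stand-in for ArrayUtil.MAX_ARRAY_LENGTH; the real constant differs.
    static final int MAX_ARRAY_LENGTH = Integer.MAX_VALUE - 8;

    // Proposed style of fix: validate maxSize itself BEFORE computing
    // heapSize = maxSize + 1, so the addition can never overflow.
    static int heapSizeFor(int maxSize) {
        if (maxSize < 0 || maxSize >= MAX_ARRAY_LENGTH) {
            throw new IllegalArgumentException(
                "maxSize must be in [0, " + MAX_ARRAY_LENGTH + "), got " + maxSize);
        }
        return maxSize + 1;
    }

    public static void main(String[] args) {
        // The bug: Integer.MAX_VALUE + 1 silently wraps to Integer.MIN_VALUE,
        // so an after-the-fact "heapSize > MAX_ARRAY_LENGTH" check never fires.
        System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648

        System.out.println(heapSizeFor(10));       // prints 11
        try {
            heapSizeFor(Integer.MAX_VALUE);        // now rejected up front
        } catch (IllegalArgumentException e) {
            System.out.println("rejected");
        }
    }
}
```

Validating the input rather than the derived value is the general remedy for this class of wraparound bug, since Java int arithmetic wraps silently instead of throwing.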

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rsaavedraf/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/411.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #411


commit b9a1168f2d1fcf82d6bdec2ff329462638d63bb1
Author: rsaavedraf <40606625+rsaavedraf@...>
Date:   2018-06-26T19:14:19Z

Debugging PriorityQueue.java

The change I'm proposing eliminates a problem by properly checking the 
validity of maxSize itself, before even computing heapSize=maxSize+1.
In the constructor, when maxSize has a value == Integer.MAX_VALUE, then 
heapSize = maxSize+1 ends up being negative, more exactly -2147483648 (aka. 
Integer.MIN_VALUE.)  This is quite a bug because then the if statement checking 
that heapSize was larger than ArrayUtil.MAX_ARRAY_LENGTH-1 ends up false, so 
the IllegalArgumentException is never thrown. Yet the code fails immediately 
afterwards when reaching "new Object[heapSize]" because of heapSize being 
negative, causing a NegativeArraySize exception.
We saw this problem with our software which was using Solr 6.6.2.




---




[jira] [Commented] (LUCENE-8370) Reproducing TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failure

2018-06-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524103#comment-16524103
 ] 

Erick Erickson commented on LUCENE-8370:


[~rcmuir] Thanks for looking, I just got to it.

I agree the test makes some (now) invalid assumptions about TMP.

When specifying the maximum number of segments (other than 1), TMP does a "best 
effort" attempt to hit that target but does not guarantee it. The algorithm is 
roughly.

1> compute the theoretical segment size to hit the target exactly, i.e. 
totalIndexBytes/numSegmentsSpecified

2> Increase <1> by 25% (this is a totally arbitrary percentage on my part).

3> Find the "best" merges respecting the size in <2> and do them.

If the scoring algorithm happens to pick segments to merge that don't pack well 
into the limit from <2> above, there'll be more segments than specified.

What should be true in this case is that no pair of the segments that result 
from the merge will sum to less than the theoretical max size 
((totalIndexBytes / segsSpecified) * 1.25).

TestTieredMergePolicy does test this expectation.
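A minimal sketch of the target-size computation in <1> and <2> (the method name and types below are illustrative, not TMP's actual code; only the 25% pad is from the description above):

```java
// Sketch of the "best effort" target segment size: the theoretical size that
// would hit the requested segment count exactly, padded by 25% so merges that
// don't pack perfectly still fit under the limit.
public class TmpTargetSize {
    static long targetSegmentBytes(long totalIndexBytes, int maxSegmentCount) {
        // <1> theoretical size to hit the target exactly
        long theoretical = totalIndexBytes / maxSegmentCount;
        // <2> increase by 25% (an arbitrary fudge factor, per the comment above)
        return (long) (theoretical * 1.25);
    }

    public static void main(String[] args) {
        // e.g. 10 GB index, 5 segments requested: 2 GB theoretical, 2.5 GB padded
        System.out.println(targetSegmentBytes(10L * 1024 * 1024 * 1024, 5));
    }
}
```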

I can take this assert out of RandomIndexWriter for this specific policy (TMP), 
or remove it completely, WDYT? That is: skip the assert in RandomIndexWriter 
either whenever the policy is TMP, or only when it's TMP and the number of 
segments specified is > 1.

[~mikemccand] any opinions? This is the "scary loop" from LUCENE-7976 that made 
us both nervous and I removed.

> Reproducing 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() 
> failure
> --
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional 

[jira] [Commented] (SOLR-11665) Improve SplitShardCmd error handling

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524100#comment-16524100
 ] 

ASF subversion and git services commented on SOLR-11665:


Commit 1023b839aeda4f4688103995051b727d7ca4fdce in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1023b83 ]

SOLR-11665: Improve error handling of shard splitting. Fix splitting of mixed 
replica types.


> Improve SplitShardCmd error handling
> 
>
> Key: SOLR-11665
> URL: https://issues.apache.org/jira/browse/SOLR-11665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Cao Manh Dat
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11665.patch, SOLR-11665.test.patch
>
>
> I see some problems when doing a shard split while there are no available 
> nodes for creating replicas (due to the policy framework):
> - The patch contains a test in which sub-shards stay in the CONSTRUCTION 
> state forever.
> - The shard which got split stays in the inactive state forever, and 
> sub-shards are not created.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12357) TRA: Pre-emptively create next collection

2018-06-26 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524083#comment-16524083
 ] 

Gus Heck commented on SOLR-12357:
-

This is going to require some rework of MaintainRoutedAliasCmd. Presently the code 
there can never delete a collection unless it's creating a collection. With 
this feature it would then delay deletion by timePartitionSize - 
preemptiveCreateInterval... which would be significant for long partitions and 
confusing in general. Also, delete time frames that are not even multiples of 
the partition size probably behave somewhat strangely as it is, with old 
partitions living somewhat longer than they should. I think the maintain 
command needs to delete if delete is appropriate and create if create is 
appropriate, independently.

Also, it uses Instant.now() to check if it should create a collection, and it 
will now need to know the triggering date from the document or be sent an 
explicit "force create" attribute. The latter option doesn't sound good because 
I believe we are relying on this command being idempotent. If more than one 
client is updating, several documents might be processed (one by each client) 
before the results of the command take effect, so the overseer can be given 
several instances of the maintain command. Synchronization in the overseer 
should ensure that subsequent instances see the results of the first and then 
return as a no-op. So I think we need to pass in a "docDate" or maybe 
"referenceDate".

> TRA: Pre-emptively create next collection 
> --
>
> Key: SOLR-12357
> URL: https://issues.apache.org/jira/browse/SOLR-12357
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> When adding data to a Time Routed Alias (TRA), we sometimes need to create 
> new collections.  Today we only do this synchronously – on-demand when a 
> document is coming in.  But this can add delays as the documents inbound are 
> held up for a collection to be created.  And, there may be a problem like a 
> lack of resources (e.g. ample SolrCloud nodes with space) that the policy 
> framework defines.  Such problems could be rectified sooner rather than later, 
> assuming there is log alerting in place (definitely out of scope here).
> Pre-emptive TRA collection needs a time window configuration parameter, 
> perhaps named something like "preemptiveCreateWindowMs".  If a document's 
> timestamp is within this time window _from the end time of the head/lead 
> collection_ then the collection can be created pre-emptively.  If no data is 
> being sent to the TRA, no collections will be auto created, nor will it 
> happen if older data is being added.  It may be convenient to effectively 
> limit this time setting to the _smaller_ of this value and the TRA interval 
> window, which I think is a fine limitation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7382 - Still Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7382/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillLeader

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([A6C799ED4820CDD3:EFD16D592A9B5985]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:533)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:403)
at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:309)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 

[jira] [Commented] (SOLR-12520) Switch DateRangeField from NumberRangePrefixTree to LongRange

2018-06-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524051#comment-16524051
 ] 

David Smiley commented on SOLR-12520:
-

It remains to be seen how DateRangeField (NumberRangePrefixTreeStrategy + 
DateRangePrefixTree) compares to LongRange for _realistic queries_.  By 
realistic queries, I mean for a query such as find all indexed ranges 
intersecting a particular month -- or similar simple intervals.  
DateRangePrefixTree is deliberately aligned to useful/meaningful units of time 
we humans work in (at least in the Gregorian Calendar) that allow for queries 
to sometimes amount to visiting one term, or perhaps more terms but not a ton.  
Both the Lucene Points codec and the legacy numeric trie structure it replaced were 
agnostic of this.  I expect that LongRange will have better indexing 
characteristics but that's not what's being optimized for in DateRangeField.
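To make "realistic queries" concrete, here is a minimal sketch (pure java.time; class and method names are ours) of what an intersects-a-month query reduces to over epoch-millis endpoints, i.e. the representation a LongRange-style field would work with; the prefix-tree field can often answer the same question by visiting very few terms because its terms align to calendar units:

```java
import java.time.YearMonth;
import java.time.ZoneOffset;

// Sketch: an indexed date range stored as [min, max] epoch millis, queried
// for intersection with a calendar month.
public class MonthIntersection {
    // [start, end) boundaries of a month in epoch millis (UTC).
    static long[] monthBoundsMillis(int year, int month) {
        YearMonth ym = YearMonth.of(year, month);
        long start = ym.atDay(1).atStartOfDay(ZoneOffset.UTC).toInstant().toEpochMilli();
        long end = ym.plusMonths(1).atDay(1).atStartOfDay(ZoneOffset.UTC)
                .toInstant().toEpochMilli();
        return new long[] {start, end};
    }

    // A range [rangeMin, rangeMax] intersects the month iff it starts before
    // the month ends and ends at or after the month starts.
    static boolean intersectsMonth(long rangeMin, long rangeMax, int year, int month) {
        long[] bounds = monthBoundsMillis(year, month);
        return rangeMin < bounds[1] && rangeMax >= bounds[0];
    }

    public static void main(String[] args) {
        long[] june2018 = monthBoundsMillis(2018, 6);
        // A range straddling the start of June 2018 intersects it
        System.out.println(intersectsMonth(june2018[0] - 1000, june2018[0] + 1000,
                2018, 6)); // true
    }
}
```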

Another intentional purpose of DateRangeField was to allow fast faceting using 
the underlying terms.  That was never wired into Solr though, and there was a 
bug or something to be worked out.

> Switch DateRangeField from NumberRangePrefixTree to LongRange
> -
>
> Key: SOLR-12520
> URL: https://issues.apache.org/jira/browse/SOLR-12520
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Nicholas Knize
>Priority: Major
>
> Since graduating {{Range}} field types in LUCENE-7740 we should consider 
> switching SOLR's {{DateRangeField}} from using {{NumberRangePrefixTree}} in 
> the {{spatial-extras}} module to {{LongRange}} in {{lucene-core}}. Not only 
> will this provide a nice performance improvement but nothing will depend on 
> {{NumberRangePrefixTree}} so it can be deprecated and removed. To maintain 
> backcompat we could consider refactoring it from {{spatial-extras}} to SOLR 
> and then removing it once the switch to {{LongRange}} is complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-26 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16524017#comment-16524017
 ] 

Joel Bernstein commented on SOLR-11598:
---

Hi Varun,

You pretty much have it right. Having a large amount of memory available to the 
filesystem cache will help. But doc values have other overhead as well which 
can add up. If you read the source code to the different doc values 
implementations you'll see there is a fair amount of work involved in 
retrieving the values. The fewer doc values reads you make, the faster the 
export.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> 

[jira] [Created] (SOLR-12520) Switch DateRangeField from NumberRangePrefixTree to LongRange

2018-06-26 Thread Nicholas Knize (JIRA)
Nicholas Knize created SOLR-12520:
-

 Summary: Switch DateRangeField from NumberRangePrefixTree to 
LongRange
 Key: SOLR-12520
 URL: https://issues.apache.org/jira/browse/SOLR-12520
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Nicholas Knize


Since graduating {{Range}} field types in LUCENE-7740 we should consider 
switching SOLR's {{DateRangeField}} from using {{NumberRangePrefixTree}} in the 
{{spatial-extras}} module to {{LongRange}} in {{lucene-core}}. Not only will 
this provide a nice performance improvement but nothing will depend on 
{{NumberRangePrefixTree}} so it can be deprecated and removed. To maintain 
backcompat we could consider refactoring it from {{spatial-extras}} to SOLR and 
then removing it once the switch to {{LongRange}} is complete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilit

2018-06-26 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523898#comment-16523898
 ] 

Varun Thacker edited comment on SOLR-11598 at 6/26/18 3:53 PM:
---

Hi Joel,

This is my understanding of the export writer. Does this sound right to you? 

1. Go through all the matched documents ( q=*:* ) and collect 30k sorted 
docs ( priority queue ) and stream them out
2. Repeat till all matched documents have been collected

The number of matches is key to performance, as is the fact that we chose 30k 
docs as the heap size.

So the number of sort fields will increase the number of seeks against 
doc-values every time we add an item to the priority queue. Hence performance 
would greatly depend on how much memory the OS has available to mmap 
doc-values.

 


was (Author: varunthacker):
Hi Joel,

This is my understanding of the export writer . Does this sound right to you? 

1. Go through all the matched documents ( q=*:* )  and collect 30k sorted 
docs ( priority queue ) and stream them out
2. Repeat till all matched documents have been collected

The number of matches is key in performance and that we chose 30k docs as the 
heap size.

 

So the number of sort fields will increase the number of seeks against 
doc-values every time we add an item to the priority queue. Hence the 
performance would greatly depend on how much memory does the OS have to mmap 
doc-values? 

 

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> 

[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-26 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523898#comment-16523898
 ] 

Varun Thacker commented on SOLR-11598:
--

Hi Joel,

This is my understanding of the export writer. Does this sound right to you? 

1. Go through all the matched documents ( q=*:* ) and collect 30k sorted 
docs ( priority queue ) and stream them out
2. Repeat till all matched documents have been collected

The number of matches is key to performance, as is the fact that we chose 30k 
docs as the heap size.

So the number of sort fields will increase the number of seeks against 
doc-values every time we add an item to the priority queue. Hence performance 
would greatly depend on how much memory the OS has available to mmap 
doc-values.
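A toy model of steps 1 and 2 above (plain java.util, with a tiny batch size instead of 30k; the real ExportWriter streams sorted doc ids and reads doc values per sort field, not plain integers):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of the export loop: repeatedly fill a bounded priority queue from the
// matched docs, stream that batch out in order, and rescan until everything
// has been exported. Each offer() stands in for the per-sort-field doc-values
// reads that dominate export cost.
public class ExportBatches {
    static List<Integer> exportSorted(List<Integer> matchedDocs, int batchSize) {
        List<Integer> out = new ArrayList<>();
        List<Integer> remaining = new ArrayList<>(matchedDocs);
        while (!remaining.isEmpty()) {
            // Keep the batchSize smallest values via a bounded max-heap.
            PriorityQueue<Integer> heap = new PriorityQueue<>((a, b) -> b - a);
            for (int doc : remaining) {
                heap.offer(doc);
                if (heap.size() > batchSize) heap.poll();
            }
            List<Integer> batch = new ArrayList<>(heap);
            batch.sort(null);
            out.addAll(batch);          // stream this batch out
            remaining.removeAll(batch); // then rescan for the rest (assumes distinct values)
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(exportSorted(List.of(5, 3, 9, 1, 7, 2), 2)); // [1, 2, 3, 5, 7, 9]
    }
}
```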

 

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>

[jira] [Commented] (LUCENE-7314) Graduate InetAddressPoint and LatLonPoint to core

2018-06-26 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523865#comment-16523865
 ] 

Nicholas Knize commented on LUCENE-7314:


Thanks [~jpountz]! The updated patch leaves the NearestNeighbor query classes in 
sandbox and moves {{LatLonPoint#nearest}} to a static method in the new utility 
class, {{LatLonPointPrototypeQueries#nearest}}.

> Graduate InetAddressPoint and LatLonPoint to core
> -
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch, LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11665) Improve SplitShardCmd error handling

2018-06-26 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523864#comment-16523864
 ] 

Andrzej Bialecki  commented on SOLR-11665:
--

[~caomanhdat] - the test in your patch no longer fails with these changes.

> Improve SplitShardCmd error handling
> 
>
> Key: SOLR-11665
> URL: https://issues.apache.org/jira/browse/SOLR-11665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Cao Manh Dat
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11665.patch, SOLR-11665.test.patch
>
>
> I see some problems when doing a shard split while there are no nodes 
> available for creating replicas (due to the policy framework):
> - The patch contains a test in which the sub-shard stays in the CONSTRUCTION 
> state forever.
> - The shard that was split stays in the inactive state forever and the 
> sub-shards are not created.






[jira] [Updated] (LUCENE-7314) Graduate InetAddressPoint and LatLonPoint to core

2018-06-26 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-7314:
---
Attachment: LUCENE-7314.patch

> Graduate InetAddressPoint and LatLonPoint to core
> -
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch, LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?






[jira] [Commented] (SOLR-10307) Provide SSL/TLS keystore password a more secure way

2018-06-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523857#comment-16523857
 ] 

Jan Høydahl commented on SOLR-10307:


Is this documented in the ref guide?

> Provide SSL/TLS keystore password a more secure way
> ---
>
> Key: SOLR-10307
> URL: https://issues.apache.org/jira/browse/SOLR-10307
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
>Priority: Major
> Fix For: 6.7, 7.0
>
> Attachments: SOLR-10307.2.patch, SOLR-10307.patch, SOLR-10307.patch, 
> SOLR-10307.patch
>
>
> Currently the only way to pass the server- and client-side SSL keystore and 
> truststore passwords is to set specific environment variables that are then 
> passed as system properties through command-line parameters.
> The first option is to pass passwords through environment variables, which 
> gives a better level of protection. The second option would be to use the 
> Hadoop credential provider interface to access a credential store.
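A minimal sketch of a file-based alternative to the approaches described above: 
keep the keystore password in an owner-only-readable file rather than an 
environment variable or command-line property. The helper name and the demo 
path are hypothetical, not Solr's actual mechanism, and the permission check 
assumes a POSIX filesystem.

```python
import os
import stat
import tempfile

def read_keystore_password(path):
    """Read a keystore password from a file, refusing group/world-readable
    files. Hypothetical helper for illustration; Solr does not ship this."""
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        raise PermissionError(f"{path} must not be group/world readable")
    with open(path) as f:
        return f.read().strip()

# Demo: create a password file with owner-only (0600) permissions and read it.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("changeit\n")
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # 0600
print(read_keystore_password(path))  # prints: changeit
os.remove(path)
```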






[jira] [Commented] (LUCENE-8371) TestRandomChains.testRandomChainsWithLargeStrings() failure

2018-06-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523840#comment-16523840
 ] 

Erick Erickson commented on LUCENE-8371:


OK, I'll put it on the permanent "don't BadApple" list.

> TestRandomChains.testRandomChainsWithLargeStrings() failure
> ---
>
> Key: LUCENE-8371
> URL: https://issues.apache.org/jira/browse/LUCENE-8371
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
>Assignee: Alan Woodward
>Priority: Major
>
> Reproducing seed for {{TestRandomChains.testRandomChainsWithLargeStrings()}} 
> failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2196/]:
> {noformat}
> Checking out Revision 53ec8224705f4f0d35751b18b3f0168517c43121 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> TEST FAIL: useCharFilter=true text='\ua97b  
> \uebcf\ueb06\uf85b\uf649\uf0b7 esgm s \uabfd 
> \ue11c\udbb4\udc48\ue90d\u0142\u0014\u0018 cr \u30ed\u30a8\u30ec\u30e1  \ud835\udf53\ud835\udc58\ud835\ude2b 
> \ueff5\uda61\ude33\ud94d\udcbb\udb3b\uddc8\u0738 \ua711\ua719 xqu ygvfwc 
> ~?\u0781%'
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2>   
> org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@12c9ec6)
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.core.LowerCaseTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.hunspell.HunspellStemFilter(ValidatingTokenFilter@17533c4
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  org.apache.lucene.analysis.hunspell.Dictionary@1e0b337, true, false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.no.NorwegianLightStemFilter(OneTimeWrapper@3e3989
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.en.EnglishPossessiveFilter(OneTimeWrapper@96b77b
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.shingle.FixedShingleFilter(OneTimeWrapper@d4fade
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  3)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=8C3CDE29C6D4A774 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ms 
> -Dtests.timezone=Europe/Saratov -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.42s J2 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.lang.AssertionError: finalOffset 
> expected:<74> but was:<73>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8C3CDE29C6D4A774:E66761389F9A8787]:0)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:320)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:324)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:860)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:893)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {dummy=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1890, 
> maxMBSortInHeap=7.329943162959591, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ms, timezone=Europe/Saratov
>[junit4]   2> NOTE: Linux 4.13.0-41-generic i386/Oracle Corporation 
> 1.8.0_172 (32-bit)/cpus=8,threads=1,free=313060856,total=533725184
> {noformat}




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22324 - Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22324/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=19586, 
name=cdcr-replicator-6407-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=19586, name=cdcr-replicator-6407-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([72C6C4732714E1B0]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14291 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_72C6C4732714E1B0-001/init-core-data-001
   [junit4]   2> 1698970 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[72C6C4732714E1B0]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.cdcr.CdcrBidirectionalTest_72C6C4732714E1B0-001/cdcr-cluster2-001
   [junit4]   2> 1698970 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[72C6C4732714E1B0]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1698970 INFO  (Thread-5603) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1698970 INFO  (Thread-5603) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1698971 ERROR (Thread-5603) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 1699070 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[72C6C4732714E1B0]) [] 
o.a.s.c.ZkTestServer start zk server on port:44699
   [junit4]   2> 1699072 INFO  (zkConnectionManagerCallback-5089-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 1699074 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 1699074 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 1699074 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 1699074 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 1699075 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@23597e0e{/solr,null,AVAILABLE}
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@7cc183c8{HTTP/1.1,[http/1.1]}{127.0.0.1:33423}
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.e.j.s.Server Started @1699120ms
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=33423}
   [junit4]   2> 1699076 ERROR (jetty-launcher-5086-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.s.SolrDispatchFilter Using logger factory 
org.apache.logging.slf4j.Log4jLoggerFactory
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr? version 
8.0.0
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in cloud mode on port null
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 1699076 INFO  (jetty-launcher-5086-thread-1) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2018-06-26T14:26:21.386Z
   [junit4]   2> 1699078 INFO  (zkConnectionManagerCallback-5091-thread-1) [
] 

[jira] [Comment Edited] (SOLR-8897) SSL-related passwords in solr.in.sh are in plain text

2018-06-26 Thread Ian (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523832#comment-16523832
 ] 

Ian edited comment on SOLR-8897 at 6/26/18 2:54 PM:


Thanks [~pluk77] for pointing out that the Jetty password utility doesn't work 
with the collection API.
That was one of the suggestions I was looking into from this 2016 thread:
[http://lucene.472066.n3.nabble.com/Prevent-the-SSL-Keystore-and-Truststore-password-from-showing-up-in-the-Solr-Admin-and-Linux-process-td4257422.html]

[~janhoy] Is there an open ticket about not showing the password in the Solr 
Portal UI, as you suggest?
Also, the solution from SOLR-10307 (which marked this issue as a duplicate) 
resolves the issue by using environment variables.
I don't think that is much of an improvement; see 
https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
(There is another referenced solution using Hadoop, but that doesn't apply 
to me.)

For reference, I'm using Solr 6.6 on Windows.

This is my first time posting here, so I'm not sure of the protocols.
Can this ticket be re-raised/split to:
- Store the password securely at rest (whether via the Jetty password utility 
or another mechanism; my main language is not Java, so what is best practice?)
- Not expose the password in the UI.
- Not expose the password to other processes, where it is likely to be caught 
in memory/crash dumps.
- Update the documentation to show how to configure Solr HTTPS certificate 
passwords securely (even 7.2 still shows setting the password in plain text 
in solr.in.cmd - https://lucene.apache.org/solr/guide/7_2/enabling-ssl.html)

Thanks in advance; let me know how I can help.


was (Author: bigredmachine):
Thanks [~pluk77] for pointing out that the Jetty password utility doesn't work 
with the collection API.
That was one of the suggestions I was looking into from this thread from 2016
[http://lucene.472066.n3.nabble.com/Prevent-the-SSL-Keystore-and-Truststore-password-from-showing-up-in-the-Solr-Admin-and-Linux-process-td4257422.html]

[~janhoy] Is there an open ticket about not showing the password in the Solr 
Portal UI as you suggest?
Also this solution from SOLR-10307 which has marked this issue as a duplicate, 
resolves the issue by using environment variables.
I don't think this is much of an improvement, see 
https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
(There is another solution referenced of using Hadoop, but that doesn't apply 
to me)

For reference I'm using Solr 6.6 on Windows.

This is my first time posting here, so not sure on the protocols.
Can this ticket be re-raised/split?
To solve storing the password securely at rest (If that the Jetty password 
Utility or other mechanism, my main language is not Java, what's best practice?)
Not exposed in the UI.
Not expose the password to other processes, likely to be caught in memory/crash 
dumps.
Update the documentation to show how can configure Solr HTTPS password 
certificates securely, (Even 7.2 still shows setting the password in plain text 
in solr.in.cmd - https://lucene.apache.org/solr/guide/7_2/enabling-ssl.html)

Thanks in advance, let me know how I can help.

> SSL-related passwords in solr.in.sh are in plain text
> -
>
> Key: SOLR-8897
> URL: https://issues.apache.org/jira/browse/SOLR-8897
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, security
>Reporter: Esther Quansah
>Priority: Major
>
> As per the steps mentioned at the following URL, one needs to store the plain 
> text password for the keystore to configure SSL for Solr, which is not a good 
> idea from a security perspective.
> URL: 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL#EnablingSSL-SetcommonSSLrelatedsystemproperties
> Is there any way so that the encrypted password can be stored (instead of 
> plain password) in solr.in.cmd/solr.in.sh to configure SSL?






[jira] [Commented] (SOLR-8897) SSL-related passwords in solr.in.sh are in plain text

2018-06-26 Thread Ian (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523832#comment-16523832
 ] 

Ian commented on SOLR-8897:
---

Thanks [~pluk77] for pointing out that the Jetty password utility doesn't work 
with the collection API.
That was one of the suggestions I was looking into from this thread from 2016
[http://lucene.472066.n3.nabble.com/Prevent-the-SSL-Keystore-and-Truststore-password-from-showing-up-in-the-Solr-Admin-and-Linux-process-td4257422.html]

[~janhoy] Is there an open ticket about not showing the password in the Solr 
Portal UI as you suggest?
Also this solution from SOLR-10307 which has marked this issue as a duplicate, 
resolves the issue by using environment variables.
I don't think this is much of an improvement, see 
https://diogomonica.com/2017/03/27/why-you-shouldnt-use-env-variables-for-secret-data/
(There is another solution referenced of using Hadoop, but that doesn't apply 
to me)

For reference I'm using Solr 6.6 on Windows.

This is my first time posting here, so not sure on the protocols.
Can this ticket be re-raised/split?
To solve storing the password securely at rest (If that the Jetty password 
Utility or other mechanism, my main language is not Java, what's best practice?)
Not exposed in the UI.
Not expose the password to other processes, likely to be caught in memory/crash 
dumps.
Update the documentation to show how can configure Solr HTTPS password 
certificates securely, (Even 7.2 still shows setting the password in plain text 
in solr.in.cmd - https://lucene.apache.org/solr/guide/7_2/enabling-ssl.html)

Thanks in advance, let me know how I can help.

> SSL-related passwords in solr.in.sh are in plain text
> -
>
> Key: SOLR-8897
> URL: https://issues.apache.org/jira/browse/SOLR-8897
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, security
>Reporter: Esther Quansah
>Priority: Major
>
> As per the steps mentioned at the following URL, one needs to store the plain 
> text password for the keystore to configure SSL for Solr, which is not a good 
> idea from a security perspective.
> URL: 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL#EnablingSSL-SetcommonSSLrelatedsystemproperties
> Is there any way so that the encrypted password can be stored (instead of 
> plain password) in solr.in.cmd/solr.in.sh to configure SSL?






[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198175722
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessor.java 
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.EnumSet;
+import java.util.Objects;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+import static 
org.apache.solr.update.processor.NestedUpdateProcessorFactory.NestedFlag;
+
+public class NestedUpdateProcessor extends UpdateRequestProcessor {
+  public static final String splitChar = ".";
+  private EnumSet<NestedFlag> fields;
--- End diff --

Replaced the EnumSet with two booleans as you proposed.
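The trade-off being settled here — a single EnumSet field versus two plain 
booleans — can be sketched in Python (the flag names are hypothetical stand-ins 
for the Java NestedFlag values, not the actual Solr enum):

```python
import enum

class NestedFlag(enum.Flag):
    # Hypothetical feature flags, standing in for the Java enum.
    PATH = enum.auto()
    PARENT = enum.auto()

# Set-of-flags style: one field encodes which features are enabled.
flags = NestedFlag.PATH | NestedFlag.PARENT
path_enabled = bool(flags & NestedFlag.PATH)

# Two-boolean style, as proposed in the review: one field per feature,
# which reads more directly when the feature set is small and fixed.
store_path = True
store_parent = True
print(path_enabled, store_path and store_parent)  # prints: True True
```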


---




[jira] [Commented] (LUCENE-8371) TestRandomChains.testRandomChainsWithLargeStrings() failure

2018-06-26 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523811#comment-16523811
 ] 

Robert Muir commented on LUCENE-8371:
-

There is nothing wrong with the test, don't badapple it. It finds real bugs.

> TestRandomChains.testRandomChainsWithLargeStrings() failure
> ---
>
> Key: LUCENE-8371
> URL: https://issues.apache.org/jira/browse/LUCENE-8371
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
>Assignee: Alan Woodward
>Priority: Major
>
> Reproducing seed for {{TestRandomChains.testRandomChainsWithLargeStrings()}} 
> failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2196/]:
> {noformat}
> Checking out Revision 53ec8224705f4f0d35751b18b3f0168517c43121 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> TEST FAIL: useCharFilter=true text='\ua97b  
> \uebcf\ueb06\uf85b\uf649\uf0b7 esgm s \uabfd 
> \ue11c\udbb4\udc48\ue90d\u0142\u0014\u0018 cr \u30ed\u30a8\u30ec\u30e1  \ud835\udf53\ud835\udc58\ud835\ude2b 
> \ueff5\uda61\ude33\ud94d\udcbb\udb3b\uddc8\u0738 \ua711\ua719 xqu ygvfwc 
> ~?\u0781%'
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2>   
> org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@12c9ec6)
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.core.LowerCaseTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.hunspell.HunspellStemFilter(ValidatingTokenFilter@17533c4
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  org.apache.lucene.analysis.hunspell.Dictionary@1e0b337, true, false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.no.NorwegianLightStemFilter(OneTimeWrapper@3e3989
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.en.EnglishPossessiveFilter(OneTimeWrapper@96b77b
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.shingle.FixedShingleFilter(OneTimeWrapper@d4fade
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  3)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=8C3CDE29C6D4A774 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ms 
> -Dtests.timezone=Europe/Saratov -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.42s J2 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.lang.AssertionError: finalOffset 
> expected:<74> but was:<73>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8C3CDE29C6D4A774:E66761389F9A8787]:0)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:320)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:324)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:860)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:893)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {dummy=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1890, 
> maxMBSortInHeap=7.329943162959591, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ms, timezone=Europe/Saratov
>[junit4]   2> NOTE: Linux 4.13.0-41-generic i386/Oracle Corporation 
> 1.8.0_172 (32-bit)/cpus=8,threads=1,free=313060856,total=533725184
> {noformat}




[jira] [Commented] (LUCENE-8371) TestRandomChains.testRandomChainsWithLargeStrings() failure

2018-06-26 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16523792#comment-16523792
 ] 

Erick Erickson commented on LUCENE-8371:


So I should _not_ BadApple this one on Thursday?

> TestRandomChains.testRandomChainsWithLargeStrings() failure
> ---
>
> Key: LUCENE-8371
> URL: https://issues.apache.org/jira/browse/LUCENE-8371
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
>Assignee: Alan Woodward
>Priority: Major
>
> Reproducing seed for {{TestRandomChains.testRandomChainsWithLargeStrings()}} 
> failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2196/]:
> {noformat}
> Checking out Revision 53ec8224705f4f0d35751b18b3f0168517c43121 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> TEST FAIL: useCharFilter=true text='\ua97b  
> \uebcf\ueb06\uf85b\uf649\uf0b7 esgm s \uabfd 
> \ue11c\udbb4\udc48\ue90d\u0142\u0014\u0018 cr \u30ed\u30a8\u30ec\u30e1  \ud835\udf53\ud835\udc58\ud835\ude2b 
> \ueff5\uda61\ude33\ud94d\udcbb\udb3b\uddc8\u0738 \ua711\ua719 xqu ygvfwc 
> ~?\u0781%'
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2>   
> org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@12c9ec6)
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.core.LowerCaseTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.hunspell.HunspellStemFilter(ValidatingTokenFilter@17533c4
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  org.apache.lucene.analysis.hunspell.Dictionary@1e0b337, true, false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.no.NorwegianLightStemFilter(OneTimeWrapper@3e3989
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.en.EnglishPossessiveFilter(OneTimeWrapper@96b77b
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.shingle.FixedShingleFilter(OneTimeWrapper@d4fade
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  3)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=8C3CDE29C6D4A774 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ms 
> -Dtests.timezone=Europe/Saratov -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.42s J2 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.lang.AssertionError: finalOffset 
> expected:<74> but was:<73>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8C3CDE29C6D4A774:E66761389F9A8787]:0)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:320)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:324)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:860)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:893)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {dummy=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1890, 
> maxMBSortInHeap=7.329943162959591, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ms, timezone=Europe/Saratov
>[junit4]   2> NOTE: Linux 4.13.0-41-generic i386/Oracle Corporation 
> 1.8.0_172 (32-bit)/cpus=8,threads=1,free=313060856,total=533725184
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198158464
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessor.java 
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.EnumSet;
+import java.util.Objects;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+import static 
org.apache.solr.update.processor.NestedUpdateProcessorFactory.NestedFlag;
+
+public class NestedUpdateProcessor extends UpdateRequestProcessor {
+  public static final String splitChar = ".";
+  private EnumSet fields;
--- End diff --

Yes, though in the future, if more metadata is to be added, it will be a lot 
more painful to add more fields


---




[jira] [Created] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-06-26 Thread mosh (JIRA)
mosh created SOLR-12519:
---

 Summary: Support Deeply Nested Docs In Child Documents Transformer
 Key: SOLR-12519
 URL: https://issues.apache.org/jira/browse/SOLR-12519
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: mosh


As discussed in SOLR-12298, to make use of the meta-data fields in SOLR-12441, 
there needs to be a smarter child document transformer, which provides the 
ability to rebuild the original nested documents' structure.
 In addition, I propose the transformer also have the ability to bring back 
only some of the original hierarchy, to prevent unnecessary block join 
queries, e.g.
{code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
 In case my query is for all the children of "a:b" that contain the key "e", 
the query will be broken into two parts:
 1. The parent query "a:b"
 2. The child query "e:*".

If the only children flag is on, the transformer will return the following 
documents:
 {code}[ {"e": "f"}, {"e": "g"} ]{code}

In case the flag was not turned on (perhaps the default state), the whole 
document hierarchy will be returned, containing only the matching children:
{code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}
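The "only children" semantics described above can be sketched in plain Java, independent of Solr's actual transformer API (the class and method names here are illustrative assumptions, using ordinary maps and lists in place of Solr documents):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChildFilterSketch {
    // Hypothetical sketch: given a parent document (a Map) whose childField
    // holds child documents, return only the children containing the given key.
    @SuppressWarnings("unchecked")
    public static List<Map<String, Object>> matchingChildren(Map<String, Object> parent,
                                                             String childField, String key) {
        List<Map<String, Object>> out = new ArrayList<>();
        Object children = parent.get(childField);
        if (children instanceof List) {
            for (Object c : (List<?>) children) {
                if (c instanceof Map && ((Map<?, ?>) c).containsKey(key)) {
                    out.add((Map<String, Object>) c); // unchecked cast, fine for a sketch
                }
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("a", "b");
        doc.put("c", Arrays.asList(
                Map.of("e", "f"), Map.of("e", "g"), Map.of("h", "i")));
        // "only children" mode: return just the matching children
        System.out.println(matchingChildren(doc, "c", "e")); // [{e=f}, {e=g}]
    }
}
```

With the flag off, the transformer would instead keep the parent document and replace the child list with this filtered list, yielding the whole-hierarchy result shown below.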






[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198151910
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.ContentStream;
+import org.apache.solr.common.util.ContentStreamBase;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.request.SolrRequestInfo;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.servlet.SolrRequestParsers;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import static 
org.apache.solr.update.processor.NestedUpdateProcessor.splitChar;
+
+public class TestNestedUpdateProcessor extends SolrTestCaseJ4 {
+
+  private static final String[] childrenIds = { "2", "3" };
+  private static final String grandChildId = "4";
+  private static final String jDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  private static final String errDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children" + splitChar + "a\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @Before
+  public void before() throws Exception {
+assertU(delQ("*:*"));
+assertU(commit());
+  }
+
+  @Test
+  public void testDeeplyNestedURPGrandChild() throws Exception {
+indexSampleData(jDoc);
+
+assertJQ(req("q", IndexSchema.PATH_FIELD_NAME + ":*.grandChild",
+"fl","*",
+"sort","id desc",
+"wt","json"),
+"/response/docs/[0]/id=='" + grandChildId + "'");
+  }
+
+  @Test
+  public void testDeeplyNestedURPChildren() throws Exception {
+final String[] childrenTests = 

[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198150106
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -0,0 +1,158 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update;
+
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.util.ContentStream;
+import org.apache.solr.common.util.ContentStreamBase;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.request.SolrRequestInfo;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.servlet.SolrRequestParsers;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+import static 
org.apache.solr.update.processor.NestedUpdateProcessor.splitChar;
--- End diff --

After Yonik gave some sage advice on this topic once, I now think tests 
ought not to refer to constants in the tested code.  That way, if we later 
change our minds about what those constants refer to, we'll realize we may 
be breaking back-compat.


---




[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198148078
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessor.java 
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.EnumSet;
+import java.util.Objects;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+import static 
org.apache.solr.update.processor.NestedUpdateProcessorFactory.NestedFlag;
+
+public class NestedUpdateProcessor extends UpdateRequestProcessor {
+  public static final String splitChar = ".";
--- End diff --

nitpick: the name "splitChar" suggests we are going to "split" on something, 
but we don't.  I think "PATH_SEP_CHAR" is a better constant name.  If it suits 
the usage details it could be of type char; if not, leave it as a String.


---




[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198147106
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessor.java 
---
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.EnumSet;
+import java.util.Objects;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+import static 
org.apache.solr.update.processor.NestedUpdateProcessorFactory.NestedFlag;
+
+public class NestedUpdateProcessor extends UpdateRequestProcessor {
+  public static final String splitChar = ".";
+  private EnumSet fields;
--- End diff --

Although I do think EnumSet / enums are pretty nifty JDK utilities, in this 
case two booleans would be more clear?


---




[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-06-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r198148926
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessorFactory.java
 ---
@@ -0,0 +1,72 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.EnumSet;
+import java.util.List;
+import java.util.Locale;
+import java.util.stream.Collectors;
+
+import org.apache.solr.common.SolrException;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.solr.common.util.NamedList;
+import org.apache.solr.common.util.StrUtils;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+
+import static org.apache.solr.common.SolrException.ErrorCode.SERVER_ERROR;
+
+public class NestedUpdateProcessorFactory extends 
UpdateRequestProcessorFactory {
+
+  private EnumSet fields;
+  private static final List allowedConfFields = 
Arrays.stream(NestedFlag.values()).map(e -> 
e.toString().toLowerCase(Locale.ROOT)).collect(Collectors.toList());
+
+  public UpdateRequestProcessor getInstance(SolrQueryRequest req, 
SolrQueryResponse rsp, UpdateRequestProcessor next ) {
+return new NestedUpdateProcessor(req, rsp, fields, next);
+  }
+
+  @Override
+  public void init( NamedList args )
--- End diff --

Maybe let's not have any configuration at all -- let the schema be the 
guide.  If the schema contains `IndexSchema.PATH_FIELD_NAME` then we should 
populate it; if not, then don't.  Ditto for `PARENT_FIELD_NAME`.  The 
factory's getInstance method can detect whether there is any work to do and, 
if there is none, return "next", thus avoiding overhead.
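The "return next when there is nothing to do" pattern can be sketched as follows; this is a minimal illustration, not Solr's actual factory API, and the `Processor` interface and `schemaHasPathField` flag are hypothetical stand-ins for the schema check:

```java
public class NoOpFactorySketch {
    // Stand-in for an update request processor stage.
    public interface Processor {
        String process(String doc);
    }

    // If the schema lacks the relevant field, hand back "next" unchanged so
    // this stage adds no per-document overhead at all.
    public static Processor getInstance(boolean schemaHasPathField, Processor next) {
        if (!schemaHasPathField) {
            return next; // nothing to do: skip this stage entirely
        }
        // Hypothetical enrichment step standing in for path-field population.
        return doc -> next.process(doc + "+path");
    }

    public static void main(String[] args) {
        Processor tail = doc -> doc;
        // Schema without the field: we get the tail processor back, untouched.
        System.out.println(getInstance(false, tail) == tail); // true
        System.out.println(getInstance(true, tail).process("doc"));
    }
}
```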


---




[jira] [Commented] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node using policy

2018-06-26 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523720#comment-16523720
 ] 

Shalin Shekhar Mangar commented on SOLR-11990:
--

Patch updated to master after the commits on SOLR-12506 and SOLR-12507.

> Make it possible to co-locate replicas of multiple collections together in a 
> node using policy
> --
>
> Key: SOLR-11990
> URL: https://issues.apache.org/jira/browse/SOLR-11990
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, 
> SOLR-11990.patch, SOLR-11990.patch
>
>
> It is necessary to co-locate replicas of different collection together in a 
> node when cross-collection joins are performed. The policy rules framework 
> should support this use-case.
> Example: Co-locate exactly 1 replica of collection A in each node where a 
> replica of collection B is present.
> {code}
> {"replica":">0", "collection":"A", "shard":"#EACH", "withCollection":"B"}
> {code}
> This requires changing create collection, create shard and add replica APIs 
> as well because we want a replica of collection A to be created first before 
> a replica of collection B is created so that join queries etc are always 
> possible.






[jira] [Updated] (SOLR-11990) Make it possible to co-locate replicas of multiple collections together in a node using policy

2018-06-26 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-11990:
-
Attachment: SOLR-11990.patch

> Make it possible to co-locate replicas of multiple collections together in a 
> node using policy
> --
>
> Key: SOLR-11990
> URL: https://issues.apache.org/jira/browse/SOLR-11990
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11990.patch, SOLR-11990.patch, SOLR-11990.patch, 
> SOLR-11990.patch, SOLR-11990.patch
>
>
> It is necessary to co-locate replicas of different collection together in a 
> node when cross-collection joins are performed. The policy rules framework 
> should support this use-case.
> Example: Co-locate exactly 1 replica of collection A in each node where a 
> replica of collection B is present.
> {code}
> {"replica":">0", "collection":"A", "shard":"#EACH", "withCollection":"B"}
> {code}
> This requires changing create collection, create shard and add replica APIs 
> as well because we want a replica of collection A to be created first before 
> a replica of collection B is created so that join queries etc are always 
> possible.






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523715#comment-16523715
 ] 

mosh commented on SOLR-12441:
-

Yes, this is probably a discussion for another ticket, as it can be done using 
a block join query anyway.
I pushed a new commit a few hours ago which eliminates the level field.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
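The path/regex matching described in the quote can be sketched with plain `java.util.regex` (a sketch only, not the actual query implementation; the dotted example paths are assumptions):

```java
import java.util.List;
import java.util.regex.Pattern;

public class NestPathSketch {
    // Match a dotted nest path such as "first.second.third" against a
    // caller-supplied regular expression, as the proposal describes.
    public static boolean matches(String nestPath, String pathRegex) {
        return Pattern.matches(pathRegex, nestPath);
    }

    public static void main(String[] args) {
        List<String> paths = List.of("first.second.third", "first.other", "second.third");
        // Any path whose last segment is "third", regardless of nesting level:
        paths.stream()
             .filter(p -> matches(p, "(.*\\.)?third"))
             .forEach(System.out::println); // first.second.third, second.third
    }
}
```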






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523710#comment-16523710
 ] 

David Smiley commented on SOLR-12441:
-

I still don't think I appreciate the real-world use case that brings about the 
need to query by level... and so I'd rather not discuss solutions to a problem 
until I appreciate that problem.  I do understand what you mean by query for 
levels >= some number but, practically speaking what might a higher level 
search requirement look like that asks for this?  I could invent crazy ones to 
force-fit the need, but I'd rather you tell me about a real/practical need.  If 
it seems very esoteric then let's not add it -- it's within the realm of the 
possible for search apps to handle this themselves (e.g. they can add a URP 
themselves).

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523703#comment-16523703
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit 17d253262cfc1ca24a13618aa1811b21342be267 in lucene-solr's branch 
refs/heads/branch_7x from noble
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=17d2532 ]

SOLR-11985 :  Added a test for pecentage with replica type


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone
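The arithmetic in the two examples above can be reproduced in a few lines (a sketch of the rule semantics, not the actual policy engine; the method name is an assumption): the percentage applies to the replicas in the rule's scope -- one shard's replicas when `"shard":"#EACH"` is present, all of the collection's replicas otherwise.

```java
public class ReplicaPercentSketch {
    // Computed replica limit for a percentage-based policy rule.
    public static double computedReplicaLimit(int replicationFactor, int numShards,
                                              boolean perShard, double percent) {
        // per-shard rule scopes to one shard's replicas; otherwise to all replicas
        int scope = perShard ? replicationFactor : replicationFactor * numShards;
        return scope * percent / 100.0;
    }

    public static void main(String[] args) {
        // collection "A": 2 shards, replicationFactor=3
        System.out.println(computedReplicaLimit(3, 2, true, 34));  // 1.02 (example 1)
        System.out.println(computedReplicaLimit(3, 2, false, 34)); // 2.04 (example 2)
    }
}
```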






[JENKINS] Lucene-Solr-repro - Build # 888 - Still Unstable

2018-06-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/888/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1571/consoleText

[repro] Revision: ccfae65050388d75e79db04bda05f9d31bc81537

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestStressCloudBlindAtomicUpdates 
-Dtests.method=test_stored_idx -Dtests.seed=915C419EB633AB03 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-MA -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=915C419EB633AB03 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CU -Dtests.timezone=Antarctica/Syowa -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestComputePlanAction 
-Dtests.method=testNodeAdded -Dtests.seed=915C419EB633AB03 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=be -Dtests.timezone=Pacific/Nauru -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1eb2676f27ad4f3913c0f9f43b08e8f3faf889a0
[repro] git fetch
[repro] git checkout ccfae65050388d75e79db04bda05f9d31bc81537

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestComputePlanAction
[repro]   TestStressCloudBlindAtomicUpdates
[repro]   SolrRrdBackendFactoryTest
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestComputePlanAction|*.TestStressCloudBlindAtomicUpdates|*.SolrRrdBackendFactoryTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=915C419EB633AB03 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=be -Dtests.timezone=Pacific/Nauru -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 683676 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   1/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction
[repro]   4/5 failed: org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates
[repro] git checkout 1eb2676f27ad4f3913c0f9f43b08e8f3faf889a0

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523678#comment-16523678
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit 7fb36c59062275b3bcd810c6035c073798124e56 in lucene-solr's branch 
refs/heads/master from noble
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7fb36c5 ]

SOLR-11985: Added a test for percentage with replica type


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
> Take the following 2 rules:
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> Assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}.
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}},
> which means *_for each shard_* keep fewer than 1.02 replicas in the east 
> availability zone.
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}},
> which means _*for each collection*_ keep fewer than 2.04 replicas in the east 
> availability zone.
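The arithmetic in the two examples can be checked with a small sketch; the function name and signature here are invented for illustration, not part of the policy API:

```python
# Illustrative check of the percentage arithmetic above; the function name
# and signature are invented for this sketch, not part of the policy API.
def computed_replica_bound(percent, replication_factor, num_shards, per_shard):
    """Return the bound a '<N%' replica rule resolves to at runtime."""
    # a per-shard rule scales by replicationFactor only; a collection-wide
    # rule scales by replicationFactor * number of shards
    total_replicas = replication_factor * (1 if per_shard else num_shards)
    return total_replicas * percent / 100.0

# Collection "A": 2 shards, replicationFactor=3
print(computed_replica_bound(34, 3, 2, per_shard=True))   # example 1: 1.02
print(computed_replica_bound(34, 3, 2, per_shard=False))  # example 2: 2.04
```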



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Resolved] (SOLR-12389) Support nested properties in cluster props

2018-06-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-12389.
---
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> Support nested properties in cluster props
> --
>
> Key: SOLR-12389
> URL: https://issues.apache.org/jira/browse/SOLR-12389
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
> Environment: cluster props API does not support nested objects. 
>  
>  
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> A new command is added to the V2 endpoint to set deeply nested objects.
> example 1:
> {code}
> $ curl http://localhost:8983/api/cluster -H 'Content-type: application/json' 
> -d '
> { "set-obj-property":  {
>   "collectionDefaults" :{
>  "numShards": 3, 
>  "nrtReplicas": 2,
>  "tlogReplicas":3,
>  "pullReplicas" : 2
> }}}'
> {code}
> example 2:
> unset the value of {{numShards}}
> {code}
> $ curl http://localhost:8983/api/cluster -H 'Content-type: application/json' 
> -d '
> { "set-obj-property":  {
>   "collectionDefaults" :{
>  "numShards": null
> }}}'
> {code}
> example 3:
> unset the entire {{collectionDefaults}} and set another value for another key 
> all in one command
> {code}
> $ curl http://localhost:8983/api/cluster -H 'Content-type: application/json' 
> -d '
> { "set-obj-property":  {
>  "autoAddReplicas" : true,
>   "collectionDefaults" :null}}'
> {code}
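The examples above imply particular merge semantics for {{set-obj-property}}: nested objects merge key by key, and a {{null}} value unsets a key (or a whole subtree). A minimal Python sketch of that behavior — an illustration, not Solr's implementation:

```python
# Hedged sketch of the merge semantics implied by the examples above:
# nested objects merge recursively, and a null (None) value unsets a key.
def merge_obj(current, delta):
    merged = dict(current)
    for key, value in delta.items():
        if value is None:                                # null unsets the key
            merged.pop(key, None)
        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_obj(merged[key], value)  # recurse into objects
        else:
            merged[key] = value
    return merged

props = {"collectionDefaults": {"numShards": 3, "nrtReplicas": 2}}
# example 2 above: unset numShards only
props = merge_obj(props, {"collectionDefaults": {"numShards": None}})
print(props)  # {'collectionDefaults': {'nrtReplicas': 2}}
```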






[jira] [Resolved] (SOLR-12294) System collection - Lazy loading mechanism not working for custom UpdateProcessors

2018-06-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-12294.
---
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> System collection - Lazy loading mechanism not working for custom 
> UpdateProcessors
> --
>
> Key: SOLR-12294
> URL: https://issues.apache.org/jira/browse/SOLR-12294
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.3
>Reporter: Johannes Brucher
>Assignee: Noble Paul
>Priority: Critical
> Fix For: 7.4, master (8.0)
>
> Attachments: no_active_replica_available.png, schema.xml, 
> solrconfig.xml, update-processor-0.0.1-SNAPSHOT.jar
>
>
> Hi all,
> I'm facing an issue regarding custom code inside a .system-collection and 
> starting up a Solr Cloud cluster.
> I thought, like it's stated in the documentation, that when using the 
> .system collection custom code is lazily loaded, because it can happen that a 
> collection that uses custom code is initialized before the .system collection 
> is up and running.
> I did all the necessary configuration and while debugging, I can see that the 
> custom code is wrapped via a PluginBag$LazyPluginHolder. So far it seems 
> good, but I still get exceptions when starting the Solr Cloud with the 
> following errors:
> SolrException: Blob loading failed: .no active replica available for .system 
> collection...
> In my case I'm using custom code for a couple of UpdateProcessors. So it 
> seems, that this lazy mechanism is not working well for UpdateProcessors.
> Inside the class LazyPluginHolder the comment says:
> "A class that loads plugins Lazily. When the get() method is invoked the 
> Plugin is initialized and returned."
> When a core is initialized and you have a custom UpdateProcessor, the 
> get-method is invoked directly and the lazy loading mechanism tries to get 
> the custom class from the MemClassLoader, but in most scenarios the system 
> collection is not up and the above Exception is thrown...
> So maybe it’s the case that for UpdateProcessors, while initializing a core, 
> the routine is not implemented optimally for the lazy loading mechanism?
>  
> Here are the steps to reproduce the issue:
>  # Unpack solr 7.3.0
>  1.1 Add SOLR_OPTS="$SOLR_OPTS -Denable.runtime.lib=true" to solr.in.sh
>  1.2 Start Solr with -c option
>  # Setup .system collection:
>  2.1 Upload custom test jar --> curl -X POST -H 'Content-Type: 
> application/octet-stream' --data-binary @<path to jar>/update-processor-0.0.1-SNAPSHOT.jar http://<host:port of solr>/solr/.system/blob/test_blob
>  2.2 Alter maxShardsPerNode --> 
> .../admin/collections?action=MODIFYCOLLECTION&collection=.system&maxShardsPerNode=2
>  2.3 Add replica to .system collection --> 
> .../admin/collections?action=ADDREPLICA&collection=.system&shard=shard1
>  # Setup test collection:
>  3.1 Upload test conf to ZK --> ./zkcli.sh -zkhost <zkhost> -cmd 
> upconfig -confdir <confdir> -confname test_conf
>  3.2 Create a test1 collection with commented UP-chain inside solrconfig.xml 
> via Admin UI
>  3.3 Add blob to test collection --> curl http://<host:port of Solr>/solr/test1/config -H 'Content-type:application/json' -d 
> '{"add-runtimelib": { "name":"test_blob", "version":1 }}'
>  3.4 Uncomment the UP-chain and upload test_conf again --> ./zkcli.sh -zkhost 
> <zkhost> -cmd upconfig -confdir <confdir> -confname test_conf
>  3.5 Reload test1 collection
>  3.6 Everything should work as expected now (no errors are shown)
>  # Restart SOLR
>  4.1 Now you can see: SolrException: Blob loading failed: No active replica 
> available for .system collection
> Expected: The lazy loading mechanism should work for UpdateProcessors inside 
> core init routine, but it isn't due to above Exception.
> Sometimes you are lucky and the test1 collection will be initialized after the 
> .system collection. But in ~90% of the time this isn't the case...
> Let me know if you need further details here,
>  
> Johannes
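The lazy-holder pattern the report describes can be sketched as follows. Class and function names here are hypothetical stand-ins, not Solr's actual code; the point is that the expensive class load happens inside get(), so anything that calls get() eagerly during core init (as reported for UpdateProcessors) defeats the laziness:

```python
# Minimal sketch of a lazy plugin holder: the plugin class is resolved only
# when get() is first called. Names are hypothetical, not Solr's.
class LazyPluginHolder:
    def __init__(self, class_name, loader):
        self.class_name = class_name
        self.loader = loader          # e.g. resolves bytecode from .system blobs
        self._instance = None

    def get(self):
        if self._instance is None:
            # first access triggers the real load; this is the step that
            # fails when the .system collection has no active replica yet
            cls = self.loader(self.class_name)
            self._instance = cls()
        return self._instance
```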






[jira] [Commented] (SOLR-11823) Incorrect number of replica calculation when using Restore Collection API

2018-06-26 Thread Ansgar Wiechers (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523642#comment-16523642
 ] 

Ansgar Wiechers commented on SOLR-11823:


That is correct.

> Incorrect number of replica calculation when using Restore Collection API
> -
>
> Key: SOLR-11823
> URL: https://issues.apache.org/jira/browse/SOLR-11823
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.1
>Reporter: Ansgar Wiechers
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> I'm running Solr 7.1 (didn't test other versions) in SolrCloud mode on a 
> 3-node cluster and tried using the backup/restore API for the first time. 
> Backup worked fine, but when trying to restore the backed-up collection I ran 
> into an unexpected problem with the replication factor setting.
> I expected the command below to restore a backup of the collection "demo" 
> with 3 shards, creating 2 replicas per shard. Instead it's trying to create 6 
> replicas per shard:
> {noformat}
> # curl -s -k 
> 'https://localhost:8983/solr/admin/collections?action=RESTORE&name=demo&location=/srv/backup/solr/solr-dev&collection=demo&maxShardsPerNode=2&replicationFactor=2'
> {
>   "error": {
> "code": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number 
> of available nodes.",
> "metadata": [
>   "error-class",
>   "org.apache.solr.common.SolrException",
>   "root-error-class",
>   "org.apache.solr.common.SolrException"
> ]
>   },
>   "exception": {
> "rspCode": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number of 
> available nodes."
>   },
>   "Operation restore caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Solr cloud with available number of nodes:3 is insufficient for restoring a 
> collection with 3 shards, total replicas per shard 6 and maxShardsPerNode 2. 
> Consider increasing maxShardsPerNode value OR number of available nodes.",
>   "responseHeader": {
> "QTime": 28,
> "status": 400
>   }
> }
> {noformat}
> Restoring a collection with only 2 shards tries to create 6 replicas as well, 
> so it looks to me like the restore API multiplies the replication factor by 
> the number of nodes, which is not how the replication factor behaves in other 
> contexts. The 
> [documentation|https://lucene.apache.org/solr/guide/7_1/collections-api.html] 
> also didn't lead me to expect this behavior:
> {quote}
> replicationFactor
>The number of replicas to be created for each shard.
> {quote}
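A sketch of the capacity check behind the error message above (names are illustrative). The reported behavior is consistent with the restore path computing replicas per shard as replicationFactor times the number of nodes (2 * 3 = 6) instead of plain replicationFactor:

```python
# Hedged sketch of the capacity check in the error message above.
def restore_capacity_ok(nodes, shards, replicas_per_shard, max_shards_per_node):
    # total cores to place must fit into nodes * maxShardsPerNode slots
    return shards * replicas_per_shard <= nodes * max_shards_per_node

# What the user expected: replicationFactor=2 per shard on a 3-node cluster
print(restore_capacity_ok(nodes=3, shards=3, replicas_per_shard=2,
                          max_shards_per_node=2))   # True
# What the restore API apparently computed: 2 * 3 nodes = 6 per shard
print(restore_capacity_ok(nodes=3, shards=3, replicas_per_shard=6,
                          max_shards_per_node=2))   # False
```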






[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523579#comment-16523579
 ] 

ASF subversion and git services commented on SOLR-11807:


Commit c33bb65cf6e45a131d36c887aac77b7791d43bcf in lucene-solr's branch 
refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c33bb65 ]

SOLR-11807: Test code didn't take into account changing maxShardsPerNode for 
one code path


> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2. 
>  
> Compare this to Solr 7.3 
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode and it 
> defaults to -1. 
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=RESTORE&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]
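The special handling the report asks for can be sketched as: treat -1 as "unlimited" in restore's capacity math, the same way create does. Helper names below are hypothetical:

```python
# Sketch of the special handling: -1 means "unlimited" maxShardsPerNode,
# so restore must not use it literally in capacity arithmetic.
UNLIMITED = -1

def effective_max_shards_per_node(value):
    # -1 is a sentinel for "no limit"; everything else is taken literally
    return float("inf") if value == UNLIMITED else value

def can_restore(nodes, total_cores, max_shards_per_node):
    return total_cores <= nodes * effective_max_shards_per_node(max_shards_per_node)

print(can_restore(2, 4, -1))  # True: -1 treated as unlimited
print(can_restore(2, 4, 1))   # False: only 2 cores fit on 2 nodes
```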






[jira] [Commented] (SOLR-11665) Improve SplitShardCmd error handling

2018-06-26 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523548#comment-16523548
 ] 

Andrzej Bialecki  commented on SOLR-11665:
--

This patch contains the following improvements and fixes:
 * failures at any phase in the process are followed by an explicit cleanup, 
which removes partially constructed shards
 * sub-shards are now created with the same number of replicas per replica type 
as the parent shard - previously only NRT replicas would be created.
 * fixed a bug in coreName construction so that the new cores follow the same 
naming pattern per replica type as the parent shards
 * SplitShardCmd now first checks the amount of available disk space on the 
parent shard leader's node to ensure that the actual index splitting has enough 
disk space to proceed.
 * unit test changes, and a test that verifies the correct number of replicas 
per type in the new sub-shards.

> Improve SplitShardCmd error handling
> 
>
> Key: SOLR-11665
> URL: https://issues.apache.org/jira/browse/SOLR-11665
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Cao Manh Dat
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11665.patch, SOLR-11665.test.patch
>
>
> I do see some problems when doing split shard but there are no available 
> nodes for creating replicas ( due to policy framework )
> - The patch contains a test, in which sub shard stay in CONSTRUCTION state 
> forever.
> - Shard which get split, stay in inactive state forever and subshards are not 
> created






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2018-06-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523546#comment-16523546
 ] 

Jan Høydahl commented on SOLR-12514:


There are a bunch of complexities arising here, so it's tempting to throw a 302 
as a quick-fix. We can then advertise that clients using this type of auth can 
either comply with 302 responses or preemptively read the cluster state and 
route requests intelligently to nodes hosting a collection - the same way as 
CloudSolrClient already does.

If we go down the proxy-style forwarding route, I imagine that we get other 
problems down the road. Say we add an IP-based auth plugin and then the request 
gets forwarded to another node, and IP auth fails, etc.

On the other hand - if we want Admin UI to play nicely with an authenticated 
user (given that the UI gets auth support in the future), we end up with CORS 
issues if the UI has to send query requests to different nodes...
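The 302 quick-fix under discussion can be sketched as follows. The handler and state shapes are hypothetical; the point is only that a node without a local replica redirects the client to one that has the collection, so authorization always runs on a node that serves the request:

```python
# Illustrative sketch of the 302 approach: redirect instead of proxying
# when the queried node holds no replica of the collection.
def authorize(request):
    """Stand-in for the rule-based authorization check (assumed hook)."""
    return True

def handle(request, local_collections, cluster_state):
    coll = request["collection"]
    if coll not in local_collections:
        # no local replica: redirect rather than serve (and skip auth)
        target = cluster_state[coll][0]       # any node hosting the collection
        return {"status": 302,
                "Location": f"http://{target}/solr/{coll}/select"}
    authorize(request)                        # auth runs with local state
    return {"status": 200}
```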

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Priority: Major
> Attachments: SOLR-12514.patch, Screen Shot 2018-06-24 at 9.36.45 
> PM.png, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
>  SolrCloud setup consists of 2 nodes (solr-7.3.1):
>  live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
>  ]
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
>  "authorization":{
>  "class":"solr.RuleBasedAuthorizationPlugin",
>  "permissions":[
> { "name":"read", "collection":"collection-rf-2", "role":"collection-rf-2", 
> "index":1}
> ,
> { "name":"read", "collection":"collection-rf-1", "role":"collection-rf-1", 
> "index":2}
> ,
> { "name":"read", "role":"*", "index":3}
> ,
>  ...
>  "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]}
> ,
>  ...
>  
> Basically, it's set up so that the 'collection-rf-1-user' user can only access 
> the 'collection-rf-1' collection and the 'collection-rf-2-user' user can only 
> access the 'collection-rf-2' collection.
> Also note that 'collection-rf-1' collection replica is only on 
> 'localhost:8983_solr' node, whereas 'collection-rf-2' collection replica is 
> on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
>  
>  Error 403 Unauthorized request, Response code: 403
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
>  {
>  "responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>  "q":"*:*"}},
>  "response":{"numFound":0,"start":0,"docs":[]
>  }}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
>  
>  Error 403 Unauthorized request, Response code: 403
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP 

[jira] [Updated] (SOLR-11665) Improve SplitShardCmd error handling

2018-06-26 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11665:
-
Attachment: SOLR-11665.patch







[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2018-06-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523540#comment-16523540
 ] 

Noble Paul commented on SOLR-12514:
---

Why not forward all headers other than a few blacklisted ones? Even 302 is a 
good option. We just have to ensure that SolrJ works correctly with 302.






[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-06-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523537#comment-16523537
 ] 

Noble Paul commented on SOLR-12161:
---

I shall do this before 7.5

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, SOLR-12161.patch, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?
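One way to picture the suspected failure mode: when CloudSolrClient fans an update out to individual shard leaders, each sub-request must re-attach the caller's credentials; if the fan-out path forgets to, updates bypass auth while direct requests do not. A hedged sketch with hypothetical names throughout:

```python
# Hypothetical sketch of per-shard update fan-out that explicitly carries
# the caller's credentials on every sub-request. Not SolrJ's actual code.
def fan_out_update(docs, route, credentials):
    by_shard = {}
    for doc in docs:
        by_shard.setdefault(route(doc), []).append(doc)
    # each sub-request carries the original credentials explicitly;
    # dropping this field is the kind of bug the comment speculates about
    return [{"shard": shard, "docs": batch, "auth": credentials}
            for shard, batch in by_shard.items()]

reqs = fan_out_update([{"id": 1}, {"id": 2}],
                      lambda d: f"shard{d['id'] % 2}",
                      ("user", "pass"))
print(all(r["auth"] == ("user", "pass") for r in reqs))  # True
```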






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2018-06-26 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523532#comment-16523532
 ] 

Jan Høydahl commented on SOLR-12514:


Agree that we need a common generic solution to this, not custom solutions per 
plugin.

Do we currently have a way to forward a request as-is, i.e. retaining certain 
request headers while dropping others etc? Would not the Solr node forwarding 
the request need to act as a some kind of [HTTP 
proxy|https://www.mnot.net/blog/2011/07/11/what_proxies_must_do]? Or could we 
respond with HTTP 302 moved temporarily and return the address of a node 
actually hosting the collection :)

> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Priority: Major
> Attachments: SOLR-12514.patch, Screen Shot 2018-06-24 at 9.36.45 
> PM.png, security.json
>
>
> Solr serves client requests going throught 3 steps - init(), authorize() and 
> handle-request ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize(). 
> init() skips initializing if request is to be served remotely, which leads to 
> skipping authorization step ([link 
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on 'cores' object which only has information of local node 
> (which is perfect as per design). It should actually be getting security 
> information (security.json) from zookeeper, which has global view of the 
> cluster.
>  
> Example:
>  SolrCloud setup consists of 2 nodes (solr-7.3.1):
>  live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
>  ]
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
>  "authorization":{
>  "class":"solr.RuleBasedAuthorizationPlugin",
>  "permissions":[
> { "name":"read", "collection":"collection-rf-2", "role":"collection-rf-2", 
> "index":1}
> ,
> { "name":"read", "collection":"collection-rf-1", "role":"collection-rf-1", 
> "index":2}
> ,
> { "name":"read", "role":"*", "index":3}
> ,
>  ...
>  "user-role":
> { "collection-rf-1-user":[ "collection-rf-1"], "collection-rf-2-user":[ 
> "collection-rf-2"]}
> ,
>  ...
>  
> Basically, it is set up so that the 'collection-rf-1-user' user can only 
> access the 'collection-rf-1' collection and the 'collection-rf-2-user' user 
> can only access the 'collection-rf-2' collection.
> Also note that the 'collection-rf-1' collection has a replica only on the 
> 'localhost:8983_solr' node, whereas the 'collection-rf-2' collection has a 
> replica on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8983*/solr/collection-rf-1/select?q=*:*'
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-1/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> $ curl -u collection-rf-2-user:password 
> 'http://*localhost:8984*/solr/collection-rf-1/select?q=*:*'
>  {
>  "responseHeader":{
>  "zkConnected":true,
>  "status":0,
>  "QTime":0,
>  "params":{
>  "q":"*:*"}},
>  "response":{"numFound":0,"start":0,"docs":[]
>  }}
>  
> Whereas authorization works perfectly for 'collection-rf-2' collection (as 
> both nodes have replica):
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8984*/solr/collection-rf-2/select?q=*:*'
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
> $ curl -u collection-rf-1-user:password 
> 'http://*localhost:8983*/solr/collection-rf-2/select?q=*:*'
>  
>  
>  
>  Error 403 Unauthorized request, Response code: 403
>  
>  HTTP ERROR 403
>  Problem accessing /solr/collection-rf-2/select. Reason:
>   Unauthorized request, Response code: 403
>  
>  
>  
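The flow described in the report can be modeled with a toy sketch (class and field names here are illustrative stand-ins, not Solr's actual code): authorization is only evaluated when the receiving node holds a local replica, so a request that gets forwarded bypasses the rules entirely.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch of the reported bug, not Solr source. LOCAL_COLLECTIONS
// stands in for the local-only 'cores' object; READ_ROLE stands in for the
// cluster-wide rules in security.json (which live in ZooKeeper).
public class AuthzFlowSketch {
    static final Set<String> LOCAL_COLLECTIONS = Set.of("collection-rf-2");
    static final Map<String, String> READ_ROLE = Map.of(
        "collection-rf-1", "collection-rf-1",
        "collection-rf-2", "collection-rf-2");

    /** Buggy flow: requests that will be served remotely skip authorize(). */
    static boolean allowBuggy(String collection, String userRole) {
        if (!LOCAL_COLLECTIONS.contains(collection)) {
            return true; // forwarded as-is; authorize() never runs
        }
        return userRole.equals(READ_ROLE.get(collection));
    }

    /** Suggested flow: evaluate the global rules before forwarding. */
    static boolean allowFixed(String collection, String userRole) {
        return userRole.equals(READ_ROLE.get(collection));
    }
}
```

In this model, allowBuggy("collection-rf-1", "collection-rf-2") returns true on the node without a replica, matching the successful curl against localhost:8984 above, while allowFixed returns false for the same request.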



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 655 - Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/655/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=2465

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=2465
at 
__randomizedtesting.SeedInfo.seed([B8A346965880A3D6:80CF35B3CC500190]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=3430

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=3430
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 248 - Failure

2018-06-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/248/

3 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:795)
at java.nio.charset.Charset.encode(Charset.java:843)
at java.nio.charset.Charset.encode(Charset.java:863)
at 
org.apache.http.client.utils.URLEncodedUtils.urlEncode(URLEncodedUtils.java:532)
at 
org.apache.http.client.utils.URLEncodedUtils.encodeFormFields(URLEncodedUtils.java:652)
at 
org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:404)
at 
org.apache.http.client.utils.URLEncodedUtils.format(URLEncodedUtils.java:382)
at 
org.apache.http.client.entity.UrlEncodedFormEntity.<init>(UrlEncodedFormEntity.java:75)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.fillContentStream(HttpSolrClient.java:513)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:420)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:974)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:990)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:228)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:668)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)


FAILED:  
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_stored_idx

Error Message:
Some docs had errors -- check logs expected:<0> but was:<6>

Stack Trace:
java.lang.AssertionError: Some docs had errors -- check logs expected:<0> but 
was:<6>
at 
__randomizedtesting.SeedInfo.seed([BC9584D1D2827C3F:B6B2D5F1ADF4DF68]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:342)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_stored_idx(TestStressCloudBlindAtomicUpdates.java:221)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 

[jira] [Assigned] (LUCENE-8371) TestRandomChains.testRandomChainsWithLargeStrings() failure

2018-06-26 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned LUCENE-8371:
-

Assignee: Alan Woodward

> TestRandomChains.testRandomChainsWithLargeStrings() failure
> ---
>
> Key: LUCENE-8371
> URL: https://issues.apache.org/jira/browse/LUCENE-8371
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Steve Rowe
>Assignee: Alan Woodward
>Priority: Major
>
> Reproducing seed for {{TestRandomChains.testRandomChainsWithLargeStrings()}} 
> failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2196/]:
> {noformat}
> Checking out Revision 53ec8224705f4f0d35751b18b3f0168517c43121 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
>[junit4]   2> TEST FAIL: useCharFilter=true text='\ua97b  
> \uebcf\ueb06\uf85b\uf649\uf0b7 esgm s \uabfd 
> \ue11c\udbb4\udc48\ue90d\u0142\u0014\u0018 cr \u30ed\u30a8\u30ec\u30e1  \ud835\udf53\ud835\udc58\ud835\ude2b 
> \ueff5\uda61\ude33\ud94d\udcbb\udb3b\uddc8\u0738 \ua711\ua719 xqu ygvfwc 
> ~?\u0781%'
>[junit4]   2> Exception from random analyzer: 
>[junit4]   2> charfilters=
>[junit4]   2>   
> org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@12c9ec6)
>[junit4]   2> tokenizer=
>[junit4]   2>   org.apache.lucene.analysis.core.LowerCaseTokenizer()
>[junit4]   2> filters=
>[junit4]   2>   
> org.apache.lucene.analysis.hunspell.HunspellStemFilter(ValidatingTokenFilter@17533c4
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  org.apache.lucene.analysis.hunspell.Dictionary@1e0b337, true, false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.no.NorwegianLightStemFilter(OneTimeWrapper@3e3989
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.en.EnglishPossessiveFilter(OneTimeWrapper@96b77b
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
>[junit4]   2>   
> Conditional:org.apache.lucene.analysis.shingle.FixedShingleFilter(OneTimeWrapper@d4fade
>  
> term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
>  3)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
> -Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=8C3CDE29C6D4A774 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ms 
> -Dtests.timezone=Europe/Saratov -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.42s J2 | 
> TestRandomChains.testRandomChainsWithLargeStrings <<<
>[junit4]> Throwable #1: java.lang.AssertionError: finalOffset 
> expected:<74> but was:<73>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([8C3CDE29C6D4A774:E66761389F9A8787]:0)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:320)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:324)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:860)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
>[junit4]>  at 
> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
>[junit4]>  at 
> org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:893)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
> {dummy=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1890, 
> maxMBSortInHeap=7.329943162959591, sim=RandomSimilarity(queryNorm=false): {}, 
> locale=ms, timezone=Europe/Saratov
>[junit4]   2> NOTE: Linux 4.13.0-41-generic i386/Oracle Corporation 
> 1.8.0_172 (32-bit)/cpus=8,threads=1,free=313060856,total=533725184
> {noformat}






[jira] [Commented] (SOLR-12474) Add an UpdateRequest Object that implements RequestWriter.ContentWriter

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523412#comment-16523412
 ] 

ASF subversion and git services commented on SOLR-12474:


Commit 3d5875f4d83aec7db533580b94a3194ad3b9a98d in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d5875f ]

SOLR-12474: use javadoc style comment


> Add an UpdateRequest Object that implements RequestWriter.ContentWriter
> ---
>
> Key: SOLR-12474
> URL: https://issues.apache.org/jira/browse/SOLR-12474
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>
> There is no standard way to simply use the new 
> {{RequestWriter.ContentWriter}} interface






[jira] [Commented] (SOLR-12474) Add an UpdateRequest Object that implements RequestWriter.ContentWriter

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523409#comment-16523409
 ] 

ASF subversion and git services commented on SOLR-12474:


Commit 980354da8eca2a8069ff285bb0c63519c52b490c in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=980354d ]

SOLR-12474: use javadoc style comment


> Add an UpdateRequest Object that implements RequestWriter.ContentWriter
> ---
>
> Key: SOLR-12474
> URL: https://issues.apache.org/jira/browse/SOLR-12474
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: master (8.0), 7.5
>
>
> There is no standard way to simply use the new 
> {{RequestWriter.ContentWriter}} interface






[jira] [Created] (LUCENE-8371) TestRandomChains.testRandomChainsWithLargeStrings() failure

2018-06-26 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-8371:
--

 Summary: TestRandomChains.testRandomChainsWithLargeStrings() 
failure
 Key: LUCENE-8371
 URL: https://issues.apache.org/jira/browse/LUCENE-8371
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Steve Rowe


Reproducing seed for {{TestRandomChains.testRandomChainsWithLargeStrings()}} 
failure from [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2196/]:

{noformat}
Checking out Revision 53ec8224705f4f0d35751b18b3f0168517c43121 
(refs/remotes/origin/branch_7x)
[...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=true text='\ua97b  
\uebcf\ueb06\uf85b\uf649\uf0b7 esgm s \uabfd 
\ue11c\udbb4\udc48\ue90d\u0142\u0014\u0018 cr \u30ed\u30a8\u30ec\u30e1  \ud835\udf53\ud835\udc58\ud835\ude2b 
\ueff5\uda61\ude33\ud94d\udcbb\udb3b\uddc8\u0738 \ua711\ua719 xqu ygvfwc 
~?\u0781%'
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.fa.PersianCharFilter(java.io.StringReader@12c9ec6)
   [junit4]   2> tokenizer=
   [junit4]   2>   org.apache.lucene.analysis.core.LowerCaseTokenizer()
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.hunspell.HunspellStemFilter(ValidatingTokenFilter@17533c4
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
 org.apache.lucene.analysis.hunspell.Dictionary@1e0b337, true, false)
   [junit4]   2>   
Conditional:org.apache.lucene.analysis.no.NorwegianLightStemFilter(OneTimeWrapper@3e3989
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
   [junit4]   2>   
Conditional:org.apache.lucene.analysis.en.EnglishPossessiveFilter(OneTimeWrapper@96b77b
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false)
   [junit4]   2>   
Conditional:org.apache.lucene.analysis.shingle.FixedShingleFilter(OneTimeWrapper@d4fade
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,termFrequency=1,keyword=false,
 3)
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=8C3CDE29C6D4A774 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ms 
-Dtests.timezone=Europe/Saratov -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 0.42s J2 | 
TestRandomChains.testRandomChainsWithLargeStrings <<<
   [junit4]> Throwable #1: java.lang.AssertionError: finalOffset 
expected:<74> but was:<73>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([8C3CDE29C6D4A774:E66761389F9A8787]:0)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:305)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:320)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:324)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:860)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:659)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:561)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChainsWithLargeStrings(TestRandomChains.java:893)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=Memory)}, docValues:{}, maxPointsInLeafNode=1890, 
maxMBSortInHeap=7.329943162959591, sim=RandomSimilarity(queryNorm=false): {}, 
locale=ms, timezone=Europe/Saratov
   [junit4]   2> NOTE: Linux 4.13.0-41-generic i386/Oracle Corporation 
1.8.0_172 (32-bit)/cpus=8,threads=1,free=313060856,total=533725184
{noformat}






[jira] [Commented] (SOLR-11216) Race condition in peerSync

2018-06-26 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523391#comment-16523391
 ] 

Steve Rowe commented on SOLR-11216:
---

Another reproducing seed from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/56/]:

{noformat}
Checking out Revision 095f9eb90db92649a0805e83ff5a0ec93763a31f 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestStressCloudBlindAtomicUpdates -Dtests.method=test_dv_idx 
-Dtests.seed=3006617022AC9671 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de -Dtests.timezone=Etc/GMT+5 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 14.5s J1 | TestStressCloudBlindAtomicUpdates.test_dv_idx <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Some docs had errors 
-- check logs expected:<0> but was:<4>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([3006617022AC9671:A57A14B8FE703B90]:0)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:342)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv_idx(TestStressCloudBlindAtomicUpdates.java:231)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> 1499188 INFO  
(TEST-TestStressCloudBlindAtomicUpdates.test_dv_stored-seed#[3006617022AC9671]) 
[] o.a.s.SolrTestCaseJ4 ###Starting test_dv_stored
[...]

{noformat}

> Race condition in peerSync
> --
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, 
> SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> When digging into SOLR-10126, I found a case that can make peerSync fail:
> * the leader and the replica receive updates 1 to 4
> * the replica stops
> * the replica misses updates 5, 6
> * the replica starts recovery
> ## the replica buffers updates 7, 8
> ## the replica requests recent versions from the leader
> ## at the same time the leader receives update 9, so it returns updates 1 to 
> 9 when the replica gets the recent versions (i.e. 1,2,3,4,5,6,7,8,9)
> ## the replica does peerSync and requests updates 5, 6, 9 from the leader
> ## the replica applies updates 5, 6, 9; its index does not have updates 7, 
> 8, and maxVersionSpecified for the fingerprint is 9, therefore the 
> fingerprint comparison will fail
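The sequence above can be simulated with a toy fingerprint: a plain version set stands in for Solr's hash-based index fingerprint here (this is an illustrative reconstruction, not Solr code), but the mismatch arises the same way.

```java
import java.util.List;
import java.util.TreeSet;

// Toy reconstruction of the race: the "fingerprint" is simply the set of
// versions up to maxVersion.
public class PeerSyncRaceSketch {
    static TreeSet<Long> fingerprint(TreeSet<Long> versions, long maxVersion) {
        TreeSet<Long> fp = new TreeSet<>();
        for (long v : versions) if (v <= maxVersion) fp.add(v);
        return fp;
    }

    static boolean peerSyncSucceeds() {
        TreeSet<Long> leader =
            new TreeSet<>(List.of(1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L));
        // The replica applied 1-4 before stopping; 7 and 8 are only buffered,
        // not yet in its index.
        TreeSet<Long> replicaIndex = new TreeSet<>(List.of(1L, 2L, 3L, 4L));
        // The leader's recent-versions response includes 9, so the replica
        // requests and applies the missing 5, 6 plus the new 9 ...
        replicaIndex.addAll(List.of(5L, 6L, 9L));
        // ... then compares fingerprints with maxVersionSpecified = 9:
        // the leader's side contains 7 and 8, the replica's does not.
        return fingerprint(leader, 9).equals(fingerprint(replicaIndex, 9));
    }

    public static void main(String[] args) {
        System.out.println(peerSyncSucceeds()); // prints false
    }
}
```

The comparison fails even though updates 7 and 8 are sitting in the replica's buffer, which is exactly the race the issue describes.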






[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-26 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523357#comment-16523357
 ] 

Lucene/Solr QA commented on SOLR-12458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 
 5m  9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  4m 19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 31s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.update.AddBlockUpdateTest |
|   | solr.cloud.autoscaling.IndexSizeTriggerTest |
|   | solr.handler.CSVRequestHandlerTest |
|   | solr.cloud.autoscaling.sim.TestTriggerIntegration |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12929069/SOLR-12458.patch |
| Optional Tests |  checklicenses  validatesourcepatterns  ratsources  compile  
javac  unit  checkforbiddenapis  validaterefguide  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 095f9eb |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/132/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/132/testReport/ |
| modules | C: lucene solr solr/core solr/solr-ref-guide U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/132/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-10.0.1) - Build # 7381 - Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7381/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([D5334340A560A11F:B6F875C23CAFD232]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:112)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:65)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

[jira] [Updated] (SOLR-12518) PreAnalyzedField fails to index documents without tokens

2018-06-26 Thread Yuki Yano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuki Yano updated SOLR-12518:
-
Attachment: SOLR-12518.patch

> PreAnalyzedField fails to index documents without tokens
> 
>
> Key: SOLR-12518
> URL: https://issues.apache.org/jira/browse/SOLR-12518
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Yuki Yano
>Priority: Minor
> Attachments: SOLR-12518.patch
>
>
> h1. Overview
> {{PreAnalyzedField}} fails to index documents without tokens, such as the 
> following data:
> {code:java}
> {
>   "v": "1",
>   "str": "foo",
>   "tokens": []
> }
> {code}
> h1. Details
> {{PreAnalyzedField}} consumes field values that have been analyzed in 
> advance. The format of a pre-analyzed value is as follows:
> {code:java}
> {
>   "v":"1",
>   "str":"test",
>   "tokens": [
> {"t":"one","s":123,"e":128,"i":22,"p":"DQ4KDQsODg8=","y":"word"},
> {"t":"two","s":5,"e":8,"i":1,"y":"word"},
> {"t":"three","s":20,"e":22,"i":1,"y":"foobar"}
>   ]
> }
> {code}
> As [the 
> document|https://lucene.apache.org/solr/guide/7_3/working-with-external-files-and-processes.html#WorkingwithExternalFilesandProcesses]
>  mentions, {{"str"}} and {{"tokens"}} are optional, i.e., both an empty value 
> and no key are allowed. However, when {{"tokens"}} is empty or not defined, 
> {{PreAnalyzedField}} throws an IOException and fails to index the document.
> This error is related to the behavior of {{Field#tokenStream}}. This method 
> tries to create a {{TokenStream}} via the following steps (NOTE: assume 
> {{indexed=true}}):
>  * If the field has a {{tokenStream}} value, return it.
>  * Otherwise, create a {{tokenStream}} by parsing the stored value.
> If the pre-analyzed value doesn't have tokens, the second step is executed. 
> Unfortunately, since {{PreAnalyzedField}} always returns 
> {{PreAnalyzedAnalyzer}} as the index analyzer and the stored value (i.e., the 
> value of {{"str"}}) is not in the pre-analyzed format, this step fails with 
> a pre-analyzed format error (i.e., an IOException).
> h1. How to reproduce
> 1. Download the latest Solr package and prepare a Solr server according to the 
> [Solr Tutorial|http://lucene.apache.org/solr/guide/7_3/solr-tutorial.html].
>  2. Add the following fieldType and field to the schema.
> {code:xml}
> <!-- The markup below was stripped by the mail archive; this is a minimal
>      reconstruction, and the fieldType name is illustrative. -->
> <fieldType name="preanalyzed" class="solr.PreAnalyzedField"/>
> <field name="pre_with_analyzer" type="preanalyzed"
>  indexed="true" stored="true" multiValued="false"/>
> {code}
> 3. Index the following documents; Solr will throw an IOException for the last two.
> {code:java}
> // This is OK
> {"id": 1, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document 
> one\',\'tokens\':[{\'t\':\'one\'},{\'t\':\'two\'},{\'t\':\'three\',\'i\':100}]}"}
> // Solr throws IOException
> {"id": 2, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document two\', 
> \'tokens\':[]}"}
> // Solr throws IOException
> {"id": 3, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document three\'}"}
> {code}
> h1. How to fix
> Because we don't need to analyze again when {{"tokens"}} is empty or not set, 
> we can avoid this error by using an {{EmptyTokenStream}} as the 
> {{tokenStream}}, as in the following code:
> {code:java}
> parse.hasTokenStream() ? parse : new EmptyTokenStream()
> {code}
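To make the selection concrete, here is a minimal, self-contained sketch of that guard; {{ParseResult}} and the {{EMPTY_TOKEN_STREAM}} placeholder are illustrative stand-ins, not Solr's actual classes:

```java
import java.util.Collections;
import java.util.List;

// Hedged stand-ins (ParseResult / EMPTY_TOKEN_STREAM are illustrative names,
// not Solr's classes) showing the selection logic of the proposed fix.
public class PreAnalyzedGuardSketch {
    static class ParseResult {
        final List<String> tokens;
        ParseResult(List<String> tokens) { this.tokens = tokens; }
        // True when the pre-analyzed payload carried at least one token.
        boolean hasTokenStream() { return tokens != null && !tokens.isEmpty(); }
    }

    // Placeholder for Lucene's EmptyTokenStream.
    static final Object EMPTY_TOKEN_STREAM = new Object();

    // The proposed guard: if the payload has no tokens, hand the indexer an
    // empty stream instead of falling back to re-analyzing the stored value.
    static Object tokenSource(ParseResult parse) {
        return parse.hasTokenStream() ? parse : EMPTY_TOKEN_STREAM;
    }

    public static void main(String[] args) {
        ParseResult withTokens = new ParseResult(List.of("one", "two", "three"));
        ParseResult noTokens = new ParseResult(Collections.emptyList());
        System.out.println(tokenSource(withTokens) == withTokens);       // true
        System.out.println(tokenSource(noTokens) == EMPTY_TOKEN_STREAM); // true
    }
}
```

Because the empty-tokens case never reaches the stored-value parsing path, the IOException described above cannot be triggered.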






[jira] [Created] (SOLR-12518) PreAnalyzedField fails to index documents without tokens

2018-06-26 Thread Yuki Yano (JIRA)
Yuki Yano created SOLR-12518:


 Summary: PreAnalyzedField fails to index documents without tokens
 Key: SOLR-12518
 URL: https://issues.apache.org/jira/browse/SOLR-12518
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: update
Reporter: Yuki Yano
 Attachments: SOLR-12518.patch

h1. Overview

{{PreAnalyzedField}} fails to index documents without tokens, such as the 
following data:
{code:java}
{
  "v": "1",
  "str": "foo",
  "tokens": []
}
{code}
h1. Details

{{PreAnalyzedField}} consumes field values that have been analyzed in 
advance. The format of a pre-analyzed value is as follows:
{code:java}
{
  "v":"1",
  "str":"test",
  "tokens": [
{"t":"one","s":123,"e":128,"i":22,"p":"DQ4KDQsODg8=","y":"word"},
{"t":"two","s":5,"e":8,"i":1,"y":"word"},
{"t":"three","s":20,"e":22,"i":1,"y":"foobar"}
  ]
}
{code}
As [the 
document|https://lucene.apache.org/solr/guide/7_3/working-with-external-files-and-processes.html#WorkingwithExternalFilesandProcesses]
 mentions, {{"str"}} and {{"tokens"}} are optional, i.e., both an empty value 
and no key are allowed. However, when {{"tokens"}} is empty or not defined, 
{{PreAnalyzedField}} throws an IOException and fails to index the document.

This error is related to the behavior of {{Field#tokenStream}}. This method 
tries to create a {{TokenStream}} via the following steps (NOTE: assume 
{{indexed=true}}):
 * If the field has a {{tokenStream}} value, return it.
 * Otherwise, create a {{tokenStream}} by parsing the stored value.

If the pre-analyzed value doesn't have tokens, the second step is executed. 
Unfortunately, since {{PreAnalyzedField}} always returns 
{{PreAnalyzedAnalyzer}} as the index analyzer and the stored value (i.e., the 
value of {{"str"}}) is not in the pre-analyzed format, this step fails with a 
pre-analyzed format error (i.e., an IOException).
h1. How to reproduce

1. Download the latest Solr package and prepare a Solr server according to the 
[Solr Tutorial|http://lucene.apache.org/solr/guide/7_3/solr-tutorial.html].
 2. Add the following fieldType and field to the schema.
{code:xml}
<!-- The markup below was stripped by the mail archive; this is a minimal
     reconstruction, and the fieldType name is illustrative. -->
<fieldType name="preanalyzed" class="solr.PreAnalyzedField"/>
<field name="pre_with_analyzer" type="preanalyzed"
       indexed="true" stored="true" multiValued="false"/>
{code}
3. Index the following documents; Solr will throw an IOException for the last two.
{code:java}
// This is OK
{"id": 1, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document 
one\',\'tokens\':[{\'t\':\'one\'},{\'t\':\'two\'},{\'t\':\'three\',\'i\':100}]}"}

// Solr throws IOException
{"id": 2, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document two\', 
\'tokens\':[]}"}

// Solr throws IOException
{"id": 3, "pre_with_analyzer": "{\'v\':\'1\',\'str\':\'document three\'}"}
{code}
h1. How to fix

Because we don't need to analyze again when {{"tokens"}} is empty or not set, 
we can avoid this error by using an {{EmptyTokenStream}} as the {{tokenStream}}, 
as in the following code:
{code:java}
parse.hasTokenStream() ? parse : new EmptyTokenStream()
{code}






[jira] [Comment Edited] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523300#comment-16523300
 ] 

mosh edited comment on SOLR-12441 at 6/26/18 6:54 AM:
--

{quote}Can you please provide further justification for _nestLevel_ like a more 
fleshed out scenario?{quote}
After sleeping on this, it seems this can be solved with a simple regex.
This field was meant for the case where the user only wants to query certain 
levels of the nested document, but that can be achieved with a regex that counts 
the occurrences of the split char ('.') in the field name. This can easily be 
done in a transformer.
The catch is that we would then need to query the whole block instead of adding 
another filter to the query itself to pick only the wanted children. Perhaps 
this is still the better approach in terms of index size. What do you think 
[~dsmiley]?
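As a hedged illustration of the counting idea above (names are mine, not from any patch), deriving a nesting level from a path-style field value could look like:

```java
public class NestLevelSketch {
    // Derive the nesting level from a path such as "first.second.third":
    // level = number of '.' separators + 1. Illustrative only; the real
    // field names and semantics are still under discussion in the issue.
    static int nestLevel(String nestPath) {
        int separators = 0;
        for (int i = 0; i < nestPath.length(); i++) {
            if (nestPath.charAt(i) == '.') separators++;
        }
        return separators + 1;
    }

    public static void main(String[] args) {
        System.out.println(nestLevel("first.second.third")); // 3
        System.out.println(nestLevel("first"));              // 1
    }
}
```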


was (Author: moshebla):
{quote}Can you please provide further justification for _nestLevel_ like a more 
fleshed out scenario?{quote}
After sleeping on this it seems like this can be solved using a simple regex.
This field was to be used in case the user only wants to query certain levels 
of the nested document, but this can be filtered using a regex checking for the 
number of split char('.') in the field name. This can be easily done using a 
transformer.
I will work on a commit eliminating this field.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-26 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523300#comment-16523300
 ] 

mosh commented on SOLR-12441:
-

{quote}Can you please provide further justification for _nestLevel_ like a more 
fleshed out scenario?{quote}
After sleeping on this, it seems this can be solved with a simple regex.
This field was meant for the case where the user only wants to query certain 
levels of the nested document, but that can be achieved with a regex that counts 
the occurrences of the split char ('.') in the field name. This can easily be 
done in a transformer.
I will work on a commit eliminating this field.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-10) - Build # 56 - Still Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/56/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([577E25054DD1C06D:DF2A1ADFE32DAD95]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:104)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:97)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:61)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12511) Support non integer values for replica in autoscaling rules

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523280#comment-16523280
 ] 

ASF subversion and git services commented on SOLR-12511:


Commit 99e5cf914028ec65381be0e980139f471bd8fb2d in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99e5cf9 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Support non integer values for replica in autoscaling rules
> ---
>
> Key: SOLR-12511
> URL: https://issues.apache.org/jira/browse/SOLR-12511
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> This means the user can configure a decimal value for replica, for 
> example:
> {code:json}
> {"replica": 1.638, "node":"#ANY"}
> {code}
> This means a few things. The number of replicas on a node can be either 1 or 
> 2. It also means that violations are calculated as follows:
>  * If the replica count is 1 or 2, there is no violation.
>  * If the replica count is 3, there is a violation and the delta is 
> *{{3 - 1.638 = 1.362}}*.
>  * If the replica count is 0, there is a violation and the delta is *{{1.638 
> - 0 = 1.638}}*.
>  * The node with zero replicas therefore has a *more serious* violation, so 
> the system would try to rectify that first, before it addresses the node 
> with 3 replicas.
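The violation arithmetic described above can be sketched as a stand-alone method (an illustration of the rule as stated in the issue, not Solr's implementation; the method name is mine):

```java
public class ReplicaDeltaSketch {
    // For a fractional limit such as 1.638, counts of floor(limit) = 1 or
    // ceil(limit) = 2 are acceptable; any other count is a violation whose
    // delta is the distance from the limit.
    static double violationDelta(int replicaCount, double limit) {
        int lo = (int) Math.floor(limit);
        int hi = (int) Math.ceil(limit);
        if (replicaCount >= lo && replicaCount <= hi) return 0.0; // no violation
        return replicaCount > hi
                ? replicaCount - limit   // too many replicas, e.g. 3 -> ~1.362
                : limit - replicaCount;  // too few replicas,  e.g. 0 -> ~1.638
    }

    public static void main(String[] args) {
        System.out.println(violationDelta(1, 1.638)); // 0.0 (no violation)
        System.out.println(violationDelta(3, 1.638)); // ~1.362
        System.out.println(violationDelta(0, 1.638)); // ~1.638
    }
}
```

Since 1.638 > 1.362, a comparator over these deltas would indeed repair the zero-replica node first.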






[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523279#comment-16523279
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit 99e5cf914028ec65381be0e980139f471bd8fb2d in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99e5cf9 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the {{"replica"}} attribute is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as 
> {{3 * 34 / 100 = 1.02}}, which means *_for each shard_* keep fewer than 1.02 
> replicas in the east availability zone.
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}, which means _*for each collection*_ keep fewer 
> than 2.04 replicas in the east availability zone.
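The two computations can be sketched as follows (illustrative code; the method and parameter names are mine, and the per-shard/per-collection switch mirrors whether the rule carries a {{"shard": "#EACH"}} clause):

```java
public class ComputedReplicaSketch {
    // Turn a percentage limit into a concrete replica limit. With a "shard"
    // clause the base is the replicationFactor of one shard; without it, the
    // base is the collection's total replica count.
    static double computedLimit(int replicationFactor, int numShards,
                                boolean perShard, double percent) {
        int base = perShard ? replicationFactor : replicationFactor * numShards;
        return base * percent / 100.0;
    }

    public static void main(String[] args) {
        // Collection "A": 2 shards, replicationFactor=3, limit "<34%"
        System.out.println(computedLimit(3, 2, true, 34));  // example 1: 1.02
        System.out.println(computedLimit(3, 2, false, 34)); // example 2: 2.04
    }
}
```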






[jira] [Commented] (SOLR-12517) Support range values for replica in autoscaling policies

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523281#comment-16523281
 ] 

ASF subversion and git services commented on SOLR-12517:


Commit 99e5cf914028ec65381be0e980139f471bd8fb2d in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99e5cf9 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Support range values for replica in autoscaling policies
> 
>
> Key: SOLR-12517
> URL: https://issues.apache.org/jira/browse/SOLR-12517
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> Example:
> {code}
> {"replica" : "3 - 5", "shard" : "#EACH", "node" : "#ANY"}
> {code}
> means a node may have 3, 4, or 5 replicas of a shard. 
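A hedged sketch of how such a range expression could be interpreted (an illustrative parser, not Solr's code; names are mine):

```java
public class ReplicaRangeSketch {
    // Parse a range such as "3 - 5" into inclusive bounds and test a
    // replica count against them.
    static boolean withinRange(String rangeExpr, int replicaCount) {
        String[] parts = rangeExpr.split("-");
        int lo = Integer.parseInt(parts[0].trim());
        int hi = Integer.parseInt(parts[1].trim());
        return replicaCount >= lo && replicaCount <= hi;
    }

    public static void main(String[] args) {
        System.out.println(withinRange("3 - 5", 4)); // true
        System.out.println(withinRange("3 - 5", 6)); // false
    }
}
```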






[jira] [Commented] (SOLR-12517) Support range values for replica in autoscaling policies

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523278#comment-16523278
 ] 

ASF subversion and git services commented on SOLR-12517:


Commit 1eb2676f27ad4f3913c0f9f43b08e8f3faf889a0 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1eb2676 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Support range values for replica in autoscaling policies
> 
>
> Key: SOLR-12517
> URL: https://issues.apache.org/jira/browse/SOLR-12517
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> Example:
> {code}
> {"replica" : "3 - 5", "shard" : "#EACH", "node" : "#ANY"}
> {code}
> means a node may have 3, 4, or 5 replicas of a shard. 






[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523276#comment-16523276
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit 1eb2676f27ad4f3913c0f9f43b08e8f3faf889a0 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1eb2676 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the {{"replica"}} attribute is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as 
> {{3 * 34 / 100 = 1.02}}, which means *_for each shard_* keep fewer than 1.02 
> replicas in the east availability zone.
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}, which means _*for each collection*_ keep fewer 
> than 2.04 replicas in the east availability zone.






[jira] [Commented] (SOLR-12511) Support non integer values for replica in autoscaling rules

2018-06-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523277#comment-16523277
 ] 

ASF subversion and git services commented on SOLR-12511:


Commit 1eb2676f27ad4f3913c0f9f43b08e8f3faf889a0 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1eb2676 ]

SOLR-11985: Support percentage values in replica attribute in autoscaling policy

SOLR-12511: Support non integer values for replica in autoscaling policy

SOLR-12517: Support range values for replica in autoscaling policy


> Support non integer values for replica in autoscaling rules
> ---
>
> Key: SOLR-12511
> URL: https://issues.apache.org/jira/browse/SOLR-12511
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> This means the user can configure a decimal value for replica, for
> example:
> {code:json}
> {"replica": 1.638, "node":"#ANY"}
> {code}
> This means a few things. The number of replicas on a node can be either 1
> or 2. It also means that violations are calculated as follows:
>  * If the replica count is 1 or 2, there is no violation.
>  * If the replica count is 3, there is a violation and the delta is
> *{{3 - 1.638 = 1.362}}*.
>  * If the replica count is 0, there is a violation and the delta is
> *{{1.638 - 0 = 1.638}}*.
>  * This also means that the node with zero replicas has a *more serious*
> violation, and the system would try to rectify that first before it
> addresses the node with 3 replicas.
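The delta rules listed above can be sketched in a few lines. This is an illustrative snippet under the stated rules, with hypothetical names; it is not Solr's actual violation computation.

```java
// Illustrative sketch of the violation-delta arithmetic for a fractional
// replica limit, as described in the bullets above. Not Solr's actual code.
public class FractionalReplicaSketch {

    // A count inside [floor(limit), ceil(limit)] satisfies the rule;
    // anything outside is a violation whose delta is the distance to the limit.
    static double violationDelta(int replicaCount, double limit) {
        if (replicaCount >= Math.floor(limit) && replicaCount <= Math.ceil(limit)) {
            return 0.0; // e.g. counts 1 and 2 satisfy a limit of 1.638
        }
        return Math.abs(replicaCount - limit);
    }

    public static void main(String[] args) {
        double limit = 1.638;
        System.out.println(violationDelta(1, limit)); // no violation
        System.out.println(violationDelta(2, limit)); // no violation
        System.out.println(violationDelta(3, limit)); // delta 3 - 1.638
        System.out.println(violationDelta(0, limit)); // delta 1.638, the larger one
    }
}
```

Since the zero-replica node's delta (1.638) exceeds the three-replica node's delta (1.362), a framework that repairs larger deltas first would fix the empty node before the overloaded one, matching the last bullet.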






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 688 - Still Unstable!

2018-06-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/688/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedMLTComponentTest.test

Error Message:
Error from server at http://127.0.0.1:43743//collection1: ERROR: [doc=1] Error 
adding field 'lowerfilt1'='x' msg=Multiple values encountered for non 
multiValued copy field lowerfilt1and2: x

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:43743//collection1: ERROR: [doc=1] Error adding 
field 'lowerfilt1'='x' msg=Multiple values encountered for non multiValued copy 
field lowerfilt1and2: x
at 
__randomizedtesting.SeedInfo.seed([3707172B782E644A:BF5328F1D6D209B2]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
at 
org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:483)
at 
org.apache.solr.BaseDistributedSearchTestCase.index(BaseDistributedSearchTestCase.java:476)
at 
org.apache.solr.handler.component.DistributedMLTComponentTest.test(DistributedMLTComponentTest.java:80)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Updated] (SOLR-12517) Support range values for replica in autoscaling policies

2018-06-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12517:
--
Component/s: SolrCloud
 AutoScaling

> Support range values for replica in autoscaling policies
> 
>
> Key: SOLR-12517
> URL: https://issues.apache.org/jira/browse/SOLR-12517
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> example
> {code}
> {"replica" : "3 - 5", "shard" :"#EACH", "node" : "#ANY"}
> {code}
> means a node may have 3, 4, or 5 replicas of a shard.
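Interpreting such a range string could look like the sketch below. The parsing here is illustrative only (hypothetical names, naive splitting on "-"); it is not Solr's actual range parser.

```java
// Illustrative parser for a range-valued "replica" attribute such as "3 - 5".
// Hypothetical names; not Solr's actual parsing code.
public class ReplicaRangeSketch {

    // Accepts replica counts within the inclusive range "min - max".
    static boolean withinRange(String range, int replicaCount) {
        String[] parts = range.split("-");
        int min = Integer.parseInt(parts[0].trim());
        int max = Integer.parseInt(parts[1].trim());
        return replicaCount >= min && replicaCount <= max;
    }

    public static void main(String[] args) {
        System.out.println(withinRange("3 - 5", 4)); // true: 3, 4, or 5 allowed
        System.out.println(withinRange("3 - 5", 6)); // false: a violation
    }
}
```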






[jira] [Commented] (SOLR-12514) Rule-base Authorization plugin skips authorization if querying node does not have collection replica

2018-06-26 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16523248#comment-16523248
 ] 

Noble Paul commented on SOLR-12514:
---

I see this code in the patch
{code}
+  /**
+   * When Solr is configured in secure mode (with Sentry), the request
+   * forwarding logic allows an unauthorized user to perform arbitrary
+   * operations in Solr. The reason is that the authorization framework design
+   * requires access to a RequestHandler reference to figure out the
+   * appropriate permissions to test against Sentry (or any other
+   * authorization back-end). But in the request forwarding scenario, such a
+   * reference is not available (since no local core can be found in that
+   * case). When no request handler reference is available, the authorization
+   * plugin needs to allow that request to go through. This is because there
+   * are many request handlers which do not implement the
+   * PermissionNameProvider interface and the authorization framework needs to
+   * play well with these request handlers (Ref: SOLR-11623). Internally the
+   * request is forwarded using the Solr admin credentials. Hence the remote
+   * Solr instance bypasses the authorization check, resulting in this
+   * vulnerability.
+   * This code change uses proxy users support in the Hadoop authentication
+   * framework to pass the original user name via the doAs param. That forces
+   * the remote Solr node to perform the authorization check for the forwarded
+   * request, avoiding this vulnerability.
+   **/
+  if (cores.getAuthenticationPlugin() instanceof HadoopAuthPlugin) {
{code}

There should be no need for per-implementation handling in the security
framework. If we go down this rabbit hole, it is going to be a nightmare. If
we implement a solution, it should be universal to all authorization
providers.

If a node does not host the collection, it should forward the request as-is
without using inter-node communication. If it is basic auth, send the header
down to the target node and do not send the PKI header.
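The forwarding rule proposed here can be sketched generically, independent of any particular authorization provider. The types and names below are illustrative only, not Solr's actual classes or header names.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposal above: when forwarding a request to a
// node that hosts the collection, pass the client's own credentials through
// and never substitute the trusted inter-node PKI header. Hypothetical names.
public class ForwardingSketch {

    static Map<String, String> headersToForward(Map<String, String> incoming) {
        Map<String, String> out = new HashMap<>();
        // Keep the client's Authorization header (e.g. Basic auth) so the
        // target node performs its own authorization check for this user.
        if (incoming.containsKey("Authorization")) {
            out.put("Authorization", incoming.get("Authorization"));
        }
        // Deliberately do NOT copy or add an inter-node PKI header here;
        // doing so would make the target node treat the request as trusted
        // and skip the per-user authorization check.
        return out;
    }
}
```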




> Rule-base Authorization plugin skips authorization if querying node does not 
> have collection replica
> 
>
> Key: SOLR-12514
> URL: https://issues.apache.org/jira/browse/SOLR-12514
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Affects Versions: 7.3.1
>Reporter: Mahesh Kumar Vasanthu Somashekar
>Priority: Major
> Attachments: SOLR-12514.patch, Screen Shot 2018-06-24 at 9.36.45 
> PM.png, security.json
>
>
> Solr serves client requests going through 3 steps - init(), authorize() and
> handle-request ([link
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L471]).
>  init() initializes all required information to be used by authorize().
> init() skips initialization if the request is to be served remotely, which
> leads to skipping the authorization step ([link
> git-link|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.1/solr/core/src/java/org/apache/solr/servlet/HttpSolrCall.java#L291]).
>  init() relies on the 'cores' object, which only has information about the
> local node (which is perfect as per the design). It should actually be
> getting security information (security.json) from ZooKeeper, which has a
> global view of the cluster.
>  
> Example:
>  SolrCloud setup consists of 2 nodes (solr-7.3.1):
>  live_nodes: [
>  "localhost:8983_solr",
>  "localhost:8984_solr",
>  ]
> Two collections are created - 'collection-rf-1' with RF=1 and 
> 'collection-rf-2' with RF=2.
> Two users are created - 'collection-rf-1-user' and 'collection-rf-2-user'.
> Security configuration is as below (security.json attached):
> {code:json}
> "authorization": {
>   "class": "solr.RuleBasedAuthorizationPlugin",
>   "permissions": [
>     { "name": "read", "collection": "collection-rf-2", "role": "collection-rf-2", "index": 1 },
>     { "name": "read", "collection": "collection-rf-1", "role": "collection-rf-1", "index": 2 },
>     { "name": "read", "role": "*", "index": 3 },
>     ...
>   ],
>   "user-role": {
>     "collection-rf-1-user": ["collection-rf-1"],
>     "collection-rf-2-user": ["collection-rf-2"]
>   },
>   ...
> {code}
>  
> Basically, it is set up so that the 'collection-rf-1-user' user can only
> access the 'collection-rf-1' collection and the 'collection-rf-2-user' user
> can only access the 'collection-rf-2' collection.
> Also note that the 'collection-rf-1' collection replica is only on the
> 'localhost:8983_solr' node, whereas the 'collection-rf-2' collection has
> replicas on both live nodes.
>  
> Authorization does not work as expected for 'collection-rf-1' collection:
> $ curl -u