[jira] [Updated] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-06-20 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-11654:

Attachment: SOLR-11654.patch

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch, SOLR-11654.patch, SOLR-11654.patch, 
> SOLR-11654.patch, SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.
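
For orientation, a minimal sketch of what routing via the collection's configured router could look like (names and surrounding plumbing here are assumptions for illustration, not the attached patch):

{code:java}
// Hedged sketch: ask the collection's own DocRouter for the target slice
// instead of taking the first active one. `docId` and `targetCollection`
// are hypothetical inputs; error handling is omitted.
DocCollection coll = zkStateReader.getClusterState().getCollection(targetCollection);
Slice slice = coll.getRouter().getTargetSlice(docId, null, null, null, coll);
Replica leader = slice.getLeader(); // the shard leader to forward the update to
{code}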



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-06-20 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518924#comment-16518924
 ] 

Gus Heck commented on SOLR-11654:
-

Took some digging; the symptoms were quite odd... waiting forever... and failing, sometimes 
not finding shards. Turned out our waitCol() method needed the number of shards 
passed in, and then an assert needed it too.
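
A minimal sketch of the shape of that fix, assuming a SolrCloudTestCase-style helper (the body here is hypothetical, not the attached patch):

{code:java}
// Hedged sketch: waitCol now takes the expected shard count rather than assuming it.
private void waitCol(int slices, String collection) {
  waitForState("waiting for collection " + collection, collection,
      (liveNodes, state) -> state != null && state.getActiveSlices().size() == slices);
}
{code}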

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch, SOLR-11654.patch, SOLR-11654.patch, 
> SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 14 - Still Unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/14/

7 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
Error from server at http://127.0.0.1:34321/v_rmj/zp: At least one of the 
node(s) specified [127.0.0.1:41064_v_rmj%2Fzp] are not currently active in [], 
no action taken.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:34321/v_rmj/zp: At least one of the node(s) 
specified [127.0.0.1:41064_v_rmj%2Fzp] are not currently active in [], no 
action taken.
at 
__randomizedtesting.SeedInfo.seed([9B677342BB4945D1:13334C9815B52829]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:425)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12413) Solr ignores aliases.json from ZooKeeper at startup

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518891#comment-16518891
 ] 

David Smiley commented on SOLR-12413:
-

Good catch on identifying why my proposed test was fundamentally flawed; I 
wasn't quite sure yet.  I can also see that it's probably impossible to do a 
unit test for this.

Attached is a "nocommit" patch that hacks ZkController.createClusterZkNodes to 
ensure that the default aliases.json has "alias1" pointing to "collection1".  
And it has a shortened version of the flawed test that merely tries to see if 
querying "alias1" from the get-go works.  I wanted to see if Aliases.EMPTY with 
a zkNodeVersion of -1 works. Note the additional asserts as well. The 
rationale for why I think this works: the first aliases operation to occur is 
update(), which sets ZkStateReader.AliasesManager.aliases to whatever ZooKeeper 
has, and that will be a good ZK version (not -1). 
applyModificationAndExportToZk will only ever be called _after_ this point, at 
which point we never see the -1 again. This isn't to say your patch doesn't 
also solve the problem, but if we agree this "-1" solution works too, then it's 
far simpler (no additional lines of code except some assertions).
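
For concreteness, a minimal sketch of the "-1" idea (mirroring the EMPTY constant quoted in the issue description below; illustrative, not the attached patch):

{code:java}
// Hedged sketch: seed EMPTY with zNodeVersion -1 so the first update() from ZK,
// which carries a version >= 0, always wins the setIfNewer comparison.
public static final Aliases EMPTY =
    new Aliases(Collections.emptyMap(), Collections.emptyMap(), -1);
// At startup: Integer.compare(-1, 0) < 0, so the ZooKeeper copy is adopted.
{code}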

> Solr ignores aliases.json from ZooKeeper at startup
> ---
>
> Key: SOLR-12413
> URL: https://issues.apache.org/jira/browse/SOLR-12413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2.1
> Environment: A SolrCloud cluster with ZooKeeper (one node is enough 
> to reproduce).
> Solr 7.2.1.
> ZooKeeper 3.4.6.
>Reporter: Gaël Jourdan
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12413-nocommit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since upgrading to 7.2.1, we ran into an issue where Solr ignores the 
> _aliases.json_ file stored in ZooKeeper.
>  
> +Steps to reproduce the problem:+
>  # SolrCloud cluster is down
>  # Direct update of _aliases.json_ file in ZooKeeper with Solr ZkCLI *without 
> using Collections API* :
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd clear 
> /aliases.json}}
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd put /aliases.json 
> "new content"}}
>  # SolrCloud cluster is started => _aliases.json_ not taken into account
>  
> +Analysis:+ 
> Digging a bit in the code, what is actually causing the issue is that, when 
> starting, Solr now checks for the metadata of the _aliases.json_ file and if 
> the version metadata from ZooKeeper is lower or equal to local version, it 
> keeps the local version.
> When it starts, Solr has a local version of 0 for the aliases but ZooKeeper 
> also has a version of 0 of the file because we just recreated it. So Solr 
> ignores ZooKeeper configuration and never has a chance to load aliases.
>  
> Relevant parts of Solr code are:
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java]
>  : line 1562 : method setIfNewer
> {code:java}
> /**
> * Update the internal aliases reference with a new one, provided that its ZK 
> version has increased.
> *
> * @param newAliases the potentially newer version of Aliases
> */
> private boolean setIfNewer(Aliases newAliases) {
>   synchronized (this) {
>     int cmp = Integer.compare(aliases.getZNodeVersion(), newAliases.getZNodeVersion());
>     if (cmp < 0) {
>       LOG.debug("Aliases: cmp={}, new definition is: {}", cmp, newAliases);
>       aliases = newAliases;
>       this.notifyAll();
>       return true;
>     } else {
>       LOG.debug("Aliases: cmp={}, not overwriting ZK version.", cmp);
>       assert cmp != 0 || Arrays.equals(aliases.toJSON(), newAliases.toJSON()) : aliases + " != " + newAliases;
>       return false;
>     }
>   }
> }{code}
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java]
>  : line 45 : the "empty" Aliases object with default version 0
> {code:java}
> /**
> * An empty, minimal Aliases primarily used to support the non-cloud solr use 
> cases. Not normally useful
> * in cloud situations where the version of the node needs to be tracked even 
> if all aliases are removed.
> * A version of 0 is provided rather than -1 to minimize the possibility that 
> if this is used in a cloud
> * instance data is written without version checking.
> */
> public static final Aliases EMPTY = new Aliases(Collections.emptyMap(), 
> Collections.emptyMap(), 0);{code}
>  
> Note that a workaround is to force ZooKeeper to always have a version greater 
> than 0 for _aliases.json_ file (for instance by not clearing the file 

[jira] [Updated] (SOLR-12413) Solr ignores aliases.json from ZooKeeper at startup

2018-06-20 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12413:

Attachment: SOLR-12413-nocommit.patch

> Solr ignores aliases.json from ZooKeeper at startup
> ---
>
> Key: SOLR-12413
> URL: https://issues.apache.org/jira/browse/SOLR-12413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2.1
> Environment: A SolrCloud cluster with ZooKeeper (one node is enough 
> to reproduce).
> Solr 7.2.1.
> ZooKeeper 3.4.6.
>Reporter: Gaël Jourdan
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12413-nocommit.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since upgrading to 7.2.1, we ran into an issue where Solr ignores the 
> _aliases.json_ file stored in ZooKeeper.
>  
> +Steps to reproduce the problem:+
>  # SolrCloud cluster is down
>  # Direct update of _aliases.json_ file in ZooKeeper with Solr ZkCLI *without 
> using Collections API* :
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd clear 
> /aliases.json}}
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd put /aliases.json 
> "new content"}}
>  # SolrCloud cluster is started => _aliases.json_ not taken into account
>  
> +Analysis:+ 
> Digging a bit in the code, what is actually causing the issue is that, when 
> starting, Solr now checks for the metadata of the _aliases.json_ file and if 
> the version metadata from ZooKeeper is lower or equal to local version, it 
> keeps the local version.
> When it starts, Solr has a local version of 0 for the aliases but ZooKeeper 
> also has a version of 0 of the file because we just recreated it. So Solr 
> ignores ZooKeeper configuration and never has a chance to load aliases.
>  
> Relevant parts of Solr code are:
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java]
>  : line 1562 : method setIfNewer
> {code:java}
> /**
> * Update the internal aliases reference with a new one, provided that its ZK 
> version has increased.
> *
> * @param newAliases the potentially newer version of Aliases
> */
> private boolean setIfNewer(Aliases newAliases) {
>   synchronized (this) {
>     int cmp = Integer.compare(aliases.getZNodeVersion(), newAliases.getZNodeVersion());
>     if (cmp < 0) {
>       LOG.debug("Aliases: cmp={}, new definition is: {}", cmp, newAliases);
>       aliases = newAliases;
>       this.notifyAll();
>       return true;
>     } else {
>       LOG.debug("Aliases: cmp={}, not overwriting ZK version.", cmp);
>       assert cmp != 0 || Arrays.equals(aliases.toJSON(), newAliases.toJSON()) : aliases + " != " + newAliases;
>       return false;
>     }
>   }
> }{code}
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java]
>  : line 45 : the "empty" Aliases object with default version 0
> {code:java}
> /**
> * An empty, minimal Aliases primarily used to support the non-cloud solr use 
> cases. Not normally useful
> * in cloud situations where the version of the node needs to be tracked even 
> if all aliases are removed.
> * A version of 0 is provided rather than -1 to minimize the possibility that 
> if this is used in a cloud
> * instance data is written without version checking.
> */
> public static final Aliases EMPTY = new Aliases(Collections.emptyMap(), 
> Collections.emptyMap(), 0);{code}
>  
> Note that a workaround is to force ZooKeeper to always have a version greater 
> than 0 for _aliases.json_ file (for instance by not clearing the file and 
> just overwriting it again and again).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.4-Windows (64bit/jdk-9.0.4) - Build # 8 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.4-Windows/8/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ActionThrottleTest.testBasics

Error Message:
993ms

Stack Trace:
java.lang.AssertionError: 993ms
at 
__randomizedtesting.SeedInfo.seed([A3A8CF84A6A42C6E:9E7061A89E4A721E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ActionThrottleTest.testBasics(ActionThrottleTest.java:87)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 1 lines...]
   [junit4] Suite: org.apache.solr.cloud.ActionThrottleTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.ActionThrottleTest_A3A8CF84A6A42C6E-001\init-core-data-001
   [junit4]   2> 1043260 INFO  
(TEST-ActionThrottleTest.testBasics-seed#[A3A8CF84A6A42C6E]) [

[jira] [Commented] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518873#comment-16518873
 ] 

David Smiley commented on SOLR-12505:
-

I think it doesn't matter either way? I'd at least be surprised to learn it 
could matter. Local-params aren't even parsed unless the default parser (based 
on the context of use) is "lucene"; the "lucene" parser is basically assumed in 
every case except 'q', which is governed by defType. Does this scenario above 
send the query as the 'q' of a request handler that might be customized with a 
defType?

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode; streaming expressions do work, i.e. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="type">entity</field>
>     <field name="name">Orignal Darek name</field>
>     <field name="country">uk</field>
>     <doc>
>       <field name="id">N001</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darek</field>
>     </doc>
>     <doc>
>       <field name="id">N002</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darke</field>
>     </doc>
>     <doc>
>       <field name="id">N003</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darko</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="type">entity</field>
>     <field name="name">Texaco</field>
>     <field name="country">de</field>
>     <doc>
>       <field name="id">N0011</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texxo</field>
>     </doc>
>     <doc>
>       <field name="id">N0012</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texoco</field>
>     </doc>
>   </doc>
> </add>
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilit

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518857#comment-16518857
 ] 

Aroop edited comment on SOLR-11598 at 6/21/18 3:13 AM:
---

Hi [~joel.bernstein]

By "number of records exported", do you mean the result-set size of the entire 
streaming expression? That was ~50,000 records after rollup. (this would 
increase with dimensionality of course)

By "parallel", if you mean whether the parallel stream decorator was used, then 
yes, we used it. The overall streaming expression was of the type 
parallel(select(rollup(search(.

Per my experiments, the growth in dimensions shows linear performance.
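
For readers following along, a hedged sketch of that general shape (collection, field, and worker values here are made up for illustration, not the actual query):

{noformat}
parallel(collection1,
  select(
    rollup(
      search(collection1, q="*:*", qt="/export",
             fl="dim1,dim2,metric", sort="dim1 asc,dim2 asc",
             partitionKeys="dim1"),
      over="dim1,dim2",
      sum(metric)),
    dim1 as d1, dim2 as d2, sum(metric) as total),
  workers=4, sort="d1 asc")
{noformat}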


was (Author: aroopganguly):
Hi [~joel.bernstein]

By "number of records exported", do you mean the result-set size of the entire 
streaming expression? That was ~50,000 records after rollup.

By "parallel", if you mean whether the parallel stream decorator was used, then 
yes, we used it. The overall streaming expression was of the type 
parallel(select(rollup(search(.

Per my experiments, the growth in dimensions shows linear performance.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> 

[jira] [Updated] (LUCENE-8366) upgrade to icu 62.1

2018-06-20 Thread Robert Muir (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8366:

Attachment: LUCENE-8366.patch

> upgrade to icu 62.1
> ---
>
> Key: LUCENE-8366
> URL: https://issues.apache.org/jira/browse/LUCENE-8366
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8366.patch
>
>
> This gives unicode 11 support.
> Also emoji tokenization is simpler and it gives a way to have better 
> tokenization for emoji from the future.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8366) upgrade to icu 62.1

2018-06-20 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-8366:
---

 Summary: upgrade to icu 62.1
 Key: LUCENE-8366
 URL: https://issues.apache.org/jira/browse/LUCENE-8366
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/analysis
Reporter: Robert Muir


This gives unicode 11 support.

Also emoji tokenization is simpler and it gives a way to have better 
tokenization for emoji from the future.
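
Unrelated to the patch contents, but for context, a minimal assumed example of the code path this touches: the analysis/icu module's ICUTokenizer is what segments emoji-bearing text (standard Lucene tokenizer plumbing below, shown purely as an illustration):

{code:java}
import java.io.StringReader;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.icu.segmentation.ICUTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class EmojiTokens {
  public static void main(String[] args) throws Exception {
    Tokenizer tok = new ICUTokenizer(); // uses ICU's word-break rules
    tok.setReader(new StringReader("hello \uD83D\uDE00 world"));
    CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
    tok.reset();
    while (tok.incrementToken()) {
      System.out.println(term); // prints each token, emoji included
    }
    tok.end();
    tok.close();
  }
}
{code}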



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518857#comment-16518857
 ] 

Aroop commented on SOLR-11598:
--

Hi [~joel.bernstein]

 

By "number of records exported", do you mean the result-set size of the entire 
streaming expression? That was ~50,000 records after rollup.

By "parallel", if you mean whether the parallel stream decorator was used, then 
yes, we used it. The overall streaming expression was of the type 
parallel(select(rollup(search(.

Per my experiments, the growth in dimensions shows linear performance.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> 

[jira] [Comment Edited] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilit

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518857#comment-16518857
 ] 

Aroop edited comment on SOLR-11598 at 6/21/18 2:59 AM:
---

Hi [~joel.bernstein]

By "number of records exported", do you mean the result-set size of the entire 
streaming expression? That was ~50,000 records after rollup.

By "parallel", if you mean whether the parallel stream decorator was used, then 
yes, we used it. The overall streaming expression was of the type 
parallel(select(rollup(search(.

Per my experiments, the growth in dimensions shows linear performance.


was (Author: aroopganguly):
Hi [~joel.bernstein]

 

By "number of records exported", do you mean the result-set size of the entire 
streaming expression? That was ~50,000 records after rollup.

By "parallel", if you mean whether the parallel stream decorator was used, then 
yes, we used it. The overall streaming expression was of the type 
parallel(select(rollup(search(.

Per my experiments, the growth in dimensions shows linear performance.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> 

[jira] [Comment Edited] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518834#comment-16518834
 ] 

Joel Bernstein edited comment on SOLR-12505 at 6/21/18 2:40 AM:


[~dsmiley], in a previous ticket (SOLR-10404) you changed how the fetch query 
was being sent down to the following:
{code:java}
buf.append("{! df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
I think we want this to be:
{code:java}
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
Otherwise if a defType is defined in the request handler that does not use "df" 
or "q.op" those local params will be ignored.

I'm not sure if that is what's causing the problem in this case, but it does 
appear like it would be a problem in general.
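
To make the difference concrete, a small illustration of the string each form produces ("parentId" stands in for rightKey here, and N001/N002 for the batched join keys; illustrative only, not FetchStream's actual surrounding code):

{code:java}
String rightKey = "parentId";
StringBuilder buf = new StringBuilder();
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false }"); // proposed form
buf.append("N001 N002");
// buf.toString() -> {!lucene df=parentId q.op=OR cache=false }N001 N002
// The current form, {! df=... }, names no parser, so a handler-level defType
// other than "lucene" may not honor those local params.
{code}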


was (Author: joel.bernstein):
[~dsmiley], in a previous ticket (SOLR-10404) you changed how the fetch query 
was being sent down to the following:
{code:java}
buf.append("{! df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
I think we want this to be:
{code:java}
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
Otherwise if a defType is defined in the request handler that does not use "df" 
or "q.op" those local params will be ignored.

I'm not sure if that's what's causing the problem in this case, but it does 
appear like it would be a problem in general.

 

 

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode; streaming expressions do work, i.e. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="type">entity</field>
>     <field name="name">Orignal Darek name</field>
>     <field name="country">uk</field>
>     <doc>
>       <field name="id">N001</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darek</field>
>     </doc>
>     <doc>
>       <field name="id">N002</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darke</field>
>     </doc>
>     <doc>
>       <field name="id">N003</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darko</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="type">entity</field>
>     <field name="name">Texaco</field>
>     <field name="country">de</field>
>     <doc>
>       <field name="id">N0011</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texxo</field>
>     </doc>
>     <doc>
>       <field name="id">N0012</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texoco</field>
>     </doc>
>   </doc>
> </add>
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518834#comment-16518834
 ] 

Joel Bernstein edited comment on SOLR-12505 at 6/21/18 2:39 AM:


[~dsmiley], in a previous ticket (SOLR-10404) you changed how the fetch query 
was being sent down to the following:
{code:java}
buf.append("{! df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
I think we want this to be:
{code:java}
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
Otherwise if a defType is defined in the request handler that does not use "df" 
or "q.op" those local params will be ignored.

I'm not sure if that's what's causing the problem in this case, but it does 
appear like it would be a problem in general.

 

 


was (Author: joel.bernstein):
[~dsmiley], in a previous ticket (SOLR-10404) you changed how the fetch query 
was being sent down to the following:
{code:java}
buf.append("{! df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
I think we want this to be:
{code:java}
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
Otherwise if a defType is defined in the request handler that does not use "df" 
or "q.op" those fields will be ignored.

I'm not sure if that's what's causing the problem in this case, but it does 
appear like it would be a problem in general.

 

 

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode; streaming expressions do work, i.e. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="type">entity</field>
>     <field name="name">Orignal Darek name</field>
>     <field name="country">uk</field>
>     <doc>
>       <field name="id">N001</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darek</field>
>     </doc>
>     <doc>
>       <field name="id">N002</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darke</field>
>     </doc>
>     <doc>
>       <field name="id">N003</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darko</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="type">entity</field>
>     <field name="name">Texaco</field>
>     <field name="country">de</field>
>     <doc>
>       <field name="id">N0011</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texxo</field>
>     </doc>
>     <doc>
>       <field name="id">N0012</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texoco</field>
>     </doc>
>   </doc>
> </add>
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518834#comment-16518834
 ] 

Joel Bernstein commented on SOLR-12505:
---

[~dsmiley], in a previous ticket (SOLR-10404) you changed how the fetch query 
was being sent down to the following:
{code:java}
buf.append("{! df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
I think we want this to be:
{code:java}
buf.append("{!lucene df=").append(rightKey).append(" q.op=OR cache=false 
}");//disable queryCache
{code}
Otherwise if a defType is defined in the request handler that does not use "df" 
or "q.op" those fields will be ignored.

I'm not sure if that's what's causing the problem in this case, but it does 
appear like it would be a problem in general.

 

 

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode; streaming expressions do work, i.e. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> <add>
>   <doc>
>     <field name="id">1</field>
>     <field name="type">entity</field>
>     <field name="name">Orignal Darek name</field>
>     <field name="country">uk</field>
>     <doc>
>       <field name="id">N001</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darek</field>
>     </doc>
>     <doc>
>       <field name="id">N002</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darke</field>
>     </doc>
>     <doc>
>       <field name="id">N003</field>
>       <field name="parentId">1</field>
>       <field name="type">alternate</field>
>       <field name="alias">Darko</field>
>     </doc>
>   </doc>
>   <doc>
>     <field name="id">2</field>
>     <field name="type">entity</field>
>     <field name="name">Texaco</field>
>     <field name="country">de</field>
>     <doc>
>       <field name="id">N0011</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texxo</field>
>     </doc>
>     <doc>
>       <field name="id">N0012</field>
>       <field name="parentId">2</field>
>       <field name="type">alternate</field>
>       <field name="alias">Texoco</field>
>     </doc>
>   </doc>
> </add>
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7371 - Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7371/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=1371

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=1371
at 
__randomizedtesting.SeedInfo.seed([EFA61D331BB8D4C1:D7CA6E168F687687]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=22625000

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time 

[jira] [Commented] (SOLR-12413) Solr ignores aliases.json from ZooKeeper at startup

2018-06-20 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518798#comment-16518798
 ] 

Gus Heck commented on SOLR-12413:
-

I did test this manually by
 # creating a 4 node cluster,
 # copying the aliases.json to a file,
 # modifying it to add an alias,
 # bringing the cluster down,
 # deleting aliases.json from zk,
 # uploading the edited version to zk
 # restarting the cluster... 

At which point I observed the change in the UI and successfully queried the 
alias.

That test you supplied doesn't seem to work for me with or without the patch... 
the deletion of aliases.json appears to blow up the cluster almost 
immediately... the delete triggers the watch and leads to:

 
{code:java}
22756 ERROR (zkCallback-21-thread-1) [ ] o.a.s.c.c.ZkStateReader$AliasesManager 
A ZK error has occurred
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /aliases.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:114) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:341)
 ~[java/:?]
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
 ~[java/:?]
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:341) 
~[java/:?]
at 
org.apache.solr.common.cloud.ZkStateReader$AliasesManager.process(ZkStateReader.java:1781)
 ~[java/:?]
at 
org.apache.solr.common.cloud.SolrZkClient$1.lambda$process$1(SolrZkClient.java:270)
 ~[java/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_144]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
22756 ERROR (zkCallback-28-thread-1) [ ] o.a.s.c.c.ZkStateReader$AliasesManager 
A ZK error has occurred
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /aliases.json
at org.apache.zookeeper.KeeperException.create(KeeperException.java:114) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215) 
~[zookeeper-3.4.11.jar:3.4.11-37e277162d567b55a07d1755f0b31c32e93c01a0]
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:341)
 ~[java/:?]
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
 ~[java/:?]
at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:341) 
~[java/:?]
at 
org.apache.solr.common.cloud.ZkStateReader$AliasesManager.process(ZkStateReader.java:1781)
 ~[java/:?]
at 
org.apache.solr.common.cloud.SolrZkClient$1.lambda$process$1(SolrZkClient.java:270)
 ~[java/:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_144]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
 ~[java/:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_144]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_144]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
{code}
followed by
{code:java}
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:33023/solr/alias1]

at __randomizedtesting.SeedInfo.seed([3A63ED446F3BE85D:C178AC5AAC5AF704]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 648 - Still Unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/648/

3 tests failed.
FAILED:  org.apache.solr.util.OrderedExecutorTest.testLockWhenQueueIsFull

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([906C77F6FB708661:B997A6D07C1B7347]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.util.OrderedExecutorTest.testLockWhenQueueIsFull(OrderedExecutorTest.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.OrderedExecutorTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.util.OrderedExecutorTest:   
  1) Thread[id=7636, name=testLockWhenQueueIsFull-1838-thread-1, 
state=TIMED_WAITING, group=TGRP-OrderedExecutorTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2164 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2164/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:34583/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:33395/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:34583/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:33395/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([2FF014368F98092F:853DC7C4384BDCFF]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
 

[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518789#comment-16518789
 ] 

Joel Bernstein commented on SOLR-11598:
---

Just reading through the numbers. A couple of questions:

What was the number of records exported? Also, were the exports done in 
parallel?

I'm actually surprised that it's performing this well with so many sort fields. 
I think it's fine to move this forward. I'll try to help out with the review and 
manual testing.

 

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> 

Re: Lucene/Solr 8.0

2018-06-20 Thread Robert Muir
How can the end user actually use the biggest new feature: impacts and
BMW? As far as I can tell, the issue to actually implement the
necessary API changes (IndexSearcher/TopDocs/etc) is still open and
unresolved, although there are some interesting ideas on it. This
seems like a really big missing piece; without a proper API, the stuff
is not really usable. I also can't imagine a situation where the API
could be introduced in a follow-up minor release because it would be
too invasive.
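
For concreteness, a purely hypothetical sketch of the kind of entry point
that is missing (these names and signatures are assumptions, not an existing
Lucene API at the time of this thread):

  // Hypothetical sketch; imports from org.apache.lucene.search.* assumed.
  // Keep the top 10 hits by score, but stop exact hit counting after 1000
  // hits so Block-Max WAND can skip non-competitive blocks.
  static TopDocs searchCapped(IndexSearcher searcher, Query query) throws IOException {
    TopScoreDocCollector collector = TopScoreDocCollector.create(10, null, 1000);
    searcher.search(query, collector);
    return collector.topDocs(); // totalHits is a lower bound past the threshold
  }

Something along those lines would let callers opt out of exact hit counts
explicitly, which is exactly the piece that seems too invasive to retrofit
in a minor release.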

On Mon, Jun 18, 2018 at 1:19 PM, Adrien Grand  wrote:
> Hi all,
>
> I would like to start discussing releasing Lucene/Solr 8.0. Lucene 8 already
> has some good changes around scoring, notably cleanups to
> similarities[1][2][3], indexing of impacts[4], and an implementation of
> Block-Max WAND[5] which, once combined, allow running queries faster when
> total hit counts are not requested.
>
> [1] https://issues.apache.org/jira/browse/LUCENE-8116
> [2] https://issues.apache.org/jira/browse/LUCENE-8020
> [3] https://issues.apache.org/jira/browse/LUCENE-8007
> [4] https://issues.apache.org/jira/browse/LUCENE-4198
> [5] https://issues.apache.org/jira/browse/LUCENE-8135
>
> In terms of bug fixes, there is also a bad relevancy bug[6] which is only in
> 8.0 because it required a breaking change[7] to be implemented.
>
> [6] https://issues.apache.org/jira/browse/LUCENE-8031
> [7] https://issues.apache.org/jira/browse/LUCENE-8134
>
> As usual, doing a new major release will also help age out old codecs, which
> in turn makes maintenance easier: 8.0 will no longer need to care about the
> fact that some codecs were initially implemented with a random-access API
> for doc values, that pre-7.0 indices encoded norms differently, or that
> pre-6.2 indices could not record an index sort.
>
> I also expect that we will come up with ideas of things to do for 8.0 as we
> feel that the next major is getting closer. In terms of planning, I was
> thinking that we could target something like October 2018, which would be
> 12-13 months after 7.0 and 3-4 months from now.
>
> From a Solr perspective, the main change I'm aware of that would be worth
> releasing a new major is the Star Burst effort. Is it something we want to
> get in for 8.0?
>
> Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518764#comment-16518764
 ] 

Robert Muir commented on LUCENE-8364:
-

Also the relate/relatePoint changes to Polygon are a big performance trap: this 
class exists solely as a thing to pass to queries. We shouldn't dynamically 
build large data structures here, nor add complexity such as the caching it now 
carries; I really think this doesn't belong.

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Joel Bernstein (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518761#comment-16518761
 ] 

Joel Bernstein commented on SOLR-12505:
---

Ok, I will test this out tomorrow and see what's happening.

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> 
>  
>   1
>   entity
>   Orignal Darek name
>   uk
>   
>N001
>1
>alternate
>Darek
>   
>   
>N002
>1
>alternate
>Darke
>   
>   
>N003
>1
>alternate
>   Darko
>   
>  
>  
>   2
>   entity
>   Texaco
>   de
>   
>N0011
>2
>alternate
>Texxo
>   
>   
>N0012
>2
>alternate
>Texoco
>   
>  
> 
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12505:
-

Assignee: Joel Bernstein

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> 
>  
>   1
>   entity
>   Orignal Darek name
>   uk
>   
>N001
>1
>alternate
>Darek
>   
>   
>N002
>1
>alternate
>Darke
>   
>   
>N003
>1
>alternate
>   Darko
>   
>  
>  
>   2
>   entity
>   Texaco
>   de
>   
>N0011
>2
>alternate
>Texxo
>   
>   
>N0012
>2
>alternate
>Texoco
>   
>  
> 
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518757#comment-16518757
 ] 

Robert Muir commented on LUCENE-8364:
-

Just looking, I have a few concerns:
* what is the goal of all the new abstractions? Abstractions have a significant 
cost, and I don't think we should be building a geo library here. We should 
just make the searches and stuff work.
* why does Polygon have new methods such as relate() and relatePoint() that are 
not used anywhere? We shouldn't add unnecessary stuff like that; we should keep 
this minimal.
* the hashCode()/equals() on Polygon2D is unnecessary. It is an implementation 
detail and such methods should not be used. For example, all queries just use 
equals() with the Polygon.
* methods like maxLon() on Polygon are unnecessary. These are already final 
variables so we don't need to wrap them in methods. Additionally such method 
names don't follow standard Java notation: it seems to just add noise (see the 
sketch after this list).
* some of the checks e.g. in Polygon are unnecessary. We don't need 
checkVertexIndex when the user already gets a correct exception 
(IndexOutOfBounds).
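
A tiny illustration of the accessor point (it assumes geo.Polygon's existing 
public final bounding-box fields, e.g. maxLon; this is not code from the 
patch):

{code:java}
import org.apache.lucene.geo.Polygon;

public class PolygonFieldAccess {
  // Direct final-field access is already the established style; a maxLon()
  // wrapper returns the same value through an extra layer of API.
  static double bboxEast(Polygon p) {
    return p.maxLon;
  }
}
{code}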

Maybe it would be easier to split up the proposed changes so they're easier to 
review, especially any proposed new abstract classes: I want to make sure that 
we really get value out of any abstractions, given their high cost.

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.4-Linux (32bit/jdk1.8.0_172) - Build # 20 - Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.4-Linux/20/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState

Error Message:
Collection not found: deleteFromClusterState_false

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: 
deleteFromClusterState_false
at 
__randomizedtesting.SeedInfo.seed([ECE04EE313FC5F15:279E58E2CC224A2]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:187)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteReplicaFromClusterState(DeleteReplicaTest.java:178)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518740#comment-16518740
 ] 

Nicholas Knize commented on LUCENE-8364:


Thanks [~dsmiley], no worries. And thanks for opening the discussion. In the 
meantime I'm hoping this provides the next natural step toward making the 
existing APIs more approachable, manageable, and extendable.

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518725#comment-16518725
 ] 

Gus Heck commented on SOLR-11598:
-

If it's not technically difficult to allow a large number of sorts as implied 
above, I think it should be allowed, but any strong performance 
implications should also be +clearly+ documented. A low, artificial 
limit merely prevents the user from making a trade-off decision. For a use case 
that's valuable enough, it might be worth it to them to fund a really beefy box 
(or cluster of beefy boxes) to handle the load. Give the user the tool and the 
information, let them decide.
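
As a language-level illustration of why the cost is only linear (generic 
Java, not Solr's ExportWriter internals):

{code:java}
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class CompositeSortSketch {
  // Composing an arbitrary number of per-field comparators: each extra sort
  // field adds at most one key comparison per document pair, so nothing
  // structural forces a cap of 4 -- the trade-off is purely one of
  // (documentable) performance.
  static Comparator<Map<String, String>> compositeSort(List<String> fields) {
    Comparator<Map<String, String>> cmp = (a, b) -> 0;
    for (String field : fields) {
      cmp = cmp.thenComparing(doc -> doc.get(field));
    }
    return cmp;
  }
}
{code}

Ten fields compose exactly like four; only the per-comparison work grows.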

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 85 - Still Unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/85/

3 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for new leader null Live Nodes: [127.0.0.1:56229_solr, 
127.0.0.1:56323_solr, 127.0.0.1:59138_solr] Last available state: 
DocCollection(collection1//collections/collection1/state.json/14)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node62":{   "core":"collection1_shard1_replica_n61",   
"base_url":"http://127.0.0.1:39042/solr;,   
"node_name":"127.0.0.1:39042_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node64":{ 
  "core":"collection1_shard1_replica_n63",   
"base_url":"http://127.0.0.1:59138/solr;,   
"node_name":"127.0.0.1:59138_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node66":{ 
  "core":"collection1_shard1_replica_n65",   
"base_url":"http://127.0.0.1:56323/solr;,   
"node_name":"127.0.0.1:56323_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new leader
null
Live Nodes: [127.0.0.1:56229_solr, 127.0.0.1:56323_solr, 127.0.0.1:59138_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"collection1_shard1_replica_n61",
  "base_url":"http://127.0.0.1:39042/solr;,
  "node_name":"127.0.0.1:39042_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node64":{
  "core":"collection1_shard1_replica_n63",
  "base_url":"http://127.0.0.1:59138/solr;,
  "node_name":"127.0.0.1:59138_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node66":{
  "core":"collection1_shard1_replica_n65",
  "base_url":"http://127.0.0.1:56323/solr;,
  "node_name":"127.0.0.1:56323_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([4B374FFE3EC936E:ACAF684521ACA744]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
 

[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-06-20 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518698#comment-16518698
 ] 

Steve Rowe commented on SOLR-12343:
---

Not sure if it relates to this bug -- please move/add if not -- but my Jenkins 
found a reproducing failure for {{TestCloudJSONFacetSKG.testBespoke()}}:

{noformat}
Checking out Revision 008bc74bebef96414f19118a267dbf982aba58b9 
(refs/remotes/origin/master)
[...]
ant test  -Dtestcase=TestCloudJSONFacetSKG -Dtests.method=testBespoke 
-Dtests.seed=5D223D88BF5BF89 -Dtests.slow=true -Dtests.locale=bg-BG 
-Dtests.timezone=America/Asuncion -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.11s J0  | TestCloudJSONFacetSKG.testBespoke <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Didn't check a single 
bucket???
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5D223D88BF5BF89:E09A7E14375787E]:0)
   [junit4]>at 
org.apache.solr.cloud.TestCloudJSONFacetSKG.testBespoke(TestCloudJSONFacetSKG.java:219)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> NOTE: test params are: 
codec=FastCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST,
 chunkSize=4, maxDocsPerChunk=1, blockSize=332), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST, 
chunkSize=4, blockSize=332)), 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@4052d535),
 locale=el, timezone=Indian/Antananarivo
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_151 (64-bit)/cpus=16,threads=1,free=213710424,total=526909440
{noformat}

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but they are *not* returned at all by shard2, because these terms both 
> have very high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cut" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort in which additional data provided by shards during 
> refinement can make a bucket "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...
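
To make the quoted scenario concrete, a tiny walk-through with made-up counts 
(limit=1, {{sort: 'count asc'}}):

{noformat}
shard1 phase#1:  termX=2 (makes the cut), termY=3 (the "N+1" bucket)
shard2 phase#1:  termX=100, termY=90 (neither returned: they sort too "badly")
phase#2 refine:  only termX is refined -> termX = 2 + 100 = 102
re-sort:         termY (3, shard1 only) now sorts ahead of termX (102)
final response:  termY reported with count 3, though its true count is 93
{noformat}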



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Dariusz Wojtas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dariusz Wojtas updated SOLR-12505:
--
Description: 
The issue:
 # when I try to use fetch() within a streaming expression, it does not enrich 
the inner source data. The result is exactly the same as if there was no 
surrounding fetch() function.
 # but it works if I try to do a leftOuterJoin() function instead.

Use the attached 'names' collection configuration.
 SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
join(), etc.

Data to be inserted:
 ==
{code:xml}

 
  1
  entity
  Orignal Darek name
  uk
  
   N001
   1
   alternate
   Darek
  
  
   N002
   1
   alternate
   Darke
  
  
   N003
   1
   alternate
  Darko
  
 
 
  2
  entity
  Texaco
  de
  
   N0011
   2
   alternate
   Texxo
  
  
   N0012
   2
   alternate
   Texoco
  
 

{code}
==
 The streaming query to execute.
 Simplified, as the main search usually does more complex stuff.
 ==
{noformat}
 fetch( 
 names,
 search(names,
 qt="/select",
 q="*:*",
 fq="type:alternate",
 fl="parentId, alias",
 rows=10,
 sort="parentId asc"), 
 on="parentId=id",
 fl="name,country"
 )
{noformat}
==

*Result*:
 * Collection of attributes: parentId, alias

*Expected result*:
 * Collection of attributes: parentId, alias, name, country

  was:
The issue:
 # when I try to use fetch() within a streaming expression, it does not enrich 
the inner source data. The result is exactly the same as if there was no 
surrounding fetch() function.
 # but it works if I try to do a leftOuterJoin() function instead.

Use the attached 'names' collection configuration.
 SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
join(), etc.

Data to be inserted:
 ==
{code:xml}
 
 
 1
 entity
 Orignal Darek name
 uk
 
 N001
 1
 alternate
 Darek
 
 
 N002
 1
 alternate
 Darke
 
 
 N003
 1
 alternate
 Darko
 
 
 
 2
 entity
 Texaco
 de
 
 N0011
 2
 alternate
 Texxo
 
 
 N0012
 2
 alternate
 Texoco
 
 
 
{code}
==
 The streaming query to execute.
 Simplified, as the main search usually does more complex stuff.
 ==
{noformat}
 fetch( 
 names,
 search(names,
 qt="/select",
 q="*:*",
 fq="type:alternate",
 fl="parentId, alias",
 rows=10,
 sort="parentId asc"), 
 on="parentId=id",
 fl="name,country"
 )
{noformat}
==

*Result*:
 * Collection of attributes: parentId, alias

*Expected result*:
 * Collection of attributes: parentId, alias, name, country


> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
> 
>  
>   1
>   entity
>   Orignal Darek name
>   uk
>   
>N001
>1
>alternate
>Darek
>   
>   
>N002
>1
>alternate
>Darke
>   
>   
>N003
>1
>alternate
>   Darko
>   
>  
>  
>   2
>   entity
>   Texaco
>   de
>   
>N0011
>2
>alternate
>Texxo
>   
>   
>N0012
>2
>alternate
>Texoco
>   
>  
> 
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Dariusz Wojtas (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dariusz Wojtas updated SOLR-12505:
--
Attachment: names.zip

> Streaming expressions - fetch() does not work as expected
> -
>
> Key: SOLR-12505
> URL: https://issues.apache.org/jira/browse/SOLR-12505
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3.1
> Environment: Windows 10, Java 10, Solr Cloud 7.3.1
>Reporter: Dariusz Wojtas
>Priority: Major
> Attachments: names.zip
>
>
> The issue:
>  # when I try to use fetch() within a streaming expression, it does not 
> enrich the inner source data. The result is exactly the same as if there was 
> no surrounding fetch() function.
>  # but it works if I try to do a leftOuterJoin() function instead.
> Use the attached 'names' collection configuration.
>  SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
> join(), etc.
> Data to be inserted:
>  ==
> {code:xml}
>  
>  
>  1
>  entity
>  Orignal Darek name
>  uk
>  
>  N001
>  1
>  alternate
>  Darek
>  
>  
>  N002
>  1
>  alternate
>  Darke
>  
>  
>  N003
>  1
>  alternate
>  Darko
>  
>  
>  
>  2
>  entity
>  Texaco
>  de
>  
>  N0011
>  2
>  alternate
>  Texxo
>  
>  
>  N0012
>  2
>  alternate
>  Texoco
>  
>  
>  
> {code}
> ==
>  The streaming query to execute.
>  Simplified, as the main search usually does more complex stuff.
>  ==
> {noformat}
>  fetch( 
>  names,
>  search(names,
>  qt="/select",
>  q="*:*",
>  fq="type:alternate",
>  fl="parentId, alias",
>  rows=10,
>  sort="parentId asc"), 
>  on="parentId=id",
>  fl="name,country"
>  )
> {noformat}
> ==
> *Result*:
>  * Collection of attributes: parentId, alias
> *Expected result*:
>  * Collection of attributes: parentId, alias, name, country



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12505) Streaming expressions - fetch() does not work as expected

2018-06-20 Thread Dariusz Wojtas (JIRA)
Dariusz Wojtas created SOLR-12505:
-

 Summary: Streaming expressions - fetch() does not work as expected
 Key: SOLR-12505
 URL: https://issues.apache.org/jira/browse/SOLR-12505
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.3.1
 Environment: Windows 10, Java 10, Solr Cloud 7.3.1
Reporter: Dariusz Wojtas


The issue:
 # when I try to use fetch() within a streaming expression, it does not enrich 
the inner source data. The result is exactly the same as if there was no 
surrounding fetch() function.
 # but it works if I try to do a leftOuterJoin() function instead.

Use the attached 'names' collection configuration.
 SOLR works in _cloud_ mode, and streaming expressions do work, e.g. stream(), 
join(), etc.

Data to be inserted:
 ==
{code:xml}
 
 
 1
 entity
 Orignal Darek name
 uk
 
 N001
 1
 alternate
 Darek
 
 
 N002
 1
 alternate
 Darke
 
 
 N003
 1
 alternate
 Darko
 
 
 
 2
 entity
 Texaco
 de
 
 N0011
 2
 alternate
 Texxo
 
 
 N0012
 2
 alternate
 Texoco
 
 
 
{code}
==
 The streaming query to execute.
 Simplified, as the main search usually does more complex stuff.
 ==
{noformat}
 fetch( 
 names,
 search(names,
 qt="/select",
 q="*:*",
 fq="type:alternate",
 fl="parentId, alias",
 rows=10,
 sort="parentId asc"), 
 on="parentId=id",
 fl="name,country"
 )
{noformat}
==

*Result*:
 * Collection of attributes: parentId, alias

*Expected result*:
 * Collection of attributes: parentId, alias, name, country
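
Since leftOuterJoin() reportedly works where fetch() does not, a hedged sketch 
of the equivalent join for comparison (it assumes the parent documents are the 
type:entity docs; untested):

{noformat}
leftOuterJoin(
  search(names, qt="/select", q="*:*", fq="type:alternate",
         fl="parentId, alias", rows=10, sort="parentId asc"),
  search(names, qt="/select", q="*:*", fq="type:entity",
         fl="id, name, country", rows=10, sort="id asc"),
  on="parentId=id"
)
{noformat}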



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Request for review of proposed LUCENE/SOLR JIRA workflow change

2018-06-20 Thread Steve Rowe
Hi David,

Thanks for the review!

I agree that it would be nice to be able to (re-)enable patch review 
independently from uploading a (new) patch. I’ll go mention your idea on 
INFRA-16094.

--
Steve
www.lucidworks.com

> On Jun 20, 2018, at 5:30 PM, David Smiley  wrote:
> 
> +1 Sounds good Steve; thanks for working with Gavin and Infra to improve our 
> workflow.
> 
> It'd be nice if, after cancelling a patch review, I could re-enable it.  It 
> appears the only way to do this is to re-attach the patch?  Anyway, it's a 
> minor issue.  I just did some fooling around on INFRATEST to try.
> 
> ~ David
> 
> 
> On Tue, Jun 19, 2018 at 1:53 PM Steve Rowe  wrote:
> The LUCENE and SOLR JIRA projects’ workflow was changed to support automatic 
> patch validation via Apache Yetus[1], but there have been objections to the 
> new workflow and button labels - see INFRA-16094[2].
> 
> Under INFRA-16094, Gavin McDonald has produced a new workflow for LUCENE/SOLR 
> that addresses the issues raised there.  Below I’ll summarize the changes to 
> the workflow, which is now demo'd on the JIRA project named INFRATEST1[3].
> 
> This email is a request for review of the proposed workflow changes prior to 
> putting them in place.  FYI, Gavin has offered to change other aspects of the 
> LUCENE/SOLR workflow, so if you have any pet peeves, now is the time to get 
> them addressed (but see my “separate issue” under the workflow changes 
> summary below).
> 
> Please post comments either on this thread or on INFRA-16094 (I’ll update 
> there if you comment on this thread and it makes sense to notify Infra).
> 
> -
> Summary of the workflow changes: 
> 
> 1. The “Submit Patch” button will be relabeled “Attach Patch”, and will bring 
> up the dialog to attach a patch, with a simultaneous comment (rather than 
> just changing the issue status).  This button will remain visible regardless 
> of issue status, so that it can be used to attach more patches.
> 
> 2. In the “Attach Patch” dialog, there will be a checkbox labeled “Enable 
> Automatic Patch Validation”, which will be checked by default.  If checked, 
> the issue’s status will transition to “Patch Available” (which signals Yetus 
> to perform automatic patch validation); if not checked, the patch will be 
> attached but no status transition will occur. NOTE: Gavin is still working on 
> adding this checkbox, so it’s not demo’d on INFRATEST1 issues yet, but he 
> says it’s doable and that he’ll work on it tomorrow, Australia time.
> 
> 3. When in “Patch Available” status, a button labeled “Cancel Patch Review” 
> will be visible; clicking on it will transition the issue status to “Open”, 
> thus disabling automatic patch review.
> 
> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the workflow 
> have been removed, because if they remain, JIRA creates a “Workflow” menu and 
> puts the “Attach Patch” button under it, which kind of defeats its purpose: 
> an obvious way to submit contributions.  I asked Gavin to remove the 
> “Progress” related aspects of the workflow because I don’t think they’re 
> being used except on a limited ad-hoc basis, not part of a conventional 
> workflow.
> -
> 
> Separate issue: on the thread where Cassandra moved the “Enviroment” field 
> below “Description” on the Create JIRA dialog[4], David Smiley wrote[5]:
> 
> > ok and these Lucene Fields, two checkboxes, New and Patch Available... I 
> > just don't think we really use this but I should raise this separately.
> 
> I think we should remove these.  In a chat on Infra Hipchat, Gavin offered to 
> do this, but since the Lucene PMC has control of this (as part of “screen 
> configuration”, which is separate from “workflow” configuration), I told him 
> we would tackle it ourselves.
> 
> [1] Enable Yetus for LUCENE/SOLR: 
> https://issues.apache.org/jira/browse/INFRA-15213
> [2] Modify LUCENE/SOLR Yetus-enabling workflow: 
> https://issues.apache.org/jira/browse/INFRA-16094
> [3] Demo of proposed LUCENE/SOLR workflow: 
> https://issues.apache.org/jira/projects/INFRATEST1
> [4] Cassandra fixes Create JIRA dialog: 
> https://lists.apache.org/thread.html/0efebe2fb08c7584421422d6005401a987a2b54bf604ae317b6e102f@%3Cdev.lucene.apache.org%3E
> [5] David Smiley says "Lucene fields” are unused: 
> https://lists.apache.org/thread.html/a17bd3b5797c12903d3c6bacb348e8b4325c59609765964527412ba4@%3Cdev.lucene.apache.org%3E
> 
> --
> Steve
> www.lucidworks.com
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, 

[jira] [Comment Edited] (SOLR-8659) Improve Solr JDBC Driver to support more SQL Clients

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518628#comment-16518628
 ] 

Aroop edited comment on SOLR-8659 at 6/20/18 9:42 PM:
--

[~risdenk]

Tableau Desktop 2018 has ODBC support and I was able to use OpenLink to connect 
Solr to it.

The collections even show up in the schema dictionary. However, queries on 
collections fail due to Tableau's fixation on modeling everything as 
inner queries, so the integration does not work at this point. Tableau builds 
queries like this: "select a as T.A, T.b as B from (select * from collectionA) 
as T". There does not seem to be any way to get around this, and these kinds 
of queries obviously fail on Solr.


was (Author: aroopganguly):
Tableau Desktop 2018 has ODBC support and I was able to use OpenLink to connect 
Solr to it.

The collections even show up on the schema dictionary, However, queries on 
collections fail due to Tableau's fixation with modeling everything as 
inner-queries and the integration does not work at this point. Tableau builds 
Queries like this " select a as T.A, T.b as B from (select * from collectionA) 
as T" and there does not seem to be any way to go around this, and these kind 
of queries fail on Solr obviously.

> Improve Solr JDBC Driver to support more SQL Clients
> 
>
> Key: SOLR-8659
> URL: https://issues.apache.org/jira/browse/SOLR-8659
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Priority: Major
> Attachments: 
> iODBC_Demo__Unicode__-_Connected_to__remotesolr__and_Attach_screenshot_-_ASF_JIRA.png
>
>
> SOLR-8502 was a great start to getting JDBC support to be more complete. This 
> ticket is to track items that could further improve the JDBC support for more 
> SQL clients and their features. A few SQL clients are:
> * DbVisualizer
> * SQuirrel SQL
> * Apache Zeppelin (incubating)
> * Spark
> * Python & Jython
> * IntelliJ IDEA Database Tool
> * ODBC clients like Excel/Tableau



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8659) Improve Solr JDBC Driver to support more SQL Clients

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518628#comment-16518628
 ] 

Aroop commented on SOLR-8659:
-

Tableau Desktop 2018 has ODBC support and I was able to use OpenLink to connect 
Solr to it.

The collections even show up in the schema dictionary. However, queries on 
collections fail due to Tableau's fixation on modeling everything as 
inner queries, so the integration does not work at this point. Tableau builds 
queries like this: "select a as T.A, T.b as B from (select * from collectionA) 
as T". There does not seem to be any way to get around this, and these kinds 
of queries obviously fail on Solr.
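
To make the shape difference concrete (hypothetical field and collection names; 
the flat form is the kind of statement Solr's Parallel SQL accepts, while the 
nested form is what Tableau generates):
{noformat}
-- accepted by Solr's SQL handler: a flat select
SELECT fieldA, fieldB FROM collectionA WHERE fieldC = 'x' LIMIT 10

-- rejected: Tableau wraps everything in a derived table (inner query)
SELECT T.fieldA, T.fieldB FROM (SELECT * FROM collectionA) AS T
{noformat}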

> Improve Solr JDBC Driver to support more SQL Clients
> 
>
> Key: SOLR-8659
> URL: https://issues.apache.org/jira/browse/SOLR-8659
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 6.0
>Reporter: Kevin Risden
>Priority: Major
> Attachments: 
> iODBC_Demo__Unicode__-_Connected_to__remotesolr__and_Attach_screenshot_-_ASF_JIRA.png
>
>
> SOLR-8502 was a great start to getting JDBC support to be more complete. This 
> ticket is to track items that could further improve the JDBC support for more 
> SQL clients and their features. A few SQL clients are:
> * DbVisualizer
> * SQuirrel SQL
> * Apache Zeppelin (incubating)
> * Spark
> * Python & Jython
> * IntelliJ IDEA Database Tool
> * ODBC clients like Excel/Tableau



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518622#comment-16518622
 ] 

Varun Thacker commented on SOLR-11598:
--

Hi [~joel.bernstein], what do you think about this patch? Do the numbers help 
make the case for increasing this limit?

I'll start looking at the patch in greater detail early next week, but wanted 
your thoughts on it as well.
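
For anyone skimming the thread: the bound in question is a hard-coded guard in 
ExportWriter.getSortDoc (see the stack trace in the description). A minimal 
sketch of the pattern, with illustrative names rather than the actual source:
{code:java}
// Illustrative only (not the real ExportWriter code): the export handler
// rejects sort specs beyond a hard-coded maximum; raising the limit means
// relaxing this guard and sizing the SortDoc implementations to match.
static final int MAX_SORT_FIELDS = 4; // the limit this issue proposes to raise

static void checkSortSpec(org.apache.lucene.search.SortField[] sortFields) throws java.io.IOException {
  if (sortFields.length > MAX_SORT_FIELDS) {
    throw new java.io.IOException("A max of " + MAX_SORT_FIELDS + " sorts can be specified");
  }
}
{code}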

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> 

[jira] [Assigned] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-11598:


Assignee: Varun Thacker

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Assignee: Varun Thacker
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> 

Re: [DISCUSS] Request for review of proposed LUCENE/SOLR JIRA workflow change

2018-06-20 Thread David Smiley
+1 Sounds good Steve; thanks for working with Gavin and Infra to improve
our workflow.

It'd be nice if, after cancelling a patch review, I could re-enable it.  It
appears the only way to do this is to re-attach the patch?  Anyway, it's a
minor issue.  I just did some fooling around on INFRATEST to try.

~ David


On Tue, Jun 19, 2018 at 1:53 PM Steve Rowe  wrote:

> The LUCENE and SOLR JIRA projects’ workflow was changed to support
> automatic patch validation via Apache Yetus[1], but there have been
> objections to the new workflow and button labels - see INFRA-16094[2].
>
> Under INFRA-16094, Gavin McDonald has produced a new workflow for
> LUCENE/SOLR that addresses the issues raised there.  Below I’ll summarize
> the changes to the workflow, which is now demo'd on the JIRA project named
> INFRATEST1[3].
>
> This email is a request for review of the proposed workflow changes prior
> to putting them in place.  FYI, Gavin has offered to change other aspects
> of the LUCENE/SOLR workflow, so if you have any pet peeves, now is the time
> to get them addressed (but see my “separate issue” under the workflow
> changes summary below).
>
> Please post comments either on this thread or on INFRA-16094 (I’ll update
> there if you comment on this thread and it makes sense to notify Infra).
>
> -
> Summary of the workflow changes:
>
> 1. The “Submit Patch” button will be relabeled “Attach Patch”, and will
> bring up the dialog to attach a patch, with a simultaneous comment (rather
> than just changing the issue status).  This button will remain visible
> regardless of issue status, so that it can be used to attach more patches.
>
> 2. In the “Attach Patch” dialog, there will be a checkbox labeled “Enable
> Automatic Patch Validation”, which will be checked by default.  If checked,
> the issue’s status will transition to “Patch Available” (which signals
> Yetus to perform automatic patch validation); if not checked, the patch
> will be attached but no status transition will occur. NOTE: Gavin is still
> working on adding this checkbox, so it’s not demo’d on INFRATEST1 issues
> yet, but he says it’s doable and that he’ll work on it tomorrow, Australia
> time.
>
> 3. When in “Patch Available” status, a button labeled “Cancel Patch
> Review” will be visible; clicking on it will transition the issue status to
> “Open”, thus disabling automatic patch review.
>
> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the
> workflow have been removed, because if they remain, JIRA creates a
> “Workflow” menu and puts the “Attach Patch” button under it, which kind of
> defeats its purpose: an obvious way to submit contributions.  I asked Gavin
> to remove the “Progress” related aspects of the workflow because I don’t
> think they’re being used except on a limited ad-hoc basis, not part of a
> conventional workflow.
> -
>
> Separate issue: on the thread where Cassandra moved the “Enviroment” field
> below “Description” on the Create JIRA dialog[4], David Smiley wrote[5]:
>
> > ok and these Lucene Fields, two checkboxes, New and Patch Available... I
> just don't think we really use this but I should raise this separately.
>
> I think we should remove these.  In a chat on Infra Hipchat, Gavin offered
> to do this, but since the Lucene PMC has control of this (as part of
> “screen configuration”, which is separate from “workflow” configuration), I
> told him we would tackle it ourselves.
>
> [1] Enable Yetus for LUCENE/SOLR:
> https://issues.apache.org/jira/browse/INFRA-15213
> [2] Modify LUCENE/SOLR Yetus-enabling workflow:
> https://issues.apache.org/jira/browse/INFRA-16094
> [3] Demo of proposed LUCENE/SOLR workflow:
> https://issues.apache.org/jira/projects/INFRATEST1
> [4] Cassandra fixes Create JIRA dialog:
> https://lists.apache.org/thread.html/0efebe2fb08c7584421422d6005401a987a2b54bf604ae317b6e102f@%3Cdev.lucene.apache.org%3E
> [5] David Smiley says "Lucene fields” are unused:
> https://lists.apache.org/thread.html/a17bd3b5797c12903d3c6bacb348e8b4325c59609765964527412ba4@%3Cdev.lucene.apache.org%3E
>
> --
> Steve
> www.lucidworks.com
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-06-20 Thread Aroop (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518620#comment-16518620
 ] 

Aroop commented on SOLR-11598:
--

Pinging this thread again. 

Is there any progress on this? This is a very important feature for 
analytics use cases.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>Priority: Major
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch, SOLR-11598.patch, 
> SOLR-11598.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2163 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2163/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.update.processor.TestNamedUpdateProcessors.test

Error Message:
Error from server at https://127.0.0.1:40885/collection1: Async exception 
during distributed update: Error from server at 
https://127.0.0.1:44675/collection1_shard1_replica_n43: Bad Request
request: 
https://127.0.0.1:44675/collection1_shard1_replica_n43/update?update.distrib=TOLEADER=https%3A%2F%2F127.0.0.1%3A40885%2Fcollection1_shard2_replica_n45%2F=javabin=2
 Remote error message: ERROR: [doc=123] multiple values encountered for non 
multiValued field test_s: [one, two]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:40885/collection1: Async exception during 
distributed update: Error from server at 
https://127.0.0.1:44675/collection1_shard1_replica_n43: Bad Request



request: 
https://127.0.0.1:44675/collection1_shard1_replica_n43/update?update.distrib=TOLEADER=https%3A%2F%2F127.0.0.1%3A40885%2Fcollection1_shard2_replica_n45%2F=javabin=2
Remote error message: ERROR: [doc=123] multiple values encountered for non 
multiValued field test_s: [one, two]
at 
__randomizedtesting.SeedInfo.seed([AB5C7C8C5A9F386C:23084356F4635594]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:173)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:152)
at 
org.apache.solr.update.processor.TestNamedUpdateProcessors.test(TestNamedUpdateProcessors.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-11823) Incorrect number of replica calculation when using Restore Collection API

2018-06-20 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518609#comment-16518609
 ] 

Varun Thacker commented on SOLR-11823:
--

Hi Ansgar Wiechers,

Now that SOLR-11676 / SOLR-12489 and SOLR-11807 are wrapped up, I want to see if 
the issue mentioned here still persists.

So to summarize, the way you were testing was this:
 * Start a 3 node cluster
 * Create collection through command line : bin/solr create -c demo -shards 3 
-replicationFactor 2
 * Call backup
 * Call Restore : # curl -s -k 
'https://localhost:8983/solr/admin/collections?action=restore=demo=/srv/backup/solr/solr-dev=demo=2=2'
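
For reference, the numbers in the original error line up with a capacity check 
of this shape (back-of-the-envelope, not the exact source):
{noformat}
cores needed    = shards * totalReplicasPerShard = 3 * 6 = 18
slots available = nodes  * maxShardsPerNode      = 3 * 2 = 6
18 > 6  ->  "Solr cloud with available number of nodes:3 is insufficient ..."
with the expected totalReplicasPerShard of 2:  3 * 2 = 6 <= 6, so the restore fits
{noformat}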

> Incorrect number of replica calculation when using Restore Collection API
> -
>
> Key: SOLR-11823
> URL: https://issues.apache.org/jira/browse/SOLR-11823
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.1
>Reporter: Ansgar Wiechers
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> I'm running Solr 7.1 (didn't test other versions) in SolrCloud mode on a 
> 3-node cluster and tried using the backup/restore API for the first time. 
> Backup worked fine, but when trying to restore the backed-up collection I ran 
> into an unexpected problem with the replication factor setting.
> I expected the command below to restore a backup of the collection "demo" 
> with 3 shards, creating 2 replicas per shard. Instead it's trying to create 6 
> replicas per shard:
> {noformat}
> # curl -s -k 
> 'https://localhost:8983/solr/admin/collections?action=restore=demo=/srv/backup/solr/solr-dev=demo=2=2'
> {
>   "error": {
> "code": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number of 
> available nodes.",
> "metadata": [
>   "error-class",
>   "org.apache.solr.common.SolrException",
>   "root-error-class",
>   "org.apache.solr.common.SolrException"
> ]
>   },
>   "exception": {
> "rspCode": 400,
> "msg": "Solr cloud with available number of nodes:3 is insufficient for 
> restoring a collection with 3 shards, total replicas per shard 6 and 
> maxShardsPerNode 2. Consider increasing maxShardsPerNode value OR number of 
> available nodes."
>   },
>   "Operation restore caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Solr cloud with available number of nodes:3 is insufficient for restoring a 
> collection with 3 shards, total replicas per shard 6 and maxShardsPerNode 2. 
> Consider increasing maxShardsPerNode value OR number of available nodes.",
>   "responseHeader": {
> "QTime": 28,
> "status": 400
>   }
> }
> {noformat}
> Restoring a collection with only 2 shards tries to create 6 replicas as well, 
> so it looks to me like the restore API multiplies the replication factor by 
> the number of nodes, which is not how the replication factor behaves in other 
> contexts. The 
> [documentation|https://lucene.apache.org/solr/guide/7_1/collections-api.html] 
> also didn't lead me to expect this behavior:
> {quote}
> replicationFactor
>The number of replicas to be created for each shard.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12413) Solr ignores aliases.json from ZooKeeper at startup

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518599#comment-16518599
 ] 

David Smiley commented on SOLR-12413:
-

Here's a test, intended to be added to AliasIntegrationTest
{code:java}
  @Test
  public void testPreExistingAliases() throws Exception {
    // put a basic alias1->collection1 alias mapping into ZK manually, ensuring the zk version is 0
    final byte[] bytes = Aliases.EMPTY.cloneWithCollectionAlias("alias1", "collection1").toJSON();
    cluster.getZkClient().delete(ZkStateReader.ALIASES, -1, true);
    cluster.getZkClient().create(ZkStateReader.ALIASES, bytes, CreateMode.PERSISTENT, true);
    Stat stat = new Stat();
    cluster.getZkClient().getData(ZkStateReader.ALIASES, null, stat, true);
    assertEquals(0, stat.getVersion());

    // get a new solrClient instead of the one created before our manual ZK manipulation.
    try (SolrClient solrClient = getCloudSolrClient(cluster)) {
      CollectionAdminRequest.createCollection("collection1", 1, 1).process(cluster.getSolrClient());
      solrClient.query("alias1", params("q", "*:*")); // does not throw; it should resolve
    }
  }
{code}
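
Relatedly, the workaround noted in the description below (keep the aliases.json 
znode version above 0) amounts to overwriting without clearing, assuming ZkCLI's 
put overwrites an existing znode as the description implies (placeholders kept 
as-is):
{noformat}
# overwrite without clearing, so the znode version increments past 0
java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd put /aliases.json "new content"
java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd put /aliases.json "new content"
{noformat}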

> Solr ignores aliases.json from ZooKeeper at startup
> ---
>
> Key: SOLR-12413
> URL: https://issues.apache.org/jira/browse/SOLR-12413
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2.1
> Environment: A SolrCloud cluster with ZooKeeper (one node is enough 
> to reproduce).
> Solr 7.2.1.
> ZooKeeper 3.4.6.
>Reporter: Gaël Jourdan
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Since upgrading to 7.2.1, we ran into an issue where Solr ignores the 
> _aliases.json_ file stored in ZooKeeper.
>  
> +Steps to reproduce the problem:+
>  # SolrCloud cluster is down
>  # Direct update of _aliases.json_ file in ZooKeeper with Solr ZkCLI *without 
> using Collections API* :
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd clear 
> /aliases.json}}
>  ** {{java ... org.apache.solr.cloud.ZkCLI -zkhost ... -cmd put /aliases.json 
> "new content"}}
>  # SolrCloud cluster is started => _aliases.json_ not taken into account
>  
> +Analysis:+ 
> Digging a bit in the code, what is actually causing the issue is that, when 
> starting, Solr now checks for the metadata of the _aliases.json_ file and if 
> the version metadata from ZooKeeper is lower than or equal to the local 
> version, it keeps the local version.
> When it starts, Solr has a local version of 0 for the aliases but ZooKeeper 
> also has a version of 0 of the file because we just recreated it. So Solr 
> ignores ZooKeeper configuration and never has a chance to load aliases.
>  
> Relevant parts of Solr code are:
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java]
>  : line 1562 : method setIfNewer
> {code:java}
> /**
> * Update the internal aliases reference with a new one, provided that its ZK 
> version has increased.
> *
> * @param newAliases the potentially newer version of Aliases
> */
> private boolean setIfNewer(Aliases newAliases) {
>   synchronized (this) {
>     int cmp = Integer.compare(aliases.getZNodeVersion(), 
> newAliases.getZNodeVersion());
>     if (cmp < 0) {
>   LOG.debug("Aliases: cmp={}, new definition is: {}", cmp, newAliases);
>   aliases = newAliases;
>   this.notifyAll();
>       return true;
>     } else {
>   LOG.debug("Aliases: cmp={}, not overwriting ZK version.", cmp);
>       assert cmp != 0 || Arrays.equals(aliases.toJSON(), newAliases.toJSON()) 
> : aliases + " != " + newAliases;
>     return false;
>     }
>   }
> }{code}
>  * 
> [https://github.com/apache/lucene-solr/blob/branch_7_2/solr/solrj/src/java/org/apache/solr/common/cloud/Aliases.java]
>  : line 45 : the "empty" Aliases object with default version 0
> {code:java}
> /**
> * An empty, minimal Aliases primarily used to support the non-cloud solr use 
> cases. Not normally useful
> * in cloud situations where the version of the node needs to be tracked even 
> if all aliases are removed.
> * A version of 0 is provided rather than -1 to minimize the possibility that 
> if this is used in a cloud
> * instance data is written without version checking.
> */
> public static final Aliases EMPTY = new Aliases(Collections.emptyMap(), 
> Collections.emptyMap(), 0);{code}
>  
> Note that a workaround is to force ZooKeeper to always have a version greater 
> than 0 for _aliases.json_ file (for instance by not clearing the file and 
> just overwriting it again and again).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release Lucene/Solr 7.4.0 RC1

2018-06-20 Thread Kevin Risden
+1
SUCCESS! [1:59:46.135376]

Kevin Risden

On Wed, Jun 20, 2018 at 11:30 AM, Varun Thacker  wrote:

> +1
> SUCCESS! [2:53:31.027487]
>
> On Wed, Jun 20, 2018 at 11:22 AM, Christian Moen  wrote:
>
>> +1
>> SUCCESS! [1:29:55.531758]
>>
>>
>> On Tue, Jun 19, 2018 at 5:27 AM Adrien Grand  wrote:
>>
>>> Please vote for release candidate 1 for Lucene/Solr 7.4.0
>>>
>>> The artifacts can be downloaded from:
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.
>>> 4.0-RC1-rev9060ac689c270b02143f375de0348b7f626adebc
>>>
>>> You can run the smoke tester directly with this command:
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.
>>> 4.0-RC1-rev9060ac689c270b02143f375de0348b7f626adebc
>>>
>>>
>>> 
>>> Here’s my +1
>>> SUCCESS! [0:48:15.228535]
>>>
>>
>


[JENKINS] Lucene-Solr-repro - Build # 857 - Still Unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/857/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/80/consoleText

[repro] Revision: 3d20e8967b00ad604ae0500fa1bf6fbe4adae0d2

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=A35E453ADC4267E8 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ko 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
008bc74bebef96414f19118a267dbf982aba58b9
[repro] git fetch
[repro] git checkout 3d20e8967b00ad604ae0500fa1bf6fbe4adae0d2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestTriggerIntegration" -Dtests.showOutput=onerror  
-Dtests.seed=A35E453ADC4267E8 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ko 
-Dtests.timezone=America/Indiana/Marengo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 6259 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 008bc74bebef96414f19118a267dbf982aba58b9

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 856 - Still unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/856/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.4/9/consoleText

[repro] Revision: 9060ac689c270b02143f375de0348b7f626adebc

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testTriggerThrottling -Dtests.seed=FA94387A5E2A98CB 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=ga -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestRebalanceLeaders 
-Dtests.method=test -Dtests.seed=FA94387A5E2A98CB -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=hr-HR -Dtests.timezone=Antarctica/Rothera -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=FA94387A5E2A98CB -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-AR -Dtests.timezone=Europe/Astrakhan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=FA94387A5E2A98CB -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=ROK -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
008bc74bebef96414f19118a267dbf982aba58b9
[repro] git fetch
[repro] git checkout 9060ac689c270b02143f375de0348b7f626adebc

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SolrRrdBackendFactoryTest
[repro]   TestTriggerIntegration
[repro]   TestRebalanceLeaders
[repro]   CdcrBidirectionalTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.SolrRrdBackendFactoryTest|*.TestTriggerIntegration|*.TestRebalanceLeaders|*.CdcrBidirectionalTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.seed=FA94387A5E2A98CB -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.4/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-AR -Dtests.timezone=Europe/Astrakhan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 14450 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.TestRebalanceLeaders
[repro]   0/5 failed: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
[repro]   1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 008bc74bebef96414f19118a267dbf982aba58b9

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11865) Refactor QueryElevationComponent to prepare query subset matching

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518497#comment-16518497
 ] 

David Smiley commented on SOLR-11865:
-

To clarify, the "PR" (GitHub) is what I cannot "close"... albeit I can close it 
indirectly if I put the magic words into a commit message, but it's easy to 
forget that and I forgot.  I can change the workflow state of the Jira (which is 
not referred to as a PR), and I did.  Issues go to the "Resolved" state upon 
completion (not closed).  Closure happens when the issue is released in the 
next version, performed in bulk upon a release (without issue/Jira 
notification) by the release manager.  Issues shouldn't be closed beforehand, 
which you did, but I simply put it back to the Resolved state just now – no big 
deal – I don't think there's any consequence.

> Refactor QueryElevationComponent to prepare query subset matching
> -
>
> Key: SOLR-11865
> URL: https://issues.apache.org/jira/browse/SOLR-11865
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: master (8.0)
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Minor
>  Labels: QueryComponent
> Fix For: 7.5
>
> Attachments: 
> 0001-Refactor-QueryElevationComponent-to-introduce-Elevat.patch, 
> 0002-Refactor-QueryElevationComponent-after-review.patch, 
> 0003-Remove-exception-handlers-and-refactor-getBoostDocs.patch, 
> SOLR-11865.patch, SOLR-11865.patch, SOLR-11865.patch, SOLR-11865.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The goal is to prepare a second improvement to support query terms subset 
> matching or query elevation rules.
> Before that, we need to refactor the QueryElevationComponent. We make it 
> extendible. We introduce the ElevationProvider interface which will be 
> implemented later in a second patch to support subset matching. The current 
> full-query match policy becomes a default simple MapElevationProvider.
> - Add overridable methods to handle exceptions during the component 
> initialization.
> - Add overridable methods to provide the default values for config properties.
> - No functional change beyond refactoring.
> - Adapt unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-20 Thread Jerry Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518480#comment-16518480
 ] 

Jerry Bao commented on SOLR-11985:
--

[~noble.paul] What would happen if I had 5 replicas and 3 zones for a shard? Is 
it possible to make a rule that balances the replicas on a shard as 2 on 
us-east-1a, 2 on us-east-1b, and 1 on us-east-1c?
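
(Illustrative only; whether the engine balances such ties is exactly the 
question. A literal reading of the percentage semantics in this issue would 
suggest rules along these lines, with hypothetical zone names:)
{code:java}
// hypothetical: 5 * 41 / 100 = 2.05, so "<41%" permits at most 2 per shard in a zone
{"replica": "<41%", "shard": "#EACH", "sysprop:region": "us-east-1a"}
{"replica": "<41%", "shard": "#EACH", "sysprop:region": "us-east-1b"}
// 5 * 21 / 100 = 1.05, so "<21%" permits at most 1
{"replica": "<21%", "shard": "#EACH", "sysprop:region": "us-east-1c"}
{code}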

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}} . The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12495) Make it possible to evenly distribute replicas

2018-06-20 Thread Jerry Bao (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518476#comment-16518476
 ] 

Jerry Bao commented on SOLR-12495:
--

Wanted to add a couple of comments:

It would be great if this applied per collection. For example, a collection with 42 
replicas and 40 nodes should expect to have one replica from that collection on 
each node, with 2 nodes having 2 replicas: {"replica": "#MINIMUM", 
"collection": "#EACH", "node": "#ANY"}

Cluster-wide would also go along with this, making sure each node has a similar 
number of replicas: {"replica": "#MINIMUM", "node": "#ANY"}

A warning: "<3", which is ceil(42/40) = 2, works, but only after each node 
has one replica. That rule also allows 2 replicas on each of 21 nodes, which is 
not as good as 1 replica on every node with 2 replicas on 2 of them. I think 
this should be fixed by the ordering of the nodes by preference, but only if 
the list is updated after each movement.
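
For the 42-replica / 40-node example, the #MINIMUM computation described in 
this issue reduces to the following (a minimal sketch; note the cast, since 
integer division would floor to 1):
{code:java}
// #MINIMUM per the description: replicas per node <= ceil(numReplicas / numValidNodes)
int numReplicas = 42, numValidNodes = 40;
int minimum = (int) Math.ceil((double) numReplicas / numValidNodes); // = 2, i.e. the "<3" bound
{code}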

[~noble.paul] FYI

> Make it possible to evenly distribute replicas
> --
>
> Key: SOLR-12495
> URL: https://issues.apache.org/jira/browse/SOLR-12495
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> Support a new function value for {{replica= "#MINIMUM"}}
> {{#MINIMUM}} means the minimum computed value for the given configuration
> the value of replica will be calculated as  {{<= 
> Math.ceil(number_of_replicas/number_of_valid_nodes) }}
> *example 1:*
> {code:java}
> {"replica" : "#MINIMUM" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 1* : nodes=3, replicationFactor=4
>  the value of replica will be calculated as {{Math.ceil(4/3) = 2}}
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *case 2* : 
> current state : nodes=3, replicationFactor=2
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "shard" : "#EACH" , "node" : "#ANY"}
> {code}
> *example:2*
> {code}
> {"replica" : "#MINIMUM"  , "node" : "#ANY"}{code}
> case 1: numShards = 2, replicationFactor=3, nodes = 5
> this is equivalent to the hard coded rule
> {code:java}
> {"replica" : "<3" , "node" : "#ANY"}
> {code}
> *example:3*
> {code}
> {"replica" : "<2"  , "shard" : "#EACH" , "port" : "8983"}{code}
> case 1: {{replicationFactor=3, nodes with port 8983 = 2}}
> this is equivalent to the hard coded rule
> {code}
> {"replica" : "<3"  , "shard" : "#EACH" , "port" : "8983"}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-20 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518474#comment-16518474
 ] 

Lucene/Solr QA commented on SOLR-12458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 
 5m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  4m 21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 39s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.store.hdfs.HdfsLockFactoryTest |
|   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
|   | solr.cloud.PeerSyncReplicationTest |
|   | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
|   | solr.highlight.TestPostingsSolrHighlighter |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928357/SOLR-12458.patch |
| Optional Tests |  checklicenses  validatesourcepatterns  ratsources  compile  
javac  unit  checkforbiddenapis  validaterefguide  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 008bc74 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/128/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/128/testReport/ |
| modules | C: lucene solr solr/core solr/solr-ref-guide U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/128/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is a HDFS like API available in Microsoft Azure.   
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2162 - Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2162/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:46543/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:46571/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:46543/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:46571/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([6AEEF24E2527C591:C02321BC92F41041]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-12503) SolrJ deleteById doesn't work when authentication is active.

2018-06-20 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518369#comment-16518369
 ] 

Erick Erickson commented on SOLR-12503:
---

Is this a duplicate of SOLR-9399? If you think so, please close it.

> SolrJ deleteById doesn't work when authentication is active.
> 
>
> Key: SOLR-12503
> URL: https://issues.apache.org/jira/browse/SOLR-12503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 5.5.5, 7.2.1, 7.3.1
>Reporter: Federico Grillini
>Priority: Major
>
> When solr authentication is active the following code fails:
> {code:java}
> String id = "xxx"; // same as List<String> ids = ...
> UpdateRequest upReq = new UpdateRequest();
> upReq.setBasicAuthCredentials("user", "pwd");
> upReq.deleteById(id).process(solrClient);
> {code}
> The error is (using *solrj 5.5.5*):
> {quote}
> org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
> server at http://xxx_shard1_replica_n1: Expected mime type application/xml 
> but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/XXX_shard1_replica_n1/update. Reason:
> require authentication
> 
> 
>   
> org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:653)
>   
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1002)
>   
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891)
>   
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827)
>   org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
>   org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
> {quote}
> The bug is in the method 
> {{Map<String, LBHttpSolrClient.Req> 
> org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(DocRouter 
> router, DocCollection col, Map<String, List<String>> urlMap, 
> ModifiableSolrParams params, String idField)}}
> At line 299 a new request is created without the credentials of the main 
> request.
> Also solrj *7.3.1* is affected by the bug.
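
A possible client-side workaround until the routing code is fixed (a sketch 
only, for 7.x, assuming the {{PreemptiveBasicAuthClientBuilderFactory}} that 
ships with SolrJ 7.x; it does not help on 5.5.5): configure the credentials 
globally, so the per-route requests built inside {{UpdateRequest.getRoutes()}} 
pick them up from the HTTP client rather than from the request object:

{code:java}
// Sketch of a workaround, not the fix. Set these before the first
// SolrClient is created, so HttpClientUtil picks up the factory.
System.setProperty("solr.httpclient.builder.factory",
    "org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory");
System.setProperty("basicauth", "user:pwd");

try (CloudSolrClient solrClient = new CloudSolrClient.Builder()
    .withZkHost("zkhost:2181").build()) {   // placeholder ZK address
  new UpdateRequest().deleteById("xxx").process(solrClient);
}
{code}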



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Disabling fingerprinting

2018-06-20 Thread Erick Erickson
re: SOLR-8690. I get that this is a super-expert option, and I get
that it's a safety valve for unexpected behavior, mostly having to do
with expensive fingerprint calculations.

My question is: "What are the dangers of disabling fingerprinting?"

I have two use cases:
- I'm indexing and committing while restarting a replica.
- I'm issuing delete-by-id and/or DBQ (perhaps while restarting a node).

The situation I'm seeing is that indexing is occurring while a node is
restarted. There should be _plenty_ of updates in the tlogs to do a
peer sync, but it's falling back to full sync due to fingerprint
mismatch. SOLR-11216 should address this, but won't be out for a
while.

What I'm wondering is if, as a stop-gap, turning off fingerprinting is
an option. Fingerprinting wasn't added just because someone was bored,
but I don't fully understand the situation that can lead to
fingerprinting catching a problem that the rest of peer sync doesn't.

If it's an edge case we can guarantee won't be exercised (e.g. DBQ)
then we can be more confident about turning it off.

Thanks!
Erick
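
P.S. For anyone who wants to experiment: if I'm reading SOLR-8690 right, the 
escape hatch it added is a system property, so starting nodes with 
-Dsolr.disableFingerprint=true should skip the fingerprint comparison during 
peer sync.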

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-20 Thread Nicholas Knize
If I were to pick between the two, I also have a preference for B.  I've
also tried to keep this whole spatial organization rather simple:

core - simple spatial capabilities needed by the 99% spatial use case
(e.g., web mapping). Includes LatLonPoint, polygon & distance search
(everything currently in sandbox). Lightweight, and no dependencies or
complexities. If you want simple and fast point search, the core module is
all you need.

spatial - dependency free. Expands on core spatial to include simple shape
searching. Uses internal relations. Everything confined to core and spatial
modules.

spatial-extras - expanded spatial capabilities. Welcomes third-party
dependencies (e.g., S3, SIS, Proj4J). Targets more advanced/expert GIS
use-cases.

geo3d - trades speed for accuracy. I've always struggled with the name,
since it implies 3D shapes/point-cloud support, but history has shown that
considering a name change is a bike-shedding endeavor.

At the end of the day I'm up for whatever makes most sense for everyone
here. Lord knows we could use more people helping out on geo.

- Nick



On Wed, Jun 20, 2018 at 11:40 AM Adrien Grand  wrote:

> I have a slight preference for B similarly to how StandardAnalyzer is in
> core and other analyzers are in analysis, but no strong feelings. In any
> case I agree that both A and B would be much better than the current
> situation.
>
>
> On Wed, Jun 20, 2018 at 6:09 PM, David Smiley  wrote:
>
>> I think everyone agrees the current state of spatial code organization in
>> Lucene is not desirable.  We have a spatial module that has almost nothing
>> in it, we have mature spatial code in the sandbox that needs to "graduate"
>> somewhere, and we've got a handful of geo utilities in Lucene core (mostly
>> because I didn't notice).  No agreement has been reached on what the
>> desired state should be.
>>
>> I'd like to hear opinions on this from members of the community.  I am
>> especially interested in listening to people that normally don't seem to
>> speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
>> respect both of you guys a ton for your tenure with Lucene and aren't too
>> pushy with your opinions. I can be convinced to change my mind, especially
>> if coming from you two.  Of course anyone can respond -- this is an open
>> discussion!
>>
>> As I understand it, there are two proposals loosely defined as follows:
>>
>> (A) Common spatial needs will be met in the "spatial" module.  The Lucene
>> "spatial" module, currently in a weird gutted state, should have basically
>> all spatial code currently in sandbox plus all geo stuff in Lucene core.
>> Thus there will be no geo stuff in Lucene core.
>>
>> (B) Common spatial needs will be met by Lucene core.  Lucene core should
>> expand its current "geo" utilities to include the spatial stuff currently
>> in the sandbox module.  It'd also take on what little remains in the Lucene
>> spatial module and thus we can remove the spatial module.
>>
>> With either plan if a user has certain advanced/specialized needs they
>> may need to go to spatial3d or spatial-extras modules.  These would be
>> untouched in both proposals.
>>
>> I'm in favor of (A) on the grounds that we have modules for special
>> feature areas, and spatial should be no different.  My gut estimation is
>> that 75-90% of apps do not have spatial requirements and need not depend on
>> any spatial module.  Other modules are probably used more (e.g. queries,
>> suggest, etc.)
>>
>> Respectfully,
>>   ~ David
>>
>> p.s. if I mischaracterized any proposal or overlooked another then I'm
>> sorry, please correct me.
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
> --
Nicholas Knize  |  Geospatial Software Guy  |  Elasticsearch & Apache
Lucene  |  nkn...@apache.org


[jira] [Commented] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-06-20 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518348#comment-16518348
 ] 

Gus Heck commented on SOLR-11654:
-

I'll look tonight

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch, SOLR-11654.patch, SOLR-11654.patch, 
> SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12504) Leader should compute its fingerprint and retrieve recent updates in an atomic way

2018-06-20 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12504:

Description: 
SOLR-11216 solved many cases in PeerSync (in doing recovery). But there are 
still cases where PeerSync will fail because of a mismatch in the fingerprint 
comparison. The main reason is that the leader's fingerprint and recent 
versions are not computed/retrieved in an atomic way. 

For example, when an update is made into the leader's tlog after the 
fingerprint is computed but before the recent versions are retrieved:
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)
--> A mismatch in fingerprint between leader and replica.

But it seems that blocking updates on the leader for the {{getVersions}} 
operation is not a good idea (it will degrade indexing performance). Still 
struggling to find a solution.

  was:
SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will fail because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea (it will degrade indexing performance) Still struggling on 
finding a solution.


> Leader should compute its fingerprint and retrieve recent updates in an 
> atomic way
> --
>
> Key: SOLR-12504
> URL: https://issues.apache.org/jira/browse/SOLR-12504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> SOLR-11216 solved many cases in PeerSync (in doing recovery). But there 
> are still cases where PeerSync will fail because of a mismatch in the 
> fingerprint comparison. The main reason is that the leader's fingerprint 
> and recent versions are not computed/retrieved in an atomic way. 
> For example, when an update is made into the leader's tlog after the 
> fingerprint is computed but before the recent versions are retrieved:
> Leader's fingerprint  : (contains updates from 1-10)
> Leader's recent versions : (contains updates from 1-12)
> Replica's fingerprint:  (contains updates from 1-12)
> --> A mismatch in fingerprint between leader and replica.
> But it seems that blocking updates on the leader for the {{getVersions}} 
> operation is not a good idea (it will degrade indexing performance). Still 
> struggling to find a solution.
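
One shape a fix could take (a sketch of the idea only; {{vinfo}}, {{ulog}}, 
{{core}}, {{maxVersion}} and {{nVersions}} are assumed from context, and 
{{VersionInfo#blockUpdates}} is assumed to be the same lock 
DistributedUpdateProcessor takes around delete-by-query, i.e. exactly the 
indexing stall the description warns about):

{code:java}
// Sketch only: make the two reads atomic by holding the update lock.
vinfo.blockUpdates();
try {
  // No update can enter the tlog between these two reads, so the
  // fingerprint and the recent-versions list describe the same instant.
  IndexFingerprint fp = IndexFingerprint.getFingerprint(core, maxVersion);
  try (UpdateLog.RecentUpdates recent = ulog.getRecentUpdates()) {
    List<Long> recentVersions = recent.getVersions(nVersions);
    // ... hand fp and recentVersions back to the caller together ...
  }
} finally {
  vinfo.unblockUpdates();
}
{code}

Whether the stall is acceptable probably depends on how cheap the fingerprint 
is to compute at that point.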



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12504) Leader should compute its fingerprint and retrieve recent updates in an atomic way

2018-06-20 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12504:

Description: 
SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will fail because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea (it will degrade indexing performance) Still struggling on 
finding a solution.

  was:
SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will be failed because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea (it will degrade indexing performance) Still struggling on 
finding a solution.


> Leader should compute its fingerprint and retrieve recent updates in an 
> atomic way
> --
>
> Key: SOLR-12504
> URL: https://issues.apache.org/jira/browse/SOLR-12504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there 
> are still cases when PeerSync will fail because of mismatch in fingerprint 
> comparison. The main reason here is fingerprint and recent versions of leader 
> is not computed/retrieved in an atomic way. 
> For example: when an update made into leader's tlog after fingerprint is 
> computed but before recent versions are retrieved.
> Leader's fingerprint  : (contains updates from 1-10)
> Leader's recent versions : (contains updates from 1-12)
> Replica's fingerprint:  (contains updates from 1-12)
> But it seems that blocking updates in leader for {{getVersions}} operation is 
> not a good idea (it will degrade indexing performance) Still struggling on 
> finding a solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1568 - Still Unstable

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1568/

1 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testExistOldBufferLog

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([695E53774D81765E:370E4E22C34EE6D7]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at 
org.apache.solr.search.TestRecovery.testExistOldBufferLog(TestRecovery.java:1071)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14091 lines...]
   [junit4] Suite: org.apache.solr.search.TestRecovery
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J1/temp/solr.search.TestRecovery_695E53774D81765E-001/init-core-data-001

[jira] [Updated] (SOLR-12504) Leader should compute its fingerprint and retrieve recent updates in an atomic way

2018-06-20 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12504:

Description: 
SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will be failed because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea (it will degrade indexing performance) Still struggling on 
finding a solution.

  was:
SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will be failed because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea. Still struggling on finding a solution.


> Leader should compute its fingerprint and retrieve recent updates in an 
> atomic way
> --
>
> Key: SOLR-12504
> URL: https://issues.apache.org/jira/browse/SOLR-12504
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there 
> are still cases when PeerSync will be failed because of mismatch in 
> fingerprint comparison. The main reason here is fingerprint and recent 
> versions of leader is not computed/retrieved in an atomic way. 
> For example: when an update made into leader's tlog after fingerprint is 
> computed but before recent versions are retrieved.
> Leader's fingerprint  : (contains updates from 1-10)
> Leader's recent versions : (contains updates from 1-12)
> Replica's fingerprint:  (contains updates from 1-12)
> But it seems that blocking updates in leader for {{getVersions}} operation is 
> not a good idea (it will degrade indexing performance) Still struggling on 
> finding a solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12504) Leader should compute its fingerprint and retrieve recent updates in an atomic way

2018-06-20 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-12504:
---

 Summary: Leader should compute its fingerprint and retrieve recent 
updates in an atomic way
 Key: SOLR-12504
 URL: https://issues.apache.org/jira/browse/SOLR-12504
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


SOLR-11216 solved many cases in the PeerSync (in doing recovery). But there are 
still cases when PeerSync will be failed because of mismatch in fingerprint 
comparison. The main reason here is fingerprint and recent versions of leader 
is not computed/retrieved in an atomic way. 

For example: when an update made into leader's tlog after fingerprint is 
computed but before recent versions are retrieved.
Leader's fingerprint  : (contains updates from 1-10)
Leader's recent versions : (contains updates from 1-12)
Replica's fingerprint:  (contains updates from 1-12)

But it seems that blocking updates in leader for {{getVersions}} operation is 
not a good idea. Still struggling on finding a solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 644 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/644/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportInnerEntity

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_61292C44353D2A1F-001\tempDir-003

at 
__randomizedtesting.SeedInfo.seed([61292C44353D2A1F:905E80F1945E5988]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:360)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)

[jira] [Resolved] (SOLR-11216) Race condition in peerSync

2018-06-20 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-11216.
-
   Resolution: Fixed
 Assignee: Cao Manh Dat
Fix Version/s: 7.5
   master (8.0)

> Race condition in peerSync
> --
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, 
> SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> When digging into SOLR-10126, I found a case that can make peerSync fail.
> * leader and replica receive updates 1 to 4
> * replica stops
> * replica misses updates 5, 6
> * replica starts recovery
> ## replica buffers updates 7, 8
> ## replica requests versions from the leader
> ## at the same time the leader receives update 9, so it returns updates from 1 
> to 9 for the versions request; the recent versions the replica gets are thus 
> 1,2,3,4,5,6,7,8,9
> ## replica does peersync and requests updates 5, 6, 9 from the leader 
> ## replica applies updates 5, 6, 9. Its index does not have updates 7, 8, yet 
> maxVersionSpecified for the fingerprint is 9, therefore the fingerprint 
> comparison will fail



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Geo/spatial organization in Lucene

2018-06-20 Thread Adrien Grand
I have a slight preference for B similarly to how StandardAnalyzer is in
core and other analyzers are in analysis, but no strong feelings. In any
case I agree that both A and B would be much better than the current
situation.

On Wed, Jun 20, 2018 at 6:09 PM, David Smiley  wrote:

> I think everyone agrees the current state of spatial code organization in
> Lucene is not desirable.  We have a spatial module that has almost nothing
> in it, we have mature spatial code in the sandbox that needs to "graduate"
> somewhere, and we've got a handful of geo utilities in Lucene core (mostly
> because I didn't notice).  No agreement has been reached on what the
> desired state should be.
>
> I'd like to hear opinions on this from members of the community.  I am
> especially interested in listening to people that normally don't seem to
> speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
> respect both of you guys a ton for your tenure with Lucene and aren't too
> pushy with your opinions. I can be convinced to change my mind, especially
> if coming from you two.  Of course anyone can respond -- this is an open
> discussion!
>
> As I understand it, there are two proposals loosely defined as follows:
>
> (A) Common spatial needs will be met in the "spatial" module.  The Lucene
> "spatial" module, currently in a weird gutted state, should have basically
> all spatial code currently in sandbox plus all geo stuff in Lucene core.
> Thus there will be no geo stuff in Lucene core.
>
> (B) Common spatial needs will be met by Lucene core.  Lucene core should
> expand its current "geo" utilities to include the spatial stuff currently
> in the sandbox module.  It'd also take on what little remains in the Lucene
> spatial module and thus we can remove the spatial module.
>
> With either plan if a user has certain advanced/specialized needs they may
> need to go to spatial3d or spatial-extras modules.  These would be
> untouched in both proposals.
>
> I'm in favor of (A) on the grounds that we have modules for special
> feature areas, and spatial should be no different.  My gut estimation is
> that 75-90% of apps do not have spatial requirements and need not depend on
> any spatial module.  Other modules are probably used more (e.g. queries,
> suggest, etc.)
>
> Respectfully,
>   ~ David
>
> p.s. if I mischaracterized any proposal or overlooked another then I'm
> sorry, please correct me.
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


Re: [VOTE] Release Lucene/Solr 7.4.0 RC1

2018-06-20 Thread Varun Thacker
+1
SUCCESS! [2:53:31.027487]

On Wed, Jun 20, 2018 at 11:22 AM, Christian Moen  wrote:

> +1
> SUCCESS! [1:29:55.531758]
>
>
> On Tue, Jun 19, 2018 at 5:27 AM Adrien Grand  wrote:
>
>> Please vote for release candidate 1 for Lucene/Solr 7.4.0
>>
>> The artifacts can be downloaded from:
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.4.0-RC1-
>> rev9060ac689c270b02143f375de0348b7f626adebc
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.4.0-RC1-
>> rev9060ac689c270b02143f375de0348b7f626adebc
>>
>>
>> 
>> Here’s my +1
>> SUCCESS! [0:48:15.228535]
>>
>


[jira] [Commented] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518326#comment-16518326
 ] 

David Smiley commented on LUCENE-8364:
--

Thanks for the cleanup, Nick; it's needed!

Before moving forward, let's see what becomes of the "[DISCUSS] 
Geo/spatial organization in Lucene" thread I sent to the dev list today.

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-20 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518322#comment-16518322
 ] 

Cao Manh Dat commented on SOLR-12458:
-

[~MikeWingert] since SOLR-11216 got committed, would you mind updating your 
patch to set the {{isBuffer}} flag in {{AdlsUpdateLog#ensureBufferTlog()}}?

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is a HDFS like API available in Microsoft Azure.   
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[DISCUSS] Geo/spatial organization in Lucene

2018-06-20 Thread David Smiley
I think everyone agrees the current state of spatial code organization in
Lucene is not desirable.  We have a spatial module that has almost nothing
in it, we have mature spatial code in the sandbox that needs to "graduate"
somewhere, and we've got a handful of geo utilities in Lucene core (mostly
because I didn't notice).  No agreement has been reached on what the
desired state should be.

I'd like to hear opinions on this from members of the community.  I am
especially interested in listening to people that normally don't seem to
speak up about spatial matters. Perhaps Uwe Schindler and Alan Woodward – I
respect both of you guys a ton for your tenure with Lucene and aren't too
pushy with your opinions. I can be convinced to change my mind, especially
if coming from you two.  Of course anyone can respond -- this is an open
discussion!

As I understand it, there are two proposals loosely defined as follows:

(A) Common spatial needs will be met in the "spatial" module.  The Lucene
"spatial" module, currently in a weird gutted state, should have basically
all spatial code currently in sandbox plus all geo stuff in Lucene core.
Thus there will be no geo stuff in Lucene core.

(B) Common spatial needs will be met by Lucene core.  Lucene core should
expand its current "geo" utilities to include the spatial stuff currently
in the sandbox module.  It'd also take on what little remains in the Lucene
spatial module and thus we can remove the spatial module.

With either plan if a user has certain advanced/specialized needs they may
need to go to spatial3d or spatial-extras modules.  These would be
untouched in both proposals.

I'm in favor of (A) on the grounds that we have modules for special feature
areas, and spatial should be no different.  My gut estimation is that
75-90% of apps do not have spatial requirements and need not depend on any
spatial module.  Other modules are probably used more (e.g. queries,
suggest, etc.)

Respectfully,
  ~ David

p.s. if I mischaracterized any proposal or overlooked another then I'm
sorry, please correct me.
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8364:
---
Attachment: (was: LUCENE-8364.patch)

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8364:
---
Attachment: (was: LUCENE-8364.patch)

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11216) Race condition in peerSync

2018-06-20 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518315#comment-16518315
 ] 

Cao Manh Dat commented on SOLR-11216:
-

Committed: 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=commit;h=daff67e27931680c783485bdd197ef65c47971fe
 

> Race condition in peerSync
> --
>
> Key: SOLR-11216
> URL: https://issues.apache.org/jira/browse/SOLR-11216
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.4
>Reporter: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, 
> SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch, SOLR-11216.patch
>
>
> When digging into SOLR-10126, I found a case that can make peerSync fail.
> * leader and replica receive updates 1 to 4
> * replica stops
> * replica misses updates 5, 6
> * replica starts recovery
> ## replica buffers updates 7, 8
> ## replica requests versions from the leader
> ## at the same time the leader receives update 9, so it returns updates from 1 
> to 9 for the versions request; the recent versions the replica gets are thus 
> 1,2,3,4,5,6,7,8,9
> ## replica does peersync and requests updates 5, 6, 9 from the leader 
> ## replica applies updates 5, 6, 9. Its index does not have updates 7, 8, yet 
> maxVersionSpecified for the fingerprint is 9, therefore the fingerprint 
> comparison will fail



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8364:
---
Attachment: LUCENE-8364.patch

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518292#comment-16518292
 ] 

Nicholas Knize commented on LUCENE-8364:


Initial patch provides:

* Refactor {{Polygon2D}} into a more descriptive {{GeoEdgeTree}} class (the end 
objective will be to make this package private and limit to computing relations 
between shapes)
* New {{Circle}} class for encapsulating point distance computations.
* Refactor {{Predicate}} out of {{GeoEncodingUtils}} into its own standalone 
package private base class
* Refactor {{DistancePredicate}} out of {{GeoEncodingUtils}} into new 
{{Circle}} class
* Refactor {{PolygonPredicate}} out of {{GeoEncodingUtils}} into {{Polygon}} 
class
* New {{Geometry}} interface and {{Shape}} class for providing a {{.relate}} 
method for computing relations between derived shapes and bounding boxes (see 
the sketch after this list)
* Removed unused {{GeoRelationUtils}} utility class
* Updated and added testing for new and existing geometries along with 
relations with bounding boxes
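
For readers skimming along, here is roughly the shape the new relation API 
might take (a sketch inferred from the bullets above, with {{Relation}} 
assumed to be the existing {{PointValues.Relation}}; the actual patch may 
differ):

{code:java}
import org.apache.lucene.index.PointValues.Relation;

// Sketch inferred from the bullet list, not the actual patch.
// (Each type would live in its own file.)
public interface Geometry {
  /** relate this geometry to a lat/lon bounding box */
  Relation relate(double minLat, double maxLat, double minLon, double maxLon);
}

// Base class caching the bounding box shared by derived shapes.
public abstract class Shape implements Geometry {
  protected final double minLat, maxLat, minLon, maxLon;

  protected Shape(double minLat, double maxLat, double minLon, double maxLon) {
    this.minLat = minLat; this.maxLat = maxLat;
    this.minLon = minLon; this.maxLon = maxLon;
  }
}
{code}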


> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example there is 
> {{Polygon}} for creating an instance of polygon vertices and holes and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point in polygon and point distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features and a little TLC is needed to clean up api to 
> make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12500) Add support for '<=' and '>=' operators in the autoscaling policy syntax

2018-06-20 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12500:
--
Summary: Add support for '<=' and '>=' operators in the autoscaling policy 
syntax  (was: Add support for '<=' and '>=' operators)

> Add support for '<=' and '>=' operators in the autoscaling policy syntax
> 
>
> Key: SOLR-12500
> URL: https://issues.apache.org/jira/browse/SOLR-12500
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> Add support for these commonly used operators to improve readability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8364) Refactor and clean up core geo api

2018-06-20 Thread Nicholas Knize (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-8364:
---
Attachment: LUCENE-8364.patch

> Refactor and clean up core geo api
> --
>
> Key: LUCENE-8364
> URL: https://issues.apache.org/jira/browse/LUCENE-8364
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8364.patch
>
>
> The core geo API is quite disorganized and confusing. For example, there is 
> {{Polygon}} for creating an instance of polygon vertices and holes, and 
> {{Polygon2D}} for computing relations between points and polygons. There is 
> also a {{PolygonPredicate}} and {{DistancePredicate}} in {{GeoUtils}} for 
> computing point-in-polygon and point-distance relations, respectively, and a 
> {{GeoRelationUtils}} utility class which is no longer used for anything. This 
> disorganization is due to the organic improvements of simple {{LatLonPoint}} 
> indexing and search features, and a little TLC is needed to clean up the API 
> to make it more approachable and easy to understand. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-06-20 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12502:

Component/s: SolrJ

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods, which 
> can be very confusing to new users.
> Also, the UpdateRequest class is public, so if a user is looking for a custom 
> combination they can always build it with a couple of lines of code.
> For 8.0, which might not be very far away, we can improve this situation.
>  
> Quoting David from SOLR-11654
> {quote}Any way I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's a large part due to the optional "collection" parameter which like 
> doubles the methods!  I've been bitten several times writing SolrJ code that 
> doesn't use the right overloaded version (forgot to specify collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12500) Add support for '<=' and '>=' operators

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518265#comment-16518265
 ] 

David Smiley commented on SOLR-12500:
-

Ok, I think I answered my question; this is underneath another ticket.  Still, I 
find it useful when issue titles have a bit more info, like "Autoscaling: Add 
support for <= and >= operators".

> Add support for '<=' and '>=' operators
> ---
>
> Key: SOLR-12500
> URL: https://issues.apache.org/jira/browse/SOLR-12500
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> Add support for these commonly used operators to improve readability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12500) Add support for '<=' and '>=' operators

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518264#comment-16518264
 ] 

David Smiley commented on SOLR-12500:
-

Add support to what syntax?  Some sort of policy framework stuff you've been 
busy with, or do you mean the query parser, or streaming expressions, or 
something else?

> Add support for '<=' and '>=' operators
> ---
>
> Key: SOLR-12500
> URL: https://issues.apache.org/jira/browse/SOLR-12500
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Priority: Major
>
> Add support for these commonly used operators to improve readability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12503) SolrJ deleteById doesn't work when authentication is active.

2018-06-20 Thread Federico Grillini (JIRA)
Federico Grillini created SOLR-12503:


 Summary: SolrJ deleteById doesn't work when authentication is 
active.
 Key: SOLR-12503
 URL: https://issues.apache.org/jira/browse/SOLR-12503
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication
Affects Versions: 7.3.1, 7.2.1, 5.5.5
Reporter: Federico Grillini


When Solr authentication is active, the following code fails:

{code:java}
String id = "xxx"; // same as List<String> ids = ...

UpdateRequest upReq = new UpdateRequest();

upReq.setBasicAuthCredentials("user", "pwd");

upReq.deleteById(id).process(solrClient);
{code}

The error is (using *solrj 5.5.5*):

{quote}
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://xxx_shard1_replica_n1: Expected mime type application/xml but 
got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/XXX_shard1_replica_n1/update. Reason:
require authentication




org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:653)

org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1002)

org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:891)

org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:827)
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
{quote}

The bug is in the method 

{{Map<String,LBHttpSolrClient.Req> 
org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(DocRouter router, 
DocCollection col, Map<String,List<String>> urlMap, ModifiableSolrParams 
params, String idField)}}

At line 299 a new request is created without the credentials of the main 
request.

Also solrj *7.3.1* is affected by the bug.
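
A minimal sketch of the kind of fix this implies inside 
{{UpdateRequest.getRoutes}} (not the actual patch; it assumes only the 
credential accessors that already exist on SolrRequest):

{code:java}
// inside getRoutes(), where the per-leader UpdateRequest is built:
UpdateRequest urequest = new UpdateRequest();
urequest.setParams(params);
// proposed: propagate the credentials of the main request ("this")
if (this.getBasicAuthUser() != null && this.getBasicAuthPassword() != null) {
  urequest.setBasicAuthCredentials(this.getBasicAuthUser(), this.getBasicAuthPassword());
}
{code}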



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8362) Add DocValue support for RangeFields

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518254#comment-16518254
 ] 

David Smiley commented on LUCENE-8362:
--

This would be a nice convenience.  Today, two Fields are required to handle a 
combined Points + DocValues requirement.
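
As a concrete illustration of that status quo, something like the following is 
needed today (the field name and the manual docvalues encoding helper are 
assumptions for the example, not a real API):

{code:java}
import org.apache.lucene.document.BinaryDocValuesField;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.IntRange;
import org.apache.lucene.util.BytesRef;

Document doc = new Document();
// queryable range (Points-backed)
doc.add(new IntRange("price", new int[] {1}, new int[] {10}));
// the docvalues copy must currently be encoded and added by hand
byte[] encoded = encodeRange(1, 10); // hypothetical helper
doc.add(new BinaryDocValuesField("price", new BytesRef(encoded)));
{code}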

> Add DocValue support for RangeFields 
> -
>
> Key: LUCENE-8362
> URL: https://issues.apache.org/jira/browse/LUCENE-8362
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
>Priority: Minor
>
> I'm opening this issue to discuss adding DocValue support to 
> {{\{Int|Long|Float|Double\}Range}} field types. Since existing numeric range 
> fields already provide the methods for encoding ranges as a byte array I 
> think this could be as simple as adding syntactic sugar to existing range 
> fields that simply build an instance of {{BinaryDocValues}} using that same 
> encoding. I'm envisioning something like 
> {{doc.add(IntRange.newDocValuesField("intDV", 100))}}. But I'd like to solicit 
> other ideas or potential drawbacks to this approach.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8365) ArrayIndexOutOfBoundsException in UnifiedHighlighter

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518245#comment-16518245
 ] 

David Smiley commented on LUCENE-8365:
--

Thanks Marc & Simon.  If 7.4 needs to be respinned it'd be nice to get this in.

> ArrayIndexOutOfBoundsException in UnifiedHighlighter
> 
>
> Key: LUCENE-8365
> URL: https://issues.apache.org/jira/browse/LUCENE-8365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 7.3.1
>Reporter: Marc Morissette
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: master (8.0), 7.5, 7.4.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We see ArrayIndexOutOfBoundsExceptions coming out of the UnifiedHighlighter 
> in our production logs from time to time:
> {code}
> java.lang.ArrayIndexOutOfBoundsException
>   at java.base/java.lang.System.arraycopy(Native Method)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$SpanCollectedOffsetsEnum.add(PhraseHelper.java:386)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$OffsetSpanCollector.collectLeaf(PhraseHelper.java:341)
>   at org.apache.lucene.search.spans.TermSpans.collect(TermSpans.java:121)
>   at 
> org.apache.lucene.search.spans.NearSpansOrdered.collect(NearSpansOrdered.java:149)
>   at 
> org.apache.lucene.search.spans.NearSpansUnordered.collect(NearSpansUnordered.java:171)
>   at 
> org.apache.lucene.search.spans.FilterSpans.collect(FilterSpans.java:120)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper.createOffsetsEnumsForSpans(PhraseHelper.java:261)
> ...
> {code}
> It turns out that there is an "off by one" error in the UnifiedHighlighter's 
> code that, as far as I can tell, is only triggered when two nested 
> SpanNearQueries contain the same term.
> The resulting behaviour depends on the content of the highlighted document. 
> Either, some highlighted terms go missing or an 
> ArrayIndexOutOfBoundsException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8365) ArrayIndexOutOfBoundsException in UnifiedHighlighter

2018-06-20 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-8365.
-
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)
   7.4.1

> ArrayIndexOutOfBoundsException in UnifiedHighlighter
> 
>
> Key: LUCENE-8365
> URL: https://issues.apache.org/jira/browse/LUCENE-8365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 7.3.1
>Reporter: Marc Morissette
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: 7.4.1, master (8.0), 7.5
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We see ArrayIndexOutOfBoundsExceptions coming out of the UnifiedHighlighter 
> in our production logs from time to time:
> {code}
> java.lang.ArrayIndexOutOfBoundsException
>   at java.base/java.lang.System.arraycopy(Native Method)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$SpanCollectedOffsetsEnum.add(PhraseHelper.java:386)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$OffsetSpanCollector.collectLeaf(PhraseHelper.java:341)
>   at org.apache.lucene.search.spans.TermSpans.collect(TermSpans.java:121)
>   at 
> org.apache.lucene.search.spans.NearSpansOrdered.collect(NearSpansOrdered.java:149)
>   at 
> org.apache.lucene.search.spans.NearSpansUnordered.collect(NearSpansUnordered.java:171)
>   at 
> org.apache.lucene.search.spans.FilterSpans.collect(FilterSpans.java:120)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper.createOffsetsEnumsForSpans(PhraseHelper.java:261)
> ...
> {code}
> It turns out that there is an "off by one" error in the UnifiedHighlighter's 
> code that, as far as I can tell, is only triggered when two nested 
> SpanNearQueries contain the same term.
> The resulting behaviour depends on the content of the highlighted document. 
> Either, some highlighted terms go missing or an 
> ArrayIndexOutOfBoundsException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-20 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11985:
--
Attachment: (was: SOLR-11985.patch)

> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well. 
>  Take the following 2 rules
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> assume we have collection {{"A"}} with 2 shards and {{replicationFactor=3}}
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> Which means *_for each shard_* keep less than 1.02 replica in east 
> availability zone
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas on east 
> availability zone



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518222#comment-16518222
 ] 

David Smiley commented on SOLR-11654:
-

This new test takes a long time and I don't want it to take so long on average. 
 Randomization can help here so I chose shards & replicas randomly with the 
high end being the numbers you chose.  The patch shows this (and other 
changes).  However, this test failed for me on the waitFor of the collections 
after the doc is added.  Try seed {{-Dtests.seed=613B69692267C615}}.  Can you 
investigate [~gus_heck]?  The # shards & replicas should have no bearing on the 
test up to this point.
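
For reference, the randomization described might look roughly like this in the 
test setup (the upper bounds here are placeholders, not the values from the 
patch):

{code:java}
// previous fixed values become the high end of the random range
final int numShards = random().nextInt(3) + 1;   // 1..3
final int numReplicas = random().nextInt(2) + 1; // 1..2
{code}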

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch, SOLR-11654.patch, SOLR-11654.patch, 
> SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11654) TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to the ideal shard

2018-06-20 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11654:

Attachment: SOLR-11654.patch

> TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection should route to 
> the ideal shard
> 
>
> Key: SOLR-11654
> URL: https://issues.apache.org/jira/browse/SOLR-11654
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-11654.patch, SOLR-11654.patch, SOLR-11654.patch, 
> SOLR-11654.patch
>
>
> {{TimePartitionedUpdateProcessor.lookupShardLeaderOfCollection}} looks up the 
> Shard/Slice to talk to for the given collection.  It currently picks the 
> first active Shard/Slice but it has a TODO to route to the ideal one based on 
> the router configuration of the target collection.  There is similar code in 
> CloudSolrClient & DistributedUpdateProcessor that should probably be 
> refactored/standardized so that we don't have to repeat this logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #408: Reproduces and fixes LUCENE-8365

2018-06-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/408


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22286 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22286/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([1DEAE6BDAE3531F2:2A7112A396F9EC56]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:120)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:302)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:319)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-8365) ArrayIndexOutOfBoundsException in UnifiedHighlighter

2018-06-20 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518207#comment-16518207
 ] 

Simon Willnauer commented on LUCENE-8365:
-

Fix looks good. I will run tests and pull it in. Thanks, Marc!

> ArrayIndexOutOfBoundsException in UnifiedHighlighter
> 
>
> Key: LUCENE-8365
> URL: https://issues.apache.org/jira/browse/LUCENE-8365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 7.3.1
>Reporter: Marc Morissette
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We see ArrayIndexOutOfBoundsExceptions coming out of the UnifiedHighlighter 
> in our production logs from time to time:
> {code}
> java.lang.ArrayIndexOutOfBoundsException
>   at java.base/java.lang.System.arraycopy(Native Method)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$SpanCollectedOffsetsEnum.add(PhraseHelper.java:386)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$OffsetSpanCollector.collectLeaf(PhraseHelper.java:341)
>   at org.apache.lucene.search.spans.TermSpans.collect(TermSpans.java:121)
>   at 
> org.apache.lucene.search.spans.NearSpansOrdered.collect(NearSpansOrdered.java:149)
>   at 
> org.apache.lucene.search.spans.NearSpansUnordered.collect(NearSpansUnordered.java:171)
>   at 
> org.apache.lucene.search.spans.FilterSpans.collect(FilterSpans.java:120)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper.createOffsetsEnumsForSpans(PhraseHelper.java:261)
> ...
> {code}
> It turns out that there is an "off by one" error in the UnifiedHighlighter's 
> code that, as far as I can tell, is only triggered when two nested 
> SpanNearQueries contain the same term.
> The resulting behaviour depends on the content of the highlighted document. 
> Either, some highlighted terms go missing or an 
> ArrayIndexOutOfBoundsException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8365) ArrayIndexOutOfBoundsException in UnifiedHighlighter

2018-06-20 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-8365:
---

Assignee: Simon Willnauer

> ArrayIndexOutOfBoundsException in UnifiedHighlighter
> 
>
> Key: LUCENE-8365
> URL: https://issues.apache.org/jira/browse/LUCENE-8365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 7.3.1
>Reporter: Marc Morissette
>Assignee: Simon Willnauer
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> We see ArrayIndexOutOfBoundsExceptions coming out of the UnifiedHighlighter 
> in our production logs from time to time:
> {code}
> java.lang.ArrayIndexOutOfBoundsException
>   at java.base/java.lang.System.arraycopy(Native Method)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$SpanCollectedOffsetsEnum.add(PhraseHelper.java:386)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper$OffsetSpanCollector.collectLeaf(PhraseHelper.java:341)
>   at org.apache.lucene.search.spans.TermSpans.collect(TermSpans.java:121)
>   at 
> org.apache.lucene.search.spans.NearSpansOrdered.collect(NearSpansOrdered.java:149)
>   at 
> org.apache.lucene.search.spans.NearSpansUnordered.collect(NearSpansUnordered.java:171)
>   at 
> org.apache.lucene.search.spans.FilterSpans.collect(FilterSpans.java:120)
>   at 
> org.apache.lucene.search.uhighlight.PhraseHelper.createOffsetsEnumsForSpans(PhraseHelper.java:261)
> ...
> {code}
> It turns out that there is an "off by one" error in the UnifiedHighlighter's 
> code that, as far as I can tell, is only triggered when two nested 
> SpanNearQueries contain the same term.
> The resulting behaviour depends on the content of the highlighted document. 
> Either, some highlighted terms go missing or an 
> ArrayIndexOutOfBoundsException is thrown.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.4-Windows (64bit/jdk-9.0.4) - Build # 7 - Still Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.4-Windows/7/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.core.ConfigureRecoveryStrategyTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001\init-core-data-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.4-Windows\solr\build\solr-core\test\J0\temp\solr.core.ConfigureRecoveryStrategyTest_B9E7BC83AA231C41-001

at __randomizedtesting.SeedInfo.seed([B9E7BC83AA231C41]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180620203116304, index.20180620203116468, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180620203116304, 
index.20180620203116468, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([B9E7BC83AA231C41:624CBC45AF0B75F2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:968)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:939)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:915)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-12398) Make JSON Facet API support Heatmap Facet

2018-06-20 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518191#comment-16518191
 ] 

David Smiley commented on SOLR-12398:
-

Patch notes:
 * Created new FacetHeatmap subclass of FacetRequest
 ** Unlike the other FacetRequest subclasses, I chose for this class to have 
all the functionality needed internally as sub-classes, as I think it's better 
organized this way than having separate top-level Parser, Processor, and 
Merger.  Thus we have just one more top-level class and one new source file.
 * Moved most logic in SpatialHeatmapFacets into FacetHeatmap
 * SpatialHeatmapFacets still exists (identical API) but is integration glue to 
work with FacetHeatmap.  It uses FacetHeatmap to compute facets and also uses 
its distributed/sharded merge logic.  I think we lost the debug timing 
diagnostic for PNG in particular for classic facet heatmaps, and I think that's 
fine.
 * Moved SpatialHeatmapFacetsTest to same package as FacetHeatmap, both because 
this is where the code being tested primarily lives now and because of package 
accessibility stuff for testing.
 * SpatialHeatmapFacetsTest.test is now duplicated into testClassicFacets and 
testJsonFacets, with mostly the same code but tweaks to request differently. 
 I hate duplicating code but it would be pretty awkward to have one method, I 
think.  At least it's in the same class.
 ** Added test for hanging sub-heatmaps off of different query facets.  Not a 
particularly good assertion constraint but at least in my debugger I see it's 
working.
 * Improved BaseDistributedSearchTestCase to treat Longs & Integers of the same 
value as equivalent.  It appears that non-distributed vs. distributed JSON 
facets have different types on the "count".
 * Refactored/improved some aspects of the JSON Facet module code: [~mkhludnev] 
can you please review this part (includes SimpleFacets)?
 ** FacetRequest now has a static parse() method (two actually) and is used by 
FacetModule and SimpleFacets (for UIF method) and SpatialHeatmapFacets.  The 
intent is to simplify the interaction by avoiding the need for these classes to 
even be aware of the notion of a FacetParser.  The second one is intended for 
external users (SimpleFacets) and avoids one needless outer layer of req/res.
 ** FacetRequest now has a static process() method (two actually) and is used 
by FacetModule, SimpleFacets (for UIF method), and SpatialHeatmapFacets, and 
FacetProcessor.processSubs.  This reduces duplication around debug tracking I 
saw, and it also reduces the need of the callers to even be aware of the notion 
of a FacetProcessor.
 ** Note that SimpleFacets method=uif integration glue is much simpler.

I ran tests & precommit.

What's needed is some documentation in the ref guide.  I'll add this soon.
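
Until then, a request of the kind this patch enables might look like the 
following (field name and parameter values are illustrative only, mirroring the 
classic heatmap parameters):

{code}
json.facet = {
  geo_hm : {
    type : heatmap,
    field : location_srpt,
    geom : "[-180 -90 TO 180 90]",
    distErrPct : 0.5
  }
}
{code}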

> Make JSON Facet API support Heatmap Facet
> -
>
> Key: SOLR-12398
> URL: https://issues.apache.org/jira/browse/SOLR-12398
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, JSON Request API, spatial
>Reporter: Jaime Yap
>Assignee: David Smiley
>Priority: Major
>  Labels: heatmap
> Attachments: SOLR-12398.patch
>
>
> The JSON query Facet API does not support Heatmap facets. For companies that 
> have standardized around generating queries for the JSON query API, it is a 
> major wart to need to also support falling back to the param encoding API in 
> order to make use of them.
> More importantly, however, given its more natural support for nested 
> subfacets, the JSON Query facet API is able to compute more interesting 
> Heatmap layers for each facet bucket, without resorting to the older (and 
> much more awkward) facet pivot syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518186#comment-16518186
 ] 

Noble Paul commented on SOLR-11807:
---

bq. I don't think that's true. If you run "bin/solr start -e cloud -noprompt" 
you'll get this in the state.json for the gettingstarted collection

I was mistaken. 
Anyway, it's safe to use {{-1}}

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode, and it 
> defaults to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 855 - Still Failing

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/855/

[...truncated 44 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2568/consoleText

[repro] Revision: dcfbaf31dbe36b551c83dffdc7df8d55e078d946

[repro] Repro line:  ant test  -Dtestcase=TestSegmentSorting 
-Dtests.method=testAtomicUpdateOfSegmentSortField -Dtests.seed=8CD3345B91F14B23 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro 
-Dtests.timezone=America/Mexico_City -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SolrRrdBackendFactoryTest 
-Dtests.method=testBasic -Dtests.seed=8CD3345B91F14B23 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=sr-Latn-BA 
-Dtests.timezone=Australia/Melbourne -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  
-Dtestcase=CollectionAdminRequestRequiredParamsTest 
-Dtests.method=testCreateCollection -Dtests.seed=4ED737752EFE8E4B 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=is-IS 
-Dtests.timezone=Pacific/Palau -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
bcbcb16b6a2ea0a825f31bfbc88fb737dc188c34
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout dcfbaf31dbe36b551c83dffdc7df8d55e078d946

[...truncated 2 lines...]
[repro] git merge

[...truncated 1 lines...]
[repro] Setting last failure code to 32768

[repro] Traceback (most recent call last):

[...truncated 4 lines...]
RuntimeError: ERROR: "git merge" failed.  See above.

Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12398) Make JSON Facet API support Heatmap Facet

2018-06-20 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12398:

Attachment: SOLR-12398.patch

> Make JSON Facet API support Heatmap Facet
> -
>
> Key: SOLR-12398
> URL: https://issues.apache.org/jira/browse/SOLR-12398
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, faceting, JSON Request API, spatial
>Reporter: Jaime Yap
>Assignee: David Smiley
>Priority: Major
>  Labels: heatmap
> Attachments: SOLR-12398.patch
>
>
> The JSON query Facet API does not support Heatmap facets. For companies that 
> have standardized around generating queries for the JSON query API, it is a 
> major wart to need to also support falling back to the param encoding API in 
> order to make use of them.
> More importantly, however, given its more natural support for nested 
> subfacets, the JSON Query facet API is able to compute more interesting 
> Heatmap layers for each facet bucket, without resorting to the older (and 
> much more awkward) facet pivot syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518121#comment-16518121
 ] 

Varun Thacker commented on SOLR-11807:
--

Patch which expects that a user can pass in -1 and that state.json can also 
contain -1.

The restore treats -1 as "unlimited".
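
Roughly, the restore-side handling would be along these lines (constant and 
message accessor names assumed for the sketch):

{code:java}
int maxShardsPerNode = message.getInt(ZkStateReader.MAX_SHARDS_PER_NODE, 1);
if (maxShardsPerNode == -1) {
  // -1 means "unlimited", mirroring the special-casing in collection CREATE
  maxShardsPerNode = Integer.MAX_VALUE;
}
{code}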

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode, and it 
> defaults to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11807:
-
Attachment: SOLR-11807.patch

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode, and it 
> defaults to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518090#comment-16518090
 ] 

Varun Thacker commented on SOLR-11807:
--

I don't think that's true. If you run "bin/solr start -e cloud -noprompt" 
you'll get this in the state.json for the gettingstarted collection:
{code:java}
"router":{"name":"compositeId"},
"maxShardsPerNode":"-1",
"autoAddReplicas":"false",
"nrtReplicas":"1",
"tlogReplicas":"0"}}{code}

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode, and it 
> defaults to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518089#comment-16518089
 ] 

Noble Paul commented on SOLR-11807:
---

Well, {{-1}} is not persisted to {{state.json}}; what is persisted to 
{{state.json}} is {{Integer.MAX_VALUE}}. {{-1}} is a special value understood 
by the command.

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6 and run the cloud example, here's the log excerpt:
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed and now we no longer set maxShardsPerNode, and it 
> defaults to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is not there while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2x2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP&name=gettingstarted_backup&collection=gettingstarted&location=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore&name=gettingstarted_backup&collection=restore_gettingstarted&location=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11807) maxShardsPerNode=-1 needs special handling while restoring collections

2018-06-20 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518085#comment-16518085
 ] 

Varun Thacker commented on SOLR-11807:
--

So we want to keep -1 as a value to indicate "unlimited"?

> maxShardsPerNode=-1 needs special handling while restoring collections
> --
>
> Key: SOLR-11807
> URL: https://issues.apache.org/jira/browse/SOLR-11807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11807.patch, SOLR-11807.patch
>
>
> When you start Solr 6.6. and run the cloud example here's the log excerpt :
> {code:java}
> Connecting to ZooKeeper at localhost:9983 ...
> INFO  - 2018-06-20 13:44:47.491; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> ...
> Creating new collection 'gettingstarted' using command:
> http://localhost:8983/solr/admin/collections?action=CREATE=gettingstarted=2=2=2=gettingstarted{code}
> maxShardsPerNode gets set to 2.
>  
> Compare this to Solr 7.3:
> {code:java}
> INFO  - 2018-06-20 13:55:33.823; 
> org.apache.solr.client.solrj.impl.ZkClientClusterStateProvider; Cluster at 
> localhost:9983 ready
> Created collection 'gettingstarted' with 2 shard(s), 2 replica(s) with 
> config-set 'gettingstarted'{code}
> So something changed: we no longer set maxShardsPerNode, and it defaults 
> to -1.
>  
> -1 has special handling while creating a collection (it means max int). 
> This special handling is missing while restoring a collection, and hence 
> the restore fails.
> We should not set maxShardsPerNode to -1 in the first place.
> Steps to reproduce:
> 1. ./bin/solr start -e cloud -noprompt : This creates a 2-node cluster and a 
> gettingstarted collection which is 2X2
>  2. Add 4 docs (id=1,2,3,4) with commit=true and openSearcher=true (default)
>  3. Call backup: 
> [http://localhost:8983/solr/admin/collections?action=BACKUP=gettingstarted_backup=gettingstarted=/Users/varunthacker/solr-7.1.0]
>  4. Call restore:
>  
> [http://localhost:8983/solr/admin/collections?action=restore=gettingstarted_backup=restore_gettingstarted=/Users/varunthacker/solr-7.1.0]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 854 - Still Failing

2018-06-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/854/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/84/consoleText

[repro] Revision: 103ab23c92e3b418f5db2c9c838de5a487c9b0d8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=235F49F0F35D4C25 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMixedBounds -Dtests.seed=235F49F0F35D4C25 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestExecutePlanAction 
-Dtests.method=testExecute -Dtests.seed=235F49F0F35D4C25 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-Latn-ME 
-Dtests.timezone=ACT -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  
-Dtestcase=CollectionAdminRequestRequiredParamsTest 
-Dtests.method=testCreateCollection -Dtests.seed=14BBEC4A86306716 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=tr-TR -Dtests.timezone=Australia/Adelaide -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
daff67e27931680c783485bdd197ef65c47971fe
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 103ab23c92e3b418f5db2c9c838de5a487c9b0d8

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestExecutePlanAction
[repro]solr/solrj
[repro]   CollectionAdminRequestRequiredParamsTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.TestExecutePlanAction" 
-Dtests.showOutput=onerror  -Dtests.seed=235F49F0F35D4C25 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar-EG 
-Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 14249 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 447 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CollectionAdminRequestRequiredParamsTest" 
-Dtests.showOutput=onerror  -Dtests.seed=14BBEC4A86306716 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=tr-TR 
-Dtests.timezone=Australia/Adelaide -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 232 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestExecutePlanAction
[repro]   5/5 failed: 
org.apache.solr.client.solrj.CollectionAdminRequestRequiredParamsTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout branch_7x

[...truncated 1 lines...]
[repro] Setting last failure code to 256

[...truncated 1 lines...]
[repro] git checkout -t -b branch_7x origin/branch_7x

[...truncated 2 lines...]
[repro] Setting last failure code to 32768

[repro] Traceback (most recent call last):

[...truncated 3 lines...]
raise RuntimeError('ERROR: "%s" failed.  See above.' % checkoutBranchCmd)
RuntimeError: ERROR: "git checkout -t -b branch_7x origin/branch_7x" failed.  
See above.

[repro] git checkout daff67e27931680c783485bdd197ef65c47971fe
error: Your local changes to the following files would be overwritten by 
checkout:
solr/CHANGES.txt

solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java
Please, commit your changes or stash them before you can switch branches.
Aborting
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22285 - Unstable!

2018-06-20 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22285/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:46009/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:44691/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:46009/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:44691/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([28F76BF8CC3236D2:823AB80A7BE1E302]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)