[jira] [Commented] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-06 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677793#comment-16677793
 ] 

Lucene/Solr QA commented on LUCENE-8558:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
4s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8558 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12947143/LUCENE-8558.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-137-generic #163~14.04.1-Ubuntu SMP Mon 
Sep 24 17:14:57 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-LUCENE-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 9952af0 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_191 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/119/testReport/ |
| modules | C: lucene/core U: lucene/core |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/119/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
> Attachments: LUCENE-8558.patch
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging, I found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> code snippet below was the culprit:
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change, shown below, seems to have fixed the issue:
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  
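For illustration only (not Lucene code): a minimal, self-contained sketch of why that 
one-line change matters. If filterFields is backed by a list-like collection, each 
contains() call is a linear scan, so the loop over the FieldInfos becomes quadratic in 
the number of fields; looking up in the already-built HashSet keeps it linear. The field 
names and counts below are made up.

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical micro-benchmark; field names stand in for FieldInfo.name values.
public class ContainsCostSketch {
  public static void main(String[] args) {
    List<String> filterFields = new ArrayList<>();
    for (int i = 0; i < 50_000; i++) {
      filterFields.add("field_" + i);
    }
    Set<String> filteredNames = new HashSet<>(filterFields);

    long start = System.nanoTime();
    int hits = 0;
    for (String name : filterFields) {
      if (filterFields.contains(name)) {   // O(n) scan per lookup on a List
        hits++;
      }
    }
    long listNanos = System.nanoTime() - start;

    start = System.nanoTime();
    for (String name : filterFields) {
      if (filteredNames.contains(name)) {  // expected O(1) per lookup on a HashSet
        hits++;
      }
    }
    long setNanos = System.nanoTime() - start;

    System.out.printf("list: %d ms, set: %d ms (hits=%d)%n",
        listNanos / 1_000_000, setNanos / 1_000_000, hits);
  }
}
{code}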



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 1012 - Still Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1012/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:43331_solr, 127.0.0.1:38676_solr, 127.0.0.1:46237_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:43331_solr, 127.0.0.1:38676_solr, 
127.0.0.1:46237_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([FDF94741C814891B:9F5A59EFCA96ED0B]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 3049 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3049/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

24 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard

Error Message:
Could not find collection : deleteshard_test

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
deleteshard_test
at 
__randomizedtesting.SeedInfo.seed([A5F92449F52EA322:5E36F12515EE82E]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard(DeleteShardTest.java:114)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard

Error Message:
Could not find collection : deleteshard_test

Stack Trace:

[jira] [Assigned] (LUCENE-8559) Tessellator: isIntersectingPolygon method skip polygon edges

2018-11-06 Thread Ignacio Vera (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera reassigned LUCENE-8559:


Assignee: Ignacio Vera

> Tessellator: isIntersectingPolygon method skip polygon edges
> 
>
> Key: LUCENE-8559
> URL: https://issues.apache.org/jira/browse/LUCENE-8559
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8559.patch
>
>
> The following condition seems wrong:
> {code:java}
> if(node.getX() != x0 && node.getY() != y0 && nextNode.getX() != x0
> && nextNode.getY() != y0 && node.getX() != x1 && node.getY() != y1
> && nextNode.getX() != x1 && nextNode.getY() != y1) {
>//check intersection
> }{code}
> Any edge with a node that merely shares an X or a Y coordinate with one of the 
> segment endpoints is skipped. 
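To make the problem concrete, here is a small, self-contained sketch (not the committed 
fix) of a guard that only skips an edge when it actually shares a full endpoint with the 
segment (x0,y0)-(x1,y1), i.e. both coordinates match, instead of skipping on any single 
matching X or Y. The Node stub and class below are hypothetical.

{code:java}
// Illustrative sketch, not the committed Lucene fix.
public class EdgeGuardSketch {

  // Minimal stand-in for the tessellator's node type; not Lucene's class.
  static final class Node {
    final double x, y;
    Node(double x, double y) { this.x = x; this.y = y; }
    double getX() { return x; }
    double getY() { return y; }
  }

  // Skip the intersection test only for edges that share a full endpoint
  // (both coordinates equal), not on any single matching X or Y.
  static boolean sharesEndpoint(Node node, Node nextNode,
                                double x0, double y0, double x1, double y1) {
    return (node.getX() == x0 && node.getY() == y0)
        || (node.getX() == x1 && node.getY() == y1)
        || (nextNode.getX() == x0 && nextNode.getY() == y0)
        || (nextNode.getX() == x1 && nextNode.getY() == y1);
  }

  public static void main(String[] args) {
    // An edge that merely shares one coordinate with an endpoint must still be
    // tested for intersection; the original condition would have skipped it.
    Node a = new Node(0, 5);
    Node b = new Node(10, 5);
    System.out.println(sharesEndpoint(a, b, 0, 0, 10, 10)); // false -> test it
  }
}
{code}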



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8559) Tessellator: isIntersectingPolygon method skip polygon edges

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677761#comment-16677761
 ] 

ASF subversion and git services commented on LUCENE-8559:
-

Commit d214f968d765e5c30c8782c5545c38d9aef487fe in lucene-solr's branch 
refs/heads/branch_7x from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d214f96 ]

LUCENE-8559: Fix bug where polygon edges were skipped when checking for 
intersections


> Tessellator: isIntersectingPolygon method skip polygon edges
> 
>
> Key: LUCENE-8559
> URL: https://issues.apache.org/jira/browse/LUCENE-8559
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8559.patch
>
>
> The following condition seems wrong:
> {code:java}
> if(node.getX() != x0 && node.getY() != y0 && nextNode.getX() != x0
> && nextNode.getY() != y0 && node.getX() != x1 && node.getY() != y1
> && nextNode.getX() != x1 && nextNode.getY() != y1) {
>//check intersection
> }{code}
> Any edge with a node that merely shares an X or a Y coordinate with one of the 
> segment endpoints is skipped. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8559) Tessellator: isIntersectingPolygon method skip polygon edges

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677760#comment-16677760
 ] 

ASF subversion and git services commented on LUCENE-8559:
-

Commit 9952af099ae65f051056fc8ff55c8e8f4cfb3b93 in lucene-solr's branch 
refs/heads/master from iverase
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9952af0 ]

LUCENE-8559: Fix bug where polygon edges were skipped when checking for 
intersections


> Tessellator: isIntersectingPolygon method skip polygon edges
> 
>
> Key: LUCENE-8559
> URL: https://issues.apache.org/jira/browse/LUCENE-8559
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8559.patch
>
>
> The following condition seems wrong:
> {code:java}
> if(node.getX() != x0 && node.getY() != y0 && nextNode.getX() != x0
> && nextNode.getY() != y0 && node.getX() != x1 && node.getY() != y1
> && nextNode.getX() != x1 && nextNode.getY() != y1) {
>//check intersection
> }{code}
> Any edge with a node that merely shares an X or a Y coordinate with one of the 
> segment endpoints is skipped. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 367 - Failure

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/367/

No tests ran.

Build Log:
[...truncated 23437 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2435 links (1987 relative) to 3198 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.6.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23166 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23166/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

44 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=177, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=177, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([69450D6D4FA2D962]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=646, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=646, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([69450D6D4FA2D962]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=946, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=946, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([69450D6D4FA2D962]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.request.TestV2Request

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.request.TestV2Request: 1) Thread[id=1990, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-TestV2Request] 
at java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.request.TestV2Request: 
   1) Thread[id=1990, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestV2Request]
at java.base@12-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@12-ea/java.lang.Thread.run(Thread.java:835)
at __randomizedtesting.SeedInfo.seed([69450D6D4FA2D962]:0)


FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:34453_solr, 127.0.0.1:39465_solr, 127.0.0.1:46537_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the 

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-11) - Build # 875 - Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/875/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseParallelGC

17 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImport

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002\collection1

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J0\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_6EDD2F3BC4B3D01-001\tempDir-002

at 
__randomizedtesting.SeedInfo.seed([6EDD2F3BC4B3D01:8341AF680444A321]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:361)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:993)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677663#comment-16677663
 ] 

Gus Heck commented on SOLR-12938:
-

This is fixed on my machine: the individual test class successfully beasted 50 
rounds, using 5 VMs at a time with Mark Miller's beasting 
[gist|https://bit.ly/2SRS8kL] (but without applying his hardening PR), and I got 
zero failures. Without the fix I get frequent failures under the same conditions.

However, I'm seeing some failures (10-20% of beast rounds) in a wide variety of 
methods in that test class if I crank up the number of VMs so that all my cores 
are in use, so I think this test class (CloudSolrClientTest) is generally flaky 
under heavy load. The number of 404 errors complaining about an unexpected 
text/html mime type almost triples, from roughly 60 to 160, if I run with 25 VMs 
instead of 5.

I'm running the full test suites now, will run them again in the morning, and 
will likely commit the fix tomorrow, assuming all goes well.

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time-routed 
> aliases, previously mentioned in SOLR-11949, and it can be seen frequently in 
> logs attached to SOLR-12928. Basically, for one reason or another, cluster 
> status is called on an alias rather than a collection, and this is treated 
> identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception-message parsing in 
> HttpClusterStateProvider.java:
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> The case of no collection name being provided is already handled: cluster 
> status simply returns status on all collections. It would make more sense if 
> this command returned status on the component collections of the alias. 
> If that turns out to be difficult or to cause too many problems, this should at 
> least be downgraded to a non-stack-trace warning message, since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.
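As a rough, self-contained sketch of the fallback option (downgrading to a 
non-stack-trace warning when the missing "collection" is actually a known alias): the 
class, method, and knownAliases set below are hypothetical and this is not SolrJ code.

{code:java}
import java.util.Set;

// Hypothetical sketch of the "downgrade to a warning" option: if the missing
// "collection" is actually a known alias, emit a short warning instead of a
// stack trace. All names are made up for illustration.
public class AliasAwareWarningSketch {
  static String describeFailure(String name, String exceptionMessage, Set<String> knownAliases) {
    if (exceptionMessage.contains(name + " not found")) {
      if (knownAliases.contains(name)) {
        // Alias, not a missing collection: no stack trace needed.
        return "WARN: '" + name + "' is an alias, not a collection; no cluster state for it.";
      }
      return "ERROR (with stack trace): collection '" + name + "' not found.";
    }
    return "WARN (with cause): failed to fetch cluster state for '" + name + "'.";
  }

  public static void main(String[] args) {
    Set<String> aliases = Set.of("myalias");
    System.out.println(describeFailure("myalias", "myalias not found", aliases));
    System.out.println(describeFailure("missing", "missing not found", aliases));
  }
}
{code}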



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11997) Suggestions API/UI should show a message when violations exist but no suggestions are possible

2018-11-06 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-11997:
-

Assignee: Noble Paul

> Suggestions API/UI should show a message when violations exist but no 
> suggestions are possible
> --
>
> Key: SOLR-11997
> URL: https://issues.apache.org/jira/browse/SOLR-11997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> If violations exist but no suggestions are possible because any operation 
> will only increase violations, then the suggestions UI/API does not show 
> anything. This is confusing. We should at least show a message that 
> indicates such a situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11997) Suggestions API/UI should show a message when violations exist but no suggestions are possible

2018-11-06 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11997.
---
Resolution: Fixed

> Suggestions API/UI should show a message when violations exist but no 
> suggestions are possible
> --
>
> Key: SOLR-11997
> URL: https://issues.apache.org/jira/browse/SOLR-11997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> If violations exist but no suggestions are possible because any operation 
> will only increase violations, then the suggestions UI/API does not show 
> anything. This is confusing. We should at least show a message that 
> indicates such a situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12971) Add pivot Stream Evaluator to pivot facet results into a matrix

2018-11-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12971:
--
Attachment: SOLR-12971.patch

> Add pivot Stream Evaluator to pivot facet results into a matrix
> ---
>
> Key: SOLR-12971
> URL: https://issues.apache.org/jira/browse/SOLR-12971
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12971.patch, SOLR-12971.patch
>
>
> This ticket adds the *pivot* Stream Evaluator which pivots two dimensional 
> facet results into a matrix that can be operated on by statistical and 
> machine learning functions.
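To illustrate the idea (independent of the actual evaluator's signature, which is defined 
by the attached patch), here is a small, self-contained sketch of pivoting (row, column, 
value) facet-style tuples into a dense matrix; all names and data below are hypothetical 
and this is not the SolrJ/streaming API.

{code:java}
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical illustration of pivoting two-dimensional facet tuples
// (rowKey, colKey, value) into a matrix; not the Solr streaming API.
public class PivotSketch {
  public static void main(String[] args) {
    // Each entry mimics one facet bucket: {x=..., y=..., count=...}
    List<String[]> tuples = List.of(
        new String[] {"day1", "termA", "3"},
        new String[] {"day1", "termB", "7"},
        new String[] {"day2", "termA", "5"});

    // Assign a matrix index to each distinct row key and column key.
    Map<String, Integer> rows = new LinkedHashMap<>();
    Map<String, Integer> cols = new LinkedHashMap<>();
    for (String[] t : tuples) {
      rows.putIfAbsent(t[0], rows.size());
      cols.putIfAbsent(t[1], cols.size());
    }

    double[][] matrix = new double[rows.size()][cols.size()];
    for (String[] t : tuples) {
      matrix[rows.get(t[0])][cols.get(t[1])] = Double.parseDouble(t[2]);
    }

    // The matrix can now be handed to statistical / machine-learning routines.
    for (double[] row : matrix) {
      System.out.println(java.util.Arrays.toString(row));
    }
  }
}
{code}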



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12971) Add pivot Stream Evaluator to pivot facet results into a matrix

2018-11-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12971:
--
Description: This ticket adds the *pivot* Stream Evaluator which pivots two 
dimensional facet results into a matrix that can be operated on by statistical 
and machine learning functions.  (was: This ticket adds the *pivot* Stream 
Evaluator which pivots two dimensional facet results into a matrix that can be 
operated on be operated on statistical and machine learning functions.)

> Add pivot Stream Evaluator to pivot facet results into a matrix
> ---
>
> Key: SOLR-12971
> URL: https://issues.apache.org/jira/browse/SOLR-12971
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12971.patch
>
>
> This ticket adds the *pivot* Stream Evaluator which pivots two dimensional 
> facet results into a matrix that can be operated on by statistical and 
> machine learning functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2935 - Still Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2935/

2 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:42766/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:42766/collection1
at 
__randomizedtesting.SeedInfo.seed([4D17E6E345E45826:C543D939EB1835DE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-12971) Add pivot Stream Evaluator to pivot facet results into a matrix

2018-11-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12971:
--
Attachment: SOLR-12971.patch

> Add pivot Stream Evaluator to pivot facet results into a matrix
> ---
>
> Key: SOLR-12971
> URL: https://issues.apache.org/jira/browse/SOLR-12971
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12971.patch
>
>
> This ticket adds the *pivot* Stream Evaluator which pivots two dimensional 
> facet results into a matrix that can be operated on by statistical and 
> machine learning functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12971) Add pivot Stream Evaluator to pivot facet results into a matrix

2018-11-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-12971:
-

Assignee: Joel Bernstein

> Add pivot Stream Evaluator to pivot facet results into a matrix
> ---
>
> Key: SOLR-12971
> URL: https://issues.apache.org/jira/browse/SOLR-12971
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket adds the *pivot* Stream Evaluator which pivots two dimensional 
> facet results into a matrix that can be operated on by statistical and 
> machine learning functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12971) Add pivot Stream Evaluator to pivot facet results into a matrix

2018-11-06 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-12971:
-

 Summary: Add pivot Stream Evaluator to pivot facet results into a 
matrix
 Key: SOLR-12971
 URL: https://issues.apache.org/jira/browse/SOLR-12971
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


This ticket adds the *pivot* Stream Evaluator which pivots two dimensional 
facet results into a matrix that can be operated on by statistical and machine 
learning functions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7609 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7609/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

17 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillPullReplica

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([9A4DD1D790943A0B:16BCCD423025DB33]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:533)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllActiveReplicas(TestPullReplica.java:529)
at 
org.apache.solr.cloud.TestPullReplica.testKillPullReplica(TestPullReplica.java:506)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 3048 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3048/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime

Error Message:
Error from server at http://127.0.0.1:45241/solr/collection1_shard2_replica_n2: 
Expected mime type application/octet-stream but got text/html.   
 
Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update  HTTP ERROR 
404 Problem accessing /solr/collection1_shard2_replica_n2/update. 
Reason: Can not find: 
/solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605  
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45241/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n2/updatehttp://eclipse.org/jetty;>Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([17D0FFD724395036:F90884466A4685A5]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS] Lucene-Solr-repro - Build # 1878 - Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1878/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/209/consoleText

[repro] Revision: 7d6d77d06753bd131aeb37531b70c59193917683

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testCollectionDoesntExist -Dtests.seed=9DC67BF13497195B 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fi 
-Dtests.timezone=Pacific/Fakaofo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
6f6a880ec2126690bb363b2a591bed36c406caee
[repro] git fetch
[repro] git checkout 7d6d77d06753bd131aeb37531b70c59193917683

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=9DC67BF13497195B -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fi -Dtests.timezone=Pacific/Fakaofo 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2147 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 157 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=9DC67BF13497195B -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=fi -Dtests.timezone=Pacific/Fakaofo 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2027 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2716 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fi 
-Dtests.timezone=Pacific/Fakaofo -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 2197 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 6f6a880ec2126690bb363b2a591bed36c406caee

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12023) Autoscaling policy engine shuffles replicas needlessly and can also suggest nonexistent replicas to be moved

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677526#comment-16677526
 ] 

ASF subversion and git services commented on SOLR-12023:


Commit 6f6a880ec2126690bb363b2a591bed36c406caee in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6f6a880 ]

SOLR-12023: correcting wrong git merge


> Autoscaling policy engine shuffles replicas needlessly and can also suggest 
> nonexistent replicas to be moved
> 
>
> Key: SOLR-12023
> URL: https://issues.apache.org/jira/browse/SOLR-12023
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-11066-failing.patch, SOLR-12023.patch
>
>
> A test that I wrote in SOLR-11066 found the following problem:
> Cluster: 2 nodes
> Collection: 1 shard, 3 replicas, maxShardsPerNode=5
> No autoscaling policy or preference applied
> When the trigger runs, the computed plan needlessly shuffles all three 
> replicas and then proceeds to return suggestions with only numbers as core 
> names. These cores do not exist. I found that these numbers are generated 
> internally by the framework as placeholders for moved cores for further 
> calculations. They should never ever be suggested to the users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-7.x - Build # 34 - Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/34/

6 tests failed.
FAILED:  org.apache.solr.cloud.ShardRoutingTest.test

Error Message:
expected:<3> but was:<5>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<5>
at 
__randomizedtesting.SeedInfo.seed([724D68C8926F4AC2:FA1957123C93273A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.ShardRoutingTest.doTestNumRequests(ShardRoutingTest.java:256)
at 
org.apache.solr.cloud.ShardRoutingTest.test(ShardRoutingTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 204 - Still Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/204/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:32992_solr, 127.0.0.1:40696_solr, 127.0.0.1:34260_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:32992_solr, 127.0.0.1:40696_solr, 
127.0.0.1:34260_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([CCDC3FC6108EE54E:AE7F2168120C815E]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
  

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23165 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23165/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

50 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([1C9738B76BA1BE0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([1C9738B76BA1BE0]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Created] (SOLR-12970) Inconsistency between VariableResolver returning "" and FieldStreamDataSource testing for null

2018-11-06 Thread Pierre Beck (JIRA)
Pierre Beck created SOLR-12970:
--

 Summary: Inconsistency between VariableResolver returning "" and 
FieldStreamDataSource testing for null
 Key: SOLR-12970
 URL: https://issues.apache.org/jira/browse/SOLR-12970
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: contrib - DataImportHandler
Affects Versions: 7.5
 Environment: solr 7.5.0, tried on win7 32 and ubuntu x64.

I hit the problem with an Oracle data source because I typed the field name in 
lower case; I had to type it in upper case.
Reporter: Pierre Beck


If one mistypes dataField, we should get the error "No field available for name : 
"

Instead you get the cryptic "unsupported type : class java.lang.String"

 

in FieldStreamDataSource.java
https://github.com/apache/lucene-solr/blob/master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/FieldStreamDataSource.java

At l. 63, if (o == null) {, the code tests whether the freshly returned o is null.


But in VariableResolver.java 
(https://github.com/apache/lucene-solr/blob/master/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/VariableResolver.java)
public Object resolve(String name), at line 118, does: return r == null ? "" : r;

If the field cannot be resolved, resolve() returns "" instead of null. But 
getData() expects null when the field is unresolved, and would then throw the 
more explanatory error at line 64: throw new DataImportHandlerException(SEVERE, 
"No field available for name : " + dataField);



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10816) Change uniqueKey to use docValues and not stored field

2018-11-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677488#comment-16677488
 ] 

Erick Erickson commented on SOLR-10816:
---

This is probably obsolete at this point in at least the following way:

Given the work on SOLR-12625, we shouldn't be paying the performance penalty 
for first-pass fetching of the doc ID.

That does not address Uwe's comments about not storing the uniqueKey, so it's 
probably best to store it _and_ make it docValues.
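
At the Lucene level, "store it and make it docValues" simply means adding the key 
twice per document; a minimal sketch (the field name and helper class are 
illustrative, not taken from any patch or schema):

{code:java}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.SortedDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.util.BytesRef;

// Sketch only: a uniqueKey that stays indexed and stored, plus a docValues copy
// so a first-pass distributed request can read the id without touching stored fields.
class UniqueKeyDocSketch {
  static Document docWithKey(String id) {
    Document doc = new Document();
    doc.add(new StringField("id", id, Field.Store.YES));        // indexed + stored
    doc.add(new SortedDocValuesField("id", new BytesRef(id)));  // docValues copy of the same key
    return doc;
  }
}
{code}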

> Change uniqueKey to use docValues and not stored field
> --
>
> Key: SOLR-10816
> URL: https://issues.apache.org/jira/browse/SOLR-10816
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> This issue is about the performance improvements you can get by avoiding 
> decompression during the first phase of a distributed search where only id 
> and score is needed.
> The improvements will be noticed for users if the docs are large or have lots 
> of fields in them. 
> For users who don't have this scenario it shouldn't slow things down by any 
> noticeable amount?
> We should default the unique key field to use docValues='true' and 
> stored='false' 
> Links to the discussion that lead to this idea:
> - 
> https://issues.apache.org/jira/browse/SOLR-5478?focusedCommentId=16036951=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16036951
> - 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201706.mbox/%3C008201d2ddf9%2429435740%247bca05c0%24%40thetaphi.de%3E



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 898 - Still unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/898/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

7 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:34537_solr, 127.0.0.1:53006_solr, 127.0.0.1:45631_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:34537_solr, 127.0.0.1:53006_solr, 
127.0.0.1:45631_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([8A06D034CC386303:E8A5CE9ACEBA0713]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1175 - Still Failing

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1175/

No tests ran.

Build Log:
[...truncated 23412 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2436 links (1988 relative) to 3206 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:


[jira] [Commented] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-06 Thread Simon Willnauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677401#comment-16677401
 ] 

Simon Willnauer commented on LUCENE-8558:
-

patch LGTM

> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
> Attachments: LUCENE-8558.patch
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> below code snippet was the culprit. 
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change as below seems to have fixed my issue
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  
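
The cost difference the patch targets comes down to a linear-scan contains() per 
field versus a hashed lookup; a standalone toy comparison (not Lucene code, and the 
collection sizes are made up):

{code:java}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy illustration only: List.contains() scans the whole list per call, while
// HashSet.contains() is an O(1) expected lookup, so a loop over F field infos
// against N filter names drops from roughly O(F * N) to O(F).
public class ContainsCostSketch {
  public static void main(String[] args) {
    List<String> filterFields = new ArrayList<>();
    for (int i = 0; i < 20_000; i++) {
      filterFields.add("field" + i);
    }
    Set<String> filteredNames = new HashSet<>(filterFields);

    long t0 = System.nanoTime();
    int hits = 0;
    for (int i = 0; i < 20_000; i++) {
      if (filterFields.contains("field" + i)) hits++;   // original check (linear scan)
    }
    long t1 = System.nanoTime();
    for (int i = 0; i < 20_000; i++) {
      if (filteredNames.contains("field" + i)) hits++;  // patched check (hash lookup)
    }
    long t2 = System.nanoTime();

    System.out.printf("list: %d ms, set: %d ms, hits: %d%n",
        (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, hits);
  }
}
{code}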



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8558) Adding NumericDocValuesFields is slowing down the indexing process significantly

2018-11-06 Thread Kranthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kranthi updated LUCENE-8558:

Attachment: LUCENE-8558.patch

> Adding NumericDocValuesFields is slowing down the indexing process 
> significantly
> 
>
> Key: LUCENE-8558
> URL: https://issues.apache.org/jira/browse/LUCENE-8558
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/index
>Affects Versions: 7.4, 7.5
>Reporter: Kranthi
>Priority: Major
>  Labels: patch, performance
> Fix For: 7.4, 7.5
>
> Attachments: LUCENE-8558.patch
>
>
> The indexing time for my ~2M documents has gone up significantly when I 
> started adding fields of type NumericDocValuesField.
>  
> Upon debugging found the bottleneck to be in the 
> PerFieldMergeState#FilterFieldInfos constructor. The contains check in the 
> below code snippet was the culprit. 
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (filterFields.contains(fi.name)) {
> {code}
> A simple change as below seems to have fixed my issue
> {code:java}
> this.filteredNames = new HashSet<>(filterFields);
> this.filtered = new ArrayList<>(filterFields.size());
> for (FieldInfo fi : src) {
>   if (this.filteredNames.contains(fi.name)) {
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12964) Use advanceExact instead of advance in a few remaining json facet use cases

2018-11-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12964:
---

Assignee: David Smiley

> Use advanceExact instead of advance in a few remaining json facet use cases
> ---
>
> Key: SOLR-12964
> URL: https://issues.apache.org/jira/browse/SOLR-12964
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This updates 2 places in the JSON Facets code that use the advance()/docID() 
> pattern instead of the simpler advanceExact(). Most other usages in the 
> faceting code already make use of advanceExact().
> The only remaining usage of advance() in org.apache.solr.search.facet is in:
>  * UniqueAgg.BaseNumericAcc.collect
>  * HLLAgg.BaseNumericAcc.collect
> The code for both of those looks very similar and probably makes sense to 
> update, but it would require changing the return type of the protected 
> docIdSetIterator() method to return a DocValuesIterator in order to be able 
> to call the advanceExact() method.
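
The two iteration styles differ roughly as follows (a minimal sketch against 
Lucene's NumericDocValues API; the helper class and method names are made up, and 
both helpers assume the iterator has not yet moved past the target doc):

{code:java}
import java.io.IOException;
import org.apache.lucene.index.NumericDocValues;

// Sketch only: the advance()/docID() pattern vs the simpler advanceExact().
class AdvanceExactSketch {
  static long readWithAdvance(NumericDocValues dv, int doc, long missing) throws IOException {
    if (dv.docID() < doc) {
      dv.advance(doc);                      // may land on doc or skip past it
    }
    return dv.docID() == doc ? dv.longValue() : missing;
  }

  static long readWithAdvanceExact(NumericDocValues dv, int doc, long missing) throws IOException {
    // advanceExact() positions the iterator on doc and reports whether it has a value.
    return dv.advanceExact(doc) ? dv.longValue() : missing;
  }
}
{code}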



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3047 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3047/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:1_solr, 127.0.0.1:39221_solr, 127.0.0.1:34759_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:1_solr, 127.0.0.1:39221_solr, 
127.0.0.1:34759_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([ACA76F22B27D559B:CE04718CB0FF318B]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677372#comment-16677372
 ] 

Gus Heck commented on SOLR-12938:
-

Ah this test seems to pass 5 out of 15 tries for me... unlucky me.  I think I'm 
going to use this as an excuse to get better with beasting...

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception-message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or causes too many problems, this should at 
> least be downgraded to a non-stack-trace warning message, since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2146 - Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2146/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

10 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:33403/_/ev/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:33403/_/ev/collection1
at 
__randomizedtesting.SeedInfo.seed([8D419274F4FDFFAC:515ADAE5A019254]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1260)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1010)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677352#comment-16677352
 ] 

Shawn Heisey commented on SOLR-12967:
-

In general, I completely agree that MOVEREPLICA should preserve the replica 
type that already exists.

But when I was thinking about the idea where somebody could specify a default 
replica type, I wondered if some people might want that to override what things 
like MOVEREPLICA do by default.  I'm not sure that such an option should be 
implemented, but I did think of it.

[~gilson.nascimento] also noticed that UTILIZENODE created NRT replicas.  Which 
might really be the same problem -- it would be reasonable for UTILIZENODE to 
be implemented internally as MOVEREPLICA.

> MOVEREPLICA converting replica to NRT
> -
>
> Key: SOLR-12967
> URL: https://issues.apache.org/jira/browse/SOLR-12967
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gilson
>Priority: Minor
>  Labels: collection-api, solr
>
> When calling the Collections API's MOVEREPLICA, the new replica created is always 
> NRT type, even when the original replica is PULL or TLOG. As discussed on 
> IRC, it should use the source replica type, or provide a parameter for the 
> user to choose the new replica's type, similar to the corresponding ADDREPLICA parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 1011 - Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/1011/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:37124_solr, 127.0.0.1:33518_solr, 127.0.0.1:45875_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:37124_solr, 127.0.0.1:33518_solr, 
127.0.0.1:45875_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([F774E230489AE0B3:95D7FC9E4A1884A3]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: Lucene/Solr 7.6

2018-11-06 Thread Tim Underwood
Hello,

I have 2 small simple changes (that are not blockers) that would be nice to
get into 7.6 if anybody has a chance to review:

SOLR-12880 - Updates JSON Facets debug output to show something like
"FacetFieldProcessorByHashDV" instead of just "FacetField" for the
"processor" field
SOLR-12964 - Switch to using advanceExact for a few JSON facet use cases

Thanks,
-Tim
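
For readers skimming the archive: the advanceExact change referenced for SOLR-12964
concerns how per-document numeric doc values are read. The following is only an
illustrative sketch of that iterator pattern (the field name "price" and the
surrounding method are made up for the example, not taken from the patch):

{code:java}
import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;

// Sketch only: read a numeric doc value for one document with advanceExact.
// advanceExact(doc) positions the iterator on the target document and reports
// whether that document actually has a value, which avoids manually juggling
// docID()/advance() when documents are visited in increasing doc-id order.
class AdvanceExactSketch {
  static long readValueOrZero(LeafReader reader, int docId) throws IOException {
    NumericDocValues dv = DocValues.getNumeric(reader, "price"); // hypothetical field
    if (dv.advanceExact(docId)) {
      return dv.longValue();
    }
    return 0L; // document has no value for this field
  }
}
{code}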

On Tue, Nov 6, 2018 at 1:42 PM Nicholas Knize  wrote:

> Hello all,
>
> It looks like we only have two remaining blockers for Solr: SOLR-12927 and
> SOLR-12927, and two remaining blockers for Lucene: LUCENE-8556 and LUCENE-8559
> 
>
> Let me know if there are any other blockers that need to be resolved prior
> to cutting the branch. If not, I will plan to cut the branch on Friday or
> (provided they are close to resolution) whenever these issues are resolved.
>
> Thanks!
>
>
> On Fri, Nov 2, 2018 at 9:53 AM Bram Van Dam  wrote:
>
>> On 02/11/2018 15:41, Nicholas Knize wrote:
>> > If needed I can hold off on cutting the 7.6 branch and feature freezing
>> > until Friday of next week. That would still give at least two weeks of
>> > jenkins testing & bug fixing before a target release the last week of
>> > November.
>>
>> If you're cutting 7.6 soon, could you be so kind as to have a look at
>> including SOLR-12953?
>>
>> Thanks!
>>
>>  - Bram
>>
> --
>
> Nicholas Knize, Ph.D., GISP
> Geospatial Software Guy  |  Elasticsearch
> Apache Lucene Committer
> nkn...@apache.org
>


Re: Lucene/Solr 7.6

2018-11-06 Thread Nicholas Knize
Hello all,

It looks like we only have two remaining blockers for Solr: SOLR-12927 and
SOLR-12927, and two remaining blockers for Lucene: LUCENE-8556 and LUCENE-8559


Let me know if there are any other blockers that need to be resolved prior
to cutting the branch. If not, I will plan to cut the branch on Friday or
(provided they are close to resolution) whenever these issues are resolved.

Thanks!


On Fri, Nov 2, 2018 at 9:53 AM Bram Van Dam  wrote:

> On 02/11/2018 15:41, Nicholas Knize wrote:
> > If needed I can hold off on cutting the 7.6 branch and feature freezing
> > until Friday of next week. That would still give at least two weeks of
> > jenkins testing & bug fixing before a target release the last week of
> > November.
>
> If you're cutting 7.6 soon, could you be so kind as to have a look at
> including SOLR-12953?
>
> Thanks!
>
>  - Bram
>
-- 

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene Committer
nkn...@apache.org


[jira] [Commented] (LUCENE-8559) Tessellator: isIntersectingPolygon method skip polygon edges

2018-11-06 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677313#comment-16677313
 ] 

Nicholas Knize commented on LUCENE-8559:


+1  Thank you [~ivera]

> Tessellator: isIntersectingPolygon method skip polygon edges
> 
>
> Key: LUCENE-8559
> URL: https://issues.apache.org/jira/browse/LUCENE-8559
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 7.6, master (8.0)
>Reporter: Ignacio Vera
>Priority: Blocker
> Attachments: LUCENE-8559.patch
>
>
> The following condition seems wrong:
> {code:java}
> if(node.getX() != x0 && node.getY() != y0 && nextNode.getX() != x0
> && nextNode.getY() != y0 && node.getX() != x1 && node.getY() != y1
> && nextNode.getX() != x1 && nextNode.getY() != y1) {
>//check intersection
> }{code}
> Any node with the same X or Y is skipped. 
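
To make the report above concrete: because each coordinate is compared on its own,
an edge endpoint that merely shares an X or a Y value with (x0, y0) or (x1, y1) is
treated as if it were one of those points, and the intersection check is skipped for
a perfectly valid candidate edge. A minimal sketch of the intended guard (not the
attached patch) would compare both coordinates of an endpoint together:

{code:java}
// Sketch only, not the committed fix: the intersection test should be skipped
// only when the candidate edge truly shares an endpoint with (x0, y0)-(x1, y1),
// i.e. both coordinates of an endpoint match, not just one of them.
static boolean sharesEndpoint(double nodeX, double nodeY,
                              double nextX, double nextY,
                              double x0, double y0, double x1, double y1) {
  return (nodeX == x0 && nodeY == y0)
      || (nodeX == x1 && nodeY == y1)
      || (nextX == x0 && nextY == y0)
      || (nextX == x1 && nextY == y1);
}

// ...and the guard in the loop would become something like:
// if (sharesEndpoint(node.getX(), node.getY(), nextNode.getX(), nextNode.getY(),
//                    x0, y0, x1, y1) == false) {
//   // check intersection
// }
{code}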






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 23164 - Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23164/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime

Error Message:
Error from server at 
https://127.0.0.1:46371/solr/collection1_shard2_replica_n3: Expected mime type 
application/octet-stream but got text/html.
Error 404 - Can not find: /solr/collection1_shard2_replica_n3/update
HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty:// 9.4.11.v20180605

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:46371/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html.

Error 404 - Can not find: /solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update.
Reason: Can not find: /solr/collection1_shard2_replica_n3/update
Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([7B2FE9F5A144E6A9:95F79264EF3B333A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testParallelUpdateQTime(CloudSolrClientTest.java:146)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[jira] [Commented] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677299#comment-16677299
 ] 

Erick Erickson commented on SOLR-12967:
---

Shawn:

I would think that the original proposal of "use the source replica's type or 
an explicit override" is best for MOVEREPLICA.

For other kinds of operations, <2> is confusing to me. ADDREPLICA already has a 
"type" you can specify that defaults to NRT. What other operations do you think 
need this? At any rate the general approach of "use NRT unless there's a 'type' 
override" seems like the right thing to do.

 

 

> MOVEREPLICA converting replica to NRT
> -
>
> Key: SOLR-12967
> URL: https://issues.apache.org/jira/browse/SOLR-12967
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gilson
>Priority: Minor
>  Labels: collection-api, solr
>
> When calling Collections API's MOVEREPLICA, the new replica created is always 
> NRT type, even when the original replica is PULL or TLOG. As discussed on 
> IRC, it should use the source replica type, or provide a parameter for the 
> user to choose the new replica's type, similar to ADDREPLICA  parameter.






[jira] [Resolved] (SOLR-12878) FacetFieldProcessorByHashDV is reconstructing FieldInfos on every instantiation

2018-11-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-12878.
-
Resolution: Duplicate

> FacetFieldProcessorByHashDV is reconstructing FieldInfos on every 
> instantiation
> ---
>
> Key: SOLR-12878
> URL: https://issues.apache.org/jira/browse/SOLR-12878
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.6, master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The FacetFieldProcessorByHashDV constructor is currently calling:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getSlowAtomicReader().getFieldInfos().fieldInfo(sf.getName());
> {noformat}
> Which is reconstructing FieldInfos each time.  Simply switching it to:
> {noformat}
> FieldInfo fieldInfo = 
> fcontext.searcher.getFieldInfos().fieldInfo(sf.getName());
> {noformat}
>  
> causes it to use the cached version of FieldInfos in the SolrIndexSearcher.
> On my index the FacetFieldProcessorByHashDV is 2-3 times slower than the 
> legacy facets without this fix.






[jira] [Resolved] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved LUCENE-8557.
--
Resolution: Fixed

> LeafReader.getFieldInfos should always return the same instance
> ---
>
> Key: LUCENE-8557
> URL: https://issues.apache.org/jira/browse/LUCENE-8557
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8557.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most implementations of the LeafReader cache an instance of FieldInfos which 
> is returned in the LeafReader.getFieldInfos() method.  There are a few places 
> that currently do not and this can cause performance problems.
> The most notable example is the lack of caching in Solr's 
> SlowCompositeReaderWrapper which caused unexpected performance slowdowns when 
> trying to use Solr's JSON Facets compared to the legacy facets.
> This proposed change is mostly relevant to Solr but touches a few Lucene 
> classes.  Specifically:
> *1.* Adds a check to TestUtil.checkReader to verify that 
> LeafReader.getFieldInfos() returns the same instance:
>  
> {code:java}
> // FieldInfos should be cached at the reader and always return the same 
> instance
>  if (reader.getFieldInfos() != reader.getFieldInfos()) {
>  throw new RuntimeException("getFieldInfos() returned different instances for 
> class: "+reader.getClass());
>  }
> {code}
> I'm not entirely sure this is wanted or needed but adding it uncovered most 
> of the other LeafReader implementations that were not caching FieldInfos.  
> I'm happy to remove this part of the patch though.
>  
> *2.* Adds a FieldInfos.EMPTY that can be used in a handful of places
>  
> {code:java}
> public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
> {code}
> There are several places in the Lucene/Solr tests that were creating empty 
> instances of FieldInfos which were causing the check in #1 to fail.  This 
> fixes those failures and cleans up the code a bit.
> *3.* Fixes a few LeafReader implementations that were not caching FieldInfos
> Specifically:
>  * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
> the fields so it seemed natural to just create the FieldInfos at that time
>  * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
> moved the caching of FieldInfos from SolrIndexSearcher to 
> SlowCompositeReaderWrapper.
>  * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is immediately 
> called twice after this is constructed
>  * *ExpandComponent.ReaderWrapper* - getFieldInfos() is immediately called 
> twice after this is constructed
>  
> *4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader 
> in FacetFieldProcessorByHashDV.  This change is now optional since 
> SlowCompositeReaderWrapper caches FieldInfos.
>  
> As suggested by [~dsmiley] this takes the place of SOLR-12878 since it 
> touches some Lucene code.
>  
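
The caching described in *3.* above boils down to computing FieldInfos once per
reader instance and handing back that same object from every getFieldInfos() call.
A minimal sketch of that pattern on a hypothetical FilterLeafReader subclass
(illustrative only; it is not the attached patch and not the actual
SlowCompositeReaderWrapper code) could look like:

{code:java}
import org.apache.lucene.index.FieldInfos;
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReader;

// Hypothetical wrapper showing the caching pattern: FieldInfos is computed once
// in the constructor, so repeated getFieldInfos() calls return the same instance,
// which is exactly what the TestUtil.checkReader check in *1.* would verify.
public final class CachingFieldInfosReader extends FilterLeafReader {
  private final FieldInfos cachedFieldInfos;

  public CachingFieldInfosReader(LeafReader in) {
    super(in);
    this.cachedFieldInfos = in.getFieldInfos();
  }

  @Override
  public FieldInfos getFieldInfos() {
    return cachedFieldInfos;
  }

  @Override
  public IndexReader.CacheHelper getCoreCacheHelper() {
    return in.getCoreCacheHelper();
  }

  @Override
  public IndexReader.CacheHelper getReaderCacheHelper() {
    return in.getReaderCacheHelper();
  }
}
{code}

The real change keeps the cache on the existing readers rather than adding a
wrapper, but the same "compute once, return the same instance" idea applies.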






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 922 - Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/922/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:50738/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:50738/solr
at 
__randomizedtesting.SeedInfo.seed([F0F7C8246F516529:3107B1884201AF8E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:51059/solr

Stack Trace:

[jira] [Commented] (SOLR-12969) Solr replication failure

2018-11-06 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677277#comment-16677277
 ] 

Kevin Risden commented on SOLR-12969:
-

[~caomanhdat] - Do you have any ideas here? I think you did work on replication.

> Solr replication failure
> 
>
> Key: SOLR-12969
> URL: https://issues.apache.org/jira/browse/SOLR-12969
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: replication (java)
>Reporter: Jeremy Smith
>Priority: Major
>
> Under certain circumstances, replication fails between a leader and follower. 
>  The follower will not receive updates from the leader, even though the 
> leader has a newer version.  If the leader is restarted, it will get the 
> older version from the follower.
>  
> This was discussed on the [mailing 
> list|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201810.mbox/%3CBYAPR04MB4406710795EA07E93BF80913ADCD0%40BYAPR04MB4406.namprd04.prod.outlook.com%3E]
>  and [~risdenk] [wrote a 
> script|https://github.com/risdenk/test-solr-start-stop-replica-consistency] 
> that demonstrates this error.  He also verified that the error occurs when 
> the script is run outside of docker.






[jira] [Created] (SOLR-12969) Solr replication failure

2018-11-06 Thread Jeremy Smith (JIRA)
Jeremy Smith created SOLR-12969:
---

 Summary: Solr replication failure
 Key: SOLR-12969
 URL: https://issues.apache.org/jira/browse/SOLR-12969
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: replication (java)
Reporter: Jeremy Smith


Under certain circumstances, replication fails between a leader and follower.  
The follower will not receive updates from the leader, even though the leader 
has a newer version.  If the leader is restarted, it will get the older version 
from the follower.

 

This was discussed on the [mailing 
list|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201810.mbox/%3CBYAPR04MB4406710795EA07E93BF80913ADCD0%40BYAPR04MB4406.namprd04.prod.outlook.com%3E]
 and [~risdenk] [wrote a 
script|https://github.com/risdenk/test-solr-start-stop-replica-consistency] 
that demonstrates this error.  He also verified that the error occurs when the 
script is run outside of docker.






Re: Welcome Gus Heck as Lucene/Solr committer

2018-11-06 Thread Tommaso Teofili
welcome Gus!

Regards,
Tommaso
On Tue, Nov 6, 2018 at 01:24 Christian Moen
 wrote:
>
> Congrats, Gus!
>
> On Tue, Nov 6, 2018 at 9:11 AM Otis Gospodnetić  
> wrote:
>>
>> Another welcome, Gus!
>>
>> Otis
>> --
>> Monitoring - Log Management - Alerting - Anomaly Detection
>> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>>
>>
>>
>> On Thu, Nov 1, 2018 at 8:22 AM David Smiley  wrote:
>>>
>>> Hi all,
>>>
>>> Please join me in welcoming Gus Heck as the latest Lucene/Solr committer!
>>>
>>> Congratulations and Welcome, Gus!
>>>
>>> Gus, it's traditional for you to introduce yourself with a brief bio.
>>>
>>> ~ David
>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
>>> http://www.solrenterprisesearchserver.com




[jira] [Commented] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677244#comment-16677244
 ] 

ASF subversion and git services commented on LUCENE-8557:
-

Commit 12719d19609d87ab0e2a4132d4988dd4362b6575 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12719d1 ]

LUCENE-8557: LeafReader.getFieldInfos should always return the same instance
MemoryIndex: compute/cache up-front
Solr Collapse/Expand with top_fc: compute/cache up-front
Json Facets numerics / hash DV: use the cached fieldInfos on SolrIndexSearcher
SolrIndexSearcher: move the cached FieldInfos to SlowCompositeReaderWrapper

Closes #487
(cherry picked from commit d0cd4245bdb8363e9adf3812817b9989ce4f506c)


> LeafReader.getFieldInfos should always return the same instance
> ---
>
> Key: LUCENE-8557
> URL: https://issues.apache.org/jira/browse/LUCENE-8557
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8557.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most implementations of the LeafReader cache an instance of FieldInfos which 
> is returned in the LeafReader.getFieldInfos() method.  There are a few places 
> that currently do not and this can cause performance problems.
> The most notable example is the lack of caching in Solr's 
> SlowCompositeReaderWrapper which caused unexpected performance slowdowns when 
> trying to use Solr's JSON Facets compared to the legacy facets.
> This proposed change is mostly relevant to Solr but touches a few Lucene 
> classes.  Specifically:
> *1.* Adds a check to TestUtil.checkReader to verify that 
> LeafReader.getFieldInfos() returns the same instance:
>  
> {code:java}
> // FieldInfos should be cached at the reader and always return the same 
> instance
>  if (reader.getFieldInfos() != reader.getFieldInfos()) {
>  throw new RuntimeException("getFieldInfos() returned different instances for 
> class: "+reader.getClass());
>  }
> {code}
> I'm not entirely sure this is wanted or needed but adding it uncovered most 
> of the other LeafReader implementations that were not caching FieldInfos.  
> I'm happy to remove this part of the patch though.
>  
> *2.* Adds a FieldInfos.EMPTY that can be used in a handful of places
>  
> {code:java}
> public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
> {code}
> There are several places in the Lucene/Solr tests that were creating empty 
> instances of FieldInfos which were causing the check in #1 to fail.  This 
> fixes those failures and cleans up the code a bit.
> *3.* Fixes a few LeafReader implementations that were not caching FieldInfos
> Specifically:
>  * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
> the fields so it seemed natural to just create the FieldInfos at that time
>  * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
> moved the caching of FieldInfos from SolrIndexSearcher to 
> SlowCompositeReaderWrapper.
>  * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is immediately 
> called twice after this is constructed
>  * *ExpandComponent.ReaderWrapper* - getFieldInfos() is immediately called 
> twice after this is constructed
>  
> *4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader 
> in FacetFieldProcessorByHashDV.  This change is now optional since 
> SlowCompositeReaderWrapper caches FieldInfos.
>  
> As suggested by [~dsmiley] this takes the place of SOLR-12878 since it 
> touches some Lucene code.
>  






[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 209 - Still Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/209/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:41274_solr, 127.0.0.1:38396_solr, 127.0.0.1:46641_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:41274_solr, 127.0.0.1:38396_solr, 
127.0.0.1:46641_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([9DC67BF13497195B:FF65655F36157D4B]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Created] (SOLR-12968) consider mailing_lists.pdf refresh

2018-11-06 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12968:
--

 Summary: consider mailing_lists.pdf refresh
 Key: SOLR-12968
 URL: https://issues.apache.org/jira/browse/SOLR-12968
 Project: Solr
  Issue Type: Task
  Components: documentation, Tests
Reporter: Christine Poerschke


I randomly discovered this file: 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.5.0/solr/core/src/test-files/mailing_lists.pdf

This task here is to consider options and (possibly) make changes, e.g.
 * to keep the file as is
 * to remove the file
 * to refresh the file content (keep it as .pdf format) e.g. to mention the 
http://lucene.apache.org/solr/community.html page and/or the Solr Reference 
Guide
 * to replace the file (moving away from .pdf format) and to refresh its content

It appears three tests are using the file, i.e. a simple remove or replace might 
not be a practical option?






[jira] [Commented] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677235#comment-16677235
 ] 

ASF subversion and git services commented on LUCENE-8557:
-

Commit d0cd4245bdb8363e9adf3812817b9989ce4f506c in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d0cd424 ]

LUCENE-8557: LeafReader.getFieldInfos should always return the same instance
MemoryIndex: compute/cache up-front
Solr Collapse/Expand with top_fc: compute/cache up-front
Json Facets numerics / hash DV: use the cached fieldInfos on SolrIndexSearcher
SolrIndexSearcher: move the cached FieldInfos to SlowCompositeReaderWrapper


> LeafReader.getFieldInfos should always return the same instance
> ---
>
> Key: LUCENE-8557
> URL: https://issues.apache.org/jira/browse/LUCENE-8557
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8557.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most implementations of the LeafReader cache an instance of FieldInfos which 
> is returned in the LeafReader.getFieldInfos() method.  There are a few places 
> that currently do not and this can cause performance problems.
> The most notable example is the lack of caching in Solr's 
> SlowCompositeReaderWrapper which caused unexpected performance slowdowns when 
> trying to use Solr's JSON Facets compared to the legacy facets.
> This proposed change is mostly relevant to Solr but touches a few Lucene 
> classes.  Specifically:
> *1.* Adds a check to TestUtil.checkReader to verify that 
> LeafReader.getFieldInfos() returns the same instance:
>  
> {code:java}
> // FieldInfos should be cached at the reader and always return the same 
> instance
>  if (reader.getFieldInfos() != reader.getFieldInfos()) {
>  throw new RuntimeException("getFieldInfos() returned different instances for 
> class: "+reader.getClass());
>  }
> {code}
> I'm not entirely sure this is wanted or needed but adding it uncovered most 
> of the other LeafReader implementations that were not caching FieldInfos.  
> I'm happy to remove this part of the patch though.
>  
> *2.* Adds a FieldInfos.EMPTY that can be used in a handful of places
>  
> {code:java}
> public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
> {code}
> There are several places in the Lucene/Solr tests that were creating empty 
> instances of FieldInfos which were causing the check in #1 to fail.  This 
> fixes those failures and cleans up the code a bit.
> *3.* Fixes a few LeafReader implementations that were not caching FieldInfos
> Specifically:
>  * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
> the fields so it seemed natural to just create the FieldInfos at that time
>  * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
> moved the caching of FieldInfos from SolrIndexSearcher to 
> SlowCompositeReaderWrapper.
>  * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is immediately 
> called twice after this is constructed
>  * *ExpandComponent.ReaderWrapper* - getFieldInfos() is immediately called 
> twice after this is constructed
>  
> *4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader 
> in FacetFieldProcessorByHashDV.  This change is now optional since 
> SlowCompositeReaderWrapper caches FieldInfos.
>  
> As suggested by [~dsmiley] this takes the place of SOLR-12878 since it 
> touches some Lucene code.
>  






[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677210#comment-16677210
 ] 

Hoss Man commented on SOLR-12938:
-

{quote}Hoss, can you please share the git bisect command line you ran to find 
the problem? I'd like to save this so I can use it to aid in my own test 
investigations.
{quote}
In general...
{noformat}
git bisect start KNOWN_BAD KNOWN_GOOD
git bisect run bash -c 'ant clean && cd PARENT_DIR_OF_TEST && REPRODUCE_LINE'
{noformat}
today specifically...
{noformat}
git bisect start 7d6d77d06753bd131aeb37531b70c59193917683 
be8f611db1cbaf51622d8af5cd6efced4e338968
git bisect run bash -c 'ant clean && cd solr/solrj/ && ant test 
-Dtestcase=CloudSolrClientTest -Dtests.seed=949992ED4AFA660A 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=kl 
-Dtests.timezone=Europe/Oslo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1 -Dtests.method=testCollectionDoesntExist'
{noformat}

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This also has led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exist.






[jira] [Commented] (SOLR-12931) Move Solr's ExitableDirectoryReader test to it's own package

2018-11-06 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677203#comment-16677203
 ] 

Mark Miller commented on SOLR-12931:


+1 on putting them in the same place. Things are often weirdly spread out and 
it can be hard to know what exists.

> Move Solr's ExitableDirectoryReader test to it's own package
> 
>
> Key: SOLR-12931
> URL: https://issues.apache.org/jira/browse/SOLR-12931
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>







[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677202#comment-16677202
 ] 

Gus Heck commented on SOLR-12938:
-

eek. I'll look into it tonight.

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This also has led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exist.






[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677194#comment-16677194
 ] 

David Smiley commented on SOLR-12938:
-

Take heart Gus; I broke half the tests on my first commit :)

Hoss, can you please share the git bisect command line you ran to find the 
problem?  I'd like to save this so I can use it to aid in my own test 
investigations.

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This also has led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exist.






[jira] [Commented] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677188#comment-16677188
 ] 

Shawn Heisey commented on SOLR-12967:
-

I advised Gilson to open this issue in the #solr channel.

Do we need separate issues for work on other Collections API actions that don't 
consider the replica type, or will we just expand this issue to cover checking 
the whole API?

I had a thought for a feature request -- add a couple of new settings:  1) a 
default replica type, to be used instead of NRT when nothing else indicates 
what type to use.  2) A flag to indicate whether the default replica type 
should override an existing type, which would cover things like MOVEREPLICA and 
maybe others.  When the user's request explicitly asks for a type, that would 
of course take precedence over both of these settings.

> MOVEREPLICA converting replica to NRT
> -
>
> Key: SOLR-12967
> URL: https://issues.apache.org/jira/browse/SOLR-12967
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gilson
>Priority: Minor
>  Labels: collection-api, solr
>
> When calling Collections API's MOVEREPLICA, the new replica created is always 
> NRT type, even when the original replica is PULL or TLOG. As discussed on 
> IRC, it should use the source replica type, or provide a parameter for the 
> user to choose the new replica's type, similar to ADDREPLICA  parameter.






[jira] [Created] (SOLR-12967) MOVEREPLICA converting replica to NRT

2018-11-06 Thread Gilson (JIRA)
Gilson created SOLR-12967:
-

 Summary: MOVEREPLICA converting replica to NRT
 Key: SOLR-12967
 URL: https://issues.apache.org/jira/browse/SOLR-12967
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 7.5
Reporter: Gilson


When calling Collections API's MOVEREPLICA, the new replica created is always 
NRT type, even when the original replica is PULL or TLOG. As discussed on IRC, 
it should use the source replica type, or provide a parameter for the user to 
choose the new replica's type, similar to ADDREPLICA  parameter.






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4915 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4915/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

11 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:62934_solr, 
127.0.0.1:62935_solr, 127.0.0.1:62936_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"http://127.0.0.1:62937/solr;,   
"node_name":"127.0.0.1:62937_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"http://127.0.0.1:62937/solr;,   
"node_name":"127.0.0.1:62937_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:62934_solr, 127.0.0.1:62935_solr, 127.0.0.1:62936_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"http://127.0.0.1:62937/solr;,
  "node_name":"127.0.0.1:62937_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"http://127.0.0.1:62937/solr;,
  "node_name":"127.0.0.1:62937_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([55FCD68D4DE2FE2D:3FEAB75D2510B4E7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:328)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:224)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 3046 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3046/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseG1GC

6 tests failed.
FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist

Error Message:
Tried fetching cluster state using the node names we knew of, i.e. 
[127.0.0.1:43989_solr, 127.0.0.1:40385_solr, 127.0.0.1:39251_solr]. However, 
succeeded in obtaining the cluster state from none of them.If you think your 
Solr cluster is up and is accessible, you could try re-creating a new 
CloudSolrClient using working solrUrl(s) or zkHost(s).

Stack Trace:
java.lang.RuntimeException: Tried fetching cluster state using the node names 
we knew of, i.e. [127.0.0.1:43989_solr, 127.0.0.1:40385_solr, 
127.0.0.1:39251_solr]. However, succeeded in obtaining the cluster state from 
none of them.If you think your Solr cluster is up and is accessible, you could 
try re-creating a new CloudSolrClient using working solrUrl(s) or zkHost(s).
at 
__randomizedtesting.SeedInfo.seed([5F48815F2AB0FFA7:3DEB9FF128329BB7]:0)
at 
org.apache.solr.client.solrj.impl.HttpClusterStateProvider.getState(HttpClusterStateProvider.java:108)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.resolveAliases(CloudSolrClient.java:1115)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:844)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist(CloudSolrClientTest.java:779)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12938:

Priority: Blocker  (was: Minor)

setting as blocker to ensure that we either roll back or get to the bottom of 
the test failures before releasing 7.6

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.
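
For illustration, one way to read the "downgrade to a non-stack trace warning" suggestion above is sketched below. This is not the attached patch; the class and method names are hypothetical, and only the logging idea (a short warning for an alias, the full error and trace for a truly unknown name) is taken from the description.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical helper, not actual Solr code: shows how the alias case could be
// reported as a plain warning while an unknown name keeps its stack trace.
public class AliasAwareStatusReporting {
  private static final Logger log = LoggerFactory.getLogger(AliasAwareStatusReporting.class);

  static void reportUnknownName(String name, boolean isKnownAlias, Exception cause) {
    if (isKnownAlias) {
      // An alias is not a failure of the system, so log without the exception
      // object; passing it would print the full stack trace.
      log.warn("Cluster status requested for alias '{}'; it is not a collection name", name);
    } else {
      // Neither a collection nor an alias: retain the error and stack trace.
      log.error("Collection '{}' not found", name, cause);
    }
  }
}
{code}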



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-06 Thread Tim Underwood (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677088#comment-16677088
 ] 

Tim Underwood commented on LUCENE-8557:
---

Updated patch looks good to me.

> LeafReader.getFieldInfos should always return the same instance
> ---
>
> Key: LUCENE-8557
> URL: https://issues.apache.org/jira/browse/LUCENE-8557
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8557.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most implementations of the LeafReader cache an instance of FieldInfos which 
> is returned in the LeafReader.getFieldInfos() method.  There are a few places 
> that currently do not and this can cause performance problems.
> The most notable example is the lack of caching in Solr's 
> SlowCompositeReaderWrapper which caused unexpected performance slowdowns when 
> trying to use Solr's JSON Facets compared to the legacy facets.
> This proposed change is mostly relevant to Solr but touches a few Lucene 
> classes.  Specifically:
> *1.* Adds a check to TestUtil.checkReader to verify that 
> LeafReader.getFieldInfos() returns the same instance:
>  
> {code:java}
> // FieldInfos should be cached at the reader and always return the same 
> instance
>  if (reader.getFieldInfos() != reader.getFieldInfos()) {
>  throw new RuntimeException("getFieldInfos() returned different instances for 
> class: "+reader.getClass());
>  }
> {code}
> I'm not entirely sure this is wanted or needed but adding it uncovered most 
> of the other LeafReader implementations that were not caching FieldInfos.  
> I'm happy to remove this part of the patch though.
>  
> *2.* Adds a FieldInfos.EMPTY that can be used in a handful of places
>  
> {code:java}
> public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
> {code}
> There are several places in the Lucene/Solr tests that were creating empty 
> instances of FieldInfos which were causing the check in #1 to fail.  This 
> fixes those failures and cleans up the code a bit.
> *3.* Fixes a few LeafReader implementations that were not caching FieldInfos
> Specifically:
>  * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
> the fields so it seemed natural to just create the FieldInfos at that time
>  * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
> moved the caching of FieldInfos from SolrIndexSearcher to 
> SlowCompositeReaderWrapper.
>  * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is immediately 
> called twice after this is constructed
>  * *ExpandComponent.ReaderWrapper* - getFieldInfos() is immediately called 
> twice after this is constructed
>  
> *4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader 
> in FacetFieldProcessorByHashDV.  This change is now optional since 
> SlowCompositeReaderWrapper caches FieldInfos.
>  
> As suggested by [~dsmiley] this takes the place of SOLR-12878 since it 
> touches some Lucene code.
>  
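
For illustration, here is a minimal sketch of the "cache once, always return the same instance" pattern that the proposed TestUtil.checkReader check enforces. This is not the attached patch; the wrapper class below is hypothetical and only shows the shape that implementations such as SlowCompositeReaderWrapper are expected to follow.

{code:java}
import org.apache.lucene.index.FieldInfos;
import org.apache.lucene.index.FilterLeafReader;
import org.apache.lucene.index.LeafReader;

// Hypothetical wrapper: computes FieldInfos once and returns the same instance
// on every getFieldInfos() call.
public class CachedFieldInfosReaderExample extends FilterLeafReader {
  private final FieldInfos cachedFieldInfos;

  public CachedFieldInfosReaderExample(LeafReader in) {
    super(in);
    // computed once up front; every later call hands back this same instance
    this.cachedFieldInfos = in.getFieldInfos();
  }

  @Override
  public FieldInfos getFieldInfos() {
    return cachedFieldInfos;
  }

  // cache keys are unchanged, so simply delegate to the wrapped reader
  @Override
  public CacheHelper getCoreCacheHelper() {
    return in.getCoreCacheHelper();
  }

  @Override
  public CacheHelper getReaderCacheHelper() {
    return in.getReaderCacheHelper();
  }
}
{code}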



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677072#comment-16677072
 ] 

Hoss Man commented on SOLR-12938:
-

FWIW: searching Jenkins emails for {{"FAILED:  
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testCollectionDoesntExist"}}
 matches 26 messages -- 25 of them in the past 2 days (the lone holdout being 
over a year old) ... hence I started git bisecting with 
7d6d77d06753bd131aeb37531b70c59193917683 and identified 
5ad78734384104d7e26d51917d04936b849a692d as the root cause.
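
(For anyone following along, the bisect recipe below is the general workflow meant here; it is illustrative only, and the "good" commit is a placeholder rather than something identified in this thread.)

{noformat}
# illustrative git bisect workflow; <last-known-good-commit> is a placeholder
git bisect start
git bisect bad 7d6d77d06753bd131aeb37531b70c59193917683
git bisect good <last-known-good-commit>
# at each step run the failing test, then mark the commit:
#   git bisect good   (test passes)  /  git bisect bad   (test fails)
git bisect reset
{noformat}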



> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-12023) Autoscaling policy engine shuffles replicas needlessly and can also suggest nonexistent replicas to be moved

2018-11-06 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-12023:
-

Noble: what the heck happened with your push to master? Did you just force 
push an old commit?

eb359ca0790af505debf33a57c3bfb18eecbab4e broke a ton of stuff in CHANGES.txt 
-- including removing the entire 8.0 section, and deleting/moving a bunch of 
stuff going back as far as 7.3!?!

https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/CHANGES.txt;h=be31fe7f4793fd27f1213779bc0880281fa9c1df;hp=90cce5b27f9b770ceb85a9c4a10b4979041a8b05;hb=eb359ca;hpb=f669a1fb0e1ff974df93229c41cd397956cb1e9a

> Autoscaling policy engine shuffles replicas needlessly and can also suggest 
> nonexistent replicas to be moved
> 
>
> Key: SOLR-12023
> URL: https://issues.apache.org/jira/browse/SOLR-12023
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-11066-failing.patch, SOLR-12023.patch
>
>
> A test that I wrote in SOLR-11066 found the following problem:
> Cluster: 2 nodes
> Collection: 1 shard, 3 replicas, maxShardsPerNode=5
> No autoscaling policy or preference applied
> When the trigger runs, the computed plan needlessly shuffles all three 
> replicas and then proceeds to return suggestions with only numbers as core 
> names. These cores do not exist. I found that these numbers are generated 
> internally by the framework as placeholders for moved cores for further 
> calculations. They should never ever be suggested to the users.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677061#comment-16677061
 ] 

Hoss Man commented on SOLR-12938:
-

Gus: digging into recent Jenkins failures of CloudSolrClientTest shows that 
this change seems to have caused a *lot* of reproducible failures.

{noformat}
ant test -Dtestcase=CloudSolrClientTest -Dtests.seed=949992ED4AFA660A 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=kl 
-Dtests.timezone=Europe/Oslo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1 -Dtests.method=testCollectionDoesntExist
{noformat}

...fails reliably on HEAD of branch_7x 
(7d6d77d06753bd131aeb37531b70c59193917683) and against the commit from this 
issue (5ad78734384104d7e26d51917d04936b849a692d), but does not fail against the 
previous commit on branch_7x.


> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-12938) ClusterStatus should not spew an exception trace if it gets an alias name

2018-11-06 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reopened SOLR-12938:
-

> ClusterStatus should not spew an exception trace if it gets an alias name
> -
>
> Key: SOLR-12938
> URL: https://issues.apache.org/jira/browse/SOLR-12938
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12938.patch, SOLR-12938.patch, SOLR-12938.patch
>
>
> This has been a lingering irritant in debugging tests for time routed 
> aliases, previously mentioned in SOLR-11949 and can be seen frequently in 
> logs attached to SOLR-12928. Basically what happens is for one reason or 
> another cluster status is called on an alias rather than a collection and 
> this is treated identically to a collection name that doesn't exist. 
> This has also led to this bit of lovely exception message parsing in 
> HttpClusterStateProvider.java
> {code:java}
>   } catch (SolrServerException | RemoteSolrException | IOException e) {
> if (e.getMessage().contains(collection + " not found")) {
>   // Cluster state for the given collection was not found.
>   // Lets fetch/update our aliases:
>   getAliases(true);
>   return null;
> }
> log.warn("Attempt to fetch cluster state from " +
> Utils.getBaseUrlForNodeName(nodeName, urlScheme) + " failed.", e);
>   }
> {code}
> Cluster status is already handled in the case of no collection name provided 
> by returning status on all collections. It would make more sense if this 
> command returned status on the component collections for the alias. 
> If that turns out to be difficult or cause too many problems this should at 
> least be downgraded to a non-stack trace warning message since this situation 
> does not represent a failure of the system. The error/stack should of course 
> be retained if neither a collection nor an alias exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8557) LeafReader.getFieldInfos should always return the same instance

2018-11-06 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8557:
-
Attachment: LUCENE-8557.patch

> LeafReader.getFieldInfos should always return the same instance
> ---
>
> Key: LUCENE-8557
> URL: https://issues.apache.org/jira/browse/LUCENE-8557
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.5
>Reporter: Tim Underwood
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8557.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Most implementations of the LeafReader cache an instance of FieldInfos which 
> is returned in the LeafReader.getFieldInfos() method.  There are a few places 
> that currently do not and this can cause performance problems.
> The most notable example is the lack of caching in Solr's 
> SlowCompositeReaderWrapper which caused unexpected performance slowdowns when 
> trying to use Solr's JSON Facets compared to the legacy facets.
> This proposed change is mostly relevant to Solr but touches a few Lucene 
> classes.  Specifically:
> *1.* Adds a check to TestUtil.checkReader to verify that 
> LeafReader.getFieldInfos() returns the same instance:
>  
> {code:java}
> // FieldInfos should be cached at the reader and always return the same 
> instance
>  if (reader.getFieldInfos() != reader.getFieldInfos()) {
>  throw new RuntimeException("getFieldInfos() returned different instances for 
> class: "+reader.getClass());
>  }
> {code}
> I'm not entirely sure this is wanted or needed but adding it uncovered most 
> of the other LeafReader implementations that were not caching FieldInfos.  
> I'm happy to remove this part of the patch though.
>  
> *2.* Adds a FieldInfos.EMPTY that can be used in a handful of places
>  
> {code:java}
> public final static FieldInfos EMPTY = new FieldInfos(new FieldInfo[0]);
> {code}
> There are several places in the Lucene/Solr tests that were creating empty 
> instances of FieldInfos which were causing the check in #1 to fail.  This 
> fixes those failures and cleans up the code a bit.
> *3.* Fixes a few LeafReader implementations that were not caching FieldInfos
> Specifically:
>  * *MemoryIndex.MemoryIndexReader* - The constructor was already looping over 
> the fields so it seemed natural to just create the FieldInfos at that time
>  * *SlowCompositeReaderWrapper* - This was the one causing me trouble.  I've 
> moved the caching of FieldInfos from SolrIndexSearcher to 
> SlowCompositeReaderWrapper.
>  * *CollapsingQParserPlugin.ReaderWrapper* - getFieldInfos() is immediately 
> called twice after this is constructed
>  * *ExpandComponent.ReaderWrapper* - getFieldInfos() is immediately called 
> twice after this is constructed
>  
> *4.* Minor Solr tweak to avoid calling SolrIndexSearcher.getSlowAtomicReader 
> in FacetFieldProcessorByHashDV.  This change is now optional since 
> SlowCompositeReaderWrapper caches FieldInfos.
>  
> As suggested by [~dsmiley] this takes the place of SOLR-12878 since it 
> touches some Lucene code.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12795) Introduce 'rows' and 'offset' parameter in FacetStream

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676996#comment-16676996
 ] 

ASF subversion and git services commented on SOLR-12795:


Commit 3d942131104a38a470b21020bfeb4a12c2dcd99b in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3d94213 ]

SOLR-12795: Introduce 'rows' and 'offset' parameter in FacetStream


> Introduce 'rows' and 'offset' parameter in FacetStream
> --
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.
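
For context, a hypothetical facet() expression using the proposed parameters might look like the sketch below. The parameter names 'rows' and 'offset' are taken from the issue title; their exact names and semantics are defined by the attached patches, so treat this only as an illustration of the intent.

{noformat}
facet(collection1,
      q="*:*",
      buckets="year_i,month_i,day_i",
      bucketSorts="count(*) desc",
      bucketSizeLimit=10,
      rows=500,
      offset=0,
      count(*))
{noformat}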



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12795) Introduce 'rows' and 'offset' parameter in FacetStream

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676997#comment-16676997
 ] 

ASF subversion and git services commented on SOLR-12795:


Commit b230543b47df4f9ff3de4414f4f787fc3286d60d in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b230543 ]

SOLR-12795: Fix precommit


> Introduce 'rows' and 'offset' parameter in FacetStream
> --
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. Here is what the doc 
> says about this parameter -  The number of buckets to include. This value is 
> applied to each dimension.
> Now let's say we create a facet stream with 3 nested facets. For example 
> "year_i,month_i,day_i" and provide 10 as the bucketSizeLimit. 
> FacetStream would return 10 results to us for this facet expression while the 
> total number of unique values is 1000 (10*10*10).
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500) while bucketSizeLimit should be used to specify the size of 
> each bucket in the JSON Facet API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Failures under JDK 9, 10, 11

2018-11-06 Thread David Smiley
I think even if we next support 11 as a minimum version, it's still useful
to test with the intermediate versions since our users are using them now.

On Tue, Nov 6, 2018 at 11:34 AM Erick Erickson 
wrote:

> I didn't mean to suggest we switch to 11 as a minimum now, rather  I
> meant to say "If the next supported version will be 11 (whenever it's
> stable enough), does it make sense to stop testing 9 and 10 now?".
>
> A better question would have been "do we ever expect to have Java 9 or
> 10 as the minimum stable version? If not should we stop testing with
> them?"
>
> Up to you of course.
> On Tue, Nov 6, 2018 at 8:19 AM Uwe Schindler  wrote:
> >
> > Hi,
> > > Do we have any plans to release any Solr with minimal versions 9 or
> > > 10? I'm wondering if it makes sense to stop testing 9 and 10 and plan
> > > on the next supported Java version being 11 (whenever we do that).
> >
> > I don't think, we should now switch to Java 11 as minimum version yet.
> I'd propose to do this after release of Lucene 8 (once branch_8x is
> created) and only do that in the master branch. Of course, we can leave out
> Java 9 and 10 and jump to 11.
> >
> > But interestingly: Java 9 and Java 10 are as stable as Java 8 (approx
> same number of failures). Java 11 caused many more failures in Solr because
> of some changes in TLS infrastructure (Java's support for TLS 1.3). We may
> need to work on those problems.
> >
> > > > reduce the noise for failed tests from 9 and 10
> > > > repurpose those runs for more testing of 8 or 11 or 12
> > >
> > > I don't have any strong feelings either way, it just popped into my
> > > head and I thought I'd ask.
> >
> > See above, 9 and 10 are as stable as 8, it's Java 11 and 12 that cause
> more noise (of course this does not count crashes in some JVM versions).
> >
> > Uwe
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 23163 - Failure!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23163/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard

Error Message:
Could not find collection : deleteshard_test

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
deleteshard_test
at 
__randomizedtesting.SeedInfo.seed([D159AC2C93AFA285:7143E77737DFE989]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard(DeleteShardTest.java:114)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  org.apache.solr.cloud.OverseerRolesTest.testOverseerRole

Error Message:
Timed out waiting for overseer state change

Stack Trace:

Re: Failures under JDK 9, 10, 11

2018-11-06 Thread Erick Erickson
I didn't mean to suggest we switch to 11 as a minimum now, rather  I
meant to say "If the next supported version will be 11 (whenever it's
stable enough), does it make sense to stop testing 9 and 10 now?".

A better question would have been "do we ever expect to have Java 9 or
10 as the minimum stable version? If not should we stop testing with
them?"

Up to you of course.
On Tue, Nov 6, 2018 at 8:19 AM Uwe Schindler  wrote:
>
> Hi,
> > Do we have any plans to release any Solr with minimal versions 9 or
> > 10? I'm wondering if it makes sense to stop testing 9 and 10 and plan
> > on the next supported Java version being 11 (whenever we do that).
>
> I don't think, we should now switch to Java 11 as minimum version yet. I'd 
> propose to do this after release of Lucene 8 (once branch_8x is created) and 
> only do that in the master branch. Of course, we can leave out Java 9 and 10 
> and jump to 11.
>
> But interestingly: Java 9 and Java 10 are as stable as Java 8 (approx same 
> number of failures). Java 11 caused many more failures in Solr because of 
> some changes in TLS infrastructure (Java's support for TLS 1.3). We may need 
> to work on those problems.
>
> > > reduce the noise for failed tests from 9 and 10
> > > repurpose those runs for more testing of 8 or 11 or 12
> >
> > I don't have any strong feelings either way, it just popped into my
> > head and I thought I'd ask.
>
> See above, 9 and 10 are as stable as 8, it's Java 11 and 12 that cause more 
> noise (of course this does not count crashes in some JVM versions).
>
> Uwe
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: Failures under JDK 9, 10, 11

2018-11-06 Thread Uwe Schindler
Hi,
> Do we have any plans to release any Solr with minimal versions 9 or
> 10? I'm wondering if it makes sense to stop testing 9 and 10 and plan
> on the next supported Java version being 11 (whenever we do that).

I don't think, we should now switch to Java 11 as minimum version yet. I'd 
propose to do this after release of Lucene 8 (once branch_8x is created) and 
only do that in the master branch. Of course, we can leave out Java 9 and 10 
and jump to 11.

But interestingly: Java 9 and Java 10 are as stable as Java 8 (approx same 
number of failures). Java 11 caused many more failures in Solr because of some 
changes in TLS infrastructure (Java's support for TLS 1.3). We may need to work 
on those problems.

> > reduce the noise for failed tests from 9 and 10
> > repurpose those runs for more testing of 8 or 11 or 12
> 
> I don't have any strong feelings either way, it just popped into my
> head and I thought I'd ask.

See above, 9 and 10 are as stable as 8, it's Java 11 and 12 that cause more 
noise (of course this does not count crashes in some JVM versions).

Uwe


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8497) Rethink multi-term analysis handling

2018-11-06 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676945#comment-16676945
 ] 

Erick Erickson commented on LUCENE-8497:


LGTM +1

> Rethink multi-term analysis handling
> 
>
> Key: LUCENE-8497
> URL: https://issues.apache.org/jira/browse/LUCENE-8497
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8497.patch, LUCENE-8497.patch, LUCENE-8497.patch, 
> LUCENE-8497.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The current framework for handling term normalisation works via instanceof 
> checks for MultiTermAwareComponent and casts.  MultiTermAwareComponent itself 
> deals in AbstractAnalysisComponents, and so callers need to cast to the 
> correct component type before use, which is ripe for misuse.
> We should re-organise all this to be type-safe and usable without casts.  One 
> possibility is to add `normalize` methods to CharFilterFactory and 
> TokenFilterFactory that mirror their existing `create` methods.  The default 
> implementation would return the input unchanged, while filters that should 
> apply at normalization time can delegate to `create`.
> Related to this, we should deprecate and remove LowerCaseTokenizer, which 
> combines tokenization and normalization in a way that will break this API.
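
As a rough sketch of the API shape described above (not the attached patch), the default/override split could look like the following; the class names are placeholders, not the real factory classes.

{code:java}
import org.apache.lucene.analysis.TokenStream;

// Placeholder classes sketching the proposed API shape; the real
// TokenFilterFactory has more to it (SPI loading, constructor args, etc.).
abstract class TokenFilterFactoryShape {
  /** Existing factory method: build the filter used at index/query time. */
  public abstract TokenStream create(TokenStream input);

  /**
   * Proposed mirror of create() for normalization. Default: return the input
   * unchanged, so most filters are a no-op at normalization time.
   */
  public TokenStream normalize(TokenStream input) {
    return input;
  }
}

// A factory whose filter should also apply during normalization just delegates:
class DelegatingNormalizeFactoryShape extends TokenFilterFactoryShape {
  @Override
  public TokenStream create(TokenStream input) {
    return input; // placeholder; a real factory would wrap input in a TokenFilter
  }

  @Override
  public TokenStream normalize(TokenStream input) {
    return create(input); // apply the same filtering at normalization time
  }
}
{code}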



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2934 - Still Unstable

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2934/

6 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringAbortWithThreadsOnlyOnce

Error Message:
MockDirectoryWrapper: cannot close: there are still 25 open files: {_1.tvd=1, 
_1.nvd=1, _1_BlockTreeOrds_0.pos=1, _0.nvd=1, _2.fdt=1, 
_0_BlockTreeOrds_0.doc=1, _1_BlockTreeOrds_0.tio=1, _0_Asserting_0.dvd=1, 
_2_BlockTreeOrds_0.pos=1, _2.nvd=1, _2.tvd=1, _2_BlockTreeOrds_0.tio=1, 
_1.fdt=1, _5.fdx=1, _0_BlockTreeOrds_0.pos=1, _2_Asserting_0.dvd=1, _5.fdt=1, 
_5.tvx=1, _0_BlockTreeOrds_0.tio=1, _1_BlockTreeOrds_0.doc=1, _0.fdt=1, 
_0.tvd=1, _2_BlockTreeOrds_0.doc=1, _5.tvd=1, _1_Asserting_0.dvd=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
25 open files: {_1.tvd=1, _1.nvd=1, _1_BlockTreeOrds_0.pos=1, _0.nvd=1, 
_2.fdt=1, _0_BlockTreeOrds_0.doc=1, _1_BlockTreeOrds_0.tio=1, 
_0_Asserting_0.dvd=1, _2_BlockTreeOrds_0.pos=1, _2.nvd=1, _2.tvd=1, 
_2_BlockTreeOrds_0.tio=1, _1.fdt=1, _5.fdx=1, _0_BlockTreeOrds_0.pos=1, 
_2_Asserting_0.dvd=1, _5.fdt=1, _5.tvx=1, _0_BlockTreeOrds_0.tio=1, 
_1_BlockTreeOrds_0.doc=1, _0.fdt=1, _0.tvd=1, _2_BlockTreeOrds_0.doc=1, 
_5.tvd=1, _1_Asserting_0.dvd=1}
at 
__randomizedtesting.SeedInfo.seed([6BB139F8C54F836:51ED347E29E5BD68]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:838)
at 
org.apache.lucene.index.TestIndexWriterWithThreads._testMultipleThreadsFailure(TestIndexWriterWithThreads.java:341)
at 
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringAbortWithThreadsOnlyOnce(TestIndexWriterWithThreads.java:464)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Failures under JDK 9, 10, 11

2018-11-06 Thread Erick Erickson
Do we have any plans to release any Solr with minimal versions 9 or
10? I'm wondering if it makes sense to stop testing 9 and 10 and plan
on the next supported Java version being 11 (whenever we do that).

> reduce the noise for failed tests from 9 and 10
> repurpose those runs for more testing of 8 or 11 or 12

I don't have any strong feelings either way, it just popped into my
head and I thought I'd ask.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6336) AnalyzingInfixSuggester needs duplicate handling

2018-11-06 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676877#comment-16676877
 ] 

Samuel Solís commented on LUCENE-6336:
--

Hi,

I'm a new Solr user and this is my first comment on an issue. Sorry if my 
knowledge is not the best for reporting an issue.

I created a suggest system like the one described in the issue and the problem 
is exactly the same. I have configured a BlendedInfixLookupFactory with a 
multivalued field and DocumentExpressionDictionaryFactory as the dictionaryImpl. 
The problem is that the suggestions contain duplicates when the weights are 
different, which I think is bad behavior. The idea of removing duplicates using 
params like "_unique=true and weightCalculus=max|min|avg_" seems nice.

I know that the issue was filed against 5.0, but I'm using 6.6 and the problem 
is still present and not resolved yet. How can I help? I'm not a Java developer 
(I'm a developer, but I don't use Java), but I can test something if you want, 
or help create tests. Or, if somebody knows a better solution, we could discuss 
it.

 

Thanks!

 

 

> AnalyzingInfixSuggester needs duplicate handling
> 
>
> Key: LUCENE-6336
> URL: https://issues.apache.org/jira/browse/LUCENE-6336
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 4.10.3, 5.0
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: lookup, suggester
> Attachments: LUCENE-6336.patch
>
>
> Spinoff from LUCENE-5833 but else unrelated.
> Using {{AnalyzingInfixSuggester}} which is backed by a Lucene index and 
> stores payload and score together with the suggest text.
> I did some testing with Solr, producing the DocumentDictionary from an index 
> with multiple documents containing the same text, but with random weights 
> between 0-100. Then I got duplicate identical suggestions sorted by weight:
> {code}
> {
>   "suggest":{"languages":{
>   "engl":{
> "numFound":101,
> "suggestions":[{
> "term":"English",
> "weight":100,
> "payload":"0"},
>   {
> "term":"English",
> "weight":99,
> "payload":"0"},
>   {
> "term":"English",
> "weight":98,
> "payload":"0"},
> ---etc all the way down to 0---
> {code}
> I also reproduced the same behavior in AnalyzingInfixSuggester directly. So 
> there is a need for some duplicate removal here, either while building the 
> local suggest index or during lookup. Only the highest weight suggestion for 
> a given term should be returned.
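
A minimal sketch of lookup-time duplicate removal, assuming the returned LookupResult list is post-processed (illustrative only, not the attached patch): keep one entry per key, preferring the highest weight.

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.lucene.search.suggest.Lookup;

// Illustrative post-processing: collapse duplicate suggestion keys, keeping
// only the highest-weight entry per key.
public class SuggestionDeduperExample {
  public static List<Lookup.LookupResult> dedupe(List<Lookup.LookupResult> results) {
    Map<String, Lookup.LookupResult> best = new LinkedHashMap<>();
    for (Lookup.LookupResult r : results) {
      String key = r.key.toString();
      Lookup.LookupResult seen = best.get(key);
      if (seen == null || r.value > seen.value) {
        best.put(key, r); // first occurrence, or a higher weight, wins
      }
    }
    return new ArrayList<>(best.values());
  }
}
{code}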



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.4) - Build # 7608 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7608/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseParallelGC

20 tests failed.
FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:49675/solr

Stack Trace:
java.lang.AssertionError: Can not find doc 7 in https://127.0.0.1:49675/solr
at 
__randomizedtesting.SeedInfo.seed([1BD06714A401E5DE:DA201EB889512F79]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.TestTlogReplica.checkRTG(TestTlogReplica.java:902)
at 
org.apache.solr.cloud.TestTlogReplica.testRecovery(TestTlogReplica.java:567)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.TestTlogReplica.testRecovery

Error Message:
Can not find doc 7 in https://127.0.0.1:51141/solr

Stack Trace:

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 3045 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3045/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

17 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([9D86A4486CC42B1D]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest

Error Message:
Could not find collection : AutoscalingHistoryHandlerTest_collection

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : 
AutoscalingHistoryHandlerTest_collection
at __randomizedtesting.SeedInfo.seed([9D86A4486CC42B1D]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.waitForRecovery(AutoscalingHistoryHandlerTest.java:403)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.setupCluster(AutoscalingHistoryHandlerTest.java:97)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[jira] [Updated] (SOLR-12795) Introduce 'rows' and 'offset' parameter in FacetStream

2018-11-06 Thread Joel Bernstein (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-12795:
--
Attachment: SOLR-12795.patch

> Introduce 'rows' and 'offset' parameter in FacetStream
> --
>
> Key: SOLR-12795
> URL: https://issues.apache.org/jira/browse/SOLR-12795
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Amrit Sarkar
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, SOLR-12795.patch, 
> SOLR-12795.patch
>
>
> Today facetStream takes a "bucketSizeLimit" parameter. The documentation 
> describes it as "The number of buckets to include. This value is applied to 
> each dimension."
> Now let's say we create a facet stream with 3 nested facets, for example 
> "year_i,month_i,day_i", and provide 10 as the bucketSizeLimit. Setting 10 does 
> not mean we get back 10 tuples: because the value is applied to each dimension, 
> FacetStream can return up to 1000 unique tuples (10*10*10) for this facet 
> expression, and there is no way to cap that total.
> The API should have a separate parameter "limit" which limits the number of 
> tuples (say 500), while bucketSizeLimit should be used to specify the size of 
> each bucket, as in the JSON Facet API.
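To make the per-dimension behaviour concrete, here is a sketch of such an expression, wrapped as a SolrJ expression string (collection and field names are illustrative, and the proposed rows/offset parameters do not exist yet):

{code}
// Sketch only: with bucketSizeLimit=10 applied to each of the 3 dimensions,
// the expression below can emit up to 10 * 10 * 10 = 1000 tuples in total.
String expr = "facet(collection1, q=\"*:*\", buckets=\"year_i,month_i,day_i\", "
            + "bucketSorts=\"count(*) desc\", bucketSizeLimit=10, count(*))";
{code}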



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #455: SOLR-12638

2018-11-06 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r231127727
  
--- Diff: solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java ---
@@ -639,12 +651,32 @@ public static SolrInputDocument getInputDocument(SolrCore core, BytesRef idBytes
   sid = new SolrInputDocument();
 } else {
   Document luceneDocument = docFetcher.doc(docid);
-  sid = toSolrInputDocument(luceneDocument, core.getLatestSchema());
+  sid = toSolrInputDocument(luceneDocument, schema);
 }
-if (onlyTheseNonStoredDVs != null) {
-  docFetcher.decorateDocValueFields(sid, docid, onlyTheseNonStoredDVs);
-} else {
-  docFetcher.decorateDocValueFields(sid, docid, docFetcher.getNonStoredDVsWithoutCopyTargets());
+ensureDocFieldsDecorated(onlyTheseNonStoredDVs, sid, docid, docFetcher, resolveRootDoc ||
+resolveChildren || schema.hasExplicitField(IndexSchema.NEST_PATH_FIELD_NAME));
+SolrInputField rootField = sid.getField(IndexSchema.ROOT_FIELD_NAME);
+if((resolveChildren || resolveRootDoc) && schema.isUsableForChildDocs() && rootField!=null) {
+  // doc is part of a nested structure
+  String id = resolveRootDoc? (String) rootField.getFirstValue(): (String) sid.getField(idField.getName()).getFirstValue();
+  ModifiableSolrParams params = new ModifiableSolrParams()
+  .set("fl", "*, _nest_path_, [child]")
+  .set("limit", "-1");
+  SolrQueryRequest nestedReq = new LocalSolrQueryRequest(core, params);
+  final BytesRef rootIdBytes = new BytesRef(id);
+  final int rootDocId = searcher.getFirstMatch(new Term(idField.getName(), rootIdBytes));
+  final DocTransformer childDocTransformer = TransformerFactory.defaultFactories.get("child").create("child", params, nestedReq);
--- End diff --

no, use `core.getTransformerFactory("child")`
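For reference, a minimal sketch of how that suggestion would read at the flagged spot in the patch, reusing the `params` and `nestedReq` variables from the diff above (an illustration, not the committed code):

{code}
// Look the [child] transformer factory up on the SolrCore rather than the
// compiled-in defaults, so a custom factory registered in solrconfig.xml is honored.
TransformerFactory childFactory = core.getTransformerFactory("child");
final DocTransformer childDocTransformer = childFactory.create("child", params, nestedReq);
{code}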


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12966) Add @since javadoc tags to the URP classes

2018-11-06 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-12966.
--
   Resolution: Implemented
Fix Version/s: 7.6

> Add @since javadoc tags to the URP classes
> --
>
> Key: SOLR-12966
> URL: https://issues.apache.org/jira/browse/SOLR-12966
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 7.6
>
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> descendants of UpdateRequestProcessorFactory.
> Most of them have been tagged already, just SignatureUpdateProcessorFactory 
> to be marked as 3.1 and new OpenNLPLangDetectUpdateProcessorFactory as 7.3.0
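For readers unfamiliar with the convention, the change amounts to adding a javadoc tag like the following to each factory class (the surrounding comment text is a placeholder, not a specific commit):

{code}
/**
 * ...existing class-level javadoc of the factory, e.g. SignatureUpdateProcessorFactory...
 *
 * @since 3.1
 */
{code}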



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12966) Add @since javadoc tags to the URP classes

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676731#comment-16676731
 ] 

ASF subversion and git services commented on SOLR-12966:


Commit 7d6d77d06753bd131aeb37531b70c59193917683 in lucene-solr's branch 
refs/heads/branch_7x from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d6d77d ]

SOLR-12966: Add Javadoc @since tag to URP classes

(cherry picked from commit 0ddbc4bf953e65ff7af5140b95f8d9edcc245875)


> Add @since javadoc tags to the URP classes
> --
>
> Key: SOLR-12966
> URL: https://issues.apache.org/jira/browse/SOLR-12966
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> descendants of UpdateRequestProcessorFactory.
> Most of them have been tagged already, just SignatureUpdateProcessorFactory 
> to be marked as 3.1 and new OpenNLPLangDetectUpdateProcessorFactory as 7.3.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12966) Add @since javadoc tags to the URP classes

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676729#comment-16676729
 ] 

ASF subversion and git services commented on SOLR-12966:


Commit 0ddbc4bf953e65ff7af5140b95f8d9edcc245875 in lucene-solr's branch 
refs/heads/master from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0ddbc4b ]

SOLR-12966: Add Javadoc @since tag to URP classes


> Add @since javadoc tags to the URP classes
> --
>
> Key: SOLR-12966
> URL: https://issues.apache.org/jira/browse/SOLR-12966
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> descendants of UpdateRequestProcessorFactory.
> Most of them have been tagged already, just SignatureUpdateProcessorFactory 
> to be marked as 3.1 and new OpenNLPLangDetectUpdateProcessorFactory as 7.3.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12966) Add @since javadoc tags to the URP classes

2018-11-06 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-12966:
-
Description: 
Continuing work started in SOLR-11490, add @since javadoc tags to all 
descendants of UpdateRequestProcessorFactory.

Most of them have been tagged already, just SignatureUpdateProcessorFactory to 
be marked as 3.1 and new OpenNLPLangDetectUpdateProcessorFactory as 7.3.0

  was:
Continuing work started in SOLR-11490, add @since javadoc tags to all 
descendants of UpdateRequestProcessorFactory.

Most of them have been tagged already, just SignatureUpdateProcessorFactory and 
UpdateRequestProcessorChain to be marked as 3.1 and new 
OpenNLPLangDetectUpdateProcessorFactory as 7.3.0


> Add @since javadoc tags to the URP classes
> --
>
> Key: SOLR-12966
> URL: https://issues.apache.org/jira/browse/SOLR-12966
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Priority: Minor
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> descendants of UpdateRequestProcessorFactory.
> Most of them have been tagged already, just SignatureUpdateProcessorFactory 
> to be marked as 3.1 and new OpenNLPLangDetectUpdateProcessorFactory as 7.3.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12966) Add @since javadoc tags to the URP classes

2018-11-06 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-12966:


 Summary: Add @since javadoc tags to the URP classes
 Key: SOLR-12966
 URL: https://issues.apache.org/jira/browse/SOLR-12966
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Affects Versions: 7.5
Reporter: Alexandre Rafalovitch


Continuing work started in SOLR-11490, add @since javadoc tags to all 
descendants of UpdateRequestProcessorFactory.

Most of them have been tagged already, just SignatureUpdateProcessorFactory and 
UpdateRequestProcessorChain to be marked as 3.1 and new 
OpenNLPLangDetectUpdateProcessorFactory as 7.3.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12965) Add faceting support to JsonQueryRequest

2018-11-06 Thread Jason Gerlowski (JIRA)
Jason Gerlowski created SOLR-12965:
--

 Summary: Add faceting support to JsonQueryRequest
 Key: SOLR-12965
 URL: https://issues.apache.org/jira/browse/SOLR-12965
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java, SolrJ
Affects Versions: 7.5, master (8.0)
Reporter: Jason Gerlowski
Assignee: Jason Gerlowski


SOLR-12947 created {{JsonQueryRequest}}, a SolrJ class that makes it easier for 
users to make JSON-api requests in their Java/SolrJ code.  Currently this class 
is missing any sort of faceting capabilities (I'd held off on adding this as a 
part of SOLR-12947 just to keep the issues smaller).

This JIRA covers adding that missing faceting capability.

There are a few ways we could handle it, but my first attempt at adding faceting 
support will probably have users specify a Map for each facet 
that they wish to add, similar to how complex queries were supported in 
SOLR-12947.  This approach has some pros and cons:

The benefit is how general the approach is: our interface stays resilient to 
any future changes to the syntax of the JSON API, and users can build facets 
that I'd never thought to explicitly test.  The downside is that this doesn't 
offer much abstraction for users who are unfamiliar with our JSON syntax: they 
still have to know the JSON "schema" to build a map representing their facet.  
But in practice we can probably mitigate this downside by providing "facet 
builders" or some other helper classes to provide this abstraction in the 
common case.

Hope to have a skeleton patch up soon.
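As a rough sketch of what the Map-based approach might look like from user code, assuming the usual SolrJ imports and an existing SolrClient named solrClient (the withFacet method name is hypothetical; nothing here is committed yet):

{code}
// Hypothetical sketch only: 'withFacet' does not exist yet; it illustrates the
// Map-per-facet idea described above.
Map<String, Object> categoriesFacet = new HashMap<>();
categoriesFacet.put("type", "terms");
categoriesFacet.put("field", "cat");
categoriesFacet.put("limit", 5);

JsonQueryRequest request = new JsonQueryRequest()
    .setQuery("*:*")
    .withFacet("top_categories", categoriesFacet);   // hypothetical method
QueryResponse response = request.process(solrClient, "techproducts");
{code}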



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12956) Add @since javadoc tags to the Analyzer component classes

2018-11-06 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-12956.
--
   Resolution: Implemented
Fix Version/s: 7.6

> Add @since javadoc tags to the Analyzer component classes
> -
>
> Key: SOLR-12956
> URL: https://issues.apache.org/jira/browse/SOLR-12956
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Fix For: 7.6
>
> Attachments: SOLR-12956.patch
>
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> Analyzer, Tokenizer, Char and Token filter classes that are used in the 
> fieldtype definitions.
> As per the previous guidance, earliest version tag applied will be 3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12956) Add @since javadoc tags to the Analyzer component classes

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676709#comment-16676709
 ] 

ASF subversion and git services commented on SOLR-12956:


Commit 0c37fbf9bc376f4636038e07039be34f2fd97021 in lucene-solr's branch 
refs/heads/branch_7x from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c37fbf ]

SOLR-12956: Add Javadoc @since tag to Analyzer component classes

(cherry picked from commit c07df196664b84cd2d58ce1ba9040a6b06e0a3c5)


> Add @since javadoc tags to the Analyzer component classes
> -
>
> Key: SOLR-12956
> URL: https://issues.apache.org/jira/browse/SOLR-12956
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Attachments: SOLR-12956.patch
>
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> Analyzer, Tokenizer, Char and Token filter classes that are used in the 
> fieldtype definitions.
> As per the previous guidance, earliest version tag applied will be 3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12932) ant test (without badapples=false) should pass easily for developers.

2018-11-06 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676707#comment-16676707
 ] 

Kevin Risden commented on SOLR-12932:
-

Looks like there have been new commits and the last 5 test runs have passed 
successfully.

> ant test (without badapples=false) should pass easily for developers.
> -
>
> Key: SOLR-12932
> URL: https://issues.apache.org/jira/browse/SOLR-12932
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> If we fix the tests we will end up here anyway, but we can shortcut this.
> Once I get my first patch in, anyone who mentions a test that fails locally 
> for them at any time (not jenkins), I will fix it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12956) Add @since javadoc tags to the Analyzer component classes

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676705#comment-16676705
 ] 

ASF subversion and git services commented on SOLR-12956:


Commit c07df196664b84cd2d58ce1ba9040a6b06e0a3c5 in lucene-solr's branch 
refs/heads/master from [~arafalov]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c07df19 ]

SOLR-12956: Add Javadoc @since tag to Analyzer component classes


> Add @since javadoc tags to the Analyzer component classes
> -
>
> Key: SOLR-12956
> URL: https://issues.apache.org/jira/browse/SOLR-12956
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.5
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
> Attachments: SOLR-12956.patch
>
>
> Continuing work started in SOLR-11490, add @since javadoc tags to all 
> Analyzer, Tokenizer, Char and Token filter classes that are used in the 
> fieldtype definitions.
> As per the previous guidance, earliest version tag applied will be 3.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12947) SolrJ Helper for JSON Request API

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676699#comment-16676699
 ] 

ASF subversion and git services commented on SOLR-12947:


Commit 2d95b740db1fa4ae25ccf53432e3060565cc8da2 in lucene-solr's branch 
refs/heads/master from [~gerlowskija]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d95b74 ]

SOLR-12947: Add SolrJ helper for making JSON DSL requests

The JSON request API is great, but it's hard to use from SolrJ.  This
commit adds 'JsonQueryRequest', which makes it much easier to write
JSON API requests in SolrJ applications.


> SolrJ Helper for JSON Request API
> -
>
> Key: SOLR-12947
> URL: https://issues.apache.org/jira/browse/SOLR-12947
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 7.5
>Reporter: Jason Gerlowski
>Assignee: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-12947.patch, SOLR-12947.patch
>
>
> The JSON request API is becoming increasingly popular for sending queries or 
> accessing the JSON faceting functionality. The query DSL is simple and easy 
> to understand, but crafting requests programmatically is tough in SolrJ. 
> Currently, SolrJ users must hardcode the JSON body they want their request 
> to convey.  Nothing helps them build the JSON request they're going for, 
> making use of these APIs manual and painful.
> We should see what we can do to alleviate this.  I'd like to tackle this work 
> in two pieces.  This (the first piece) would introduce classes that make it 
> easier to craft non-faceting requests that use the JSON Request API.  
> Improving JSON Faceting support is a bit more involved (it likely requires 
> improvements to the Response as well as the Request objects), so I'll aim to 
> tackle that in a separate JIRA to keep things moving.
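For readers who haven't seen the new class, a minimal usage sketch (the collection name and the SolrClient instance are assumptions, not part of the commit):

{code}
// Minimal sketch: assumes an existing SolrClient named 'solrClient' and a
// collection named 'techproducts'.
JsonQueryRequest request = new JsonQueryRequest()
    .setQuery("name:memory")
    .setLimit(10);
QueryResponse response = request.process(solrClient, "techproducts");
{code}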



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11997) Suggestions API/UI should show a message when violations exist but no suggestions are possible

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676648#comment-16676648
 ] 

ASF subversion and git services commented on SOLR-11997:


Commit 9c5626d6862bd5833b3e6fdd502746a1e6dfd9b7 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c5626d ]

SOLR-11997: Suggestions API/UI should show an entry where a violation could not 
be resolved


> Suggestions API/UI should show a message when violations exist but no 
> suggestions are possible
> --
>
> Key: SOLR-11997
> URL: https://issues.apache.org/jira/browse/SOLR-11997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> If violations exist but no suggestions are possible because any operation 
> will only increase violations then the suggestions UI/API does not show 
> anything. This is confusing. We should at least have a message which 
> indicates such a situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11997) Suggestions API/UI should show a message when violations exist but no suggestions are possible

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676640#comment-16676640
 ] 

ASF subversion and git services commented on SOLR-11997:


Commit 08fcce4c98fbcb951a4d96db355fd7961770a61f in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=08fcce4 ]

SOLR-11997: Suggestions API/UI should show an entry where a violation could not 
be resolved


> Suggestions API/UI should show a message when violations exist but no 
> suggestions are possible
> --
>
> Key: SOLR-11997
> URL: https://issues.apache.org/jira/browse/SOLR-11997
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
>
> If violations exist but no suggestions are possible because any operation 
> will only increase violations then the suggestions UI/API does not show 
> anything. This is confusing. We should at least have a message which 
> indicates such a situation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2018-11-06 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676634#comment-16676634
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit 13a83564bb44c1d0b4355e9f85e9947b0490af33 in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=13a8356 ]

SOLR-12313: Make the test finish quicker by lower down intervals


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it sometimes causes BasicDistributedZkTest, and perhaps other 
> tests, to take forever.
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does: if 
> the slave is higher than the leader, it's in sync; being equal is not 
> required. If exact equality matters for a particular test, that specific test 
> should be the one that fails. This check just introduces massive delays.
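In code terms, the suggested rule is roughly the following (names are illustrative, not the actual TestInjection code):

{code}
// Illustrative only: mirror the replication rule instead of requiring equality.
static boolean isInSyncWithLeader(long leaderIndexVersion, long followerIndexVersion) {
  // A follower that has caught up to, or moved past, the leader's version is in sync;
  // waiting for exact equality is what causes the long delays described above.
  return followerIndexVersion >= leaderIndexVersion;
}
{code}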



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 3044 - Still Unstable!

2018-11-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3044/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseSerialGC

47 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([FCB49071485FAAFE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([FCB49071485FAAFE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (LUCENE-8497) Rethink multi-term analysis handling

2018-11-06 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676578#comment-16676578
 ] 

Alan Woodward commented on LUCENE-8497:
---

I plan on committing this soon - any objections, speak up now...

> Rethink multi-term analysis handling
> 
>
> Key: LUCENE-8497
> URL: https://issues.apache.org/jira/browse/LUCENE-8497
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8497.patch, LUCENE-8497.patch, LUCENE-8497.patch, 
> LUCENE-8497.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The current framework for handling term normalisation works via instanceof 
> checks for MultiTermAwareComponent and casts.  MultiTermAwareComponent itself 
> deals in AbstractAnalysisComponents, and so callers need to cast to the 
> correct component type before use, which is ripe for misuse.
> We should re-organise all this to be type-safe and usable without casts.  One 
> possibility is to add `normalize` methods to CharFilterFactory and 
> TokenFilterFactory that mirror their existing `create` methods.  The default 
> implementation would return the input unchanged, while filters that should 
> apply at normalization time can delegate to `create`.
> Related to this, we should deprecate and remove LowerCaseTokenizer, which 
> combines tokenization and normalization in a way that will break this API.
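A rough sketch of the API shape being proposed, abbreviated and not the committed patch (the real factories also carry constructor arguments and extend AbstractAnalysisFactory):

{code}
import org.apache.lucene.analysis.TokenStream;

// Abbreviated sketch of the proposal described above.
public abstract class TokenFilterFactory {

  /** Existing factory method used when building the analysis chain. */
  public abstract TokenStream create(TokenStream input);

  /**
   * Proposed normalization hook mirroring create(). The default is a no-op;
   * filters that should also apply at normalization time override this,
   * typically by delegating to create().
   */
  public TokenStream normalize(TokenStream input) {
    return input;
  }
}
{code}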



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1690 - Still Failing

2018-11-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1690/

3 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([D1F3A2C42F9D6954:56A4DF4BBEC415D4]:0)
at java.util.Arrays.copyOf(Arrays.java:3332)
at 
java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at 
java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448)
at java.lang.StringBuilder.append(StringBuilder.java:136)
at 
org.apache.lucene.store.NIOFSDirectory.openInput(NIOFSDirectory.java:82)
at 
org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:182)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2800)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:747)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.FilterDirectory.openInput(FilterDirectory.java:100)
at 
org.apache.lucene.store.Directory.openChecksumInput(Directory.java:157)
at 
org.apache.lucene.util.bkd.BKDWriter.verifyChecksum(BKDWriter.java:1427)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1851)
at org.apache.lucene.util.bkd.BKDWriter.build(BKDWriter.java:1870)
at org.apache.lucene.util.bkd.BKDWriter.finish(BKDWriter.java:1022)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.writeField(Lucene60PointsWriter.java:131)
at 
org.apache.lucene.codecs.PointsWriter.mergeOneField(PointsWriter.java:62)
at 
org.apache.lucene.codecs.lucene60.Lucene60PointsWriter.merge(Lucene60PointsWriter.java:225)
at 
org.apache.lucene.index.SegmentMerger.mergePoints(SegmentMerger.java:201)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:161)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4453)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4075)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2178)
at 
org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:5110)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1620)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1236)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.indexRandomShapes(BaseLatLonShapeTestCase.java:256)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.verify(BaseLatLonShapeTestCase.java:232)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.doTestRandom(BaseLatLonShapeTestCase.java:213)
at 
org.apache.lucene.document.BaseLatLonShapeTestCase.testRandomBig(BaseLatLonShapeTestCase.java:189)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10017_solr, 127.0.0.1:10018_solr, 127.0.0.1:10016_solr, 
127.0.0.1:10020_solr, 127.0.0.1:10019_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/30)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10019_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10017_solr, 127.0.0.1:10018_solr, 127.0.0.1:10016_solr, 
127.0.0.1:10020_solr, 127.0.0.1:10019_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/30)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  

[jira] [Comment Edited] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-11-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676442#comment-16676442
 ] 

Amrit Sarkar edited comment on SOLR-12524 at 11/6/18 9:44 AM:
--

Another set of exceptions occurring:

{code}
  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:677)
  [beaster]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:304)
  [beaster]   2>at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.next(CdcrUpdateLog.java:630)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:77)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 
{code}


was (Author: sarkaramr...@gmail.com):
Another set of exceptions occurring:

  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 

[jira] [Commented] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-11-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676442#comment-16676442
 ] 

Amrit Sarkar commented on SOLR-12524:
-

Another set of exceptions occurring:

  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:677)
  [beaster]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:304)
  [beaster]   2>at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.next(CdcrUpdateLog.java:630)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:77)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 

> CdcrBidirectionalTest.testBiDir() regularly fails
> -
>
> Key: SOLR-12524
> URL: https://issues.apache.org/jira/browse/SOLR-12524
> Project: Solr
>  Issue Type: Test
>  Components: CDCR, Tests
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12524.patch, SOLR-12524.patch, SOLR-12524.patch, 
> SOLR-12524.patch, SOLR-12524.patch, SOLR-12524.patch, beast-test-run
>
>
> e.g. from 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4701/consoleText
> {code}
> [junit4] ERROR   20.4s J0 | CdcrBidirectionalTest.testBiDir <<<
> [junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=28371, 
> name=cdcr-replicator-11775-thread-1, state=RUNNABLE, 
> group=TGRP-CdcrBidirectionalTest]
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50:8F8E744E68278112]:0)
> [junit4]> Caused by: java.lang.AssertionError
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50]:0)
> [junit4]> at 
> org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
> [junit4]> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [junit4]> at java.lang.Thread.run(Thread.java:748)
> {code}



--
